 
Ryan: Welcome everyone. We'll be waiting two to three minutes to get started, as a courtesy to those who are still in the middle of connecting.
Ryan: Welcome everyone, thanks for joining us and thanks for being part of our community. We have a great topic to cover today, but first a few reminders. Please feel free to ask questions at any time by typing them in the IM window. Be aware that any questions you post will be publicly visible; however, if you prefer, you can post your question anonymously by checking the box right below where you enter it. We often get many questions on these webinars and we will do our best to respond to all of them in real time, but I want to provide an additional mechanism to ensure any questions we miss get answered. If you visit aka.ms/AzureSentinelCommunity you'll be able to ask questions on our Azure Sentinel forum; if you're listening to this after the fact as a recording, that's a great place to ask a question. We always love to hear your feedback on whether these webinars are valuable to you and how we can improve them, so please give it to us at aka.ms/SecurityWebinarFeedback. All the links I'm referencing here are in the IM window for your convenience.

Please note that this webinar is being recorded and will be shared publicly. We will post the recordings on our community at aka.ms/SecurityWebinars. While you are there, please join our community by visiting aka.ms/SecurityCommunity; that's the best way to ensure you don't miss any future webinars or major announcements. On our community you can speak directly to the engineering teams that create our security products. You'll be able to influence our product designs and get early access to changes by doing things like participating in private previews, which you can sign up for at aka.ms/SecurityWebinars, requesting features, giving feedback, reviewing our product roadmaps, attending in-person events, or joining webinars like this. We believe that the best way to improve our products is by removing any barriers between you and the people that create them, so we hope you'll join us.

We have a great topic for you today. We'll be talking about Azure Sentinel; specifically, we'll be focusing on architecture, both for the cloud and on-premises. Now, over to our presenter, Ofer Shezaf. He's a principal program manager on the Azure Sentinel product team, so he has deep expertise in this topic. Without further ado, I will turn it over to him.
Ryan: We've still got you on mute. Ofer, you're still on mute.

Ofer: Am I better now? Thank you. Yeah, sorry everyone.
I did mention, now that you can hear me, that we'll spend the next 90 minutes talking about Azure Sentinel architecture. It's part of a series: Azure Sentinel is a SIEM product, there is a lot to learn about it, and we are holding a number of webinars. The first one was last week; the recording is already available at the URL that Ryan mentioned. It was about the basics, it was a deep dive, but if you want to learn about the product, go and visit last week's webinar. As Ryan mentioned, today we'll focus on a more advanced topic and we'll talk about architecture. After the holiday break, after everybody enjoys their vacation, we'll be back with additional webinars. One of them will be about a SOC scenario, which will be more targeted towards analysts and security people rather than architects, and we'll show you how to use Sentinel in a real-life SOC to detect, triage, and investigate incidents, and lastly to respond to them. And lastly, we'll get back to a topic we went through earlier in September: threat hunting using Sentinel. We're also working to schedule additional webinars, so stay tuned.
Without further ado, Azure Sentinel architecture. We have two large topics to discuss today. When preparing the slides and preparing for this presentation, I thought it might be too much for a single webinar, so I'll try to make sure that we cover as much as possible, hopefully in a way that is useful to you, around both topics. I may cut a few areas a bit short to make sure that if you came here for cloud you learn something, and if you're more interested in on-prem collection you also get the information you need. As Ryan mentioned, if you have any further questions, ask either in the chat box or later on in the security community. And we'll start with collection.
Azure Sentinel is a SIEM, a security information and event management solution. It's there to serve as the nerve center for your security operations center, for your SOC. The most basic functionality it provides is to collect telemetry from anywhere in your IT estate, whether that's the cloud, Microsoft Azure as well as other clouds, or on-prem devices. Since we are dealing with security, naturally security devices are an important goal, but they are not the only source of events we want to collect data from; for example, file and email activity are very important in order to detect a large number of threats. Azure Sentinel supports all of that, and this discussion will show the different ways in which you can connect sources and get events into Azure Sentinel.
Largely speaking, I will divide it into cloud-based collection and on-prem collection. I think you'll find that cloud-based collection is extremely easy with Azure Sentinel, and we'll discuss how to do that, and then I'll go into on-prem collection. On-prem collection is one of the main tasks you need to go through when you implement a SIEM, and Azure Sentinel is no exception, and we have all the tools you need for that. I would not want to claim it's simple; there are a lot of sources out there you want to collect from, and it requires some know-how, which we'll go through today.
But I'll start with cloud. Azure Sentinel can natively collect from, I wanted to say any, I'm probably missing a few, Microsoft service, whether it's part of Azure, whether it's one of our Office 365 services, or whether it's one of our Windows services. The reason is that we are part of Microsoft's management fabric, Azure Monitor. Azure Sentinel is a solution on top of Azure Monitor, and Microsoft mandates that all service owners within Microsoft send information to us. So even if there is no connector in the Azure Sentinel connector page for the specific service that you're looking for, good chances are that if you go into the instructions for that specific service, you'll find how to send data to us. This provides very good coverage of your Microsoft cloud estate.
In addition, we are not a Microsoft-only SIEM; we are there to provide you with a solution to monitor and protect any other IT system, and that extends to additional clouds. We already have support for AWS: our current built-in support is for CloudTrail, we have developed a custom connector which we can work with you to implement for CloudWatch, and we're working to extend further support for AWS telemetry. An interesting thing is that since AWS is a cloud service, connectivity is also easy; we'll get to on-prem, which will require more, but our collection from AWS is a cloud-to-cloud integration, which means serverless. You don't have to worry about a connector machine or scale or anything like that. You set up the connection; there is a bit more to do because there is security authentication to go through, it's all outlined in the AWS connector page in Azure Sentinel, and within minutes you'll be connected and you'll get your telemetry from AWS.
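To give a sense of what that looks like once connected, here is a minimal KQL sketch, not from the webinar: it assumes the AWSCloudTrail table that the built-in connector populates and uses only the EventName and TimeGenerated columns.

```
// Hedged sketch: summarize the most frequent CloudTrail event types
// ingested over the last day via the built-in AWS connector.
AWSCloudTrail
| where TimeGenerated > ago(1d)
| summarize Events = count() by EventName
| top 10 by Events
```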
So I don't want to spend a lot of time on the easy stuff; this is a deep-dive architecture talk, and you can go to the documentation and probably get all of those connected very easily. So let's move on and talk about on-prem collection.
The first thing you want to collect would be telemetry from systems, and when I say systems, it's true for on-prem systems, but it's also true for VMs in the cloud, so infrastructure as a service. The best way to collect telemetry from systems with Azure Sentinel would be using our agent. Our agent can be deployed on Windows machines and Linux machines and collects the system telemetry, for example Windows events on Windows or syslog on Linux. It also enables you to collect additional streams from those servers or endpoints, such as Windows Firewall logs, DNS logs from the domain controller, or DHCP logs. In general, especially when you talk about cloud readiness, using the agent is the recommended way because it can be embedded well into your DevOps cycle. Say you have a deployment process, and this is true by the way for either Azure or AWS, and you're spinning up VMs in an auto-scaling environment, or actually it's true for containers as well: you don't want to worry about how those systems will deliver telemetry to Sentinel; you want to make sure that the image itself, out of the box, is deployed with the right agent to communicate and send events. That's why in the cloud the usual topology, or best practice, is to use an agent in order to send telemetry to Azure Sentinel. I think it's also this level of cloud readiness, for example using ARM templates to deploy the agent on Azure, that makes Sentinel very much cloud ready to monitor your cloud workloads. But I did say we are also talking about on-prem, and the agent works just the same for on-prem VMs. You may notice that there is an optional collector proxy: if your VMs, in the cloud or on-prem, do not have direct connectivity to the cloud, you can use a collector proxy to centralize the location from which data gets out of the organization.
If you do want to do remote collection, or you want to collect from sources that you cannot install an agent on, you can use a syslog connector. This would be a lot of security systems that do syslog or CEF, Common Event Format, a specialized type of syslog very common for sending events to a SIEM; or networking equipment that sends events using syslog; or Linux machines where you do not want to, or cannot, install an agent. We'll get into the internals of the syslog connector somewhat later in this discussion. An interesting deployment option is to install this syslog connector box in the cloud. It's very valuable if you're connecting, say, a branch office where there might not be a location for a VM, or if the branch office anyway is not located close to where your main data center is, and then sending the data through your on-premises data center does not make any sense. The one thing you have to keep in mind is that since this goes over the internet, you will have to use encrypted syslog, syslog over TLS. Some source devices support it; other source devices may not. In that case, deploying the collector in the cloud might still be an option using something like ExpressRoute, if you have direct connectivity to Microsoft Azure.
So we discussed remote collection for syslog and for security and networking systems. You may also want to use remote collection for Windows, to collect events remotely from Windows systems if you do not want to install the agent on those systems. You can use Windows Event Collection; again, I'll show the internals and how you implement that a bit later on. You can use a WEF connector: the WEF connector VM is essentially a Windows Event Collector server, it uses the standard Windows Event Forwarding mechanism, and it knows how to forward the events to Sentinel.
We appreciate that many people are accustomed to other types of connectors, so we are open, and we do not support just the Microsoft connector infrastructure. If you prefer Logstash, we support that: we have a plugin for Logstash that can send information to Azure Sentinel. I wouldn't say it provides more flexibility than our connector, but certainly more people know how to customize Logstash, in order to do things such as, for example, obfuscating fields on the fly between the source and Sentinel. So if you already have collection infrastructure based on Logstash, you can use Logstash, just add an additional destination, and the events will be available in Azure Sentinel. The same capability is also available for Fluentd, an alternative log processing system; my experience is that Fluentd is not as widely used as Logstash. So with our support for CEF, syslog, Logstash, Fluentd, and so on, we support a large number of sources.
We do collect them the traditional way, using syslog, and as my rather elaborate architecture slide suggests, there is a cost to that, a complexity cost. A cloud-native SIEM would have loved to get away without any on-prem collectors; we don't believe that's possible today. One of the main reasons is that it's hard to deliver syslog to the service, given that it would require authentication and encryption, which many source devices do not support. Syslog in general is a protocol that does not support authentication, hence we need something in between.
We do have, as a vision, a goal to work with partners so that they send telemetry directly to Azure Sentinel using our API, and as you can see, the big names already have support for that. So for several sources, and not only lightweight ones, you can already use a direct connection to Sentinel: you go to those products' UI, you copy two values from our portal, the workspace ID, which we'll talk about a bit later, and our secret key, and data will flow in. We believe, as a vision, that that's how connectivity to a cloud-native SIEM should be. It's a long journey; it will take time until much of what you need today can use that method. Lastly, you always need custom connectors, and we have many ways for you to build custom connectors; we'll talk about those a bit later.
One important thing I want to point out: the list of connectors in the Azure Sentinel connectors page is limited in length, it's not comprehensive, and we have in our community a series of blog posts, which I am linking from the PowerPoint you can download later on, that will guide you to additional connectivity options. The first one is obvious: we support generic CEF and syslog, and I'm going to talk about the difference a bit more later on. So any device which can send syslog or CEF, the Common Event Format over syslog, we can support, and this specific blog article lists those sources, and we continually update it. As I also mentioned, Microsoft services and apps are mandated to send information to us. We have not in all cases added a connector on our side, but you can still usually go to the source service side and define us as a destination; the second article here lists all those services and apps. Sometimes it also extends to services where, for example, an on-prem SQL Server does not send to Sentinel directly but just writes events to the Windows event log, so our normal agent can read the events, which gets me to the third bullet: the agent has a series of capabilities. I described a few of them, but there's more there, so the agent collection capabilities are described in the third article. And finally, if you want to write custom connectors, the fourth blog post will help you do that. One of the reasons we need those blog posts, certainly the last three, less so the first one, is that Azure Sentinel is part of the larger Azure ecosystem and benefits from a lot of the underlying or integrated services, and therefore the documentation may be spread out. We probably need to do better around that, but the three blog posts here link to a lot of the relevant documentation which might not be part of our own documentation set as of today.
So I mentioned the Log Analytics agent, I described the collector, I described Logstash and custom collection; let's go through each one of those and understand a bit more about them. The Log Analytics agent, as I mentioned, you can install on a Windows server or on a Linux system; actually not just a Windows server, any Windows system can be a client as well. It can be used for more than just basic system logs: it can collect anything which is a Windows event, and a lot of things in the Windows ecosystem are Windows events. So, for example, collecting Active Directory events would be done by placing the agent on your Active Directory servers; if you want to collect Sysmon events, those are also Windows events, so we support collecting them too; and I did mention that SQL Server also logs through that. In addition to Windows events and syslog, which are the core collection from Windows and Linux respectively, we also support a few other sources: DNS events, Windows Firewall events, IIS events. The agent can also collect any file data, so it can monitor a folder for new files, pick those up, and send them as events to Azure Sentinel. And behind the scenes, internally, it's based on Fluentd, so it supports Fluentd plugins. As I mentioned before, Fluentd is a very capable technology, but it's not well known, so we find that most customers use the agent as is and do not use many plugins, though the capability is there.
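As an illustration, here is a hedged sketch, not from the webinar: it assumes the Windows security event stream is enabled and uses the standard SecurityEvent table, with event ID 4625 for failed logons. Once Windows events flow in through the agent, you can query them directly.

```
// Hedged example: count failed logons (event ID 4625) per computer and
// account over the last 24 hours from agent-collected Windows events.
SecurityEvent
| where TimeGenerated > ago(24h)
| where EventID == 4625
| summarize FailedLogons = count() by Computer, Account
| order by FailedLogons desc
```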
Another note on the Log Analytics agent: Microsoft has used different names for it over the years. If you know the term OMS agent or MMA, it's the same agent. Moreover, you need just one agent for all the different Microsoft services that work off this agent. It's not a new agent, and it has been GA for a while, longer than Sentinel, so it's stable and production ready. If you already have it on your systems, then collecting into Sentinel is just a matter of turning that on on the Sentinel side.
Deployment: on Windows and Linux it's very easy to add it to your automated install. On Azure, for most Azure VMs, it's there by default, or at least it will be available to you through the virtual machines interface, so when you press connect you essentially make sure that the agent is available there. It provides central management, so once you install the agent you can manage many aspects of it from a central location, and it already provides proxy support, so if your virtual machines are not connected directly to the internet, a proxy can sit between them and the internet. And the central management covers more than just sending events.
Moving beyond the agent, let's talk about CEF and syslog collection, which is a core capability that's used for a lot; it's sort of the basics of security information and event management, of SIEM. Let's look at what our collector is. Our collector is a VM or a host that you can deploy; the one I'll show here is the on-prem option. Essentially, inside it has two services, or two daemons. One of them is a standard syslog server, the one you get when you install Linux; the other is the same Log Analytics agent which I described just a minute ago. Together on one box, this is the collector VM. The Log Analytics agent is the piece that sends events upwards to the Sentinel workspace in the cloud; this is done over HTTPS, so it's encrypted. The syslog daemon is the element collecting, or listening to, the information from the syslog sources. Together on the single box, they form the collector VM. Between them, they communicate using syslog as well, over TCP. I mention the two ports because that's how CEF and syslog are told apart: if the syslog daemon sends data over port 25224 to the agent, it is considered syslog information; if over 25226, it's considered CEF information.
The same collector, as I mentioned, can also be deployed in the cloud. The technology is exactly the same; the difference is that now you send syslog over TLS over the internet, rather than a RESTful API over TLS over the internet. It's very good when you don't have a footprint for a VM in your on-prem location, but it does require making sure that the leg between the syslog source and the collector VM, which is syslog, is well protected, and that might not be trivial.
How do you deploy this collector? What you do, and these steps are in the documentation in way more detail, is deploy a Linux VM of your choice. If on Azure, because you want to do remote collection or because you collect information from Azure resources, from Azure VNets, it can be any Azure Linux VM; if on-prem, whatever Linux VM you like. There is a set of requirements set out in the documentation, but it covers most Linux flavors as well as, as I mentioned, the two popular syslog daemons. Once you've done that, all you need to do is run a deployment script; it's linked from the connector page in Azure Sentinel. What the deployment script does is install the Log Analytics agent, configure it, and configure your syslog daemon. So even though the syslog daemon was there as part of your installation, it's very much part of what we configure for you. Also, later on, and I will not get into the details today, through the Sentinel UI you can configure this collector VM, and you configure both things which are within the agent and things which are within the daemon; they work as a single system to make sure that you get your CEF or syslog events.
An important element to stress is that, as the diagram shows, you can use a single CEF collector machine for any number of sources. We will talk about scaling and sizing in the next slide, but as a starting point, if you have multiple source machines, you don't need to create a VM to collect from each one of them. Even if you have multiple log types, say Cisco firewalls and Palo Alto firewalls, you don't need a separate collector for each; you can use a single collector machine to collect from both, for any source, up to a capacity to be discussed in a minute.
You can also use more advanced configuration, which is outside the scope of the deployment script and the central management, by manually modifying configuration files, mostly for the syslog server. That will enable you to do three additional things, probably more, since the abilities of the syslog server are pretty significant, but three are very important. First, if you do want to use the same collector to collect both syslog and CEF events, that is, sources that send CEF and sources that send plain syslog, you'll need to make some configuration changes on the collector box to enable that. You can also filter events, which might be very important; you really don't want everything to go into Sentinel. Mostly by using the filtering capabilities of the syslog server, rsyslog or syslog-ng, you can filter events and make sure that you don't ingest too many events you don't need. And lastly, you'll have to go into direct daemon configuration to use TLS, by following the guidelines for supporting TLS with the relevant syslog server, rsyslog or syslog-ng.
The deployment procedure outlined before was about deploying a CEF collector. A syslog collector, as I mentioned above, is nearly the same. The procedure outlined in our documentation is a bit more cumbersome; we did not create the same deployment script that we have for CEF for syslog as well. But you can, and the details of that are probably beyond a webinar, use the same procedure as for CEF and then modify the CEF connector to do syslog; it should be a small change.
Scaling and sizing: our CEF or syslog collector is pretty capable. We scale to as much as 10K EPS using very cheap hardware; actually, the one we tested is a specific Azure VM which costs around 20 bucks per month, at least in a US data center. The reason we can do that is that the processing itself is not done on the collector box, it's done in the cloud, so the collector box is essentially just a proxy: it just receives data in syslog and transfers it using a RESTful API to the cloud. As I mentioned, you can use the same collector for a lot of sources, and 10K EPS gives you a lot of slack for that. One question that is often asked is how do I cluster. Clustering would enable you to go beyond 10K EPS, so more bandwidth if you need it, and it also provides a level of failover.
There are a few ways to do that; none of them are based on Microsoft technology, I am suggesting things that you do using standard open source technology. A simple way to do clustering would be static distribution: deciding that a certain type of events goes to a specific collector. That's of course easy, but it doesn't meet all the requirements you may have. One option that you have, which I feel might be the best today, is HAProxy. It's a very common open source load balancer that exists today in your Linux distribution; it's simple, it's free. It provides clustering, so it enables you to use multiple collectors without caring about static distribution, and it also enables failover, so you can have two HAProxies sharing the same IP address, and if one fails, the other one captures the IP address and starts receiving the event data. One thing to note about HAProxy is that it supports only TCP proxying, not UDP, so you need to make sure that all your syslog sources are capable of sending syslog over TCP. An alternative to HAProxy would be a commercial load balancer, if you have one; it has a cost, but it does roughly the same.
I discussed CEF and syslog at length, and I sometimes differentiated between them and sometimes just said CEF or syslog. It's worth noting the difference and what each one is; it's important for the discussion. Syslog is the most common transport for sending and managing events, originally for Linux and later on in networking and security systems. CEF, the Common Event Format, is a structure over syslog, so it essentially standardizes the payload, and it is commonly used in the security world. It was developed by ArcSight around 20 years ago, and as you can see below, it provides a key-value format that carries the CEF data. The main distinction is that CEF is by definition pre-parsed: it has a very structured way of presenting field names and values. When we get information in CEF, it is stored internally in Sentinel in the CEF table, and it's stored using field names, so it's indexed as it's ingested. When we get syslog data, on the other hand, we don't know what the structure is, and we store it with minimal parsing: the few headers available with syslog will be parsed, but the main free-text area is stored as the syslog message. That doesn't mean it's lost; Sentinel allows you to parse on the fly as data is queried, so the fact that it was not parsed on ingest is not a blocker. But it does mean that a parser is required, whereas if the data is already available in a known format, as it is with CEF, it's always parsed.
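For example, here is a hedged KQL sketch, not from the webinar: CommonSecurityLog is the table Sentinel uses for CEF, and the vendor and action values are just illustrations. CEF data can be filtered on its named fields straight away, with no parser needed.

```
// Hedged example: CEF arrives pre-parsed, so named columns such as
// DeviceVendor, DeviceAction and DestinationIP are available directly.
CommonSecurityLog
| where TimeGenerated > ago(1d)
| where DeviceVendor == "Palo Alto Networks" and DeviceAction == "deny"
| summarize Denied = count() by DestinationIP
| top 20 by Denied
```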
This is an example, and since we are not getting into parsers today, I'll just present the phases of how you can parse a known syslog stream into fields and make it into well understood and usable data. If you are not accustomed to query-time parsing, which is a newer paradigm within the SIEM world, think about it as creating views: you create a function, the function behaves as a view or a virtual table, and when you query, when you search, next time you use the view rather than the original table, and with it you get all the parsed fields as if they were real fields.
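A minimal sketch of what such a query-time parser could look like follows; the sshd message pattern and the output column names are illustrative assumptions, not a shipped parser. Saving the query as a function, say SshFailedLogons, is what turns it into the "view" described above.

```
// Hedged sketch: parse failed SSH logons out of the raw syslog message
// at query time, then save the whole query as a function ("view").
Syslog
| where ProcessName == "sshd" and SyslogMessage has "Failed password"
| parse SyslogMessage with * "Failed password for " TargetUser " from " SourceIP " port " SourcePort " ssh2"
| project TimeGenerated, Computer, TargetUser, SourceIP, SourcePort
```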
I did mention before that we collect remotely not just syslog or CEF; we do roughly the same for Windows Event Forwarding. This feature is in private preview; you need to apply in order to use it. The reason is that this collector does not yet meet the performance targets we are aiming for, unlike the CEF connector, which we feel is capable of providing the value you need. So when we add you to the private preview, we will discuss your throughput requirements with you. The structure is roughly the same, it's quite similar: we have the Log Analytics agent, the Windows version, installed on a Windows VM; this server serves as a Windows Event Forwarding collector, or WEC, and the Log Analytics agent knows how to forward the events to Azure Sentinel.
Logstash: we discussed a lot about the agent, either as a direct agent installed on a server or an endpoint, or combined with a WEC or syslog server to form a connector. We are fully aware that you might be even more comfortable with Logstash, which is a very common way to collect events, and therefore we allow you to use Logstash as the collector. As you can see, if you are into Logstash, on the right side we have an output plugin that can talk with Log Analytics and Azure Sentinel and can send the data to us. So if you have a Logstash infrastructure today, you can add sending events to Sentinel very, very easily. One advantage it has, and sometimes your only choice today, would be more input sources: if you want to read files, or databases, or queues such as Kafka, using Logstash would be the preferable way today. Also, if you want to do event processing, it can be done with the agent using Fluentd capabilities, but more people know how to use filters and groks and so on with Logstash in order to process events; an example would be some pre-parsing, or obfuscating a field you don't want to get to the cloud without being hidden first.
Lastly, custom connectors. You can, by the way, create a custom connector using Logstash; I described its event transformation capabilities as an option here, and there are other options. Custom connectors are used to get events from sources that we don't support in any other way, but they're also very commonly used to get enrichment data: you want to get GeoIP information or custom threat intelligence. I described last week our built-in support for threat intelligence such as MISP, for STIX and TAXII, but if you have your own custom intelligence, you may want to build a connector to get it in, or asset information; all of those can be collected using a custom connector. How do you go about creating a custom connector? PowerShell would be one way; all the methods use the API, so behind the scenes it's all the API, and we have a PowerShell module that knows how to send information to Azure Sentinel. It's probably your easiest way to automate, or to do a one-time send of a file into Azure Sentinel. Keep in mind that since Azure Sentinel supports late parsing, at query time, it doesn't really matter what the format is; we will support it. If it's CSV or JSON or XML, the parsing is built in; for anything else, we saw earlier how to create a parser. You can also use Logic Apps, which is, as you may recall if you've been to last week's webinar, our automation engine. When we use it as an automation engine, we trigger it off an alert; when you use it to import information as a custom connector, it may be either scheduled to run every few minutes or hours, or whatever makes sense for the source, or triggered off an HTTP request, so if the source knows how to trigger sending events, it can trigger the Logic Apps automation. It has the capability to read files or databases or APIs, so it's an alternative to Logstash for reading all of those, and while Logic Apps is a cloud creature, it has an on-premises gateway, so you can use it to create a custom connector to collect data from on-prem as well. And of course, you can use the API directly to create your custom connector if the need arises and you're comfortable with that. Keep in mind that direct API use and programming doesn't mean you need a server; all of this is supported in Azure Functions, so you can use serverless capabilities.
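As a small illustration of that last point, here is a hedged KQL sketch: the table and field names are hypothetical, but the _CL suffix and the typed field suffixes are the standard convention for custom logs sent through the ingestion API. Data ingested by a custom connector lands in a custom table and can be shaped at query time.

```
// Hedged example: a hypothetical custom table created by a custom
// connector; custom tables get a _CL suffix and typed field suffixes.
MyThreatFeed_CL
| where TimeGenerated > ago(7d)
| project TimeGenerated, Indicator = indicator_s, Confidence = confidence_d
| where Confidence >= 70
```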
So, summarizing our on-prem collection capabilities, and I'll actually go back to the summary slide again... no, how do I get my slides... never mind, I won't go back, I'll summarize over this slide. After discussing cloud collection, where most of it is very easy, go to the documentation, I discussed at length our on-prem collection capabilities. I mentioned installing agents on servers, which with the cloud becomes more and more the preferred way to do things, because it ensures that your servers are monitored well even when they auto-scale and are more temporal. But we also have remote collection capabilities, such as using a CEF or syslog collector, which is based on the agent plus a syslog server, or WEF, Windows Event Forwarding collection. As an alternative, I suggested Logstash, which we support and which enables you to use technology you may know better to do additional work, or to reuse existing collection capabilities. Lastly, I discussed custom connectors, which are, if nothing else works, your way to make sure the data gets in. And to summarize, keep in mind that not having a built-in connector is rarely a blocker, because you don't have to pre-parse the data, which in the traditional SIEM world is a major blocker; just get it in, and the change to support it is creating a parser on the fly, which can be done after the fact, once the data is already in.
With that said, I'll move from collection into our cloud architecture. It's a very different topic; it's sort of moving into another world. It has some touch points with collection, but not a lot. If you're a SIEM person, you might need an introduction to Azure to understand the rest. I didn't share my background: I am a SIEM guy, I've spent many years on the dark side, starting as a researcher, then a product manager focusing on products, and then I was a director for EMEA, working with all these customers in Europe, making sure that hopefully they're successful with their SIEM. So I know a lot about SIEM and syslog; I know the bits and bytes. Since joining Microsoft last year, I had to go through a journey of learning Azure, because it's a cloud world, and you need to understand the cloud not just in order to run Sentinel, but in order to monitor the cloud; to start with, you need to learn the cloud. I'll give you an example, sort of not directly related here: if you're into SIEM, you know that the asset model is a very important part of a SIEM, and it's about IP addresses and host names and subnets, and it's all hardly relevant to the cloud. In a world where computing is either serverless or temporal, the way you define your assets and group them is very different, and while it will differ between the cloud vendors, resource groups will be more important than subnets. So keep that in mind: you need to understand Azure, not just to run Sentinel, but in order to protect Azure, and you need to understand the other clouds in order to protect them, when you try to do that for them as well. So let's start.
Again, this is not a comprehensive introduction to Microsoft Azure; it's here to describe how we manage a cloud estate. The first concept to understand would be the Microsoft tenant. A tenant is an authentication and identity management domain; it is actually the same as an Azure AD instance. Usually a tenant corresponds to an email suffix, so everybody at microsoft.com has the same email address suffix because we belong to the same tenant. It doesn't mean that all of Microsoft's resources belong to the same tenant; it means that all the mailboxes do, so Office 365 is within the same tenant. Within the tenant, which is the identity management space, we have something we refer to as subscriptions. Some of the subscriptions represent SaaS solutions: Office 365 would be a subscription within a tenant, Dynamics, Microsoft's CRM solution, would be another one, and Intune, while not quite the same kind of solution, also fits within this model. Those will usually be one-to-one, so there is one Office 365 subscription within a tenant. On the Azure cloud platform, you can have a lot of subscriptions. Subscriptions are the payment level for Azure: to have a subscription, you need either an enterprise agreement with Microsoft, or a credit card, or a free subscription; there are a few options available around that. So that's the level that has a paying owner.
Subscriptions include resources, as you see at the bottom. Resources are the things you manage; I just mentioned them when I talked about an asset model. Resources might be VMs, the simplest resources; they might be network elements in your VNet; they might be PaaS services, whether around IT infrastructure, so a firewall, or a PaaS database, and so on. All of those are resources. Resources are managed in resource groups, which are an interim level between a subscription and resources. They are softer than a subscription because they are not attached to money, and money is always a very hard boundary, but they enable grouping resources and managing them together, in a way, as folders of resources. One area where resource groups are very useful is permission management. Permission is inherited in this model: you assign permissions at the subscription level, you can modify them at the resource group level, and again at the resource level, very much as you would with Windows file shares, for example. I am ignoring for now the management group level; it's another level of abstraction, used to manage multiple subscriptions together. It's not heavily used with Sentinel, and this is not a generic Azure discussion; we want to discuss Azure Sentinel.
Azure Sentinel is a resource, a resource within Azure. You can think of it as a database, or, as we call it, a workspace, where everything to do with a single Sentinel deployment is consolidated. But it's managed in Azure terms as a resource, so it's part of a resource group, it has permissions, and it is naturally part of a subscription, which is whoever pays for it.
One important aspect we'll have to discuss when we talk about Azure and about Azure Sentinel is regions and geos. Azure is deployed in data centers; those data centers are mostly transparent to all of you. They are grouped into regions, which are sets of one or a bit more, one to three as I recall, data centers that are very close to each other and that provide redundancy internally. The map you see here is a map of Azure regions. You can deploy Sentinel essentially in any Azure region that supports Log Analytics. There are exceptions; I probably should not have had this slide without listing the exceptions specifically, because it tends to be a bit misleading. We are still not available in what we call the national clouds, so China or US government clouds, and there are a few other regions which are still missing. Naturally, if Log Analytics is not supported somewhere (Log Analytics, if you've been to my last webinar, is our infrastructure layer), we aren't either, but with a few exceptions, wherever Log Analytics is supported, we are supported. At this point in time we do not commit to the data staying just in the region. We're working on a plan to have better support for that, but at this point in time we can only guarantee that EU data doesn't leave the EU, and American, I should say US, data does not leave the US. Data from the rest of the world is stored in the US. We understand that's a limitation, that in many countries you'll want the data not to leave your specific country, and we are working on a plan to provide that, but we are not there yet.
Also, in the EU, if you select a workspace in any EU region, data will still be shared with West Europe, which is physically Amsterdam. And when I say data, as you can see in the slide, this is data at rest; data in motion is a much bigger challenge, especially since we collect data. We found that it's pretty hard to ensure, if data flows between, say, UK West, if you're stored there, and West Europe, which is Amsterdam, exactly what the routes are and whether data in motion ever leaves the EU or not. It's something we are working to investigate; it's a general Azure question which we are looking into. Also keep in mind that we have a lot of connectors collecting from external services, and that makes the question of where data goes before it reaches Sentinel even more complex. So our commitment, as I described it, is more about data at rest: as I mentioned, in the US it stays in the US, in the EU it stays in the EU but may not stay in your country, and for the rest of the world it goes to the US today.
So that was an Azure introduction, with a bit of discussion of Sentinel around regions. I mentioned that the main Azure Sentinel entity or resource is the workspace. The workspace is a container; it includes most of what you think of as Azure Sentinel. Keep in mind, as I mentioned before, it's not a physical disk, so it may not imply that it's in only one place; as I said, in Europe part of it will be in Amsterdam even if you place it elsewhere. But what belongs to it, what's part of it: the event database, rules, incidents, a lot of data. Some other elements that we use as part of Sentinel are standalone Azure entities or resources and can be managed separately, in terms of permissions, for example, or deleting, and so on. Two examples of those would be playbooks, which are used for automation, and workbooks, which we use for dashboards. As I mentioned, our workspace is the same workspace that you may have been using with Log Analytics; in practice, Sentinel is a solution on top of Log Analytics, so any Sentinel workspace is basically a Log Analytics workspace with the security insights solution installed. It does mean that whenever something is possible for Log Analytics, we are upward compatible and it's possible for Azure Sentinel. That's important: sometimes you'll go through documentation and see that something can be done with Log Analytics and ask, but what about Sentinel? It's the same workspace. What we do find, and the core of the rest of the discussion will be around that, is that customers many times cannot have a single workspace, for reasons we'll discuss in a moment. The rest of the discussion will be around why you may need multiple workspaces, how many you need, and how to manage multiple workspaces as a single Sentinel entity.
Why multiple workspaces? If you have multiple Azure tenants, you will probably need a Sentinel workspace for each. The reason is that the Azure monitoring framework is very tenant centric, so most sources only know how to send information within the tenant. There is a good reason for that: a tenant is an authentication domain, and cross-tenant authentication may be challenging and open to exploits, so that nearly mandates creating multiple workspaces. Another reason would be if you operate in multiple regions. There are two reasons that may lead you to separate workspaces if you have multiple regions. One is compliance: you work across the ocean, you want the data in the EU to stay in the EU and the data in the US to stay in the US, and therefore you have two workspaces, one created in an EU data center and one in a US data center. It also reduces, quite literally, cost and latency, because if your sources, whether on-prem, in an Azure data center, or even in an AWS data center, are closer to some place, it implies lower latency and paying less in cost if you have the workspace sitting close by. A last reason, which is somewhere between strong and soft, would be subsidiaries. We have solutions, and we'll talk about them later, for keeping data for multiple subsidiaries in a single workspace, but sometimes an organization would prefer to have a workspace per subsidiary. There are other reasons that would drive you to have multiple workspaces, but we think they are avoidable, and those workspaces may need to be consolidated. One is separate billing: a workspace is part of a subscription, and the subscription is the payment unit, which sometimes leads people to want separate workspaces in different subscriptions. We do not think this is best practice; it creates too many workspaces, which, even given the tools I'll present later on to manage them, might become just too much.
Next, there is fine-grained retention setting and fine-grained access control. In both cases it's sort of a legacy reason: until probably sometime last year, retention settings and access control were per workspace, and therefore you would separate workspaces to provide access to different people or to set different retention settings. We've introduced a number of capabilities around retention settings and access control from within a workspace that have made those reasons much less important. In the legacy architecture of Log Analytics, which has been around for a while, the initial guidance was more workspaces, and with the introduction of fine-grained control the tendency is to try to conserve workspaces. So you may come to an organization, or your own organization may have, a large number of workspaces that were created for different reasons.
So, as a rule of thumb, how many and how much: I would urge you to use one workspace for each tenant you have to monitor, for each Azure region, and for each subsidiary. That can still be a considerably large matrix, but not as large as we sometimes see today, where every subscription has a workspace. Also, if you don't have compliance reasons, you may want to reduce further. Depending on the size of your organization, and I would say how complex the organization is, this would imply anything between, well, one in the best case if you are a reasonably sized organization, but it can still easily get to fifteen or twenty workspaces. So the next step is how to implement Sentinel across workspaces.
The first step would be to consolidate workspaces by getting rid of the legacy reasons. Next, I'll mention Azure Lighthouse, which you'll have to implement to provide cross-tenant access; it's not needed if all your workspaces are within a single tenant. Then we'll discuss how to work across workspaces with Sentinel, talking about queries, workbooks, configuration, and access control. On consolidating workspaces, the important thing to keep in mind is that, given how monitoring works, just moving sources to central workspaces is usually costless and easy; it's just a procedure in the organization. Taking the pipe from one pool and putting it in another pool will not cost you any differently and will not affect your operations in any meaningful way, so it just has to be done. A special case I want to mention is Azure Security Center: historically, Azure Security Center is often used with default workspaces, which it created per subscription. That is not needed in any way, and you can use the data collection screen in Azure Security Center to point all the managed servers to send their information to a specifically selected workspace.
Azure Lighthouse is a new technology that Microsoft introduced this summer, and it enables cross-tenant access. Behind the scenes it's a user management scheme, but we support it, and once you implement Lighthouse, you can see on the right, if you know Azure, that I now have access to two tenants at the same time. It means that wherever we do search, the logs screen in Sentinel, workbooks, and hunting, it can be done across workspaces. As a tip, the query has to take the multiple workspaces into consideration, and an easy way around that is what I mentioned before: a view. The construct of a view for parsing, which we call functions in Sentinel, can also be used for multiple workspaces, so you can create a function that represents any schema that Sentinel uses, but is built to run across all your workspaces, across tenants. So any search, any workbook or dashboard, or any hunting activity can be done across workspaces, within the same tenant or across tenants.
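A minimal sketch of such a cross-workspace query follows; the workspace names are placeholders, while workspace() and union are the standard KQL constructs for this. Here it checks agent heartbeats across two workspaces, and the same query could be saved as a function and reused in workbooks or hunting queries.

```
// Hedged example: query the Heartbeat table in two workspaces at once
// to see when each machine last reported, regardless of workspace.
union
    workspace("contoso-emea-sentinel").Heartbeat,
    workspace("contoso-us-sentinel").Heartbeat
| summarize LastSeen = max(TimeGenerated) by Computer
| order by LastSeen asc
```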
I did also mention cross-workspace workbooks, so all your monitoring functions can be extended across multiple workspaces. As I mentioned, as of today this probably requires work: out of the box, dashboards work with the current workspace, and you have to go and modify them. This would mean creating a function that replaces each schema with your extended cross-workspace source, and then going to each of the workbooks and changing the queries to match. Some things you may want to look into are cross-workspace alert monitoring and connector status, things that we are starting to provide in a single workspace and that have to be extended across workspaces.
One thing I do want to mention, and I'll hint at the roadmap even though it's a public discussion: there is one very important feature within Sentinel which is not a workbook, the incidents screen. It's the central screen that an analyst uses; it can be replaced by a workbook, but it has functionality you would want to use. We are working to have a version of the incidents screen that works across workspaces and tenants.
Another challenge that multiple workspaces present is cross-workspace management. We are in the cloud world, we are very much API driven. If you know Azure, you know that ARM is our way to provide DevOps capabilities at scale, so ARM templates are our way to create any resource in Azure. Between APIs and ARM templates, we provide you all the tools to automate, to replicate content and configuration across workspaces: take it from one workspace, the master one if you will, and replicate it to all the others. This works, for example, for alert rules and hunting queries, and thanks to one of our partners, Wortell, who created AzSentinel, a PowerShell module that does it for you without you having to understand the API. If you have playbooks and workbooks that are specific to a workspace, you will be able to deploy them using ARM, and it carries over to more granular things as well, such as permissions. To sum it up, we don't provide a UI-based option to manage across workspaces today, but we do provide all the ways to automate the replication of configuration across workspaces, which we find in the cloud world is usually the norm, as it has to be part of a DevOps cycle.
Lastly, an interesting and important aspect. One of the reasons for multiple workspaces, as I mentioned, used to be access control, because you wanted the original resource owners to have access to their data. It's most notable in the cloud world: you have people managing a Windows server farm, and you take the data from them, but they also want access to their Windows server logs. As an alternative to using multiple workspaces, we have introduced data-level access control, and it has two variants that might be useful to you. I'll start with the second one; it's more powerful, but it has its limits: resource-centric RBAC. Resource-centric RBAC enables you to allow specific IT owners access to just the data collected for their own resources. They don't have access to your SIEM, to Sentinel, but rather go through the resource screen itself: in the resource screen they have a logs tab, and in the logs tab they can search through their data. That carries over to resource groups as well; it doesn't have to be a single resource. So if they own the resource group, they can search through all the data that we have for that resource group. It's a very powerful feature; until recently it was available only for Azure resources. Last week at Ignite we introduced a technology called Azure Arc. Azure Arc enables you to extend these management features to on-prem servers or servers in other clouds: now any server can be onboarded as an Azure resource and therefore be managed from Azure, as well as have access control to its data for the resource owners. So if you now have VMs on-prem, the same capability applies to them: you can give their owners access to their local data without providing access to any other data or to the SIEM itself. The second option, table-level RBAC, enables you to control access to specific types of data within Sentinel. A good use case would be to limit access to Office activity logs: Office activity tends to be more sensitive than firewall logs, and you may want to say that only a fraction of the users should have access to this data.
Trying to summarize the discussion around Azure Sentinel cloud architecture: Sentinel is a workspace; the basic entity or resource that forms Sentinel is the workspace. It lives within a region in Azure, and while you would want to reduce the number of workspaces, because it's simply always harder to manage more and more resources, there are factors that will drive you to have multiple workspaces even within a single organization. Those would be geographies, subsidiaries, or tenants. You may find yourself in a situation where, because you have a few tenants, a few geographies, and a few subsidiaries, you get to a substantial number of Azure Sentinel workspaces, whether 5 or 15; the number will be very specific to your organization. What I provided you today was, first, the best practices: what you should split workspaces on and what you should probably avoid; then the more advanced capabilities for making do with a single workspace; as well as a set of steps for managing a large set of workspaces, or any set of workspaces, as a single Azure Sentinel entity. I focused on using Lighthouse to enable cross-workspace query, search, workbooks, and hunting; I mentioned that we will also develop a cross-workspace incidents screen, which is not available today; and then I mentioned how to centrally manage the content for those workspaces, and lastly how to manage data access in order to still provide, with fewer workspaces, access to the data for resource owners.
Before I move to questions, I want to remind you again that this is a starting point; actually it's the second iteration, we discussed the basic functionality of Sentinel last week, and there's more that we'll share in upcoming webinars. There's some time until the next webinar, it's after the holidays; meanwhile, we do urge you to join the community, we try to answer there, it's a very active community. Also follow our tech blogs; we publish multiple times a week, a lot of new content, and it's not just us, we have a growing community. Keeping to the other links on this slide, I talked about the community, and you'll see, well, I've been working with SIEM for a while, and the rate at which we're building our community, and Ryan, thanks to your team for making that happen, is very fast, and you see meetups and blogs by others all the time, and we'd love to support you. So blogging is up to you, but we'd love it if you want to hold a meetup, if you want people to hear about Sentinel, don't hesitate to contact us and we'll find the right way to support you on that. Thank you all. Ryan, do we have questions?

Ryan: Yes. I want to remind people that now is the chance to submit additional questions. We've been answering them in the IM window all along; any of the ones submitted from now on we'll be reading out. So if you have a question that's maybe more easily explained by a quick demo than by something we can type in a response, now is a great time for that question.

Ofer: And we can get to it. I need to go into demo mode; where is my browser, you put me on the spot for that. Anyways, all right.
Ryan: So one of the questions we had come in is: how does Azure Sentinel monitor SaaS and PaaS services?

Ofer: A great question, and a first chance for a demo, because you asked for that. I did discuss at length how we collect from on-prem, because there are a lot of subtleties there, a lot of details, and in a way I sort of jumped through SaaS and PaaS services because it's easy, but it's worthwhile. So for example, let's take Azure Active Directory. It's somewhere between PaaS and SaaS, it's very fundamental, but it's a good starting point. I don't have permissions on this instance, but you can see that all you need to do here is just to connect, and that's all; because we have the connector, behind the scenes we are in charge of everything, you have to do nothing.
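Just to illustrate what you get once that connector is on, here is a hedged KQL sketch, not shown in the webinar: SigninLogs is the standard Azure AD sign-in table, and treating ResultType "0" as success is an assumption worth verifying against the documentation.

```
// Hedged example: failed Azure AD sign-ins by user over the last day,
// available once the Azure Active Directory connector is enabled.
SigninLogs
| where TimeGenerated > ago(1d)
| where ResultType != "0"
| summarize FailedSignIns = count() by UserPrincipalName, AppDisplayName
| top 20 by FailedSignIns
```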
do nothing let's move to something a bit
more complex well promise will be the
wrong way let's look into Microsoft 12 I
tell you that I'm pretty fond of this
one because it uses a rule set that I
started developing as an open source
project 15 years ago I'm not that young
but let's go to the connector page you
see that this connector page is actually
very typical of what you can do with
most a path services Microsoft you do
nothing here what you do here is you
open the application get a resource and
you tell it to send data to us that's
the instruction here there's nothing you
actually do here the only thing we give
you here is monitoring on the left side
whether we actually go to events from
there or not so that will be a second
The last one I'll show, talking about PaaS and SaaS, is Amazon Web Services. That's not within Microsoft anymore, it's a different cloud, but it's not much more complicated; there isn't a lot more to it. The main thing here is that you need to provide a few values. Interestingly enough, the AWS best practice is for you to create a role that allows Microsoft's account to read the data for you; you provide the ARN of the role you created there, and together with those details we will collect the data for you. So yes, it's a bit of magic, but it's much simpler, which is why I didn't go into details here. And let us know if your experience is different.
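To make that AWS step a little more concrete, here is a minimal sketch of creating the cross-account role the connector asks for, using boto3. This is not an official script: the Microsoft account ID, the external ID, the role name and the managed policy name are all assumptions or placeholders, and the real values to use are shown on the connector page itself.

```python
# Minimal sketch (assumptions marked): create the cross-account IAM role the
# AWS connector asks for, then print the Role ARN you paste back into Sentinel.
import json
import boto3

MICROSOFT_ACCOUNT_ID = "111111111111"        # placeholder, copy from the connector page
EXTERNAL_ID = "<your-workspace-id>"          # placeholder, typically shown on the connector page
ROLE_NAME = "AzureSentinelCloudTrailReader"  # any name you like

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{MICROSOFT_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
    }],
}

iam = boto3.client("iam")
role = iam.create_role(
    RoleName=ROLE_NAME,
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Lets Azure Sentinel read CloudTrail data from this account",
)

# Grant read access to CloudTrail. The managed policy name below is an
# assumption; attach whatever read-only policy your organization prefers.
iam.attach_role_policy(
    RoleName=ROLE_NAME,
    PolicyArn="arn:aws:iam::aws:policy/AWSCloudTrailReadOnlyAccess",
)

print("Role ARN to paste into the connector page:", role["Role"]["Arn"])
```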
Ryan: Great, thank you. We've got a question here: for Windows machines that are not managed by AD, how would someone deploy an agent on those? Is AD a prerequisite, or do we have a solution for that?
Ofer: As I said, there's no requirement to be AD-connected to install the agent. When something is not AD-managed there is always the question of how you manage its software lifecycle in the first place; how do you install an antivirus update, for that matter? The agent itself works easily and doesn't require any AD connection on any Microsoft Windows machine. How you actually go about installing it depends on your alternative method of software distribution, given that you're not using GPO.
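As an illustration only, here is a minimal sketch of an unattended agent install on a standalone Windows machine, wrapped in Python so it can be pushed by whatever distribution tool you already use. The installer file name and property names follow the documented command-line install of the Log Analytics agent, but treat them as assumptions and check them against the docs for your agent version.

```python
# Minimal sketch: silent install of the Log Analytics (MMA) agent on a
# non-domain-joined Windows machine, pointing it at a workspace.
import subprocess

WORKSPACE_ID = "<workspace-id>"    # from the workspace's agents management page
WORKSPACE_KEY = "<workspace-key>"  # primary or secondary key

# 1. Extract the downloaded installer bundle to a working folder.
subprocess.run(["MMASetup-AMD64.exe", "/c", "/t:C:\\MMA"], check=True)

# 2. Run the extracted setup unattended, passing the workspace details.
subprocess.run(
    [
        "C:\\MMA\\setup.exe", "/qn", "NOAPM=1",
        "ADD_OPINSIGHTS_WORKSPACE=1",
        f"OPINSIGHTS_WORKSPACE_ID={WORKSPACE_ID}",
        f"OPINSIGHTS_WORKSPACE_KEY={WORKSPACE_KEY}",
        "AcceptEndUserLicenseAgreement=1",
    ],
    check=True,
)
```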
Ryan: Okay, got it. What guidance do you have for people trying to use the API? Do we have any public documentation on that, or should people just be using the ARM API? How do they access it?
Ofer: API is a large word, and I mentioned APIs in many different aspects of this presentation. I think the two main ones are the API used for custom collection, that is data ingestion, and the API used for central management. The ingestion API is fully documented and heavily used. The configuration management API is partly still in public preview, so it might not be as documented as I would like it to be; for all of it there are API definition files, but sometimes not the level of documentation you would want. Between the Sentinel API and ARM: Azure Sentinel's own services are usually controlled by the Sentinel API, while the pieces we use from the larger Azure ecosystem, such as workbooks or playbooks, are managed using ARM, which usually also implies better documentation, since they are less new than Sentinel itself.
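To make the ingestion side concrete, here is a minimal sketch of posting a custom JSON record to the Log Analytics HTTP Data Collector API, which is the documented pattern behind custom data ingest. The table name and record fields are illustrative; the workspace ID and shared key come from the workspace's agents management page.

```python
# Minimal sketch: send one custom record to the HTTP Data Collector API.
# It will show up in Sentinel as a custom table (here DemoEvents_CL).
import base64
import datetime
import hashlib
import hmac
import json

import requests

WORKSPACE_ID = "<workspace-id>"
SHARED_KEY = "<workspace-shared-key>"
LOG_TYPE = "DemoEvents"  # exposed in the workspace as DemoEvents_CL


def build_signature(date: str, content_length: int) -> str:
    """Build the SharedKey authorization header the API requires."""
    string_to_hash = (
        f"POST\n{content_length}\napplication/json\n"
        f"x-ms-date:{date}\n/api/logs"
    )
    decoded_key = base64.b64decode(SHARED_KEY)
    digest = hmac.new(decoded_key, string_to_hash.encode("utf-8"),
                      digestmod=hashlib.sha256).digest()
    return f"SharedKey {WORKSPACE_ID}:{base64.b64encode(digest).decode()}"


body = json.dumps([{"Computer": "standalone-01", "EventText": "hello sentinel"}])
rfc1123_date = datetime.datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S GMT")

response = requests.post(
    f"https://{WORKSPACE_ID}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01",
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": build_signature(rfc1123_date, len(body)),
        "Log-Type": LOG_TYPE,
        "x-ms-date": rfc1123_date,
    },
)
response.raise_for_status()
```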
Ryan: Got it, thank you. We've got a question here: can we also include Intune-related logs in Sentinel?
Ofer: Yes, and it's a good opportunity to go into our community blog and mention a blog post that I listed somewhere there. By the way, talking about the blog, you'll see that we published on the 20th, the 19th and the 18th, skipping only the weekend, so it's a very active blog; that's why I keep showing it. Thank you to all the blog contributors. If I take the connectors label and filter just on that, I did mention that we have a blog post about collecting logs from Microsoft services, because many of those know how to send to us and you just need the instructions on where and how to configure that. On this page you'll find Intune down there, and it will lead you to the page in the Intune documentation that tells you how to send the information to us. You'll see Intune is not an Azure service, but still, all of our services that you would otherwise want to connect are listed there as well: Key Vault, Front Door, which is pretty new, and so on.
Ryan: Okay, great. Another question here: is it best practice to have different syslog and CEF Linux VMs on-premises, instead of a combined one? For example, we have many Cisco devices sending in syslog and many Palo Alto firewalls sending in CEF.
Ofer: I'll put it this way: today it would be easier for you to maintain two VMs, because you have to manually change the configuration on the collector in order to support both on the same box. It's doable, it's supported, but it adds a small amount of complexity. For me it's easy, so I'm not a fair judge; if I have to pick one, I'd say that if your organization is very structured and careful, just follow what's in the documentation and keep them separate.
Ryan: Okay, great. What are the activities and data types collected by Azure Sentinel once I connect a source to Sentinel? Is there any specific keyword I should query in the log search?
Ofer: I'll take the last part first. Even though it's not exactly what was asked, it's important to know that Sentinel does support free-text search: you can actually use the search operator to do a cross-source, cross-field, freeform search. It's not everything you asked for, but it's worth knowing that you don't need the schema to get started; if you have an IP address you want to look for, or a username you want to look for, you can do a search and find them across sources. Secondly, the question is a good one, and it's about documenting the schema of the events that you may get. We are working on that; there is a link, which I don't remember by heart, to the GitHub repository where we collect all of those. As usual, a lot of the information about the events you get from sources is incorporated in the documentation for those sources, and we are doing the work of consolidating that and providing it to you. I hope it will be ready shortly.
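As an illustration of that freeform search, here is a minimal sketch of running a KQL search query against a workspace from outside the portal, through the Log Analytics query REST API. The workspace ID is a placeholder, and the use of azure-identity for the bearer token is an assumption; any valid token for the Log Analytics API works, and the same query can simply be pasted into the Logs blade.

```python
# Minimal sketch: cross-table, cross-field free-text search from code.
import requests
from azure.identity import DefaultAzureCredential

WORKSPACE_ID = "<workspace-id>"

token = DefaultAzureCredential().get_token("https://api.loganalytics.io/.default")

response = requests.post(
    f"https://api.loganalytics.io/v1/workspaces/{WORKSPACE_ID}/query",
    headers={"Authorization": f"Bearer {token.token}"},
    json={"query": 'search "10.1.2.3" | take 50'},  # no schema knowledge needed
)
response.raise_for_status()

for table in response.json()["tables"]:
    print(table["name"], len(table["rows"]), "rows")
```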
Ryan: Excellent, thank you. Is there any way to hide sensitive information in the log events, say passwords or key values that could get captured?
Ofer: I'll say two things. One of them, speaking as one SIEM person to another: to do that you'll have to use a collector and process the events before they are sent to Sentinel. It works, for example, with Fluentd using a plugin, although there's less know-how around that, and with Logstash, which many customers have done. That approach only works for on-premises collection; there is a gap there, and we don't have a way to do it for the simpler cloud-based connectors. The other thing I do want to stress is that, in general, it's very hard to ensure that when you collect logs in large volumes from multiple sources there are no credentials in them, if the source itself does not take care of that. Assuming that you always know where a password is, is hard.
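As a purely generic illustration of that pre-processing idea, not a Sentinel feature, here is a small sketch of scrubbing obvious credential patterns from an event before a collector forwards it. In practice this logic would live as a filter stage in the collector pipeline, and the patterns below are only examples.

```python
# Generic illustration: redact obvious credential patterns before forwarding.
import re

REDACTIONS = [
    re.compile(r"(password\s*=\s*)\S+", re.IGNORECASE),
    re.compile(r"(pwd\s*=\s*)\S+", re.IGNORECASE),
    re.compile(r"(authorization:\s*bearer\s+)\S+", re.IGNORECASE),
]


def scrub(event: str) -> str:
    """Replace the value part of known credential patterns with a marker."""
    for pattern in REDACTIONS:
        event = pattern.sub(r"\1<REDACTED>", event)
    return event


print(scrub("user=alice password=hunter2 action=login"))
# -> user=alice password=<REDACTED> action=login
```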
Ryan: Got it, thank you. Can Sentinel run in one of the commercial Azure clouds but connect to and get log and security data from one of the government clouds?
Ofer: I don't know; it's an interesting prospect. It would potentially work if a tenant can span both the government cloud and a commercial cloud, and I'm not sure that's the case. So for now I would answer no, but it's probably worth looking into a bit more whether there's some way around it.
Ryan: Yeah, to add to that: the government clouds are very much their own specialty, because each one can be so different, and it's hard to speak about them all in general. But we will be focusing more attention and resources on the government clouds and speaking to that community, so watch our community closely over the next little while for some government-related discussions and that sort of thing. I'll keep everyone posted.
Ofer: To add to that, a couple of the connectors I mentioned do not support the government cloud today; we are working on that, the US federal cloud to be more specific.
Ryan: Perfect, thank you. Will Lighthouse eventually allow a single Sentinel instance to perform cross-tenant correlation and alerting?
Ofer: We are looking into that. We understand it's a requirement that's needed; it's a missing piece in supporting multiple workspaces. I don't have a timeline for it.
Ryan: Okay, great, thanks.
Ryan: When will the Sentinel Logic App trigger work on the rule templates? Sorry, when will the Sentinel Logic App trigger work with the rule templates?
Ofer: I assume the question, translated, is this: today you can trigger playbooks off scheduled alert rules, but you cannot trigger off the other rule types. I assume that's the question. We are working to ensure that the Microsoft Security rule type, the fourth one, which is the most important and the most missing one today, is supported; it's work in progress. To give relative information, since I'm not allowed to give dates, it's closer than the others.
Ryan: Great, thank you.
Ryan: We're about out of time here, folks, so I'm going to give you a few links before we say goodbye. Let me move back to the links so we can talk about them, and I'll paste them in here as well. If we did not get to your question, or we somehow missed it, or you're listening to this after the fact as a recording, please go to aka.ms/AzureSentinelCommunity; that is a great place to ask a question, and you'll reach the same people who were answering on this call, so it goes directly to our engineering team. We always love to hear your feedback on the webinars, whether they're valuable to you and what we can do to improve them, and you can give that at aka.ms/SecurityWebinarFeedback. If you're looking for recordings of any of our webinars and a list of upcoming ones, you can find them at aka.ms/SecurityWebinars. You can join our community at aka.ms/SecurityCommunity; the difference between that and the link I gave earlier for the Sentinel community is that the Sentinel one goes directly to our Sentinel group, while the security one covers all of our security products, so you'll find links to all of them there. If you are interested in getting in on some of the private previews, you can sign up at aka.ms/SecurityPrivatePreview. I want to thank all the folks on our team who were answering questions today in the IM window; we really appreciate your help. Of course, I want to thank Ofer for a fantastic and very informative presentation. Most of all, I want to thank all of you for joining us and being part of our community. We will see you next time.
