Welcome everyone. I just grabbed a laser pointer, okay. My name is Adam Zsarnóczay.
I'm a postdoctoral researcher
at the SimCenter
and in the next roughly 30 minutes, I will introduce you to our vision of damage and loss assessment and how we apply it using pelicun. Then, in the 20 minutes that follow, we hope to have a lively Q&A with you. Please, as Tracy already mentioned, submit your questions through the chat. She will triage them, and we will try to answer them to the best of our abilities.
So let me start. First, I would like to step back and start from a little further away. You saw in the previous two days that at the SimCenter we are developing open-source tools to facilitate your research and help you create complex simulation workflows, from the individual building level all the way up to the regional level. That is demonstrated by the testbeds you've seen, particularly Atlantic City in Tracy's presentation yesterday. One of the big advantages of what we are doing is that we are developing these workflows to be extensible and modular. So, depending on your research needs, you can pick the modules that you would like to use, assemble a workflow that suits those needs, and then effortlessly bring it to a supercomputer, DesignSafe in particular, and run it there. That gives you the possibility to run expensive computational fluid dynamics simulations on buildings and then potentially do damage and loss assessment there. This is what I'm going to talk about today: the damage and loss assessment part of the workflow, and how we bring the same extensible and modular approach to that area using pelicun.
I will limit the scope of this presentation mostly to wind hazards. Although I come from an earthquake engineering background, I am trying to expand pelicun and our damage and loss assessment approach to different types of hazards, as you will see. I will focus on how it applies to wind engineering, what opportunities I see for bringing the current level of analysis higher, and how we could potentially achieve a tool with the capability to run performance-based wind engineering in the near future. Tracy has shown that we've done this level-one type of analysis in the Atlantic City testbed, and we used a tool called pelicun for that. We will use the same tool to bring it to a higher level, and I'm going to show you now how we can use one tool to do different types of analysis
at the regional scale. So first let me
talk about what we aspire pelicun to be. What does pelicun offer to the community? When we think about damage and loss assessment tools, we like to think in the three-dimensional space that you can see here. We think about different types of hazards; different types of assets such as buildings, bridges, and infrastructure; and different levels of resolution, going from the building level all the way to the component level. With pelicun, we would like to create a unifying framework, so that you don't need to use different tools for the different parts of this space; instead, we would have an open-source tool that provides multi-hazard and multi-fidelity modeling and can handle different types of assets within the same framework. This is different from what is currently available in the earthquake world and also in the wind engineering domain, and we believe it really helps collaboration between different domains and also within wind engineering itself. Now, how does this fit into our regional simulation workflow that we call rWhale, which we have mentioned a couple of times previously? Let me start from the end result, and we will work our way back, and I will show you where pelicun is in this workflow.
So starting from the end, what we want is to have decision variables available at the regional scale. These decision variables are things like the expected repair cost, the repair time, or the injuries in buildings, for each building, or, to be even more generic, for each asset. Assets, again, is just the word we use for buildings, pipelines, power stations, all kinds of components in the built environment. So for each of these, we would like to have these decision variables available to help us understand the consequences of a particular event. Now, this is not enough in itself. We also need to know other pieces of information about these assets that are not directly consequences of the earthquake, but are more like descriptive features, such as, if you are thinking about residential buildings, the number of stories they have, the type of material they are made of, or even the socioeconomic background of the inhabitants.
These pieces of data are often not available in a single database. So one thing that we do is take the publicly available asset information and run it through a so-called exposure model, which is also referred to here, in Tracy's steps, as the asset description, where we process this information and enhance it. We try to fill in the gaps to build an asset information model that is very similar to the building information model, or BIM, that you must have heard of before. So we have this asset information model available for every asset in our analysis.
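To make this concrete, here is a minimal sketch of what such an asset information model could look like in Python. The field names and values are illustrative assumptions, not a fixed SimCenter schema.

```python
# Illustrative asset information model (AIM) for one building.
# All field names and values are hypothetical examples.
asset_information_model = {
    "AssetID": "AC-000142",       # hypothetical identifier
    "AssetType": "Building",
    "Location": {"Latitude": 39.377, "Longitude": -74.451},
    "NumberOfStories": 2,
    "StructureType": "Wood",
    "OccupancyClass": "Residential",
    "YearBuilt": 1978,
    # descriptive, non-engineering features can live here too:
    "MedianHouseholdIncome": 52000,
}
```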
Once we have that, and if we have the decision variables, we are good to go and discuss decision making, policy making, and recovery simulation. Now, how do we get to the decision variables? For that, we need to understand the hazard, or different scenarios of a particular type of event, say hurricanes in Atlantic City. To do that, we usually have an understanding of the sources of the hazard, and then we have a hazard model, as shown here, and also here in the more high-level overview. That hazard model produces intensity measures that describe the intensity of the hazard at various places in our region. This is just an example of that: a storm surge description in Atlantic City. So once we have that, for each asset we have some understanding of its characteristics through the asset information model, and we have an understanding of the intensity of the particular scenario at that location. Then, what we are doing, and this is a key thing in our workflow, is that we link the intensity measure and the asset information, and do a damage and loss assessment step to arrive at the decision variables. We do this independently for each asset, because once we know these input data, the buildings don't need to know about each other to arrive at the immediate consequences of an earthquake or a hurricane. This allows us to run these simulations in parallel, embarrassingly parallel, and bring them to, say, DesignSafe, which saves a lot of time and makes these simulations feasible to run even at the regional scale, even with two million buildings.
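Because each asset is assessed independently, the regional run parallelizes trivially. Here is a minimal sketch of that idea in plain Python; the per-asset model is a toy placeholder, not pelicun's actual calculation.

```python
from multiprocessing import Pool

def assess_asset(asset):
    """Toy per-asset assessment: map a local intensity measure to an
    expected repair cost. Each call only needs this asset's data."""
    im = asset["PeakWindSpeed"]  # mph, local intensity measure
    loss_ratio = min(1.0, max(0.0, (im - 80.0) / 100.0))  # made-up model
    return {"AssetID": asset["AssetID"],
            "RepairCost": loss_ratio * asset["ReplacementCost"]}

if __name__ == "__main__":
    assets = [
        {"AssetID": "AC-0001", "PeakWindSpeed": 120.0, "ReplacementCost": 250e3},
        {"AssetID": "AC-0002", "PeakWindSpeed": 95.0, "ReplacementCost": 180e3},
    ]
    with Pool() as pool:  # embarrassingly parallel over assets
        print(pool.map(assess_asset, assets))
```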
So what pelicun does, if I describe it very simplistically, is make this connection between the asset information and the intensity measure, and then provide the decision variables. In wind risk assessment, this is currently the level of the analysis. What we would like to achieve, what we are aiming for, is to include a response model here. So, given the intensity, have some kind of estimate of the response; have some EDPs that are not wind speed, but rather some kind of pressure or another quantity, and you are probably much better aware than I am of what we should use there; and use those as descriptions of the response of the building. Then, come up with damages based on those results. I will talk about both approaches in the next step. But first, let's look under the hood of pelicun and see what happens in there. How do we get from intensity measures or EDPs to decision variables? What are the components that we need there?
So, if we look at only the independent part of the workflow, first we can do a very simple type of assessment: going from the intensity measure, using a loss model that is informed by the asset information, to arrive at the decision variable. This is what's done in the flood damage and loss assessment in HAZUS. And actually, it's not really damage and loss assessment; it's just loss assessment. They typically use insurance claims data. So they know the losses, they fit some functions on top of that, and then, depending on the type of building we are looking at and the height of the water, we can get the repair costs, or the losses, in the building. That's the simplest possible approach. We can go one step above this by introducing damages into the process. This is something like the earthquake damage and loss model in HAZUS. We start with an intensity measure and have a damage model: first, some damage states are identified, and then, given the damage states, we arrive at losses, which are described by decision variables.
The highest level that we are aware of at the moment is something like FEMA P-58 for earthquakes, where, as I mentioned, we have a response model first. It uses engineering demand parameters to describe the damage, so we have a more accurate, higher-fidelity description of the damage, and then we go to the losses and the decision variables.
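The three levels can be summarized as three chains of models. The sketch below uses toy stand-in models with arbitrary numbers, purely to show how the chains differ in structure; none of these are calibrated models.

```python
def loss_only(im):
    """Level 1 (HAZUS flood style): intensity measure -> loss ratio."""
    return min(1.0, max(0.0, (im - 80.0) / 120.0))

DS_LOSS = {0: 0.0, 1: 0.05, 2: 0.25, 3: 0.60, 4: 1.00}  # toy loss ratios

def damage_then_loss(im):
    """Level 2 (HAZUS earthquake style): IM -> damage state -> loss."""
    ds = sum(im > t for t in (90.0, 110.0, 130.0, 150.0))  # toy thresholds
    return DS_LOSS[ds]

def response_then_damage_then_loss(im):
    """Level 3 (FEMA P-58 style): IM -> EDP -> damage state -> loss."""
    edp = 0.4 * im  # toy response model (e.g., a local pressure demand)
    ds = sum(edp > t for t in (36.0, 44.0, 52.0, 60.0))
    return DS_LOSS[ds]
```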
So these are the three different types of analysis that we are thinking about, and the advantage of pelicun, as I mentioned, is that it unifies these three. You can choose which one you want to use, but you don't need to switch tools if you want to use a different one. That means that when you are running a regional assessment, you can use pelicun and choose the level of fidelity you want for each particular building. If you have a tall building, for example in Atlantic City, you can choose to use response estimation and a high-fidelity model there, while for the buildings that are shorter and larger in number, you can keep using a simplified model for the sake of efficiency, or maybe because you believe it is sufficient for your research purposes. So this multi-fidelity analysis is, in our understanding, a big advantage of this approach.
Now, what is pelicun? I told you what it can do and how it fits in the workflow, but what is it really, what does this acronym mean, and what do we produce here? I already told you it's an acronym: Probabilistic Estimation of Losses, Injuries, and Community resilience Under Natural disasters. That's what it stands for, and when we write it in all caps, we mean a conceptual framework for damage and loss assessment. At that point, it's not really a tool; it's more like the theoretical logic behind damage and loss assessment. We have a flowchart for that, and there's a paper I'm going to reference at the end of the presentation if you are interested. I encourage you to read it to learn more about this part of pelicun. The other pelicun, written in lowercase, is an open-source Python library that is the implementation of this conceptual framework. When I say Python library, those of you who might not be familiar with Python might not know what that means, so here is an explanation. A library is something like a set of components or building blocks. In the case of damage and loss assessment, we can think of those as things like fragility models, loss functions, or population distributions. You can pick the ones that you like; that's the modular fashion that I mentioned at the beginning of the presentation. You can pick the ones you like and put together an application that fits your research needs, while still using pelicun and the conceptual framework that I mentioned on the previous slide. Once you put that together, you can use it in the workflow that we have, either at the regional scale or at the individual building level. It's up to you how you use it and what kind of applications you put together. You are not all alone in this. We already have some predefined methods implemented in this framework, and those are the existing capabilities that are there. So once you download pelicun, you can already use it to run certain types of assessments. That's what I'm going to talk about next.
When we started developing this about two years ago, we started with FEMA P-58, because starting from a higher fidelity, it's much easier to go down and increase the efficiency by simplifying certain things. So we started with P-58. That's an earthquake-focused methodology, it is only for buildings, and it provides a high-fidelity approach, as I mentioned. We have all the damage and loss data related to P-58 included in pelicun: the fragility curves, the consequence functions, all of that comes with pelicun if you want to do a FEMA P-58 style assessment. You can run it just by describing the building and then see the results. That's how it started. Then we expanded in the space that I mentioned earlier.
So, first we introduced different levels of resolution or fidelity by going to the HAZUS earthquake assessment approach for buildings, which is a building-level approach. It's a bit different: much faster, much less computationally intensive. That's how we first expanded in fidelity. Then we went to wind hazards and introduced the HAZUS wind damage and loss assessment approach, and then we also included different assets by introducing pipelines, which are very different from buildings in terms of how the assets are described. So that's where we are at the moment.
And I would like to emphasize that for all of these, we provide the supporting data. So if you want to do a HAZUS-style wind damage and loss assessment right now, you can download pelicun. You have more than three or four thousand different fragility functions there for wooden houses, and you can run the analysis right now without providing any extra data besides the type of the house that you want to look at.
The reason I'm talking to you now, and why we have the extra 20 minutes at the end of the presentation for questions and answers, is that we would like to expand this even further this year. We would like to include water-related hazards, storm surge in particular. We would also like to improve this part, but that's not really interesting for you. And then, for wind hazards, we would like to bring them up to a higher level in resolution and in fidelity, and I think there are a lot of opportunities there for us to work with you and to learn from you: how you would like to do these assessments; based on your experience with experimental testing and also virtual experiments and numerical analysis, what the typical EDPs are and how you would describe damage; and how we can help you the most to do your research. So I would like to talk about my understanding of the opportunities in wind engineering, and then I'm interested in your opinion of these suggestions. And if you have others, I would be very happy to hear them.
There are three topics that I would like to touch on: increasing precision, which is like increasing the resolution; improving fidelity, which is something like improving the accuracy at a particular resolution; and improving how wind hazards and wind damage assessment work with other hazards, storm surge in particular. Let's start with precision. In order to talk about this, first I need to give you a description of where we are at the moment, because I'm not sure all of you are running a HAZUS-style wind assessment. So
HAZUS works in a rather strange way when it comes to wind hazards. It starts with the same type of information that I mentioned before: some intensity measures that we need to describe the intensity of the hazard. In particular, in HAZUS, that is the peak wind gust speed at 10 meters height in open terrain. So that's what comes in, and then we need asset information. For those of you who might have an earthquake background and know the HAZUS earthquake methodology, this is a bit different, because we need a lot of information about the building. It's not enough to know that it's a wooden house with one story; we need to know the structure type, the roof shape, the terrain roughness around the building, and a lot of other parameters. I did not even list them all. You can see things like secondary water resistance, the area of windows, tie-downs, and so on. We need to know all of this to be able to identify which fragility curve to use. And then, once we do that, we can pick one out of the more than 20,000 fragility curves in HAZUS. Or rather, I would call those damage models, because each of them has four fragility curves, as you can see on the right side here, one for each of the four damage states that HAZUS uses. Now, these are not functions. They look like lognormal CDFs, but they are discrete points that come from a synthetic computer simulation done more than a decade ago, and they provide the probability of exceeding a particular damage state at five-mile-per-hour increments in wind speed. In between, you do linear interpolation. So that's how the damage is identified. And then the losses
come from a different model. The interesting thing about HAZUS, if you look at this figure at the top, is that we go to the damage model, we get the damage measure, and then we stop there. When we want to have losses, we go back to the intensity measure, go through a loss model, and get the losses, but the damages and the losses are not coupled. The loss model gives you a particular mean loss ratio, again in five-mile-per-hour increments, based on the same computational simulation that they did about a decade ago. But those losses are not linked to the damage models. So if you have, say, a 150-mile-per-hour wind, you see that we are looking at about a 0.65 loss ratio here in this particular building, and that could correspond to any of these damage states. So there is no link between them.
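As a small illustration of that lookup, here is how one of those tabulated exceedance curves could be evaluated with linear interpolation between the 5 mph grid points; the tabulated values below are synthetic, not actual HAZUS numbers.

```python
import numpy as np

# Exceedance probabilities tabulated at 5 mph increments of peak gust
# speed, as in the HAZUS wind tables. These values are made up.
speeds = np.arange(50, 251, 5)                        # mph grid
p_exceed_ds3 = 1.0 / (1.0 + np.exp(-(speeds - 150) / 12))

def p_ds3(gust_speed):
    """P(reaching or exceeding damage state 3), linearly interpolated
    between the tabulated 5 mph grid points."""
    return float(np.interp(gust_speed, speeds, p_exceed_ds3))

print(p_ds3(147.0))
```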
Okay, so that's where we are now. How can we improve on this? Well, this is building-level loss and building-level damage; that's what HAZUS provides. One way to improve on this is just by looking at the computational simulations they did. I'm sure it's possible to do this at a higher resolution: not at the building level, but at least at the sub-assembly level. That's what they did; that's how they got the building-level damage information. You can see that they are linking different sub-assemblies and the damage in those sub-assemblies to arrive at building-level damage states.
So if we had such data available for a large number of buildings, then we could provide fragility curves for the different sub-assemblies and describe the damage not at the building level, but for different pieces within the building. This is very advantageous, not only because we know better what happens in the building, which is trivial, but, as you will see later, it is also very good for working with other types of hazards, because when you have a flood, for example, it will damage different parts of the building than the wind, and it's much easier to avoid double counting there. I will return to this a little bit later. And this is just one step.
If we go further than that, we can introduce component-level damage assessment, where, for example, in this exterior wall case, we can disassemble the wall into a window, stucco, a door, and whatever else is important there, and estimate the damages for each of those. This is how we arrive at performance-based wind engineering, much like how performance-based earthquake engineering works today. It's important to emphasize that pelicun already supports all of these levels of analysis, because it has been designed to do that for earthquakes, and all we need is the data, which is not a small thing to ask, I know, but at least the methods and the framework are there. So if you have done, or are planning to do, experiments that would provide this kind of data, then we would love to hear from you and work with you to bring that to this platform, so that we have something to start with and can grow from there.
Now, these steps provide higher resolution, but higher resolution doesn't mean that the data we get out of it is actually accurate. So how do we improve the fidelity? I also have a couple of examples for that. I mentioned that HAZUS has these two sets of models, one for the damage and one for the losses. One thing that we already did last year is use the raw data to couple the damage and the losses. You can see that here these models were decoupled, and our approach channels the flow through the damage measure, and then, given the damage state, we evaluate the losses. Actually, you can see that the lognormal functions that we fit there match the data very nicely. But more interestingly, given the damage states, we could calibrate loss ratios. So for each damage state, we have a particular amount of loss that we expect in the building, and using those loss ratios, we recover this loss function pretty accurately.
So this is already available in pelicun. This is a coupled approach, but it's still a building-level assessment, and it still uses wind speeds to describe the damage and the damage states to get the losses.
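Here is a minimal sketch of such a coupled calculation, with hypothetical lognormal fragility parameters and per-state loss ratios; the numbers are illustrative, not the calibrated values we derived.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical lognormal fragility medians (mph) and log-standard
# deviations for four damage states, plus per-state loss ratios.
medians = np.array([100.0, 125.0, 150.0, 175.0])
betas = np.array([0.15, 0.15, 0.15, 0.15])
loss_ratios = np.array([0.05, 0.25, 0.60, 1.00])

def expected_loss_ratio(v):
    """Coupled approach: exceedance probabilities from the fragility
    curves, converted to the probability of being in each damage
    state, then weighted by that state's calibrated loss ratio."""
    p_exceed = norm.cdf(np.log(v / medians) / betas)
    p_in_state = p_exceed - np.append(p_exceed[1:], 0.0)
    return float(p_in_state @ loss_ratios)

print(expected_loss_ratio(150.0))
```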
In order to get to a higher resolution, we also need to think about using something more descriptive of the damage, and those are going to be the engineering demand parameters. In particular, as far as I know, that would be the wind pressure at different parts of the building. So instead of having one wind speed value, we go down to a more localized description of the event. I think this is much like replacing PGA, peak ground acceleration, with drifts or peak floor accelerations in the earthquake world.
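For instance, a quasi-steady estimate of the local pressure demand can be obtained from the gust speed and a surface pressure coefficient; the Cp values in this sketch are assumptions for illustration only.

```python
RHO_AIR = 1.225  # kg/m^3, air density near sea level

def local_pressure(v, cp):
    """Quasi-steady local wind pressure: p = 0.5 * rho * v^2 * Cp,
    with v in m/s and Cp a zone-specific pressure coefficient."""
    return 0.5 * RHO_AIR * v**2 * cp

zones = {"roof_corner": -2.0, "wall_center": 0.8}  # illustrative Cp values
print({zone: local_pressure(50.0, cp) for zone, cp in zones.items()})
```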
So this is an important step, and I think we have the experimental facilities and the virtual experiment capabilities to support it: to run such analyses and have such data available, so that we can have a better model for damage and a higher-fidelity result at a higher resolution. And then, finally, the next step, which is something Tracy is working on and mentioned yesterday, is to go beyond just using simplistic damage models and fragility functions: connecting the different components within the building and allowing them to influence each other, by having first a very detailed description of what is in the building and how those pieces interact with each other. That would be the highest level I can see at the moment.
All right. So we've seen precision, we've seen fidelity. The last step is how to improve the combination with other hazards. Again, the baseline is HAZUS. What HAZUS does is calculate wind losses and storm surge losses independently, and then the challenge is how to make sure that you are not double counting damage. Let's say you have a 60 percent loss from wind and an 80 percent loss from flooding. So what is your loss from the two together? It's definitely not 140 percent.
What they do, since their loss calculation is based on subassembly-level computer simulations, is disassemble these losses into each sub-assembly, disaggregate them. At that level, they can easily see how the hazards interact with each other, and they come up with tables for each building configuration where you look up your wind-only loss and your flood-only loss and see what they translate to when the two things are acting together.
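To illustrate why the subassembly level helps, here is one very simple combination rule: taking the larger of the wind and flood loss ratios per subassembly. This is a sketch of the idea only, not the actual HAZUS combination table, and all the numbers are made up.

```python
# Illustrative loss ratios per subassembly (made-up numbers) and the
# subassembly weights as shares of the building replacement cost.
wind_loss = {"roof_cover": 0.80, "windows": 0.40, "foundation": 0.00}
flood_loss = {"roof_cover": 0.00, "windows": 0.10, "foundation": 0.70}
weights = {"roof_cover": 0.30, "windows": 0.25, "foundation": 0.45}

# Taking the per-subassembly maximum prevents counting the same
# damaged component twice under both hazards.
combined = {k: max(wind_loss[k], flood_loss[k]) for k in weights}
building_loss = sum(weights[k] * combined[k] for k in weights)
print(building_loss)  # stays bounded by 1.0, unlike naive addition
```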
Now, this is quite rigid: whenever you introduce a new building type, or you want to introduce something custom, it is not possible to adjust these tables to it. So what we suggest is that going to the sub-assembly level, having everything available at this level rather than at the building level, would make it much easier and much more flexible to directly couple the wind hazard and its consequences with the storm surge or flooding and its consequences. Another important hazard is rain. If I know that the building's roof cover is damaged, or I know that the windows are broken, it's much easier to anticipate losses due to rain to the contents of the building. But if I only know a building-level loss or damage, it's very difficult to make accurate assumptions about how that will influence rain damage. So I believe we can go beyond this once we take the step to subassembly-level analysis. I hope this shows you how close we are to improving what is available in HAZUS. One last piece of information I would like to share with you is that most of this will be available by the end of 2020, most of the framework, I mean. Of course, the data is something we would like to get to by working with you. But Frank has shown you the WE-UQ application about an hour ago, and that application already provides you the capability to run a particular wind event and see how it affects a structure in terms of the response. So you can get, say, the pressure at different places on the structure. This will be complemented with the pelicun-based damage and loss assessment in the Performance Based Engineering application. That application supports earthquake-related calculations at the moment, but it will support wind events by the end of 2020. So you don't need to run regional simulations to make this useful for your research. You can run a single building, or even other assets like communication towers, any kind of asset that you have a good model for, so that you can estimate the responses and then use pelicun to calibrate fragility and consequence functions, so that those could be used later even in regional analyses. And with that, I have a slide with different resources. I think these slides will be shared with you. I encourage you to check them out if you have time, and I thank you for your attention.
Tracy: All right. Thanks, Adam. So, as Adam suggested, we had allowed a little bit more time in this presentation to open the door for some discussion about the types of capabilities that you would like to see in a tool like pelicun to move us toward this ability to do these performance-based assessments of wind effects on structures. So we'd open the chat now to conversation. What kind of capabilities would you like to see? As we build up the level of fidelity within the various modules that define this workflow, where do you think our most immediate effort should focus, and what are possible ways that your work might be able to contribute to those advancements? So we'll kind of pause there and let people reflect and possibly share. And as they're pondering over that a bit, here we go.
First one comes in: shielding and channeling effects of buildings close together. So this idea of trying to capture some of the site-specific effects. It's a question of whether we have that in there, and the short answer is: not to the level that one might hope for, meaning we account for the effects of exposure, but that kind of in-depth, site-specific description of how the neighboring buildings, for example, modify the flow and the effects on the structure -- oops, sorry! My dog is yappy -- is not explicitly included. Adam, I'm going to mute while he's barking. Did you want to jump in and maybe engage?
Adam: Yeah. Okay, sure. So let me just go back here a little bit to explain this. Here we are. I think when it comes to the interaction, or how one building affects another at the regional scale, right now the intensity measures are evaluated before we go to the individual building assessments. So those intensity measures can possibly capture these types of interactions. It depends on the hazard model that we use and what we know about the environment. If we know everything about the buildings and their geometry in 3D, and let's say we have the computational capacity to run a CFD simulation for the entire city, we can use that, and then our intensity measures suddenly become the pressures, and we don't need to do this step. So that's a possibility. I don't see that as something feasible today, but maybe some people would disagree. What we do today is that we have these intensity measures, which are typically wind speeds in open terrain, and then when we come to pelicun, either straight or through these steps, we consider the terrain roughness around the building. I know it's simplistic, but that's how we consider whether it's in an urban region or in the middle of nowhere.
Tracy: Thank you, Adam, for fielding that while my dog went a little bit crazy. But I think this also represents a really excellent opportunity, right? As Adam suggested, being able to run full CFD at the regional scale that we're looking at with these kinds of testbeds could be challenging, but I think one interesting area of opportunity is to think about how we can make a kind of hybrid approach that meshes these things together, so that you're running the testbed with a more simplified description of the exposure. We are currently able to extract that from land use/land cover data, but you're also aware from our presentations yesterday how tools like BRAILS allow us to extract, from imagery and building photographs from things like street view, a geometric description of all the buildings for which we have parcel information, everyone next to you. So in theory, if you wanted to pick a set of footprints of buildings in the immediate vicinity and run a higher-fidelity simulation locally there to capture local wind effects, and nest that within the larger simulation where we're using a more crude approach to account for that kind of interaction effect, that kind of local topography, using land use/land cover, then I think that kind of nested simulation creates an interesting opportunity that is currently not in the framework but could be a way to contribute. Sorry, we have a repairman working at the house, and for some reason my dog forgets that he's met him 15 times today, but each interaction is apparently novel.
So I hope that answered that first question. There's another question that's come in. Can you clarify whether pelicun works with damage assessment imagery data, or just structural response from numerical and experimental simulations? Adam, if you want to jump in; I could jump on it too.
Adam: I'm trying to understand the question to make sure I answer the right one. Tracy, if you have something to say, maybe you should start.
Tracy: Yeah, so what I would say is that the way it's structured currently, all the representations of the structure and the simulation of damage happen in an entirely computational manner. The pristine structure from the building inventory, with no damage, is pushed through the simulation and exposed to the hazard intensity. Its corresponding level of response, or the exposure, then results in some level of damage, and it moves through the workflow completely using computational simulation, without mashing up any kind of actual damage that may have occurred, in the event that you were trying to hindcast, let's say, performance based on some full-scale observations after an event. Now, with that being said, there is an exciting opportunity to look at how the BRAILS tool, which is currently being used to extract various geometric and building information from photographs, could be used as part of a more integrated loop that takes real damage photographs from field reconnaissance, processes them to get some indicators of component-level losses, and then compares that against the result of the simulation, or can be used to calibrate the model, as Greg suggested in the conversations yesterday. So currently there is no infusion of full-scale or field observations of damage; it is an entirely simulated workflow. But it does seem interesting to consider how we could take some of the capabilities used to digitize our inventory and use them as part of a model validation framework. Adam, I don't know if you want to jump in.
Adam: Yeah, I agree completely. What I described today is how we do the simulation. I think the imagery that is available from past disasters would be useful for calibration, as Tracy mentioned. In that case, we have to think about the computational cost, because calibration at the city scale, running this simulation possibly thousands or tens of thousands of times, could be very expensive. But yeah, I think it's possible. There are solutions, there are surrogate models, there are ways to work around that, and that would inform the damage models. So that could help us develop better ones, and pelicun can help with the calibration once the technique is available. But as Tracy mentioned, first we need something like BRAILS to extract the data from the images.
Tracy: All right, Greg has just jumped in. Greg has asked: are you aware of any efforts to systematically collect damage and loss fragility models for wind or storm surge for various types of components and buildings? To answer this question, and perhaps I'll nest it in another angle, there was a great benefit, as Adam mentioned, to having P-58 as a kind of one-stop shop that was already nicely organized and advanced through the work that previously occurred in the earthquake engineering community, so we could bring that directly in and have all of it at our disposal. And now the question becomes: is there a similar effort in the wind community that would be collecting or developing these necessary damage and loss fragility models so that we could get to the level of fidelity we are hoping for? So I think we're going to open this up more as a point of conversation. A lot of my instincts tell me this is going to be mined from the individual efforts of researchers like you and the work they're publishing, and from some of the work that's being derived by looking at these large-scale sampling efforts after major disasters to document performance and try to reconstruct some of this from those observations. But outside of the public loss models, is anyone aware of an open effort, if you will, to compile and aggregate this kind of information?
I will pause and allow some pondering of that. But with the word systematic there, I can say for our part in StEER that a lot of what we're trying to do, through having standards on the way we assess component-level losses, is generate data that could be used for that purpose. One of the downsides of our StEER approach, even though we systematically sample after the events, is that you're dependent on the occurrence of these events and on having the chance to sample or collect enough data to statistically capture performance over a large range of intensities and different typologies with various underlying vulnerabilities. And it just turns out you need a lot of data to do that well. I think the Harvey data collection was probably the most ambitious in this regard, in that it worked along the gradient of hazard intensity and therefore could start to try to reconstruct this from a singular event. But we are still probably a ways away from having enough data to do a good job of this, at least through the hurricanes we've seen to date. I'm gonna let Greg's question float there because I think it's notable, and we welcome in the chat any continued engagement on that. Another question --
Amal asks: can users define the wind field of interest, including bringing in things like tornadoes and downbursts? So this is an area we had not officially ventured into. We started by taking the hurricane testbed, as its name suggests, and bringing that type of wind field, if you will, into the simulation. The framework itself, in theory, would not have a problem accommodating other types of hazards or other types of wind events like a downburst or a tornado. Interestingly, those are also probably more extensible to a more efficient simulation, as they're more localized. Part of the reason we have these nice big hurricane testbeds is the spatial extent of that wind field, allowing us to do, you know, eighty thousand, a hundred thousand or more buildings to show that scalability. But I think it is interesting to consider how we could pick a much more compact path and use this to examine these more localized wind events. Adam, what's your reaction to thinking about extending this to other wind classes that are a little more concentrated?
Adam: I would use again this figure that I have here to show that whatever different hazard you are talking about, it comes in here, before the actual damage and loss assessment. If you have a hazard model for that hurricane or downburst, and you can provide intensity measures, then we can accommodate it, because as far as pelicun is concerned, it needs some description of the intensity and then uses the damage models to arrive at the damage and loss. So it's very generic. It depends on whether you have damage models, I mean the parameters that work for those models, or some kind of experimental data so that we can work on calibrating models to it. And I'm pretty sure you have that. I mean, there are so many experiments done, there are so many labs. It's just connecting the dots and connecting the people, and then doing these calibrations, and we are happy to help you with that. Or, I speak for myself: I am very happy to help you with that.
Tracy: And I think this raises a very important point, right? The whole idea of lifting up these testbeds is to create, as I called it yesterday, the computational scaffolding that you could use to bring in what you want to study. And so the idea is that the inputs and outputs that have to flow between each of these boxes in Adam's diagram, once they're constructed and the workflow is erected and shared, and these are all open and available on GitHub, give you the ability as a user to come and shift the direction of that testbed and bring in some different features. And that kind of parlays into the question that Maria asked about going to the Caribbean.
There should be no reason that the testbed couldn't be extended to other regions. And here are a couple of things to think about when you're extending it somewhere else. Number one: do we have good inventory data? One of the strategic reasons for picking the Atlantic City testbed, or New Jersey, is that they had a lot of great open data. For the earthquake testbeds, they had the UrbanSim results. The reason I point this out is that you need to have some good data on the inventory, ideally high-quality or fairly complete data, to be able to start generating descriptions of these structures. That's the first thing I'd point out. The second is: do we have meaningful hazard simulations for these areas? That would again require having the bathymetry, topography, and land use/land cover -- all the things you need to drive that wind field or storm surge in that area. For the Caribbean, and knowing where Maria is from, some of that already exists, at least hindcast results from Hurricane Maria, including the ARA wind fields, which are in DesignSafe now, and some of the ADCIRC simulations, I believe, are also posted. So next you want to ask: do I have a good way to simulate the hazard, with the right data that I would need for the exposure and the other features that I want to simulate? Or are those runs already done somewhere, so that I can do a historical event or a synthetic event for which the data is available? Once you get over those two hurdles, I'd say the last hurdle you have to jump through is: are there appropriate descriptions then? Do I have the fragilities I need, in other words? And that's where things get tricky, right?
One of the tricks that we're using with the New Jersey testbed is that, because we can enforce code compliance, and they have a high rate of code compliance, at least according to IBHS, we are able to take some rule sets and prescribe how we think the buildings are going to perform, and the typologies are well documented in HAZUS. Therefore, we have ways of describing their likely performance: levels of damage and the associated loss. In the Caribbean, we'd have to start asking questions about whether data exists that captures how the state of construction in those areas would perform. Do we have damage and loss data? Do we think there's a high level of compliance? And I know, Maria, we've struggled with that in the Caribbean consistently, because now all of a sudden you might have to propagate some different levels of uncertainty through that simulation, knowing that all the buildings may not be built to a set code, or may not perform in a prescriptive way because they're not complying with some minimum standard.
Right -- we agree with each other, and the typologies are dramatically different. It's not to say it's not doable, but a different set of assumptions, and even more importantly, decisions about uncertainty propagation, would be necessary. But I think that's the cool part: once again, the scaffolding is there. You can go do this kind of work and, as a researcher, show some real expertise in how you would model a unique environment, and, as Maria's saying, one that has multi-hazard exposure. So that makes it really exciting. Adam, I kind of grabbed that one because I could sense, with Maria coming from Puerto Rico, some of the things she might be interested in.
Yeah, and so there are a couple of other questions. Sanjay asked: is it any different from running the Atlantic City testbed? Do you want to jump on that one?
Adam: Yeah, I think I can answer that real quick. I just wanted to be careful about the damages from tornadoes and downbursts versus the damages from hurricanes; although both are wind, I can imagine that the damage, at least when we look at it at the building level, would be described by different fragility functions. So you cannot just take the hurricane fragilities and apply them to a tornado, I'm not sure. That's at least when we are at the building level. If we go to a higher resolution, then suddenly we don't have to be so careful, I think, once we have pressures or something even more granular.
Tracy: But it represents a really good point. As you imagine uses and extensions of the testbed, you kind of have to step through the chain. You can go through the five colored boxes at a meta-level and start to think about whether the necessary data or models are available for that. And there are some that have been developed, especially in the forensic evaluations after tornadoes, that help to capture those levels of damage. Another question: industrial systems, open structures with equipment, pipes, cables, transmission, offshore. Yeah, I mean, this kind of was like your pipelines block, but this is another type of distributed infrastructure, which we know also has challenges. Adam, what are your dreams for that in pelicun?
Adam: I would love to. Whenever I show this, one of the things I miss from here is having a box in the background, like wind and pipelines, or not really pipelines, because I don't really expect that, but let's say wind and transmission towers. I don't know about any fragilities there. If there is data...
Tracy: There is, yeah. There definitely is. So the lifelines and, you know, electrical power systems and other things. They're definitely ones that could come into these network models.
Adam: We can support it. As long
as you have some kind of logic that starts with an intensity, then you have a function. It doesn't have to be lognormal. It doesn't have to be normal. It can be anything. You have some model that describes the damage state as a function of the intensity. If there is something like that, I can bring it into pelicun; basically, it's that simple. It doesn't have to be a building. It doesn't even have to be a structure. It can be anything.
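As a minimal illustration of the kind of model Adam describes, here is a hypothetical damage model for, say, a transmission tower, using a logistic curve instead of a lognormal CDF; all parameters are made up for the sketch.

```python
import numpy as np

def p_damage_state(im, loc=120.0, scale=8.0):
    """Hypothetical damage model: P(reaching a damage state) as any
    monotone function of the intensity measure, here a logistic CDF
    rather than the usual lognormal. Parameters are illustrative."""
    return 1.0 / (1.0 + np.exp(-(im - loc) / scale))

print(p_damage_state(np.array([100.0, 130.0, 160.0])))
```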
Tracy: Well, I think that's awesome. We are out of time for this session, but it's a perfect segue into the next session. Imagine, if you will, how you could bring your research in. Make your next proposal so much more competitive, because you're possibly not only leveraging the Wall of Wind EF capabilities and writing an excellent proposal, which is what our next session is all about, but even showing how you can parlay that into the workflows of the SimCenter, for example into these regional testbeds, to really amplify the reach and impact. The chat showed how specific questions or applications could really shine light on things the community hasn't really looked at before. Now, with this available as scaffolding to build off of, it might help you reimagine ways to recraft your proposal to have even greater broader impacts by working with both facilities. So with that, I'll thank you, Adam, for the presentation, and I will transition now. I think we are going to Arindam, who's going to talk about how to go win that NSF proposal, and he knows how to get this done. He's very skilled in that area.
