Great, thanks so much, Kyle. I'm Wendell, a
Master's student at the Nicholas School
studying energy and environment. My teammates
and I are very excited to be here today
to tell you about our work over the past year.
I'm going to let them introduce themselves, starting
with Ben. Yes, hi everyone. My name's Ben Alexander.
I'm a junior studying
statistics and computer science.
Hi, I'm Atsushi Hu, a sophomore studying mathematics and philosophy.
Hi, I'm Varun Nair. I am a sophomore studying
computer science and math.
Hi everyone I'm
Lin Zuo and I'm a junior studying
statistics and computer science.
Fantastic. So our work is focused on
training deep learning models to
identify electricity infrastructure in
aerial imagery to help with electricity
access. In this first section I'm
going to give you the background on
electricity access, the topic at hand;
how electricity access planning
is carried out; and then how our deep
learning methods fit into a
larger pipeline. So first: electricity access
is really the backdrop of this whole
project. About a billion people worldwide
lack access to electricity, which has major
implications for their well-being and their social
and economic development. Just to
hit that point home a bit, we've got a few
statistics: with increasing access to
electricity, maternal mortality
rates decrease.
Similarly, there are implications for
educational attainment: with higher access
to electricity, there's greater
attainment. And metrics for
poverty also decrease as
electricity access increases. These are correlations, but they hopefully give you a sense of the
urgency of addressing this problem. Now
I'm going to shift gears to talk
about how electricity access planners
face challenges on the ground and what
those challenges include. So
first off, the scale of the problem is
hopefully pretty clear: about a billion
people across two continents. And if you
look at the affected populations, they're
overwhelmingly rural, which adds an
additional element to the challenge. If
we look at this diagram on the right for
the country of Angola, about sixty
percent of the population is rural, and
only about three percent of that rural
population has access to electricity,
versus about fifty percent of the urban
population. This trend holds for
many of the countries seen here, and it
really raises the question for planners:
how do you extend access to these
overwhelmingly rural populations, spread
across large swaths of land?
Are grid or off-grid technologies better
suited to this challenge?
To help address this question, there are what are called
electricity access optimization toolkits.
These toolkits look at the feasibility
of grid and off-grid power solutions,
such as micro-grids using hydro, solar,
or wind, to extend access to 100% of a study region.
In this case, a team at the
World Bank looked at the country of
Zambia and carried out one such study
using the existing electricity infrastructure,
the dark black and green lines you can see here,
which represent high-voltage transmission
towers and power lines,
along with other geospatial data, to
determine the least-cost pathway for
every part of the map. To unpack
this map a bit more: in the northeast you
see light yellow, which represents a
recommendation for home-scale solar
systems, versus the very
large portion of the southwest in darker
yellow, which represents community-scale
solar mini-grid solutions. And
along the transmission infrastructure
you see the recommendation of grid
expansion. So grid expansion is clearly
dependent on the availability of data on
the grid infrastructure itself, which
can be surprisingly difficult to obtain,
or simply non-existent, for many developing
countries. If we don't know where the
electricity infrastructure is, we don't
know whether a grid extension is a feasible
option for these populations. Our team
believes that by using modern computing
techniques on remote sensing data,
that is, aerial and satellite imagery, we can
help fill this information gap and
automatically identify this
infrastructure. So now, to give you
another big-picture look at the goals
for this project,
here's a conceptual pipeline. If we have
high-resolution imagery for an area, we
have a computer model that can identify
transmission and distribution towers.
In this case, the smaller blue boxes
represent the distribution network and
the magenta ones represent the larger
transmission towers. Then we can turn
this output into geospatial data that can
be handed off to policymakers and planners,
perhaps for use in modeling like what
I described for the country of
Zambia. Our work focuses on this center
step: creating and training a deep
learning model to identify the
infrastructure itself.
So we chose deep learning because of its
ability to find abstract meaning
in images. To give you a sense
of this, I'm not going to get into the weeds
explaining deep learning, but a
conceptual understanding of how it
works flows as such: say we want to
train a deep learning model to
identify faces in images. First we need
a large dataset of images with faces
in them to train the model, so it can learn
important features. It learns these
starting small. It's a hierarchical model:
it starts small and works its way
up to more and more abstract components
of an image that give it the
insight that there's a face there or not.
Walking through this diagram, in the first
layers the model
might extract small edges, dots, and
curves in the images themselves, which get
used in later layers of the
model to determine slightly more abstract features. In this case you
can think of features of a face: eyes,
nose, mouth. But, critically, we don't know,
and don't get to tell the model, what
features to find important; the model
determines this by itself. Later layers
then combine these mid-level features
into what allows it
to accurately identify faces. And,
critically, it is able to identify the faces of
people it has not seen in any photos
whatsoever. This is really the power
of abstraction, the capability of
deep learning models. Deep learning goes
beyond
image recognition; another exciting field of
application for it is self-driving cars.
This example shows a model that is able to
detect a number of objects that are
relevant to a self-driving car's
software system. Our own work actually
has more of an object detection component
to it, like this, than
simple binary classification, which is
something that Varun will explain more
in the next sections.
So now that
Wendell has given you an idea of why
exactly deep learning might
be a suitable approach, let's take a look
at some of the approaches we took
over the course of the semester.
Initially we took a binary
classification approach: we want to
know, for a piece of imagery, is there a
piece of energy infrastructure in it or not? In
these four examples here, we trained a
model to tell us that a piece of
infrastructure is present in the
left two images and that no infrastructure
is found in the right two. But this isn't
enough for us to generate a
full-blown map of energy infrastructure.
To do that, we need to
know precisely where in the image a piece
of infrastructure might lie, and for that
we need object detection. And so if we
take the same four pieces of imagery, we
can use an object detection model,
which is what we used in our
project this semester, to draw
bounding boxes around where it sees
different types of infrastructure in the
image. As you see here, in the left two
images it has drawn boxes around
where it sees different types of energy
infrastructure. It's important to
note that in this image here we have two
boxes because there are two pieces of
infrastructure there. And similarly, for
images where there are no boxes
present, we don't want the model to
draw anything there.
So this is the approach we ended up
using going forward, because it can
tell us precisely where in an
image infrastructure might lie.
You might be wondering where we're getting all of our data from. Well, our data was
compiled as part of a prior Data+
project and contains imagery from all
over the world. In particular, we chose to
focus on the four locations that you
see here, primarily because they
represent a wide range of geographies
and because they contain the largest
number of annotated examples in the dataset.
To give you an idea of what exactly that
data looks like,
consider a typical energy
infrastructure pipeline. Energy might
be generated in a power plant; the voltage is
then stepped up, the energy transmitted
across long distances by transmission
towers, and then stepped down and
distributed to residential, commercial,
and industrial entities through a
distribution network. At ground level,
we're all familiar with something that
looks like this. If any of you haven't
seen a substation before, there's
actually one right behind Gross Hall that
you can go take a look at after the
presentation. From an aerial view, which is
what our team is more familiar
with looking at, this is
what each of these three types of
infrastructure might look like. The dataset
contains bounding box
annotations for where the different
types of infrastructure lie, which you can see as the
three types of bounding boxes drawn
there. Our project in particular chose
to focus on the latter two that you see
here, primarily because they contain the
largest number of examples in our dataset.
We figured that if we could map out
most of the transmission and distribution
infrastructure, we could then
extrapolate where the substations lie. With that, I'll hand it over to Atsushi, who is
going to talk more about the research
questions that we explored.
My colleagues Wendell and Varun have
introduced what we're working on and what our dataset is. Now it's time to
zoom in a little bit more and see the
specific research questions we were
investigating. First of all, we know
deep learning works on
datasets that contain common objects from daily life, such as
cars, chairs, and cats, the everyday things
you see around you. However, we are
not 100% sure that it will work on our
dataset, and we have no idea how well it
would do. Therefore, our
first research question is to find
the best model for our
dataset. The second research
question is: how are we going to handle datasets from very different geographies?
Especially since, as we introduced before, we
hope to investigate the electricity
access situation in rural areas and
less developed countries. The
consequence is that we will have less
training data for those regions.
Ideally, we're able to train on our existing
data and have a model that is able to
generalize to other
geographies. So our hope is that our model is
generalizable, and therefore we're going to
carry out an experiment with datasets
featuring different geographies.
For example, we may train on an Arizona dataset and then see how well it
predicts on a North Carolina dataset. And
the next problem we may be facing
is that we may not have high-resolution
imagery for many parts of the world.
High-resolution images can be really
expensive to obtain, and here are some
examples of what the image looks like at different
resolutions. A 0.15 meter
resolution image will be very expensive,
whereas a 1.0 meter resolution image
may be publicly available. So obviously,
as we lower the resolution of
the image, how much will the performance decrease, and what
resolution do we require for
a model to perform well?
Next I'll hand over to
Ben to talk about the methods.
Now that we've seen why this topic is
important and why we're working on it, and
also some of the specific research questions that we're trying to answer, we want to
go through some of the
specific methods that we've been using
for the last few semesters to answer
these questions. First we'll start
with a high-level overview of the
overall pipeline, and then we'll step
through it in a little more detail.
Overall, the first step was
image processing. We heard a lot about
the dataset a few minutes ago, but
the data has to be prepared in certain
ways so that it's ready to be put into
the model for training. Secondly, we
trained the actual model using the imagery
that we had just pre-processed.
After training, we evaluated the
performance of our model by testing on a
test set of imagery that had not been
trained on, so the model had never
seen it before. Then, once we had done these
three steps and had a model that
we were happy with, we
could start mapping out grid networks in
the areas that we're actually trying to
study, and then provide
this information to policymakers and
other groups who may need it for various
reasons. So that's a high-level
overview; now going into a little more
detail on the initial image processing.
As I mentioned, the data that
we originally had was collected from various sources, so it wasn't in exactly the
right format needed for training.
To give you an example, one of the
main things we had to do to get it ready
for training was to make the images
smaller. The original images, as you
can see, are very large: about
10,000 by 10,000 pixels, which is
quite a large image. The
issue is that when you're
training deep learning models, you need
to use a special piece of hardware known as a GPU, or graphics processing unit, which
is optimized for the
matrix operations that are the
core operations needed by deep learning
models. In order to fit these images
onto the GPU, they can't exceed the
GPU's memory capacity, so we have to cut
these large images into the smaller
patches shown here on the right, which are
512 by 512 pixels. One
other thing: we overlap the patches
slightly when slicing up the
image, to try to prevent towers from being
cut off at the edges of the patches.
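To make the slicing step concrete, here's a minimal sketch in Python of how a large tile can be cut into overlapping 512 by 512 patches. The 512-pixel patch size comes from the talk; the 32-pixel overlap is an illustrative value, since the exact overlap we used isn't stated here.

```python
import numpy as np

def _offsets(length, patch, stride):
    """Start offsets along one axis, with a final patch flush to the edge."""
    offs = list(range(0, max(length - patch, 0) + 1, stride))
    if length > patch and offs[-1] != length - patch:
        offs.append(length - patch)
    return offs

def tile_image(image, patch_size=512, overlap=32):
    """Slice a large image array into overlapping square patches.

    Returns (row_offset, col_offset, patch) tuples, so detections made
    in a patch can later be mapped back to full-image coordinates.
    """
    stride = patch_size - overlap
    rows, cols = image.shape[:2]
    return [(r, c, image[r:r + patch_size, c:c + patch_size])
            for r in _offsets(rows, patch_size, stride)
            for c in _offsets(cols, patch_size, stride)]

# A 10,000 x 10,000 tile like the ones in our dataset becomes a grid of patches.
big = np.zeros((10_000, 10_000), dtype=np.uint8)
patches = tile_image(big)  # 21 x 21 = 441 overlapping 512 x 512 patches
```

Keeping each patch's row and column offset is what makes it possible to map detections back onto the original image later.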
Once we have prepared the data, we can
begin model training. The way training
a deep learning
model works is that you take your data and
feed it into the model, and by giving
this data to the model, it starts to
look at the images and find patterns
that it associates with the types of objects that you're looking for.
So for example, here
we have this transmission tower right
here. By feeding this image
into the model, the model begins to
look around the place where a
transmission tower has been labeled, and it
starts to learn patterns in the pixels:
these types of grey
lines in this particular pattern are
often associated with transmission
towers. Interestingly, it may also look at
things like shadows. In this case, looking
at the shadow may actually be more
useful than the part that's
in the box, because the shadow
actually looks like the transmission towers
it may have seen
a few slides ago. It may be picking up
on other cues like that. Similarly, the model
also learns from the background: here
there's nothing, so it learns
not only what is an
object, but also that these things
that just look sort of brown are
probably not a tower. Just as a
reminder, this is the
same type of model that Wendell was
talking about, the kind being used for all
sorts of really exciting things like
self-driving cars. These are the
same types of techniques, just applied to
satellite imagery and aerial imagery.
Once we've trained the model, we can
start to evaluate it to see how
good a job it did.
We use a set of test imagery, which is a
bunch of images that we set aside from
the beginning and intentionally did
not train on, so the
model has never seen it. Then when
we test on it, it's a fair
comparison: the model doesn't have any knowledge
from seeing it before; it's brand new.
In this case, we've
given this image to the model
and said, "Now that you're trained, tell
us: where are the towers in this image?"
Here the model said there are
three towers, which are right there.
Then we take those labels that come out
of the model and map them onto the
original coordinates of the large image.
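That mapping step is just a coordinate translation. As a hedged sketch: if we kept each patch's offset within the large image, a box predicted in patch coordinates moves back like this (the specific numbers here are made up for illustration):

```python
def patch_box_to_image(box, row_offset, col_offset):
    """Translate a bounding box (x_min, y_min, x_max, y_max) from
    patch-local pixel coordinates to full-image coordinates by adding
    the patch's offset within the large source image."""
    x_min, y_min, x_max, y_max = box
    return (x_min + col_offset, y_min + row_offset,
            x_max + col_offset, y_max + row_offset)

# A tower detected inside the patch that starts at row 480, column 960:
full_box = patch_box_to_image((100, 40, 130, 90), row_offset=480, col_offset=960)
# full_box is (1060, 520, 1090, 570) in the original image
```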
That lets us move on to the next step,
which is evaluating performance. As we saw on the previous slide,
when the output comes out of
the model, we can of course look at it
and see that in this case it seems to have
done a pretty good job, but we wanted to
find a more rigorous way of quantifying how
good a job the model really did. So we
designed a scoring metric for this.
The way this scoring metric works is that
for each tower, we say that the tower has
been correctly identified if a prediction
falls within a 2.25 meter radius of
the center of the true tower. To make
that clearer, let's look at this one
up here. This white dot here in the
middle of the circle is the center of
the true tower, the one that you can
see in the image. In this case we would say
that it has been correctly identified,
because the green box up here is
a tower that the model has identified,
and that tower is
within the radius, the circle
around the actual tower. So that's a
correct identification, or a true positive.
Similarly, here we have a false negative,
or a missed detection, because there's
clearly a tower here but the model has
not labeled anything there. And then over
here we have a false positive, which
means that the model said that there's a
tower there, but there definitely is not a
tower there. So once we decided whether
each tower was identified or missed, we
can calculate the precision and recall,
which are the metrics commonly
used for this type of task. The
precision is the number of correctly
predicted towers over the number of
predicted towers, and the recall is the
number of correctly predicted towers over
the number of actual towers. To make
that a little more intuitive: the
precision is basically telling us, of all
the things that the model said were
towers, how many of them really are towers;
and the recall is saying, of the actual
towers there, how many of those did
the model find? So they're
both very important and
slightly different, but the main thing to
keep in mind is that you want both your
precision and your recall to be as large
as possible, meaning as close to one as possible.
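As a sketch of how this scoring could be computed: here is one way to pair predictions with true towers using the 2.25 meter radius from our metric and then calculate precision and recall. The greedy first-match pairing and the example coordinates are illustrative assumptions, not our exact evaluation code.

```python
import math

def score_detections(true_centers, pred_centers, radius=2.25):
    """Match predictions to true tower centers, then return
    (precision, recall). A prediction counts as a true positive if it
    falls within `radius` (same units as the coordinates, metres here)
    of a not-yet-matched true tower."""
    unmatched = list(true_centers)
    tp = 0
    for px, py in pred_centers:
        for t in unmatched:
            if math.hypot(px - t[0], py - t[1]) <= radius:
                unmatched.remove(t)  # each true tower can be matched once
                tp += 1
                break
    precision = tp / len(pred_centers) if pred_centers else 0.0
    recall = tp / len(true_centers) if true_centers else 0.0
    return precision, recall

# Two true towers; the model predicts three, one of them far from anything.
truth = [(10.0, 10.0), (50.0, 50.0)]
preds = [(11.0, 10.5), (49.0, 51.0), (90.0, 90.0)]
p, r = score_detections(truth, preds)
# precision = 2/3 (one false positive), recall = 1.0 (no missed towers)
```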
To show an example, this is a
precision-recall curve, or PR curve,
which is a way of visualizing precision and
recall. Perfect performance
would be, as I
mentioned before, having a really high
precision and a high recall, which in
this case means high in the
vertical direction and far to the right. So
that point over there would be perfect
performance, but that would probably
never really happen in reality. More
realistic curves look like this blue
line and this red line. And again, as I
mentioned, being farther to the top right
is better, so in this case the blue line
would be considered better than
the red line in terms of performance.
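To show where those curve shapes come from: a PR curve is traced by sweeping a confidence threshold over the model's detections and recording the precision and recall at each step. A minimal sketch with made-up detections:

```python
def pr_curve(detections, n_true):
    """detections: (confidence, is_true_positive) pairs for one test set.
    Sweeping the confidence threshold from high to low, record the
    (recall, precision) point after each detection is admitted."""
    points = []
    tp = fp = 0
    for conf, is_tp in sorted(detections, reverse=True):
        if is_tp:
            tp += 1
        else:
            fp += 1
        points.append((tp / n_true, tp / (tp + fp)))
    return points

# Four detections for three true towers, with made-up confidence scores.
dets = [(0.9, True), (0.8, True), (0.6, False), (0.4, True)]
curve = pr_curve(dets, n_true=3)
# Precision starts at 1.0 and dips when the false positive is admitted.
```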
We're gonna be seeing a bunch of these
PR curves later, especially when we're
examining the results from our
experiments. When you see those,
the main thing to keep in mind is that
the lines that are farther to the
top right are the better ones. Alright, so
lastly, once we have a model that we've
trained and that we're pretty happy with
performance-wise, we can start using it
to identify grid networks. This
involves taking imagery from the
location that we're trying to study,
feeding it through the model,
having it tell us where the towers are,
and then taking those labels
that it outputs and converting them into
geospatial data. That can then be turned
into something that sort of looks like Google Maps, but with towers labeled, and that
could then be provided to policymakers
in a, hopefully, convenient format for them.
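As an illustration of that last conversion: detections in pixel coordinates can be georeferenced with the image's transform and written out as GeoJSON, a format GIS tools can open directly. This is a simplified north-up sketch with made-up origin coordinates, not our production code:

```python
import json

def pixel_to_lonlat(col, row, origin_lon, origin_lat, pix_deg):
    """Apply a simple north-up affine transform, the kind of
    georeferencing a GeoTIFF carries (pixel size in degrees)."""
    return origin_lon + col * pix_deg, origin_lat - row * pix_deg

def towers_to_geojson(pixel_centers, origin_lon, origin_lat, pix_deg):
    """Wrap detected tower centers as a GeoJSON FeatureCollection."""
    features = [{
        "type": "Feature",
        "geometry": {"type": "Point",
                     "coordinates": pixel_to_lonlat(c, r, origin_lon,
                                                    origin_lat, pix_deg)},
        "properties": {"class": "distribution_tower"},
    } for c, r in pixel_centers]
    return {"type": "FeatureCollection", "features": features}

# One detection at pixel (100, 200) in an image anchored at a made-up origin.
gj = towers_to_geojson([(100, 200)], origin_lon=27.8, origin_lat=-13.1,
                       pix_deg=1e-5)
geojson_text = json.dumps(gj)  # ready to hand off or load into GIS software
```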
Who's ready to see the results of the models?
Let's see which model performs best for our problem.
In order to figure out what is the best
model for our problem, we really ought to
make maximal use of our data. So we
trained three models: Faster
R-CNN, YOLOv2, and RetinaNet,
three architectures that
give excellent performance when
tested on more commonly seen data. We carried out an
experiment with these three models,
feeding in all the images we have from
the four different locations throughout the
United States, and then had the models
make predictions on our test
images. And here are two examples of
how the models perform. You can see from the image on the right that all
three models are able
to identify both of the transmission
towers. However, we have some other images that tell us, "No, the three models
are not all equally capable." For example,
you can see YOLOv2 is
only able to find one distribution tower in the image on the left,
while Faster R-CNN does better and
finds two. However, RetinaNet performs better
than the other two, even though it is
still not perfect. But
it is still too early to say that RetinaNet is the best, because we want a more
quantitative measure of the performance
of the three models. So we generated
a PR curve. From the PR curve, you see that
Faster R-CNN is able to achieve a higher recall, which means that out of all of the
transmission towers and the distribution
towers, it is able to find more of them.
However, RetinaNet consistently has
higher precision. So when we are
making a decision between RetinaNet and Faster R-CNN,
we have to go one step further and
consider our future goal, which
is to map out the grid network.
The image on the right
should be familiar, since we have
seen it before. So let's
investigate some hypothetical
scenarios of what happens if we do not
have perfect precision and recall.
First, see how the image on
the right is different from what we saw
just now:
a distribution tower is missing,
which is what happens when we don't have a
perfect recall value. However, if
you are trying to figure out the grid
network, you are still able to connect
the dots, because there are
distribution towers to the left and
right of it and we are still able to
draw a line.
Then what happens if
we do not have perfect precision? We get
a distribution tower out of
nowhere, and it is very difficult to
connect it to the existing lines.
But this will be less of a
problem if you have higher
recall and are able to draw clean lines in
other places, because then we are able to
filter it out. So, ideally we want
a model that gives high recall and
high precision at the same time. The
problem with Faster R-CNN is that while it
has slightly higher recall than RetinaNet, it comes at the expense of
precision, which can go as low as 0.2 when the
recall is about 0.8. That means for every tower that it detects correctly, there are
going to be about four false positives, and
this would be really problematic when
we're trying to draw out the grid pattern. Therefore, we think that RetinaNet
is the best model for our problem.
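That four-to-one figure follows directly from the definition of precision; a quick check of the arithmetic:

```python
def false_positives_per_true(precision):
    # precision = TP / (TP + FP)  =>  FP per TP = (1 - precision) / precision
    return (1 - precision) / precision

fp_per_tp = false_positives_per_true(0.2)
# At precision 0.2, roughly four false positives per correctly detected tower.
```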
As my colleague Atsushi has mentioned, the generalizability of models is
really important, so this is our second
experiment. After identifying RetinaNet
as the best-performing model, we
went on with that architecture and
trained five models, trained
on Connecticut data,
Kansas data, North
Carolina data, Arizona data, and USA
data. When we say USA data, it is
actually referring to the data from all
four of these locations lumped
together for training purposes. After
training these five models, we
tested them on images from different
locations. Here's an example where we
test model performance on images from
Arizona. We can see five PR
curves, which are the kind of
model performance evaluation that my
colleague Ben just talked about.
We can see that the USA model has the best
performance, then comes the
model trained on Arizona, then
the model trained on North
Carolina, then Kansas, then Connecticut.
After that, we also tested these five
models on the other three locations that
we mentioned. And from these
PR curves, we
found two really interesting
things. The first is that models
trained on very different
geographies do not perform well
on each other's regions. For example,
the model trained
on Connecticut actually
performs the worst on images from
Arizona, and then vice versa:
the model trained on Arizona images also
performs the worst on images from
Connecticut. We speculate that it's
possible this is because most
images in Arizona represent
desert while images from Connecticut
represent suburban areas, and that might
cause the difference. The other
interesting finding is that the USA
model performs either as well as the
model trained on the test
location, or sometimes even better.
This is an example where the USA
model performs even better than the Arizona
model on images from Arizona. And here
is another example where the USA model
performs better than the Kansas model,
which was trained on images from Kansas.
So just to give you some concrete idea
of what that looks like,
here are two example images from Arizona,
and here are the prediction results from
the Connecticut model. You can see that in this
image the
Connecticut model wasn't able to detect
any towers, although they're pretty
obvious there. And in this example, the Connecticut model
detected a wrong object, actually
a roof corner, which is not the
transmission tower that we are looking for. Then this is
the example where the USA model performed
better than the Kansas model; this is an image
from Kansas.
Here the black boxes indicate results
from the USA model, and the USA model was able
to capture all the towers in
the image, while the Kansas model was only
able to find two of the three
towers.
Alright, so for our
third and final experiment we wanted to know: how will model performance differ
at varying resolutions of the imagery
that we use? To give you an idea of
what imagery looks like at different
resolutions, we took an image from
Arizona and down-sampled it to
display it at several different
resolutions. As you can see, the
piece of energy infrastructure,
a transmission tower, is most clearly
visible in the 0.15 meter data, and it gets
harder to see as the resolution gets coarser. The important thing to
note for each of these pictures is how
each of the images was captured
and how easy it is to access. The
0.15 meter data is only
available if you capture it from an
airplane: you need to fly very
expensive equipment over the location and
capture it that way. And of course, that
type of imagery is very proprietary.
Similarly, with 0.3 and 0.5 meter data,
you can access that data through
satellite imagery, but once again that's
proprietary data
that's very expensive to obtain from a
select number of sources.
Now when we get to 1, 3, and 10 meter data, you'll see that it's much more widely available.
For example, we have 1 meter data for all of
the US, and we have 10 meter data for the
entire world.
However, the trade-off
is that the tower itself becomes very
difficult to see. So the problem
we're trying to explore is: is there a
balance that we can strike between
imagery that's low-cost and
easy to access, while also having that
imagery give us good model performance?
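To experiment with that trade-off without buying coarser imagery, high-resolution data can be degraded synthetically. Here's a hedged sketch of the block-average down-sampling idea (the exact resampling method we used may differ):

```python
import numpy as np

def downsample(image, factor):
    """Simulate coarser imagery, e.g. 0.15 m -> 0.3 m per pixel with
    factor=2, by averaging each factor x factor block into one pixel."""
    rows, cols = image.shape[:2]
    rows -= rows % factor  # trim so the image divides evenly
    cols -= cols % factor
    img = image[:rows, :cols].astype(float)
    return img.reshape(rows // factor, factor,
                       cols // factor, factor).mean(axis=(1, 3))

hi_res = np.arange(16, dtype=float).reshape(4, 4)  # stand-in for a tiny patch
lo_res = downsample(hi_res, 2)  # 2 x 2 result of block means
```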
We trained on imagery at all
six resolutions and looked
at how model performance
varied. As expected, the orange curve,
which represents the 0.15 meter
data, gave us the best performance. This
is as expected, because the towers
are most clearly seen at that resolution.
Similarly, 0.3 meter data gave
us fairly good performance;
however, resolutions of 0.5 meter or
coarser didn't give us very good results
at all. In fact, you won't even see the
curves for the 3 and 10 meter data, because those models didn't
actually output any results at all,
which seems to indicate that the
model didn't learn anything about energy
infrastructure. To give you an idea of
some examples of what the model
outputted, you can see here on top
examples of imagery at different
resolutions. Here we see two
transmission towers, and the 0.15,
0.3, and 0.5 meter models
all found those transmission towers, but
when we get to the one meter data, we
were only able to find one of the two
towers.
Similarly, for distribution towers, that cutoff
happens much earlier, in fact. In the
0.15 meter data, we are
able to find all three of the
distribution towers there.
When we get to 0.3 meter data,
we found two of the three, and when we
get to 0.5 meter and worse, we don't find any
towers at all. That gives you
an example of which types of
infrastructure we're able to find
at each resolution. Moving on to the
conclusions from our project: what exactly
did we find? Well, in our first experiment
we explored: was this deep
learning approach even feasible? We
found that the RetinaNet model,
of the three models that we tested, gave
us the best performance, with precision
and recall of about 0.65. We think these results are pretty encouraging;
there is perhaps more hyper-parameter tuning that could be done with the models, and
of course adding more data would
improve model performance as well.
Secondly, in the experiment where we
explored the geographic generalizability
of our model, we found that if you train
a model on several different
geographies, it is actually able to
aggregate knowledge about energy
infrastructure across those geographies and
apply it to improve performance.
In the example that we saw here with USA and Arizona: when tested on Arizona, the USA
model actually combined its knowledge of
energy infrastructure from other
geographies to do better than the Arizona model itself.
And then finally, we explored resolution versus performance
and saw that 0.15 meter and
0.3 meter data were not only the best
resolutions, but the only types of
imagery that we were able to get good
results with.
With anything 0.5 meter or
worse, we weren't able to get results.
It's important to note that these
are our particular results for the
particular geography that we tested,
which was Arizona, and that testing
in other geographies might change this
threshold a little bit. So what's next
for us? Well, we are actually releasing
our data and our code on GitHub, so
if any of you in the room would
like to play around with our dataset
and some of our code, you're more than
welcome to do so, and we hope to have that
out fairly soon.
The second is to continue
research in this space. The main
challenge going forward will be: how
exactly do we apply these deep learning
models at scale? We saw earlier the
example of Zambia; we don't quite know
what sort of challenges will come up
when applying this sort of model to an
entire country. We anticipate that some
of those will be how the
model generalizes to different
geographic domains, as we mentioned here,
and also being able to identify
different rare objects. Being able to
identify every type of energy
infrastructure is what will give us
that full picture of what a
country's energy landscape looks like. And
then finally, we want to further
collaborate with some of our partners.
The lab has worked with different NGOs
and development organizations, including
the World Resources Institute, and we
want to continue to share our data
and our findings with them, so that we
can offer a potential solution
toward closing the gap for the over a billion
people who still lack access to
electricity.
So, a couple of people we'd
like to thank: a special shout-out to Dr.
Kyle Bradbury for his tireless effort in
helping us this semester. He's been a
really good mentor for all of us, and I don't
think we would have gotten anywhere close
to where we did without him, so a special thank
you to him. We'd also like to thank Dr.
Leslie Collins, Dr. Mark Jeuland,
Dr. Jordan Malof, Dr. Robyn Meeks, Artem Streltsov, and Bohao Huang for their help as well. In
particular, we'd also like to thank
Bass Connections and the Energy
Initiative for their financial support;
without them this wouldn't have been possible.
That concludes our talk today. Thank
you for listening. If you have
any questions, we'd be happy to take them,
and thank you again for coming.
