I'm Sara Vaughn, I'm your host for this afternoon, and I'm a creator of brands with purpose. We've got a really fascinating session this afternoon looking at the future of tech for the 22nd century. That's a very interesting thing to consider, given that we've got the IPCC and various other organizations saying that we've got a climate catastrophe heading towards us in only 11 years, so I'm very interested to hear what our amazing panel of speakers has to say on that matter. First up is going to be Sims. Welcome, Sims. Sims is from DeepMind and she's going to be talking to us about the future of energy.

Thank you very much for being with us today. Climate is one of the most defining issues of our time, and at DeepMind we are working on two projects that have proved that you can actually apply machine learning to help solve some of the challenges that contribute to climate change. There are lots of areas where machine learning can be applied to climate. This slide is on mitigation strategies; many of you will be aware there's also adaptation. But within all of these areas, and there are more, our team focuses specifically on energy. Even within this sector there are lots of different places where we could apply machine learning: there's generation, there's consumption, there's scheduling, there's dispatch, there's fusion. So you can see there's a huge opportunity here, and our team has focused on consumption and generation.
On the consumption side of things, we're working on increasing the efficiency of existing systems, because of course we can't just wipe out all of our infrastructure and start over immediately; we need to be optimizing our current processes while we're building new solutions. On the generation side of things, we're working on increasing the value of renewable energy, specifically wind power. We've done both of these by piloting with Google: first in the data centers for the industrial controls project, next in Google's contracted wind farms for our wind power project.

First we're going to deep dive into the consumption side of things and take a look at what it means to increase the efficiency of industrial systems.
If we think about an industrial system, let's simplify it for the sake of easy maths and assume the system has ten pieces of equipment. Many of you know that complex industrial systems have many more than ten pieces of equipment, but imagine that on each of those ten pieces of equipment there are ten set points, ten things you can change to adjust for energy efficiency in the system. That's already ten to the ten, or ten billion, possible configurations of that plant. I don't know if that sounds overwhelming to anyone else, but that's a lot of options if I'm a facility manager imagining having to go through all of them to find the most energy-efficient setup; those are just so many permutations.
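
(A quick back-of-the-envelope check of that number as a toy Python snippet; the ten-equipment, ten-setting plant is of course the simplified example from the talk, not a real facility.)

    # Toy illustration: 10 pieces of equipment, each with 10 possible settings,
    # gives 10**10 possible plant configurations.
    pieces_of_equipment = 10
    settings_per_piece = 10
    configurations = settings_per_piece ** pieces_of_equipment
    print(f"{configurations:,}")  # 10,000,000,000 -- far too many to search by hand
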
But what do these kinds of systems have? Even though they're immensely complex, they're also very data-rich environments. So all of a sudden we're starting to see the perfect testbed for AI: we've got a data-rich environment, we've got a measurable reward, energy efficiency, and we've got specific actions, or what we call in AI an action space, specific things we can control to adjust for efficiency. On top of all of this, why not start with data centers? Data centers consume 3% of the world's energy, and they're also, again, data-rich environments. The cooling systems are particularly interesting, because if you think about an industrial process you have to drill down into one, so we thought, why don't we start with HVAC systems? For data centers, cooling is incredibly important. Imagine how hot your laptop gets when you're streaming videos, or watching cat and dog videos on YouTube; now imagine that your laptop is the size of a small village, and how hot that would get. There are literally chillers the size of buses in these data centers, so being able to control the cooling is just an immense project. So what did our team actually do?
Well, to do this we took over 2,500 data streams from across Google data centers, and an action space, things we could control, of about 20 different actions. We built an AI system that took a snapshot of the Google data center every five minutes, so we understood the state of the system every five minutes. We then cleaned that data and prepared it to be fed into the models. Our models then generated a set of recommendations and an hourly forecast of energy efficiency: which set points to change, on which pieces of equipment, and by how much, in order to increase energy efficiency for the next hour. The system also satisfied some very rigorous safety constraints, because obviously we want to make sure we're protecting the equipment as well as the safety of the entire system. We then gave those recommendations to human operators, the data center managers and facility managers, to manually implement every hour on the hour.
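
(A minimal sketch of that recommend-and-review loop in Python. Every name here, the sensor reader, the model, the safety check, the operator hand-off, is a hypothetical placeholder standing in for systems the talk only describes at a high level, not DeepMind's or Google's actual code.)

    import random
    import time

    SNAPSHOT_INTERVAL_S = 5 * 60    # state captured every five minutes
    RECOMMEND_INTERVAL_S = 60 * 60  # recommendations issued every hour on the hour

    def read_sensor_snapshot():
        # Stand-in for the ~2,500 real data streams (temperatures, pump speeds, ...).
        return {f"sensor_{i}": random.random() for i in range(2500)}

    def recommend_set_points(history):
        # Stand-in for the learned models: proposes ~20 set-point changes
        # predicted to improve energy efficiency over the next hour.
        return {f"set_point_{i}": random.uniform(-1.0, 1.0) for i in range(20)}

    def passes_safety_constraints(actions):
        # Stand-in for the rigorous checks protecting equipment and the whole system.
        return all(abs(v) <= 1.0 for v in actions.values())

    def run_loop(hand_to_operator):
        history, last_issued = [], 0.0
        while True:
            history.append(read_sensor_snapshot())
            if time.time() - last_issued >= RECOMMEND_INTERVAL_S:
                actions = recommend_set_points(history)
                if passes_safety_constraints(actions):
                    hand_to_operator(actions)  # a human implements the changes
                last_issued = time.time()
            time.sleep(SNAPSHOT_INTERVAL_S)
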
This is what it looked like on an average day at a Google data center. You can see where we turn the ML system on there's an immense drop in the amount of energy we're using, and when we turn it back off, it rises again. For those of you who want to know what that delta actually is, it's 40%. Now this is an amazing amount of savings, and we were incredibly excited, not only by the increased energy efficiency we were able to achieve with our ML system, but also by the fact that the AI was able to teach us new information. This is a quote from Dan Fuenffinger, one of the Google data center operators who worked very closely with our AI system. I won't read the quote, because you all can do that, but he notes here that the AI system was not only able to deliver savings, but also to teach us new information about how to control the system and to take advantage of what it had learned from the data.
He did have some feedback for us, though. He said, okay, before you came in, these systems were usually run by pre-programmed rules and heuristics; we got to operate in a very supervisory capacity and use our domain experience to make sure the system was running safely. With this AI optimization it's great, we have 40% energy savings, but I'm having to go in and manually change all of these set points every hour on the hour, and that is a lot of work. So they asked if we could build a direct control system, and we said, yes, okay, we think we can do that.
If you look at this, the first part of the system is exactly the same: we took a snapshot of the data center from all of those sensors every five minutes, cleaned the data, and fed it into the models; the models again generated a set of recommendations on which set points to change and by how much, based on improving the energy efficiency over the next hour. But it's the next part that changes, because although we continue to go through the safety constraints of the cloud-based infrastructure, we also go through the safety constraints of the physical infrastructure before directly implementing those changes in the data center system.
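
(Building on the sketch above, this is roughly how the direct-control variant differs: the same recommendations, but gated by two layers of safety checks before they are applied automatically. Again, all names are hypothetical, and what happens when a check fails is not described in the talk, so the fallback here is an assumption.)

    def passes_cloud_safety_constraints(actions):
        # Stand-in for the checks run in the cloud-based infrastructure.
        return all(abs(v) <= 1.0 for v in actions.values())

    def passes_physical_safety_constraints(actions, snapshot):
        # Stand-in for checks against the physical infrastructure's own limits.
        return snapshot is not None

    def direct_control_step(history, snapshot, apply_to_plant):
        actions = recommend_set_points(history)
        if (passes_cloud_safety_constraints(actions)
                and passes_physical_safety_constraints(actions, snapshot)):
            apply_to_plant(actions)  # implemented directly, no manual step
        # else: do not apply -- the exact fallback behaviour is an assumption
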
Now this is huge because, to our knowledge, this is the first time we've seen an industrial facility of this size have its cooling directly controlled by AI, and because of that we implemented no fewer than eight new safety mechanisms to ensure the system operated exactly as we intended it to.
For the sake of time I won't go through each one of these individually, but if you're curious, please talk to me afterwards; I'm happy to geek out with you over them later. This safety-first system delivered 30 percent energy savings, and that number is growing. Some of you might say, hey, 40 percent with the human implementation and 30 percent with direct control, what's going on with that ten percent? Well, you have to remember that we constrained the system with additional safety requirements, and this number is growing: these are our initial findings, but the more we allow the system to explore, and the more time passes and we're able to loosen those safety constraints, the higher this number will get. In fact this is my favorite graph, because you can see the blue line as the training data we've had, and you can see the amount of energy we actually used decrease. In other words, the AI is getting better over time: the more data we have, the better our system gets and the more it learns.
So now, hopping over to the generation side: that was how we were improving the efficiency of existing systems using an AI system in an industrial setting. Now I'll talk a little bit about how we have increased the value of renewable energy, specifically wind power. One of the problems for wind power is intermittency. Those of you who are familiar with renewables will know that you can't schedule renewable energy the way you can fossil fuels, because we're simply, in this case, at the mercy of the wind: sometimes the wind blows and you have power, other times it doesn't and you're stuck, whereas fossil fuels can deliver a set amount of power at a set time; they're just a bit more reliable. So our solution was: what if that wasn't the case? What if we could use AI to predict wind energy output, and then use those predictions to schedule and make more reliable commitments to the grid?
This is what our system actually looked like. We took as inputs global numerical weather forecasts, as well as historical power production data from the wind farms, and fed them into a neural net. I should mention we're applying this to 700 megawatts of wind power, which is enough to power a small-to-medium-sized city, so this is a massive amount of wind power that we're applying the system to. The output from the neural net, for all of that power, is a future wind power output: predictions that say how much power we think you're going to have at each hour during the day, and we're doing this about a day or so in advance. This is important because it means we can make commitments to the grid in advance, and there is a premium for that dependability and reliability.
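
(A minimal sketch of the day-ahead forecasting idea, using a generic scikit-learn regressor on made-up data; the real system's architecture, features, and horizon are not given in this talk, so treat every number and shape here as an assumption.)

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    weather_features = rng.normal(size=(10_000, 12))    # e.g. forecast wind speed, direction, pressure
    power_output_mw = rng.uniform(0, 700, size=10_000)  # historical production, 0-700 MW

    # Train on the past...
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300)
    model.fit(weather_features[:-36], power_output_mw[:-36])

    # ...then predict hour-by-hour output roughly a day ahead (36 hours assumed here),
    # which is what allows committing that power to the grid in advance.
    day_ahead_forecast_mw = model.predict(weather_features[-36:])
    print(day_ahead_forecast_mw.round(1))
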
You can see here the blue line is our prediction, what we were giving about a day or so in advance, and the grey line is the ground truth, and you can see that they follow each other pretty closely. What I also love about this graph is that you can see the immense volatility of the wind in just a matter of hours: in some of these spikes you can see we go from producing 0 megawatts of wind energy all the way up to 200, which is a massive amount of power. The truth of the matter is that we're never going to be able to completely eliminate the variability of wind, but with an AI system we may be able to make it a bit more predictable and remove a bit of the complexity. And we have indeed seen that: we were able to increase the value of renewable energy by about 20 percent, compared to the baseline of not being able to make day-ahead commitments to the grid. Now 20 percent is really exciting for us, and if we're able to increase the economic value of wind power and make it more competitive with fossil fuels on the energy markets, what we're hoping is that we can also speed the adoption of renewable energy and help to decarbonize the grid.
Since this track is about the future of technology, and this is a technology we have today that is immensely impactful for the future, I just want you to put on your thinking hats for a minute and think about that industrial controls project and the increase in the efficiency of industrial systems. Industrial systems are right now consuming 54 percent of global energy; imagine if we could achieve that 30 percent savings on 54 percent of global energy, and what that would mean for carbon emissions reduction.
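
(My own back-of-the-envelope arithmetic on the two figures quoted in the talk, not a published estimate:)

    industrial_share_of_global_energy = 0.54
    achievable_savings = 0.30
    print(f"{industrial_share_of_global_energy * achievable_savings:.0%}")  # roughly 16% of global energy
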
The same thing with renewables: if we were able to adopt this technology on more wind farms worldwide, think what that would mean for making wind power globally more competitive with fossil fuels, and hopefully for decarbonizing grids worldwide. And like I mentioned earlier, AI gets better over time, so I think we have a lot to look forward to. Thank you very much.

That is a fantastic question. Applying machine learning in the real world means that you're working with real-world data, and for those of you in this industry, you know that real-world data is very, very messy. So I would say data quality is our biggest challenge; we spend 80 percent of our time cleaning data. We're already working with policymakers, and there's some really interesting work going on in the UK, actually, with the Energy Data Taskforce, working on data standards to improve the quality of data in this industry. We think that work is fantastic and we're really looking forward to seeing those recommendations when they come out, because, yes, data quality is definitely our biggest challenge. Thank you. Thank you.

I'm going to have more of a transportation focus here, on ways that AI can help us with that. Before I start talking about autonomous vehicles and intelligent machines (maybe the first slide... oh, sorry, it's up to me to run this thing), I'd like to talk a little bit about the image on the right-hand side of that screen. That's an output from our software tool called Poplar, and it's basically consumed by our intelligence processing unit, or IPU. While this is, in detail, comprised of millions of parameters and relationships and tens of thousands of programming tasks or programs, it's kind of misleading to describe it as simply a program; we actually refer to this more as a knowledge model. Graphcore is implementing hardware solutions, the IPU, and software tools to basically provide a brain for computers, to process information more in the way a human does, and my role at Graphcore is to work with the automotive industry and this new processing approach to enable autonomous vehicles. So we're basically teaching machines how to drive, which leads me to a little bit of a personal story.
Just to give you some personal background: before I got into AI and automotive and semiconductors, even before I got my PhD in plant physiology of all things (I don't know how I got here from there), I was a driving instructor. In my early 20s, for about three years or so, I taught humans how to drive, and I think that gives me an interesting perspective on what it takes to teach a machine how to drive. It was kind of a crazy job, but what was interesting was the learning process the students would go through. When they first got into the car, the driving task was an assemblage of disconnected tasks they needed to put conscious attention to: shifting gears, managing the clutch versus the gas pedal, being aware of where their hands were on the steering wheel when they turned a corner, signaling, looking at their mirrors. But at some point in time, magic happened, and they would transition from the point where this was just an assemblage of separate tasks to an integrated skill, and they became drivers. How much time that took was very dependent on the student, and it was very age-dependent actually, but regardless, over some amount of learning and experience, they became drivers.
When we look at the same process with machines, it's a very different sort of task, and before we get into the task of teaching machines how to drive, it's important to familiarize ourselves with some lingo. This is a standard way of looking at the different levels of autonomy, beginning from level zero up to level five, which is fully autonomous driving. From level zero to about level two it's really more about driver-assist functions: emergency braking, lane keep assist, where the car helps keep you in the lane. These are all things I used to do as a driving instructor to keep the student in the lane, and I had a brake so I could hit the brake if there were problems; now the car is taking over those kinds of driver-assist functions for you. It's level four and level five where that magic now needs to happen and the machine needs to begin to take over, especially level five, where this is really car-as-a-service or robotaxi, where there may not even be controls for a human driver to take over.
So how close are we to that magic happening, how close are we to level five? Well, it's actually taking a little bit longer than we thought; we're probably a couple of years away from where we thought we would be a couple of years ago, and a lot of that has to do with the fact that the industry has been in a state where we just don't know what we don't know. The focus largely has been on perception, on teaching the machines in the vehicle to build a three-dimensional map of what's around them, and it's only now that we're beginning to get into more sophisticated tasks like reinforcement learning and predictive algorithms, so that the machine can begin to anticipate what's going to happen around it.
To go into a little more detail around what we don't know: one thing we do know is that there will be an ongoing need for legacy processing, so we'll need conventional processing to continue to interface with the vehicle and to run low-level application processing. But on the AI processing front, up to now it's been mostly about acceleration, and mostly about accelerating perception tasks. We now need to move away from just accelerating perception processing and really begin to develop the machine that learns and anticipates its environment, really moving towards a learning machine, and we don't know what those final algorithms will look like, or the processing requirements they're going to have.
As far as the sensors go, again there's a lot we don't know about the final mix of sensors. What we do know is that it continues to increase: there's an increasing amount of radar, with more raw data from radar; an increasing number and resolution of cameras, 8-megapixel, 12-megapixel cameras surrounding the vehicle; and in the case of robotaxi and L5 particularly, some sort of scanning lidar, a horrendously expensive, tens-of-thousands-of-dollars scanning lidar on top of the vehicle, spinning around, constructing a high-resolution point cloud to help map around the vehicle. The whole purpose of all of these sensors is to build a very detailed three-dimensional map, an environmental model it's called, around the vehicle. And I find this a very interesting differentiator between the machine student, as it were, and the human driver, because when a human driver jumps in the car,
they've got one camera, a forward-facing binocular camera, supplemented by three mirrors, really small mirrors with big blind spots, some ears to listen for issues, and the ability to rotate that camera 90 degrees one way or the other, using this really cool feature called peripheral vision, to check the blind spots. And that's it, because the human driver is not concerned with building an elaborate, extensive, exhaustive three-dimensional map; they're trying to build an extended kind of situational awareness, which is different from a detailed 3D map. Situational awareness in the autonomous vehicle translates into this detailed 3D map, which is very different from the situational awareness of the human student, who needs to make decisions based on very imprecise, sparse data. Situational awareness then means learning from experience, where that experience happens to be very sparse and imprecise data. So we really need a machine that can learn from experience; we're already beginning to do that with algorithms, but we need to really build a machine that is good at that, one that is actually designed from the ground up to do that.
So how difficult is it to build an intelligent machine? Well, it's actually pretty difficult, and people have been trying to do it since the 40s, but it's just in the last five years that we've been seeing real advances. One of the things helping us is that we now have increasing amounts of data, the right kind of data, and the more data you have, the better the machine learns. We also have better compute infrastructure to allow us to actually run these massive amounts of data and learn from that experience. But the problem is that you really need the right sort of compute platform, a new kind of compute platform for what we would call compute 2.0, because processors really are not getting any faster. Up until about 2008 or so we were on a really nice ride, where as we went down in process geometry and our transistors got smaller and smaller, we could run our processors faster and faster, for free, with no increase in power. But all of a sudden in 2008 that changed and it began to flatten out; we can't run faster without incurring big power costs.
Since then, the industry has played with different kinds of parallel processing techniques, but what is really required is an entirely new way to approach the problem, and that means sometimes it's not just evolutionary, it's a real jump. For example, if you look at propeller-driven airplanes, it doesn't matter how much horsepower you put in that engine or how aerodynamic you make the airplane, you can polish it all you want, there's only so fast you can go; you need to make a jump, you need an entirely different kind of technology to go faster, to break the sound barrier and beyond. Likewise with processor architecture: Graphcore began with a clean sheet of paper and looked at what was required for machine intelligence processing; if you were to design this from scratch, what would that look like?
Well, to answer that question we have to look a little more closely at what machine intelligence processing is all about. What we're trying to do is learn from data: we want to be able to extract parameters and features, and find the relationships, and the strengths of those relationships, amongst those parameters, and extract that from the data. We do that by constructing graphs in high-level machine learning tools, and those graphs then extract the parameters from training data, so we're able to construct knowledge models like this one. Now these knowledge models are naturally represented as high-dimensional graphs of tensors, and when these high-dimensional graphs are saved into computer memory, they're spread out in a very sparse manner across memory. So what we need is a computer that's able to work with that kind of sparse data: massive memory bandwidth to support tens of thousands of processing tasks running in parallel, and the ability to scatter and gather data from widely spaced areas in memory very rapidly, and in parallel. And that is essentially what's been built with the intelligence processing unit: it's a machine intelligence graph processor.
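
(To give a feel for the scatter/gather access pattern being described, here is a toy NumPy sketch of gathering values from widely spaced memory locations and scattering updates back. It only illustrates the pattern; it says nothing about how the IPU actually implements it.)

    import numpy as np

    memory = np.zeros(1_000_000, dtype=np.float32)  # one big flat address space
    indices = np.random.default_rng(0).choice(memory.size, size=10_000, replace=False)

    # Gather: pull parameter values from scattered locations in one vectorized step.
    gathered = memory[indices]

    # ...do some work on them (a dummy update here)...
    updated = gathered + 1.0

    # Scatter: write the results back to the same scattered locations.
    np.add.at(memory, indices, updated - gathered)
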
This becomes even more important as we begin to move to more advanced algorithmic approaches like reinforcement learning, where you're working with things like long short-term memory models, or LSTMs, which are even more sparsely populated within memory; they're even more complex, and conventional processor architectures have an even tougher time with them. So as we move towards reinforcement learning and machines that learn, this kind of processing architecture will enable us to innovate in ways we haven't been able to before. So where does that take us, back to the original question of when we will have machines that actually make that transition and are able to drive, the way that I would define driving? Well, if I put my driving instructor hat back on for a second, I would say it's pretty hard to tell, because like I said, it really varies from student to student, and this is a kind of student, a machine student, that I really don't have any experience with. But one thing I can say, one thing I do know, is that if it's able to learn from experience, if it's able to make that transition where it's able to anticipate rather than react to what's going on around it, to anticipate what's going on and to some extent take control of the driving environment around it, then there's hope that it will be able to make that kind of transition, and with our processing architecture and our approach at Graphcore, we hope to make that happen. Thank you very much.

Yes, so the question, I don't know if you heard it, is when do we expect to see them actually driving around, where we have to avoid them? Well, I get asked that a lot, and I've been thinking about it within the context, again, of driving instruction. I remember students would sometimes, and I couldn't help it, just learn what they needed to learn to pass the test; they would pay the cost to go out on lessons and just learn what they needed to pass the driving test. They weren't drivers, but they passed the driving test and were then legally allowed to go out on the road. I think the first wave of autonomous vehicles will basically pass the driving test, but I think it's going to take some time before they really transition to what we might call drivers, that can anticipate what's going on so you can actually sit back and relax. Thanks. Thank you.

Is there a clicker here? Okay, thank you. Okay, here's the situation: the world's population is about to explode. We're currently 7.7 billion people on the planet, and within only thirty years we're going to hit 10 billion people. That's faster growth than we've ever experienced before, and nearly all of those people will want to live in cities. But our cities are nowhere near ready for this. Think about it: most of us will experience more than 20 percent more people in our cities, in our neighborhoods, in our lifetime. And here's the dilemma we're facing. There's no denying what's happening with the climate; the data is clear, the last five-year stretch has been the warmest since records began, and we're seeing these pictures of huge fires and historic storms and flooding. But here's something you probably don't know: more than 40 percent of carbon emissions come from buildings, and buildings' demand for energy keeps growing on a global scale. So we know we need to reduce carbon emissions, but at the same time we need to build more. How do we tackle this? How do we make room for two billion more people in 30 years
without making our planet inhospitable? We have to find a way of designing better, smarter, and more sustainable cities, and we need to do it now. This means that urban planners and architects and real estate developers need to be more creative than ever before. They need to find new ways of meeting this growth, of building denser without reducing quality of life; they need to maximize the use of space while minimizing environmental impact, and that's a tall order. But there's a big problem: the demands of new development are more complex than ever before, and our tools haven't kept up.

Before I give you the full picture of how I got here, I want to take you a few years back in time, to five years ago, when I was working as an architect myself and was asked to dream, to design my dream project. It was a few hundred family-friendly apartments in an urban center, and since the site was in an urban context there was all this complexity, all these things I needed to take into consideration. So I did what everybody does in that situation: I started consulting the experts. All of these experts knew a tremendous amount about their own fields. I had meetings with structural engineers, daylight calculation experts, acoustic engineers, geotechnics experts, experts in accessibility and fire protection, even grocery store experts; I think I counted more than 15 different experts who all chimed in with advice on how to solve this. But the problem was that each one of them could take days or even weeks just to get back to me. The daylight expert could advise me to place the buildings in a way that would maximize daylight, but that would open up for more traffic noise; then the acoustic engineer would say, here's how you should place the buildings and you'll solve the traffic noise, but that left the area in shade for most of the day. It just didn't add up. What I needed was something that could show me the advantages of each approach, and also how they all added up, so that I could determine the best overall design.
Everywhere else I looked... I mean, I was in a situation where I felt like this system was just outdated. I looked at other industries and saw that digitization was boosting productivity, and construction was falling behind; it was a slow-moving animal, actually the last living dinosaur. I think even agriculture and hunting are more digitized than construction. So I told my friend Anders, who is actually here today, because I was up against time; time was running out, I couldn't see the full picture here, and it was really frustrating for me. We started discussing this, and for a fleeting moment we thought we might solve it by putting it into a spreadsheet, but we soon realized we had to build something far more advanced. Now luckily, Anders had a friend who was working with building those kinds of technologies to help solve large organizational problems, and his name was Karl. Karl thought we were crazy; he said, this is probably impossible, but if it is possible, it's going to change everything, so he wanted to give it a try. Our idea got us a grant, which made it possible for us to build a prototype, and guess what, it worked. It got us our first customer, and then another customer, and all of a sudden we were working with the most creative and credentialed architectural firms in Scandinavia, and also some of the largest residential developers in the region, some of them even global players. Now I'm not going to give you the picture that this all happened overnight; it's been three years of hard work and building, but that far-fetched idea is now a real product and a company, and it's called Spacemaker.
Spacemaker is, we believe, the most advanced computational design platform using artificial intelligence to solve real-world construction projects across the globe, and for the first time in a public forum like this, I'm going to show you how it works.
First, you input all the available data and design information related to your project: things like the location, the zoning, height restrictions, 2D and 3D versions of your current design. Then you start setting the criteria, what's most important for you: you might want to maximize daylight, minimize traffic noise, and maximize energy efficiency. What happens is that the system starts processing through billions of possible design options and then returns the best possible layouts. There are more than 80 thousand computers working in parallel in the cloud, processing all of these design options that you can just click on; each one of them you can open in 3D, and on the side here you can click on close to a hundred different analyses, each of which would otherwise take hours or days or even weeks, with just the click of a button. What's even better is that at the bottom here you can see all of the best options within one subject overlapping, so it's actually showing you where the best options are for each design variable, but also how it all comes together. This is actually showing you the big picture in a way that simply wasn't possible before.
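
(A very rough sketch of the generate-and-score idea behind that workflow: propose many candidate layouts, score each one against weighted criteria, keep the best. The layout representation and the scoring functions here are invented stand-ins, not Spacemaker's actual analyses.)

    import random

    def random_layout():
        # A "layout" here is just eight buildings as (x, y, height) tuples.
        return [(random.uniform(0, 100), random.uniform(0, 100), random.randint(3, 12))
                for _ in range(8)]

    def score(layout, weights):
        # Pretend metrics standing in for real daylight / noise / energy analyses.
        daylight = sum(h for _, _, h in layout) / len(layout)
        noise = -min(x for x, _, _ in layout)
        energy = -sum(abs(x - y) for x, y, _ in layout) / len(layout)
        return (weights["daylight"] * daylight
                + weights["noise"] * noise
                + weights["energy"] * energy)

    weights = {"daylight": 1.0, "noise": 0.5, "energy": 0.8}  # the user's criteria
    candidates = [random_layout() for _ in range(100_000)]    # billions, in the real system
    best = sorted(candidates, key=lambda l: score(l, weights), reverse=True)[:10]
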
Isn't this cool? I think so. And I should say that a big part of this success over these three years has come from the people behind it. We have our headquarters in Oslo, but we have people in Sweden, in Barcelona, here in the UK, and also in Cambridge, Massachusetts, right next door to MIT. MIT is really acknowledging the importance of computational tools in urban planning; there's a renowned professor there who once said that the only thing harder than launching a rocket is building a city. At Spacemaker, we believe that by using artificial intelligence we can give architects powers they never had before. An architect who co-founded one of the most credentialed architectural firms in Scandinavia is using Spacemaker, and he shares here his experience of using it: it's all about the right analysis from the start; it helped us to see that if you combine the good things here with the good things over there, then you get a masterpiece.
And this is an important point, because Spacemaker exists to empower architects. It means they can answer briefs with more confidence and in less time; by allowing computers to do the computation, architects can be more creative, they have more time to be creative. As an architect myself, I have to say this is the tool I dreamt I had, and I'm very excited to see more and more architects and companies, not only in the Nordics but also outside the Nordics, using it. It's been a phenomenal journey, and I'm also thrilled to share that we just raised 25 million dollars from two venture capital firms, Atomico and Northzone, to help us realize this potential. That means we're growing, we're hiring, and we're designing Spacemaker to play an important role in shaping the future that we want. I'm here now, so if you want to get in contact and hear more, just reach out, or you can contact us at Spacemaker AI. Thank you for listening.

Of course, yes, that's a good question. One of the largest construction companies in the world is actually using this, and they just finished a project where they're designing, I think, around 600 apartments. They used it together with their architect, and they were able to discover more than 70 extra apartments, which is 30 million in value. But what's even more exciting is that they actually increased the living quality of the overall project, and also of the worst apartments in the project.
