>>Ryan: Hi,
my name is Ryan Brucks,
and I am the Principal
Technical Artist at Epic Games.
Today we're going to be talking
about Fortnite and some
of the ways that it uses new
and upcoming Unreal Engine
features.
We will take a look at what it's
like to use some of these tools
to make changes inside
the Fortnite world,
and that should help
give an idea of what
the tools are capable of.
And in some cases, we
will talk about how
these tools might apply to different use cases outside of just Fortnite.
Let's jump in.
Before we talk about
any specific tools,
I'd like to talk about the
role that we see tools playing
in general, and that's
to help empower artists
and designers to implement their vision in the Engine.
So some of the goals for our tools are to help with consistency and manage the boring stuff,
while streamlining
the creative process.
So, with that in mind, what
does this mean for Fortnite?
Well, Fortnite has a unique
set of development challenges,
such as a large open world that's frequently updated,
and we want those updates
to feel meaningful.
The Fortnite map was
pretty consistently
updated every season
from seasons one through ten.
Often large parts of the island
were redesigned or changed
to a new biome type,
such as snow or desert.
However, for Chapter 2
the development challenges
were on another scale.
We needed to create a
completely new island
while dealing with
new tech challenges,
like a more diverse terrain with
a gameplay-interactive water
system, all in a
time frame not very
different from one of our
usual seasonal update cycles.
And a quick disclaimer that
Fortnite's world is still
largely crafted by hand, with love, by our designers.
The open world tools
here are designed
to help them get their
vision into the Engine
more efficiently.
So we'll start with
a topic breakdown
so you can hopefully find your
way through this recording
a little bit more easily.
We're going to start
with landscape.
We're going to talk about
the new virtual texturing
system used in Fortnite, the
new landscape layer system,
the new landscape
splines allowing
non-destructive editing, and
custom Blueprint Brushes.
Then we are going to talk
about the new water system.
This is a new system
coming in Unreal Engine 4.26 that allows integrated terrain carving, fluid simulation, and gameplay interaction, including waves and flow maps.
Next up we will talk about how
Fortnite is using the new
SkyAtmosphere component
from Unreal Engine 4.24,
as well as give a quick
preview of the upcoming
volumetric cloud system that
will ship with Unreal Engine 4.26.
And finally we
will end by talking
about some general
editing tools,
and that includes things
like placing objects
in the world such as trees,
roads, and buildings,
or breaking up the world
for streaming, et cetera.
Really the bread and
butter of level design.
Some of the tools we will show
are available in the Engine
today, and others are
still a work in progress,
but we wanted to show
some early examples.
Our first feature
category is landscape.
The first feature
to talk about here
is runtime virtual texturing.
Now, this is not strictly a
landscape feature by any means,
but in Fortnite we
are primarily using it
for landscape purposes.
Before jumping into
virtual texturing,
I'd like to point out the
distinction between two
completely different
types of virtual texturing
systems in Unreal Engine.
We have both the streaming
VT and the runtime VT.
The streaming VT on one
hand is streamed from disk
and generally saves memory at
the cost of some performance.
It is not currently
used in Fortnite.
The runtime VT, on the other
hand, is generated by the GPU
at runtime, and it
saves performance
at the cost of some
memory, and it's
generally beneficial for
complex, layered materials.
It is now used in Fortnite
as of chapter two.
The streaming VT, which, as mentioned before, is not currently used in Fortnite, is basically a way
to have more granular streaming
of large textures.
Different regions of one
texture can be streamed
in at different resolutions.
This is basically just a texture
setting on the texture assets
and doesn't involve any
sort of workflow changes
or added capabilities beyond
the increased resolution
that you can load.
The runtime VT,
on the other hand,
allows different
objects, including
both static mesh and
terrain, to combine into one
single virtual texture.
This is ideal for very
expensive multilayered materials
or where custom blend
effects are needed.
Now let's take a look at
how Fortnite implements
the runtime virtual texturing.
The first thing required is a set of runtime virtual texture volumes.
These are volumes that
tell the virtual texturing
system how large each
virtual texture needs
to be in the world.
Now, in Battle Royale, since we
have a lobby island so far away
from the main island,
we decided to give it
its own separate virtual
texturing setup so as not
to waste memory storing
virtual texturing
where we don't need it.
And the second thing to
note is that we actually
have two volumes
per virtual texture.
One is the default
virtual texture
type, which includes base color,
normal, roughness, and specular,
and the other is
world height, which
includes the world
height of anything
inside the virtual texture.
Here we see the main volume
for the main island, and then
the second volume
for the main island,
which includes the terrain
and virtual texture height.
These are the asset
details for each
of the runtime virtual
texture assets.
Notice the one on the left
is set to base color, normal,
roughness, and specular.
This is like having
a simplified material
attribute set all in one asset.
The one on the right is
set to world height,
which gives access to the height
of anything in the runtime
virtual texture to any
material in the world that
reads the RVT.
This shows the implementation
of virtual texturing
inside of a terrain material.
On the left the material
outputs to the virtual texture
using the Runtime Virtual
Texture Output Node.
This is what actually renders to
the RVT, and then on the right
the virtual texture is sampled
using the RVT Sample Target.
Downstream from these material attributes,
any custom blending
effects can happen.
For example, here we have a
rock blending into the terrain
seamlessly by referencing both
the virtual texture material
attributes as well
as the RVT height.
This shows the setup
to get a material blend
from the runtime virtual
texture world height.
First, we sample world position,
the blue channel, or the Z,
and then we sample the runtime
virtual texture world height,
then it's a simple matter
of subtracting them, adding
a bias, and dividing
by our desired length
before clamping
that result.
Now we have an alpha that we
can use to blend multiple materials.
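To make that math concrete, here is a minimal standalone C++ sketch of the blend alpha (this stands in for the material graph nodes; the parameter names are made up for the example):

```cpp
#include <algorithm>

// Sketch of the RVT height-blend alpha described above.
// worldZ      : the Z (blue channel) of the pixel's world position
// rvtHeight   : the world height sampled from the runtime virtual texture
// bias        : raises or lowers where the blend starts relative to the RVT surface
// blendLength : the distance over which the blend fades
float ComputeRvtBlendAlpha(float worldZ, float rvtHeight, float bias, float blendLength)
{
    // Subtract the two heights, add a bias, divide by the desired length...
    float alpha = (worldZ - rvtHeight + bias) / blendLength;
    // ...and clamp the result to get a usable 0-1 blend mask.
    return std::clamp(alpha, 0.0f, 1.0f);
}
```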
This slide shows the results of
roads blending into landscape
using virtual texturing.
In this image there's
no road geometry shown.
The road texture
has been written
into the runtime virtual
texture and seamlessly blended
into the terrain.
Now, in this slide,
we're showing
the geometry for the road
on top of the terrain.
It's not a very big
change, and, in fact,
it's not really needed
for rendering anymore.
The only reason
we keep it around
is because the road
collision gave nicer results
and feel to the vehicle
driving physics,
and of course if we're
going to have the collision,
we need to have the
geometry match as well.
This shows the wireframe
of the spline mesh.
A useful debug that the level
designers on Fortnite added
is the ability to look at
the runtime virtual texture
in a split-screen
comparison mode.
This basically allows
outputting both
the runtime virtual texture
and original material
using a split-screen blend.
Here we are visualizing the
split-screen comparison.
On the left we're showing
the virtual texture
on with the seamless
blend to the terrain,
and on the right the
virtual texture is off,
showing the geometry itself
with a separate material
and no seamless blend.
Let's go ahead and take
a look at the differences
in shader complexity with the
virtual texture on and off.
So we'll be looking at the
whole island from an above view.
With the virtual
texture disabled,
we can see a fairly
high shader complexity.
Now, these red values start
around 500, 550 instructions.
In those pink sections over the water, we see somewhere over 1,000 instructions.
With the runtime
virtual texture enabled,
we have much lower
shader complexity
and a much more
consistent result.
We no longer have some terrain components with higher complexity just because they happen to have more layers painted.
For reference, green indicates
an instruction count of around 200 or under.
Now, for performance, we don't
have exact metrics for you
today, but we have
some approximate cost/benefit numbers on the PlayStation 4.
It costs us roughly
one millisecond
to update the runtime
virtual texture,
but in a typical view with
a lot of terrain on screen
it can save us around three milliseconds, and that's view-dependent. It goes up a lot when we're in the skydiving view covering the whole screen with terrain, and it obviously goes down when we're indoors with not very much terrain on screen.
Another useful debug
mode is to display
the runtime virtual
texture output level.
This is kind of like looking
at the mip levels of the RVT.
This is built into the
Fortnite terrain material
as an optional debug
display mode, shown here.
The next landscape
feature to discuss
is the new landscape
layer system.
This adds a stack
of landscape layers,
where each layer contains
a full terrain data set.
That means a set of height
maps and weight map layers
in each layer that you add.
The layer stack
adds flexibility,
and it's great
for transient work, testing one-offs, or making changes that might need to be undone in the future.
This video shows what it's like
to work with landscape layers.
Notice in the landscape tool
we now have an edit layers
group with different layers.
Here we have a base layer
forming the general shape
of the island, and then we
have a details layer on top,
allowing additional
modifications.
Now we can paint changes
inside of this detail layer
and then toggle them
afterwards to see the terrain
without those changes.
And one of the great
things about these layers
is you can simply erase a layer
that's higher up to reveal
what was beneath in the stack.
So if you needed to move those
buildings somewhere else,
you don't have to
worry about the fact
that you destructively
flattened out
the terrain underneath
the building.
You can just erase
those layer edits,
so that's why it's great
for things like integrating
man-made structures
into a natural terrain,
and you can also, as
shown here, perform
smoothing and other operations
inside of a terrain layer.
Next we're going to be talking
about the new landscape splines.
These allow us to
non-destructively carve
the terrain, and
these are actually
built into the new
landscape layer system.
In the previous
video, you might have
noticed that there was a splines
layer that was grayed out,
and that's because the
new landscape layer
system reserves a layer for
splines if you so choose.
And this video shows
what it's like to work
with the new non-destructive
landscape splines.
We select the landscape
spline layer,
and by holding Control and
clicking across the landscape,
we add new spline
points to our road.
Notice that as we add
points, the terrain
is non-destructively
carved in real-time.
I'm going to apply
a road mesh so we
have something a little bit
more interesting to look at.
And now we can select spline
points, and as we move them,
the results update the
non-destructive carving
immediately.
This is really useful for making
small tweaks without having
to erase and redo over and over again, destroying
your previous work.
The landscape splines can also
write to weight map layers.
And we can toggle the effect
of the landscape spline layer
just like the other layers.
Custom Blueprint Brushes
is another feature
that was added along with
the landscape edit layer
system in 4.24.
These are material-based
terrain modifiers that
use the GPU to generate data.
Under the sculpt and
paint modes of landscape,
there is a new
Blueprint Brushes tool.
This is how you select
and add brushes.
Brushes in the world
show up in the list
on the right under Edit
Layer Blueprint Brushes.
An experimental plugin called
Landmass offers some flexible
example brushes, but you can
also derive from the parent C++
class to make a custom
brush from a blank slate.
The class for that is
LandscapeBlueprintBrush.
The image on the right shows
the default brush types included
in the Landmass plugin.
This is an example of the
kind of flexible effects
that are possible with
the custom brush system.
Use shapes to define
biomes and manipulate
the terrain in various ways.
Both height maps and weight
maps can be modified,
and the brushes can be
defined in countless ways
from splines to static
meshes as inputs.
This slide shows an example of
the data flow from landscape
to custom brushes and back.
Landscape initiates a render
function on a custom brush
with the current
terrain data as input.
The brush then performs
GPU material renders
using this data, which results
in render targets as output.
These render targets are then
passed back to the landscape.
This is an example
implementation
of that render function
inside of a custom brush.
The current terrain
data will be supplied,
which can then be
modified using a material
and written to a
render target, then
that modified render
target is returned back
to the landscape
on the return node.
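As a rough sketch of that round trip, here is some illustrative C++ (this is not the actual LandscapeBlueprintBrush API; the types and names are placeholders to show the data flow only):

```cpp
// Conceptual sketch of the custom-brush data flow described above.
struct RenderTarget {};            // stands in for a GPU render target

struct TerrainLayerData
{
    RenderTarget* CombinedResult;  // terrain data accumulated so far in the layer stack
    bool bIsHeightmap;             // height pass vs. weightmap pass
};

class CustomTerrainBrush
{
public:
    // The edit-layer system invokes something like this for each brush in the
    // stack, passing in the current terrain data as input.
    RenderTarget* Render(const TerrainLayerData& Input)
    {
        // The brush runs its material passes on the GPU, writing the modified
        // result into a render target...
        RenderTarget* Modified = RunBrushMaterialPasses(Input);
        // ...and returns that render target so the landscape can continue down
        // the layer stack with the modified data.
        return Modified;
    }

private:
    RenderTarget* RunBrushMaterialPasses(const TerrainLayerData& Input)
    {
        // Placeholder: a real brush would draw materials here. We just pass
        // the incoming data through unchanged.
        return Input.CombinedResult;
    }
};
```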
The output of a custom brush
is completely customizable,
but here is a look at how
the Landmass plugin generates
terrain formations
from simple shapes.
The data flow involved
is to first generate
a mask of a shape
defined by a spline.
An edge detection
pass is then run
on this mask, which stores
edge seed locations.
The Jump Flood algorithm is
then run on the seed locations
to generate a Voronoi diagram,
which can be converted
into a signed distance field.
The shape can be broken up by adding noise to the seed locations.
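For reference, here is a minimal CPU sketch of a Jump Flood pass producing a distance field from seed locations (Landmass runs this on the GPU with materials; this standalone version only illustrates the idea):

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// Input: a binary mask where true marks a seed pixel (the detected shape edge).
// Output: for every pixel, the distance to the nearest seed, i.e. an unsigned
// distance field built from the Voronoi diagram of the seeds.
std::vector<float> JumpFloodDistance(const std::vector<bool>& seedMask, int width, int height)
{
    struct Seed { float x, y; bool valid; };
    std::vector<Seed> nearest(width * height, Seed{ 0.f, 0.f, false });

    // Initialize: seed pixels know their own location.
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            if (seedMask[y * width + x])
                nearest[y * width + x] = Seed{ float(x), float(y), true };

    // Jump Flood: halve the step each pass, checking 8 neighbors plus self.
    for (int step = std::max(width, height) / 2; step >= 1; step /= 2)
    {
        std::vector<Seed> next = nearest;
        for (int y = 0; y < height; ++y)
        {
            for (int x = 0; x < width; ++x)
            {
                Seed& best = next[y * width + x];
                for (int dy = -1; dy <= 1; ++dy)
                {
                    for (int dx = -1; dx <= 1; ++dx)
                    {
                        int nx = x + dx * step, ny = y + dy * step;
                        if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                        const Seed& cand = nearest[ny * width + nx];
                        if (!cand.valid) continue;
                        float candDist = std::hypot(cand.x - x, cand.y - y);
                        float bestDist = best.valid ? std::hypot(best.x - x, best.y - y)
                                                    : std::numeric_limits<float>::max();
                        if (candDist < bestDist) best = cand; // keep the closest seed seen so far
                    }
                }
            }
        }
        nearest = next;
    }

    // Convert the nearest-seed map into distances.
    std::vector<float> distance(width * height, 0.f);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            const Seed& s = nearest[y * width + x];
            distance[y * width + x] = s.valid ? std::hypot(s.x - x, s.y - y) : 0.f;
        }
    return distance;
}
```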
This is just an example
of how the distance
field from a spline can generate
different terrain shapes.
Different effects result
from capping the interior
and raising or
lowering the brush.
This video shows a prototype
of a shape-drawing method
that Landmass uses.
Presets like mountain,
landmass, road, or river
can be selected and
drawn, and then the points
can afterwards be selected
and modified as normal.
Since custom brushes
exist in a stack,
the stack can be reordered
so different objects have
different priority.
If we put the canyon
higher in the stack,
it will carve out the mountain.
If the mountain is
higher, the mountain
will fill up the valley instead.
So one caveat about
custom brushes
is that they are not currently
used in Fortnite, outside
of some experimentation,
but they did form the basis
for part of another system that
we're about to talk about next,
and that's the water system.
For more detail
on custom brushes,
you can see the talk Unreal
Engine Open World Preview
and Landscape Tools from
Unreal Dev Days 2019.
And now for a topic that I've
been waiting for a long time
to talk about.
Water was one of the
biggest new additions
to Fortnite in
chapter two, and we
are excited to be bringing
the set of water editing
and interaction tools
to Unreal Engine in 4.26
as a unified water system.
The water system lets you define
the lakes, rivers, oceans,
and islands using splines.
It includes gameplay interaction
and fluid simulation.
The image here shows how we
broke down the water object
types for the initial version
of water in chapter two.
At the core of the water system
is a new water spline type.
The water spline allows
customizable water properties
to be edited at
each spline point
with interactive
visualization gizmos.
This means we can adjust
things like the depth, width,
velocity, and audio parameters
of the river at each point.
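Conceptually, that per-point data looks something like the following sketch (illustrative C++ only; the real water spline metadata in the engine is more involved, and these field names are made up):

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch of per-point water spline properties.
struct WaterSplinePointParams
{
    float Depth;          // river depth at this spline point
    float Width;          // river width at this spline point
    float Velocity;       // flow speed, used for flow maps and gameplay
    float AudioIntensity; // drives water audio at this point
};

struct WaterSpline
{
    std::vector<WaterSplinePointParams> Points;

    // Values between points are interpolated along the spline, so editing one
    // point smoothly affects the stretch of river around it.
    WaterSplinePointParams Sample(float t) const // t in [0, 1] along the spline
    {
        if (Points.empty()) return {};
        float scaled = t * static_cast<float>(Points.size() - 1);
        std::size_t i = static_cast<std::size_t>(scaled);
        if (i + 1 >= Points.size()) return Points.back();
        float f = scaled - static_cast<float>(i);
        const WaterSplinePointParams& a = Points[i];
        const WaterSplinePointParams& b = Points[i + 1];
        return { a.Depth + (b.Depth - a.Depth) * f,
                 a.Width + (b.Width - a.Width) * f,
                 a.Velocity + (b.Velocity - a.Velocity) * f,
                 a.AudioIntensity + (b.AudioIntensity - a.AudioIntensity) * f };
    }
};
```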
This video shows what it's like
to work with the water body
system.
Water bodies are Actors just
like any other in the world,
and they can be manipulated,
scaled, rotated, or duplicated.
So we can take this river
here, and I'll drag it out
to make a copy of it, and see
how it might look somewhere
else in the world.
You can use that as a foundation
to form another river, for example.
Or we can take a
piece of a new river
that I created there
and I'll drag off
one of its points
to create a fork in
the river, and we'll
have this fork go ahead and
extend out to meet the ocean.
And on the right on
the Details panel,
you'll actually see some of
the terrain carving settings
of the water bodies, which come
from the Landmass plugin that
was mentioned previously.
So the first thing
we'll do here is
select these water
points and move
them down flush with the ocean,
because while the system does
blend the velocity and height, you still want to get
the height of the river
to be close to the ocean.
Now what we did here is select
and show the river width gizmo.
That allows us to select and
modify the width of the river
at each point, and
then here we chose
to modify the edge offset of the
river to give a little bit more
of a shore width.
So the next visualizer
we'll show here
is the depth visualizer.
So that adds a new little
gizmo under each spline point
that we can select and then
drag to modify the depth
of the river at every point.
So I'm going to go
ahead here and modify
all these points to get the
river to be a little bit
more shallow.
And then finally we're going
to show the last visualization
type, which is velocity.
When we turn on velocity, we
get these new arrow gizmos
that we can click and drag
to increase or decrease
the velocity.
And the material
is automatically
set up to render and
generate flow maps
for the whole world, which
allows us to see foam
in the river as we modify it.
And then that also has an
impact on both the gameplay
and the fluid simulation.
So now that we have these river
points looking pretty decent
like we wanted, we might
want to maybe change
the slope of the terrain
outside of the river.
We can do that as
well using some
of the terrain carving options.
So now we'll look at how the
velocity texture is generated.
We render a combined
water texture atlas,
which includes all of
the water information
for a single terrain.
This includes water
velocity, water height,
and terrain height.
This unified texture
is very useful
to have for water materials
and simulation purposes.
The velocity has a built-in flow
map based on the spline data,
and that's what's used to
generate the white foam that we
saw in the previous video.
To render all this
water, we needed
an efficient and
scalable solution.
We wanted a detailed
surface up close,
but we also needed
it to run fast,
so it needed to simplify
aggressively in the distance.
This is handled by the
new Water Mesh Actor.
The Water Mesh Actor
builds a quad tree grid
around all water
features, allowing detail
up close and
simplification far away,
with smooth
transitions in between.
This example shows how the
quad tree grid refines with distance
and gets more detailed as the
camera approaches the surface.
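A rough sketch of that kind of distance-based refinement, purely for illustration (not the Water Mesh Actor's actual code; the subdivision threshold here is arbitrary):

```cpp
#include <cmath>
#include <vector>

// Nodes close to the camera keep subdividing; distant nodes stay coarse,
// giving detail up close and simplification far away.
struct Node { float centerX, centerY, halfSize; };

void Refine(const Node& node, float camX, float camY, int depth, int maxDepth,
            std::vector<Node>& outTiles)
{
    float dist = std::hypot(node.centerX - camX, node.centerY - camY);
    // Arbitrary threshold: subdivide while the camera is close relative to the node's size.
    bool subdivide = depth < maxDepth && dist < node.halfSize * 4.0f;
    if (!subdivide)
    {
        outTiles.push_back(node); // emit this tile at its current detail level
        return;
    }
    float h = node.halfSize * 0.5f;
    for (int dy = -1; dy <= 1; dy += 2)
        for (int dx = -1; dx <= 1; dx += 2)
            Refine(Node{ node.centerX + dx * h, node.centerY + dy * h, h },
                   camX, camY, depth + 1, maxDepth, outTiles);
}
```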
Since we have a
global water texture,
it's possible to
render the whole water
surface with one shader,
but this wouldn't
be ideal for performance.
For example, oceans
and lakes have waves
while rivers have flow maps.
It would be a bit wasteful
to pay the cost of flow maps
on the whole ocean.
We separate the functionality
unique to oceans, lakes,
and rivers into
separate materials.
We then expose
transition types that
enable blending between
those materials.
Transitions are always
between rivers and oceans
or rivers and lakes.
Transitions are the most
expensive material types.
This shows a debug mode of the
water mesh tiles from the water
mesh quad tree.
The tile color shows
the type of tile.
Blue represents ocean,
green is for lakes,
and red is for rivers.
Purple represents a
river-to-ocean transition,
and yellow represents a
river-to-lake transition.
Note that we also
fade out the water
using the water depth
texture to prevent
water geometry from clipping.
Notice that the
swamp is green even
though it's at ocean level.
This is because we nested
a lake into the ocean
and used that to restrict the
swamp's additional material
complexity only to
the swamp region.
With all this detail, we can now
render some pretty nice waves.
We decided to implement
Gerstner waves, which
are a well-known analytical wave formulation based on sine waves. They're useful because the same result can easily be matched on the CPU and GPU, so that gameplay and rendering stay in sync with each other.
This video shows the effect
of combining 16 Gerstner
waves together.
Once you get to this
number of waves,
it starts to build a pretty
realistic water surface.
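For reference, here is a minimal sketch of summed Gerstner waves (a standard textbook form of the formula, not the engine's exact implementation):

```cpp
#include <cmath>
#include <vector>

// Because this is a closed-form formula, the same evaluation can run on the
// CPU for gameplay queries and on the GPU for rendering, and the two will match.
struct GerstnerWave
{
    float dirX, dirY;   // normalized wave direction
    float wavelength;   // distance between crests
    float amplitude;    // wave height
    float steepness;    // 0 = rolling sine wave, 1 = sharp crests
};

struct Displacement { float x, y, z; };

Displacement EvaluateWaves(const std::vector<GerstnerWave>& waves,
                           float px, float py, float time)
{
    const float g = 9.81f;                           // gravity, for deep-water dispersion
    Displacement d{ 0.f, 0.f, 0.f };
    for (const GerstnerWave& w : waves)
    {
        float k = 2.0f * 3.14159265f / w.wavelength; // wave number
        float omega = std::sqrt(g * k);              // angular frequency
        float phase = k * (w.dirX * px + w.dirY * py) - omega * time;
        // Gerstner waves push points horizontally toward the crests and lift
        // them vertically, producing the characteristic sharp peaks.
        d.x += w.steepness * w.amplitude * w.dirX * std::cos(phase);
        d.y += w.steepness * w.amplitude * w.dirY * std::cos(phase);
        d.z += w.amplitude * std::sin(phase);
    }
    return d;
}
```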
But of course
Fortnite is stylized,
and we don't
necessarily want to be
too realistic with our water.
We found a solution there by
limiting the number of waves
and carefully controlling
the wave parameters.
Fortnite only uses six
waves, and mobile is further
limited to four waves.
We sort the waves from
biggest to smallest
so that the two waves dropped
on mobile are the smallest two
and do not have much impact
on the gameplay visual match.
Each water body has its
own set of wave parameters
that we refer to as a spectrum.
This gives a range
of values for things
like amplitude and wavelength
and the distribution between those values.
This is just an example
of how two water
bodies near each other can
have completely different wave
spectrum values, the one on
the left having moderately
choppy waves, and the one on the
right having almost no waves.
This video shows what it's
like to edit the wave spectrum
values for the Fortnite ocean.
First, we will set
the number of waves
to zero to show a
calm ocean, then
we will slowly re-enable
the waves one at a time.
You can see how each
wave after the first few
has a relatively small
impact, but they still
help in the final result.
Next we can adjust
the amplitudes to make
the waves larger or smaller.
We have to be careful to test
after playing with these values
because the waves tend
to look a lot bigger
from the player perspective.
Adjusting the
fall-off changes the distribution
between your small and large
waves.
This lets you say
whether you'd rather
have more of the small waves,
more of the large waves,
or an equal distribution
if the fall-off is one,
and the same goes for
wavelengths as well.
We typically use a
wavelength fall-off
of somewhere around
3 or 4
because we tend to want more
of the small waves versus more
of the large waves.
We can also adjust
the wave steepness.
The dominant wave direction
determines the direction
of the first and largest
wave, which is also
the starting point from which
all the other random waves are
generated.
You can also adjust the amount
of angular random spread
for those secondary
waves as well.
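Here is a small sketch of how a spectrum like that could be generated from value ranges, fall-offs, a dominant direction, and an angular spread (illustrative only; the water system's actual spectrum parameters and math may differ):

```cpp
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

struct SpectrumWave { float amplitude, wavelength, dirX, dirY; };

std::vector<SpectrumWave> GenerateSpectrum(int numWaves,
                                           float minAmp, float maxAmp, float ampFalloff,
                                           float minLen, float maxLen, float lenFalloff,
                                           float dominantAngle, float angularSpread,
                                           unsigned seed)
{
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> uniform(0.f, 1.f);
    std::vector<SpectrumWave> waves;
    for (int i = 0; i < numWaves; ++i)
    {
        // A fall-off of 1 samples the range evenly; larger fall-offs skew the
        // samples toward the small end of the range (more small waves).
        float a = minAmp + (maxAmp - minAmp) * std::pow(uniform(rng), ampFalloff);
        float l = minLen + (maxLen - minLen) * std::pow(uniform(rng), lenFalloff);
        // The first wave follows the dominant direction exactly; the secondary
        // waves spread randomly around it.
        float angle = (i == 0)
            ? dominantAngle
            : dominantAngle + (uniform(rng) - 0.5f) * 2.f * angularSpread;
        waves.push_back({ a, l, std::cos(angle), std::sin(angle) });
    }
    // Sort biggest to smallest so platforms that drop the last waves
    // (e.g. mobile) lose only the smallest ones.
    std::sort(waves.begin(), waves.end(),
              [](const SpectrumWave& x, const SpectrumWave& y)
              { return x.amplitude > y.amplitude; });
    return waves;
}
```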
To help improve the quality
of the water rendering,
a new shading model
called SingleLayerWater
has been added.
This has already made its way
into the public release in 4.24,
actually.
This adds a
SingleLayerWater material node
to the graph that allows colored
absorption based on depth,
as well as scattering
and anisotropy controls.
The water renders
as opaque in a separate water
pass that performs screen space
reflections.
This screenshot
shows the benefit
of having colored absorption,
screen space reflections,
and standard material controls
like roughness and specular all
working properly together now.
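The colored absorption is essentially a Beer-Lambert style fall-off with depth; here is a tiny sketch of the idea (not the engine's actual shading code):

```cpp
#include <cmath>

// Higher absorption in a channel makes that channel die off faster with depth,
// so deeper water shifts toward the remaining colors.
struct Color { float r, g, b; };

Color Absorb(const Color& sceneBehindWater, const Color& absorption, float waterDepth)
{
    return { sceneBehindWater.r * std::exp(-absorption.r * waterDepth),
             sceneBehindWater.g * std::exp(-absorption.g * waterDepth),
             sceneBehindWater.b * std::exp(-absorption.b * waterDepth) };
}
```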
And our last water topic
is fluid simulation.
The water system includes
a built-in fluid simulation
tool that allows
character, vehicle,
and weapon interactions.
The fluid also responds
to the terrain,
such as reflecting
ripples off the shore
and being affected
by river flow maps.
This video demonstrates some
of the fluid interactions
in Fortnite.
Weapons and character movements
disturb the water and the foam
on the surface of the water.
The simulation is performed in
a local range around the player,
and it can be affected by a set
number of other nearby players
as well.
The simulation can also be
affected by the river flow
maps that are generated by
editing the water spline
points.
This causes the ripples
to flow downstream,
as you would expect.
Here a debug command is entered
to show the character fluid
force.
We basically draw a
stick figure shape
by supplying bone locations
and rendering them
as capsules in a material.
Boats use a simpler
fluid force type
that is a texture-based shape, defined by an effects artist, attached to the boat mesh.
With the boat, we can
cause much larger ripples
and clear larger
paths in the foam.
The fluid simulation
helps add to the feel
of the player interacting
with the world
as they drive the vehicle.
Here we take a look at how the
foam erasing effect is done.
Our fluid sim actually has
two channels, a red channel
and a green channel.
The red channel is a
standard fluid solver.
You can actually find a similar
example in the content examples
project by loading the
Blueprint Render to Target map.
For the green channel,
we perform a diffusion
or a blur step that
has a minor feedback
from the red channel applied.
This green channel is then used
to subtract from the opacity
of foam in the water.
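A rough CPU sketch of that green-channel idea follows (the real simulation runs as material passes on render targets; this version and its names are purely illustrative):

```cpp
#include <algorithm>
#include <vector>

struct FluidCell { float red; float green; };

// One diffusion step for the green channel: blur the neighborhood and add a
// small feedback term from the red (fluid) channel.
void StepFoamChannel(std::vector<FluidCell>& grid, int width, int height,
                     float feedback, float diffusion)
{
    std::vector<FluidCell> next = grid;
    for (int y = 1; y < height - 1; ++y)
        for (int x = 1; x < width - 1; ++x)
        {
            int i = y * width + x;
            float blurred = (grid[i - 1].green + grid[i + 1].green +
                             grid[i - width].green + grid[i + width].green) * 0.25f;
            float green = grid[i].green + (blurred - grid[i].green) * diffusion;
            green += grid[i].red * feedback;
            next[i].green = std::clamp(green, 0.0f, 1.0f);
        }
    grid = next;
}

// The green channel is then subtracted from the foam opacity, so recent
// disturbances clear a path through the foam (or the pond scum in the swamp).
float FoamOpacity(float baseFoam, float green)
{
    return std::clamp(baseFoam - green, 0.0f, 1.0f);
}
```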
The swamp biome also
makes interesting use
of the fluid diffusion channel
by making moving objects
clear a path in the pond scum.
And this can provide
some visual interest
when wading through
the swamp as a player
or driving through in the boat.
It also uses the same method
to create a glow effect
in the Slurp Juice that is
spilled throughout the swamp.
And now we will talk
about Sky and Atmosphere.
As of 4.24, UE4 has a new
robust SkyAtmosphere model.
It's a multi-bounce
Rayleigh scattering model,
and while it's
physically based, it's
still highly art
directable, which
was a big requirement to be
able to use it in Fortnite.
And it also scales all
the way from high-end PCs
down to mobile devices.
And as of chapter
two, Fortnite is now
using the new SkyAtmosphere.
Now, it's worth pointing
out that physically-based
doesn't have to be boring.
And what I mean by
that is you can still
just use it as a starting point
for your artistic expression.
I'd like to share my own
experience when I first
started learning about
physically-based rendering
for materials when Disney first
popularized it years back.
Initially, it sounded both very
cool and a bit intimidating.
At the time, making materials
look nice was basically
what tech artists
spent time doing.
So it felt a bit
like PBR was going
to take that away, so to speak.
In talking with
other tech artists,
I wasn't the only person
who brought up that fear.
It was a vague sense that
we might lose some freedom
or some expression.
In reality, the exact
opposite happened.
Now, because of PBR
rendering taking over,
there are so many more jobs
and things for tech artists
to do in the field
compared to before,
and people are tackling
bigger, more complex problems
instead of everyone solving
the same basic problems.
The same thing will likely
be true in other areas
in the future, such as
skies and atmospheres.
Fortnite uses a Time-of-Day
Manager Blueprint
to manage the
SkyAtmosphere values.
Our Time-of-Day Blueprint has
four values for time of day.
We have morning, day,
evening, and night.
Each time of day has
a separate struct
with grouped parameters, controlling multiple Actors like the sun, the SkyAtmosphere, and the height fog.
For the SkyAtmosphere, this lets
us change things like the
Rayleigh and Mie scattering
colors at each time
of day, and fine-tune
that interaction with the
lighting and the fog together.
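As a sketch, the grouped data might look something like this (the field names are illustrative, not the actual Fortnite structs):

```cpp
// One block of per-time-of-day parameters, as used by a time-of-day manager.
struct TimeOfDaySettings
{
    // Sun / directional light
    float SunPitchDegrees;
    float SunIntensity;

    // SkyAtmosphere
    float RayleighScatteringRGB[3];
    float MieScatteringRGB[3];

    // Exponential height fog
    float FogDensity;
    float FogColorRGB[3];
};

// One settings block per time of day: morning, day, evening, and night.
struct TimeOfDayTable
{
    TimeOfDaySettings Morning;
    TimeOfDaySettings Day;
    TimeOfDaySettings Evening;
    TimeOfDaySettings Night;
};
```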
Controlling each time separately
allows artistic control
as a layer on top
of the physical basis.
Here we see the effect of
scrubbing the time of day
preview inside of the editor.
We can see how, even though
this is a physically-based sky,
it still manages to
look fairly stylized
with nice artistic
and stylistic colors.
And here we disabled
the height fog,
and now disabling
the atmosphere.
So you can see we still use
an Exponential Height Fog,
and together the Height
Fog with the SkyAtmosphere
really forms the total
atmosphere that we now
have in Fortnite.
We have another exciting
new feature coming in 4.26,
and that is volumetric clouds.
GitHub users can access
it in the dev-rendering branch.
The volumetric cloud system was
used in the Fortnite Chapter 2
Season 3 cinematic, as well as
the recently shown PlayStation
5 demo alongside our Nanite
rendering technology.
The volumetric clouds use the
material domain called volume.
This lets users
specify their clouds
with the standard
material interface.
This is an example of an
extremely basic cloud material.
The cloud mask logic should
go into the extinction pin.
This graph here just maps
a simple tiling 2D texture
across the sky.
It then subtracts a
bias and multiplies
by a density parameter.
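In plain code terms, the extinction logic of that basic material is roughly the following (standalone C++ standing in for the material graph; the names and the procedural texture stand-in are made up):

```cpp
#include <algorithm>
#include <cmath>

// Procedural stand-in for sampling the tiling 2D cloud texture.
float SampleTilingTexture(float u, float v)
{
    return 0.5f + 0.5f * std::sin(u * 6.28318f) * std::sin(v * 6.28318f);
}

// Map world position to a tiling UV, sample the mask, subtract a bias to
// erode the shape, and multiply by a density parameter.
float CloudExtinction(float worldX, float worldY, float tiling, float bias, float density)
{
    float u = worldX * tiling - std::floor(worldX * tiling);
    float v = worldY * tiling - std::floor(worldY * tiling);
    float mask = SampleTilingTexture(u, v) - bias;
    return std::max(mask, 0.0f) * density;
}
```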
Next we will see
what it looks like
and what it takes to make
it look like a proper cloud
material.
We start by showing the
Volumetric Cloud Actor.
The Actor has some
basic controls
for things like the thickness
of the cloud layer, which we're
adjusting here, as well
as the starting altitude
of the clouds.
And so far this material is
just the simple tiling circle
we saw, but we can open
that material instance
and adjust the bias
parameter to erode the shape
to make the clouds
either larger or smaller.
Now let's take a look at
the material and improve it.
Here's where we left off
from the previous slide.
If we scroll down a
bit, there's a function
for controllable
height fall-off.
Cloud materials have access
to a node called Cloud Sample
Attributes, which
was just selected,
which gives access to
the normalized height
within the cloud layer.
This can be used to
map fall-off functions.
This is a simple
exponential fall-off
for the top and bottom.
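A minimal sketch of that kind of fall-off, driven by the normalized height within the cloud layer (parameter names are illustrative):

```cpp
#include <cmath>

// normalizedHeight is 0 at the bottom of the cloud layer and 1 at the top.
// The result is multiplied into the cloud extinction.
float HeightFalloff(float normalizedHeight, float bottomFalloff, float topFalloff)
{
    // Fade in quickly from the bottom of the layer...
    float bottom = 1.0f - std::exp(-normalizedHeight * bottomFalloff);
    // ...and fade out again toward the top of the layer.
    float top = 1.0f - std::exp(-(1.0f - normalizedHeight) * topFalloff);
    return bottom * top;
}
```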
We can now go back to
the instance and scrub
these values to play with
them and control them
each separately to create
different types of shapes.
Now let's go back to the
material and add some detail.
To do that, we're going
to add a volume texture.
So here we have a texture
sample of a volume texture,
and it just has
world coordinates
multiplied by a tiling factor,
and controlled by a multiplier
for the density.
Then we compile and go back to
the material, looking at it,
and now we have some detail
making some cloud-like edges,
and we can adjust
that detail parameter.
Notice it can be either
positive or negative
for some different
types of effects.
So now we can go beyond that,
and instead of just using
a tiling 2D circle texture,
we can use a texture that has
more of a natural noise in it.
So now compiling the shader
with the noise material,
we're going to have to adjust
the tiling a little bit
to get something reasonable.
So I'll go ahead and adjust
the density and the tiling
really quick, and then
we start to get something
that looks a lot more like a
procedural sky generated just
from some tiling textures.
So now when we get something
closer to where we want,
we can start to play with the
time of day and see how we did.
So you can also get creative
with defining the clouds.
Here's an example of a
texture mask being generated
from a curve so you can
define the height profile
of the clouds more directly.
And then of course you can
take that same approach
from the previous section
and apply a tiling noise
volume on top to
break up the surface
and create a realistic shape
that has detail as well
as a nice macro shape.
And this video shows
a Blueprint being
used to generate a cloud mask by
positioning individual Actors.
As the Actors are
moved, the cloud mask
is updated, which you can see
here in red in the lower right.
This is done by drawing
materials to a render target.
Each cloud object
has its own settings
for scale, noise,
opacity, and others.
Here we scrubbed the
seed of the noise
to try some different looks.
We can also quickly duplicate
a single cloud around the scene
to compose a desired view.
It can help to place some
larger clouds on the horizon
and use the noise
settings to create
a large cluster to easily shape
large regions of the sky at one
time.
So now that we've positioned
some clouds roughly
how we want our scene,
let's go ahead
and start moving around the sun,
which we can do just
by holding Control-L
and moving the mouse,
which is another nice new
feature that came along
with the SkyAtmosphere.
So you can see we're able to get
some pretty nice results just
from dragging a handful of
Actors around the scene,
and then of course we might
decide we want to add some more
clouds just for this other
sunset view over here,
and that can be pretty
satisfying as well.
And of course you're not limited
to realistic clouds, either.
This is an example
of a stylized test,
showing the flexibility of
the volume material, which
means you can try
non-realistic rendering as well
as pretty much anything
that you can imagine.
And now we have reached
our final category,
general editing, and we will
start with grid-based editing.
This is a new tool to help
automatically partition
the world into a grid,
similar to how the Fortnite
in-game mini map is divided
into grid cells A1 through J10.
Grid-based editing
will automatically
ensure that Actors get placed
into the right sub-level.
This feature is still a
ways out from release.
For now, similar results can
be achieved using the World
Composition Tool and it's
tiled landscape import
feature to help create a grid.
This is a debug
display of what it's
like to move an Actor
around with a level grid.
When an Actor is moved
outside of its grid cell,
you can see a new
level is chosen,
which is displayed as a yellow
square in this debug view.
Previously, if you accidentally
moved an Actor outside
of its intended level
grid, this could
have caused a streaming problem,
requiring a level designer
to fix it up manually.
Another upcoming feature to
help with this sort of editing
is one file per Actor.
This is coming in
Unreal Engine 4.26.
This is a new file and
source control paradigm,
and while it's not
used in Fortnite yet,
we expect to be using it soon.
What this feature does is
allow an object's outer
to be defined by
another package.
This is in an effort to
reduce file contention.
So what that means is when you
have Actors stored in a level,
as is the traditional
method used now,
only one person can check out
that level file because it's
a binary file.
You can't merge different binary
edits like you can with code.
So what this means is, if you have a section of the world with different types of objects like trees, houses, and terrain,
typically only one
person is going
to be able to check out that
area and work on it at a time.
With one file per Actor,
since these Actors all
become separate files,
pretty much anyone
can check out what
they need to work on
as long as they don't need to
work on the same exact objects.
Another useful
feature that we added
to help with the development
of Fortnite is Actor Foliage,
and what that allows us to
do is paint down Blueprints
just like they were static
meshes with the foliage tool.
And to do that,
it's pretty simple.
You just create a
new Actor Foliage
type, shown here being created from the Content Browser.
And it's worth pointing out that
in Fortnite very few objects
are just regular static meshes.
Pretty much everything
you see in the world
is a Blueprint that allows
level designer customization.
So this means that previously
almost everything in the world
had to be manually placed
by duplicating or dragging
from the Content
Browser, which was
a little bit tedious
for level designers.
And now we can basically
paint down Actors
like they were foliage.
This shows an
example of what it's
like to paint down Actor
Foliage in the Fortnite map.
Like before, we go to
the Foliage Paint tab,
and we see our
selection of foliage
that we've added to
be able to paint with.
When we select one, we
can see that it now specifies an Actor class.
So we can just click
down and paint these.
And another new mode
that's being shown here
which is really useful is
the new Single Instance mode,
which means you can place a
single instance with a click
without having to worry about
getting the density just right, where you'd otherwise either get no trees or too many trees.
The traditional foliage paint was meant more for painting down density than placing single objects,
but it's still
pretty useful to be
able to plop down trees and
objects with single clicks
around the level like that.
This was definitely requested
by the level designers
and allowed them to use
the foliage paint tool,
whereas previously they weren't
really getting much out of it.
So the next feature
to talk about
is improved
hierarchical scattering.
This adds better procedural
scattering variety.
Large objects can essentially
act as anchor points
and spawn smaller objects
of a different variety.
So Unreal Engine has had
procedural scattering
for a while now, but
you couldn't really
have different nested levels
or have any hierarchy.
This is a debug
view showing what
it's like to spawn objects
using a landscape mask
and have different types of
objects spawned as children.
As we get closer,
we can actually
see that the cylinder objects
are spawning smaller sphere
objects.
Just an example of different
things you could do,
including large rock
spires able to spawn
smaller pieces of
rock scree, and we've
been looking forward
to finding ways
to use this in Fortnite
going forward as well.
So that concludes this talk.
Thank you all for listening.
We look forward to seeing your
feedback and all the things
you guys create with
these new tools.
Thanks again.
