>>Wyeth: Hello, everyone.
I'm Wyeth Johnson, Technical
Art Director at Epic Games.
And today, I want to
talk about the evolution
of our next generation
programmable VFX tool Niagara.
We first unveiled Niagara at the
GDC 2018 session, Programmable
VFX with Unreal Engine's
Niagara, which gave everyone
a glimpse of where
real-time VFX was headed,
a fully programmable,
modular, and performant tool.
We've now used
Niagara in production
on a massive game,
Fortnite, high-end demos,
such as Troll and Chaos,
virtual production, film,
collaborations with our
early adopters, forum users,
and, of course, now
Unreal Engine 5.
We've learned a
ton along the way.
Let's watch a brief recap.
[MUSIC PLAYING]
Before we begin, let's briefly
discuss what Niagara is
and why we made it.
Unreal Engine's legacy particle
effect editor, Cascade,
has served us incredibly
well throughout Unreal's
long history.
Over time, however, we started
to run into limitations.
The behaviors were fixed
function and hardcoded.
And we needed more
flexibility, more power.
And most importantly,
we needed to move
the output of the simulation
into the hands of artists,
like we did in Unreal Engine
3 with the material editor.
Niagara is the
answer to that need.
It's a programmable,
node-based editor,
which allows artists
to control every aspect
of their simulation
and rendering.
Underlying behaviors are
written in a node graph
or, for those highly
technical, directly in HLSL.
And then the resultant behaviors
are cross-compiled to run on both
the CPU and GPU and
can be packaged up
into compartmentalized,
individual functions
called modules, which
can be snapped together
like building blocks to
create complex and intricate
behaviors.
This approach lets
effects artists
work without a deep
technical burden
by placing pre-authored behaviors
into a module stack, which
flows from top to bottom, while
also giving unprecedented power
to technical artists
and programmers
to create anything they
can imagine and share it
with their team.
Niagara's systems are built
from multiple emitters.
Each emitter can be standalone
or an on-disk asset,
which can be art
directed and reused
to build up a library
of common effects
and save artists the time of
rebuilding the same things over
and over again.
These assets also support
chained inheritance
with master emitter behaviors
feeding into child variants
to create a variety
of effects easily
from a single common behavior.
In Niagara, simulation
and rendering
are also decoupled, allowing
a single simulation
to be rendered multiple
times, for instance,
as both sprites and meshes.
Artists can choose
the complexity
of their interactions
with the tool.
They can stay high
level, modifying
the presupplied inputs
on existing modules
with new values or the
included dynamic inputs
such as random numbers,
math operations, and so on.
For the more technical,
completely new behaviors
can be created inside of node
graphs, where existing particle
attributes, such as
position, velocity,
or color can be read in,
modified, and written back out.
Lastly, power users can
write HLSL expressions
directly into module inputs
or node graphs if desired.
Niagara is an FX framework,
not just a particle simulation
tool, and will evolve heavily
alongside future generations
of hardware, software, and
the needs of our users.
So what are we going
to talk about today?
First up, I'd like to talk
about some of the improvements
we've made to the interface
and usability of the tool.
We've made a lot of enhancements
to performance and scalability,
forged in the
crucible of shipping
a massive game like Fortnite.
Of course, we've added
a bunch of new features
and capabilities to
the Engine and tech.
We'll talk about those.
And then we've got some
experimental things,
some things that
aren't quite finished
but which are really exciting.
First up, let's focus on
some UI and UX improvements.
So let's start with
the main presentation.
First off, you'll notice this
new view into your effects.
We call it the system overview.
And the main thing
we're solving here
is that we were missing
a 30,000-foot view of your system
and all the constituent
emitters that make it up.
So this view allows
for comments, grouping,
and an expression of artistic
intent during creation.
These are the magical sparks.
Here's the flying debris.
Here are the fire elements
all separated out.
And we think this
view will be home
to more things in the future.
We could see a world
where simple data
flow like the sending
and receiving of events
or communication of
attributes between emitters
is represented here.
So users can see all
their interactions
between emitters in one place.
If we move on to
the stack, we've
got a lot of visual
enhancements there,
of course, visual polish,
color palette changes, icons,
prompts, warnings, a better
utilization of whitespace.
Colors have been unified.
And contrast has been increased
overall, just for readability.
And this is an area where lots
of little things help a lot.
So for instance, there's just
a little visual indicator
if an emitter is CPU or GPU.
And we've got this
subtle feature
we're calling highlights, which
are those colored squares.
They just help
guide some intuition
about what modules do,
what their behaviors are.
Orange applies forces.
Red deals with position,
that kind of thing.
Another important thing
is we have full copy
and paste everywhere.
Copy and paste works
on emitters, modules,
dynamic inputs, anything.
Pasting works in
context and adapts
to the pasted inputs
correctly even across modules.
So if you had a
duplicate module and you
wanted to take the
parameterization from one,
paste it into the other,
it would do its best
to match those up.
This is also a big lifesaver
for dynamic input chains
or if you've created
a module variant
and want to get that
data copied across.
The same visual
improvements come
to the stack: multiselection
between emitters,
and the same improvements to drag
and drop, copy and paste,
and so on.
You'll also notice in
the stack, the parameters
look different now.
We've made some pretty
significant changes there
under the hood.
So let's dig into that.
We have completely refactored
and simplified namespaces.
And we limit the user just to
the most meaningful options,
with easy-to-parse colors and icons.
The goals here are two-fold.
First is just to reduce typing
and, subsequently, typos.
And secondly, we want to
codify explicit behaviors
into each of these namespaces,
for example, explicit inputs
and outputs for data that
flows through modules,
true local variables
inside of module scripts
that aren't accessible
outside of that context.
Of course, all these changes are
represented in the graph view
as well.
You can create new
attributes, change namespaces
of existing attributes,
add subnamespaces
if you need a little bit
more granularity and so on.
Each of the namespace
icons is color coded
and has really clear tool
tips to help guide users
to what the various namespaces
mean and why you'd use them.
These changes should really
simplify our graph interactions
and should be a lot
more clear for users.
So let's talk about workflow.
Niagara is now in use across a
number of the different asset
categories for
effects in Fortnite.
And Fortnite's scale is massive
on so many axes: users,
number of platforms, and also
the size of the development
team.
With so many artists
creating content,
we've learned a ton
about how inheritance
can be improved.
The most valuable learning
is that our initial view
of inheritance was too rigid.
And codifying behaviors
into an inheritance chain
too early is problematic.
And often, development
mandates more flexibility
than you can account for when
you first plan a pipeline.
This led us to a number of
enhancements to inheritance.
First off, emitters can
now have a full inheritance
chain, parents, children,
grandchildren, and so on.
And you can reparent
emitters now.
If you need to remove a behavior
from an emitter midstream,
through the hierarchy
of inheritance,
we can support that.
While working with
inheritance, it also
became clear that the flow
of requiring on-disk assets
for basic behaviors just
wasn't desirable in all cases.
Oftentimes, you just
want to play or generate
a system outside of the context
of a codified emitter behavior.
So we've also leaned into
defining two separate behaviors
for that.
And you can choose them with
a new system creation wizard.
So this results in your
system being derived either
from template behaviors,
and these aren't
art directed necessarily.
There's no inheritance
chain with these.
This is just a functional
thing like burst radially
or make a beam,
that kind of thing.
Or you can derive them from
codified emitters that
are art directed and inherited.
And we also support
breaking inheritance
and creating fully
independent copies of these.
And for those of
you that are used
to working the Cascade way,
i.e., with no inheritance
and a completely standalone,
unique particle effect,
we now support that.
This frees up artists
to play, to experiment,
and to create systems
without an inheritance chain
or on-disk emitters if desired.
Now this idea of play leads
me to the final primary area
of usability we
wanted to focus on.
In the course of using
Niagara, our general feeling
is that the tool felt too rigid.
To experiment with a
behavior, you make an emitter,
make a system, add the emitter.
And the new template
behaviors cover
making that simpler and better.
Then, however, you would
make a module on-disk,
add it to your stack.
You experiment.
You finally get to the stage
where you can actually play.
And then you decide
you don't want it.
And you're left with a
series of assets on-disk
and these half discarded ideas.
It's too rigid.
So to that end, we have a new
panel we're calling Scratchpad.
It's a completely transient
script and lives inside
of your emitter or your system.
It can be a module
or a dynamic input.
You can create the
script inline, using
whatever graph logic you would
normally use to generate behaviors.
And then that script will be
saved alongside that asset.
If the behavior is a one-off,
leave it there with a comment
and move on.
If you love that behavior and
people on your team will too
or you'd like to reuse it,
promote the scratch behavior
out to an asset
on disk, and it'll
be exposed to the library.
Scratch behaviors
can be created inline
in the system
overview or the stack.
They can be full modules
or just little custom
dynamic inputs to drive
a module behavior.
That's a great option
if your dynamic input
chain is getting long and
unwieldy and a little hard
to read.
When created, a new module is
created in the Scratchpad pane.
You can expose
inputs, add metadata,
do everything you would in the
normal Niagara graph editor.
And that resulting
behavior is saved,
embedded in the emitter
system it was added to,
and it can be inherited
into child emitters.
When you're happy
with your behavior,
either keep it embedded
or promote it out
to a module on disk for
reuse in other emitters.
We believe Scratchpad was
one of the big missing
pieces of the puzzle
and will turn Niagara
into a far more
playful, experimental,
artist-friendly tool.
OK, let's move on to some of
the performance and scalability
improvements we've
made along the way.
Shipping Niagara FX in Fortnite
has put a tremendous focus
on performance.
If you're talking about
shipping across a huge number
of platforms from mobile
devices all the way to high end
PC in Fortnite, the
first question anyone
asks about is performance.
You've got to drive a lot
of things, such as culling
emitters on specific
platforms, scaling particle
counts, managing overdraw.
These things can be tedious.
We wanted to make the whole
process more efficient.
The first way we
accomplish this is
by sharing, if desired,
emitter calculations,
such as lifetime management
and scalability settings.
We've got new state
management modules, which
can inherit the
lifecycle calculations
of the owning system,
looping, delays,
am I allowed to spawn particles,
resulting in both a performance
improvement and a workflow
benefit for artists.
You can now manage the
lifetime and lifecycle
of 10 emitters in a
system all in one place
and see a performance
benefit along the way.
Emitters can manage their
own scalability settings.
But what about managing
scalability at a higher level?
If you allow the system
to manage scalability
at the system level as
opposed to per emitter,
we've introduced an easy
way to share those settings
with something we're
calling Effect Types.
Effect Types are
a new asset type
assigned per particle
system inside of the system
properties.
Their purpose is to
codify a common set
of scalability settings
into an on-disk asset.
This allows for the quick
application of common settings
to whole categories of effects,
say, weapon impacts or ambient
VFX.
And of course, you
can always override
these settings per
instance, if you
need to do something custom.
Effect Types have two parts.
The first is a culling
feature with multiple axes.
This drives a low-level,
under-the-hood C++ scalability
manager, which reduces systems
to zero or near-zero cost when
culled.
It supports distance culling,
a max instance count,
and visibility or
view frustum culling.
And one area we'd
like to address soon
is performance- or
budget-based culling as well.
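As a rough sketch of how those culling axes combine, here's a tiny Python illustration. This is not Niagara's actual C++ scalability manager; the field names and thresholds are hypothetical stand-ins for the distance, max-instance, and visibility settings described above.

```python
# Illustrative sketch (NOT Niagara's implementation) of an effect-type
# scalability check that reduces a system instance to zero or near-zero
# cost when any culling axis trips. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class EffectType:
    max_distance: float           # cull beyond this distance from the viewer
    max_instances: int            # cap on simultaneously active instances
    cull_when_not_visible: bool   # visibility / view frustum culling

def should_cull(effect: EffectType, distance: float,
                active_instances: int, visible: bool) -> bool:
    """Return True if this instance should be culled."""
    if distance > effect.max_distance:
        return True               # distance culling
    if active_instances >= effect.max_instances:
        return True               # max instance count reached
    if effect.cull_when_not_visible and not visible:
        return True               # not on screen
    return False
```

A "weapon impacts" category, for example, might share one aggressive `EffectType` across hundreds of systems, with per-instance overrides only where needed.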
Secondarily, there
are explicit overrides
for emitters themselves
on each platform
and each scalability level and
an opt-in or opt-out override
listed there.
We hope Effect Types
allow a larger project
to make some clear,
simple choices
upfront about scalability
settings and their particle
systems and then reuse
those preset choices
across large swaths
of their VFX library.
Now let's talk a
little bit about how
we've improved the cost
of the actual simulation.
We've pushed most of
the thread-safe work
into the concurrent
Niagara tick, which
is allowed to run asynchronously
alongside all tick groups.
And this gives us
the best opportunity
to fit into whatever holes
we can on the CPU cores.
You can use tick
behavior to control
how Niagara ticks, which,
by default, is safety first
and will follow prerequisites
from the owner and all data
interfaces.
And we've made a
variety of optimizations
to that concurrent
tick as well, which
has resulted in an approximate
30% reduction in instruction
costs and cache misses
in our test assets.
We've also made significant
improvements
to the performance of the mesh
renderer on the render thread
and have significantly
reduced the cost of sorting.
The Vector VM's performance
has been improved.
We now perform a bytecode
optimization pass, which
reduces instructions and
allows memory read/write
batching and vectorization
on the target platform.
In most cases, Niagara now meets
or exceeds Cascade performance
on the game thread and beats
it on the render thread.
Those remaining cases
where it doesn't will
be areas of heavy focus
for us in future releases,
particularly in areas of high
emitter counts in a given
system or emitters with very,
very low particle counts
where you pay a lot of
overhead for the emitter.
And from a GPU
simulation perspective,
we know there's lots of low
hanging fruit from both memory
and simulation cost perspective.
Now on to the really fun stuff.
There's a ton here.
So let's start in an
obvious place, data.
One of the most powerful aspects
of a given simulation tool
is what kind of data
it has access to.
The flow of data
from external sources
or from other
parts of the Engine
is where these
types of simulations
really start to shine.
First up is a data
interface which
allows access to the properties
of the current player camera.
On the CPU side, we expose
a limited but useful set
of camera attributes.
On the GPU side, we pretty
much expose everything
from the view uniform buffer
that we think would be useful.
Note that this is an
early feature, which
still needs some work to be
split screen or VR friendly.
But it's incredibly
powerful and something
you've been clamoring for.
On the GPU, we also
provide a data interface
to calculate occlusion.
You can query a point in
space and compare that point
to the depth buffer by taking
a number of samples around it.
This can be calculated
per particle, which allows
for some interesting control.
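To make the sampling idea concrete, here's a small Python sketch of a depth-buffer occlusion test in the spirit of what's described: compare a point's depth against several depth-buffer samples and return the occluded fraction. The toy buffer and sampling pattern are illustrative only, not the GPU data interface itself.

```python
# Illustrative sketch (not the actual GPU data interface): estimate how
# occluded a point is by comparing its depth against several samples
# from a depth buffer around its screen position.

def occlusion_fraction(depth_buffer, samples, point_depth):
    """depth_buffer: 2D list indexed [y][x]; samples: (x, y) texel coords.
    Returns the fraction of samples where scene geometry is in front
    of the point (larger depth value = farther from the camera)."""
    occluded = 0
    for x, y in samples:
        scene_depth = depth_buffer[y][x]
        if point_depth > scene_depth:   # something sits in front of the point
            occluded += 1
    return occluded / len(samples)
```

Computed per particle, a value like this can drive fade-outs, spawning, or other viewer-aware behavior.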
Combining these two
powerful data interfaces
allows for complex
camera-based effects,
such as viewer-aware
simulations,
VFX attached automatically
to the camera, or even
lens flares.
Here's a fully
procedural lens flare
built using these new features,
which was used in our Unreal
Engine 5 reveal demo.
[MUSIC PLAYING]
Next up are two separate
data interfaces,
which allow access to
audio waveform data.
Oscilloscope provides
inexpensive, direct access
to the audio waveform,
while Audio Spectrum
is more complex:
it uses something called the
Constant-Q Transform, which
is similar to a fast Fourier
transform but better
suited to musical audio.
This separates the audio
into buckets logarithmically,
akin to musical
semitones, and is
more intuitive for
listeners and viewers
as the audio response
matches note progression.
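The logarithmic bucketing can be sketched in a few lines: constant-Q bin centers are spaced geometrically, like musical semitones, rather than linearly like FFT bins. The parameter values below are illustrative, not Niagara's defaults.

```python
# Sketch of the logarithmic frequency buckets a Constant-Q analysis uses:
# bin centers spaced geometrically (12 bins per octave = semitone steps),
# unlike the linearly spaced bins of a plain FFT.

def cqt_center_frequencies(f_min, num_bins, bins_per_octave=12):
    ratio = 2.0 ** (1.0 / bins_per_octave)   # one semitone step
    return [f_min * ratio ** k for k in range(num_bins)]

# Starting from A1 (55 Hz), every 12th bin lands an octave higher,
# so the buckets track note progression the way a listener hears it.
freqs = cqt_center_frequencies(f_min=55.0, num_bins=25)
```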
Here's a fun little
audio visualization
showing a Niagara
effect being driven
by these audio waveforms.
[MUSIC PLAYING]
One other fun aspect of the
audio effect you just saw
was that it used ray tracing.
Unreal Engine now supports
ray tracing of not just sprites,
but mesh particles as well.
Another exciting enhancement
comes from our good friends
at SideFX.
They've created version two of
their popular Houdini Niagara
data interface,
and it's excellent.
The workflow is
greatly simplified.
Just point it at a point
cache, and it just works.
It uses a new binary JSON format
for faster import and to support
larger, more complex
Houdini caches.
The plugin now
automatically handles
animated caches as well,
including managing particle
birth, lifetime, and so on.
The plugin also now
works on GPU emitters,
which has been very in demand.
Here's a quick video of
the Houdini test scene
you see here, imported
directly into Niagara.
This is a complex simulation
with physics, cloth,
and procedural animation
generated in Houdini
and played back directly.
It's exciting because that
simulation still stays live
and can be affected by
our existing forces,
creating a hybrid of precomputed
and real-time interactive
simulation.
Another exciting
enhancement comes
in the form of a data interface
we're calling Attribute Reader.
It allows particles to
directly communicate
with other particles,
either in the same emitter
or different emitters
in the same system,
and ask them for attributes,
such as position or color.
This allows for extremely
complex behaviors,
such as chains or
flocking, with thousands
of other complex interactions.
This is also addressable
via custom HLSL.
So you can easily write
complex behaviors that require
inter-particle communication.
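Here's a minimal Python sketch of the attribute-reader idea: particles in one emitter index into another emitter's attribute arrays by particle index, for example to build a chain where each particle moves toward a leader particle it reads. The data layout and function names are hypothetical, not Niagara's API.

```python
# Illustrative sketch of reading another emitter's attributes per particle:
# each follower reads a leader particle's position by index and eases
# toward it, the basic building block of chain and flocking behaviors.

def follow_chain(leader_positions, follower_positions, strength=0.5):
    """Move each follower part way toward the leader it reads.
    Positions are (x, y, z) tuples; strength is the blend per tick."""
    out = []
    for i, pos in enumerate(follower_positions):
        target = leader_positions[i % len(leader_positions)]  # read by index
        out.append(tuple(p + strength * (t - p)
                         for p, t in zip(pos, target)))
    return out
```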
Another complementary feature
alongside the attribute reader
is our particle
neighbor grid 3D.
This is a spatial hash, which
looks into neighboring cells
and gathers information
about your nearest neighbors.
This can be used for an
infinite number of things
but is common in fluid
simulation, flocking,
or when solving
inter-particle collisions
performantly by only comparing
particles against their nearest
relevant neighbors.
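The spatial-hash concept behind a neighbor grid can be sketched briefly: bin particles into fixed-size cells, then answer a neighbor query by inspecting only the 27 cells around a point instead of every particle. The cell size and layout here are illustrative, not the actual GPU data structure.

```python
# Sketch of a 3D spatial hash: particles are binned by cell coordinate,
# and a neighbor query gathers candidates from the 3x3x3 block of cells
# around the query point rather than scanning all particles.

from collections import defaultdict
from itertools import product

def build_grid(positions, cell_size):
    grid = defaultdict(list)
    for idx, (x, y, z) in enumerate(positions):
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        grid[key].append(idx)        # store particle index in its cell
    return grid

def neighbors(grid, pos, cell_size):
    cx, cy, cz = (int(c // cell_size) for c in pos)
    found = []
    for dx, dy, dz in product((-1, 0, 1), repeat=3):
        found.extend(grid.get((cx + dx, cy + dy, cz + dz), ()))
    return found                     # candidate indices near pos
```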
These last two features,
the Attribute Reader
and the Neighbor Grid,
were combined
with a swath of
other techniques,
such as distance field sampling,
custom vertex shader based
animation states,
and level aware forces,
to create some absolutely
stunning simulations of bats
and bugs, 100% in Niagara in
our Unreal Engine 5 demo reveal.
Let's take a look.
[MUSIC PLAYING]
>> Nice bugs. Don't mind me.
This last section
is all about where
we think real-time
visual effects is
headed moving forward.
All the features here are
listed as experimental, meaning
they aren't necessarily
production-ready.
However, we see
these features as
crucial to our vision for
Niagara moving forward,
and the results
speak for themselves.
Let's dive in.
With direct
reads and spatial queries
exposing much more
complex simulations
on individual points,
a natural question follows:
could a simulation be
more than simply points?
To that end, we've introduced
a completely new type
of simulation data beyond
particles, Grid 2D.
You can imagine each
grid cell is a particle
and can be read from or
written to as an array.
We have transforms to move
in and out of grid space.
And particles can
write data into grids
or read from grid attributes.
These features enable complex
grid-based simulations
and solvers, smoke simulations,
shallow water, particle
in cell or flip fluids.
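A tiny Python sketch can make the grid-space idea concrete: a fixed array of cells plus transforms between world space and cell coordinates, so particles can deposit values into cells and read them back. The class and method names are hypothetical, not Niagara's Grid 2D interface.

```python
# Illustrative sketch of the Grid 2D concept: each cell acts like a
# particle whose attributes live in an array, with transforms to move
# between world space and grid space for reads and writes.

class Grid2D:
    def __init__(self, num_x, num_y, world_size):
        self.num_x, self.num_y = num_x, num_y
        self.world_size = world_size                 # square world extent
        self.cells = [[0.0] * num_x for _ in range(num_y)]

    def world_to_cell(self, x, y):
        """Transform a world-space position into integer cell coords."""
        i = min(int(x / self.world_size * self.num_x), self.num_x - 1)
        j = min(int(y / self.world_size * self.num_y), self.num_y - 1)
        return i, j

    def write(self, x, y, value):
        i, j = self.world_to_cell(x, y)
        self.cells[j][i] += value    # e.g. a particle deposits density

    def read(self, x, y):
        i, j = self.world_to_cell(x, y)
        return self.cells[j][i]      # e.g. a particle samples the field
```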
Iterative behaviors
normally relegated
to the land of
theoretical exercise now
integrate seamlessly
with production
particle-based effects.
And the results of
these grid simulations
can be written out
to render targets
and rendered directly in
standard Unreal materials
or simply used as
a data structure
to hold information
about your simulation.
Users can have multiple
grids and multiple attributes
per grid.
You can read from and write to
grids with nodes or custom HLSL.
This is a powerful,
persistent data structure
that will unlock
Niagara in amazing ways
as we move forward.
The feature which ties all
of these other innovations
together and truly
unlocks their power
is something we're
calling Simulation Stages.
This is a GPU-only feature.
And the core concept is
the idea of performing
multiple iterations
in a single update
tick directly in the stack UI.
Each simulation stage
becomes another script
that runs like Spawn or Update.
You can have as many
stages as you like.
And what's incredibly
powerful is
you can iterate, not
only over particles,
but also data
interfaces, for example,
iterating over every
grid cell in your Grid 2D
to create complex,
feedback-based solvers directly
in Niagara.
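The shape of a simulation stage can be sketched as a loop of iterations per tick, each pass visiting every grid cell, here a simple Jacobi-style smoothing over a 1D array. This mirrors the stage/iteration structure conceptually; it is not Niagara's actual stage scripts.

```python
# Sketch of the simulation-stage idea: within one update tick, run a
# stage for N iterations, each iteration visiting every cell (not every
# particle) -- the pattern behind feedback-based grid solvers.

def diffuse_stage(cells, iterations):
    """Jacobi-style smoothing: each cell averages with its neighbors."""
    for _ in range(iterations):          # N iterations in a single tick
        prev = cells[:]                  # read from the previous pass
        for i in range(len(cells)):      # iterate over every cell
            left = prev[max(i - 1, 0)]
            right = prev[min(i + 1, len(prev) - 1)]
            cells[i] = (left + prev[i] + right) / 3.0
    return cells
```

Each added iteration spreads the values further, which is exactly the kind of feedback loop that fluid and growth solvers build on.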
Now, the implementation of these
types of complex algorithms
is something you can do
directly in the interface.
And because they share data
with the rest of the simulation
behaviors and can talk freely
between particles and grids,
these types of
algorithms are no longer
relegated to the land of
creative coding exercise.
These complex features
will be able to integrate
with our existing particle
behavior seamlessly
and work side by side.
Imagine having a preset
suite of module behaviors,
for example, fluid
simulation or a mold growth
algorithm or a PBD solver,
and those behaviors
are applied alongside
the existing
forces in your simulation, such
as curl noise or gravity,
and solved together
in the same solver
to affect particle
simulations and behaviors
in new and interesting ways.
We've used these
experimental features
in a series of short vignettes
rendered in real-time to show
you the power of what the
attribute reader, Grid 2D,
and simulation stages can do.
And we're just
scratching the surface.
Let's take a look.
[MUSIC PLAYING]
We hope you've enjoyed
this look into Niagara
and what the future holds.
Everything you've seen
here, including early access
to these experimental features,
is available in Unreal Engine 4.25.
