Researchers from North Carolina State University
have discovered that teaching physics to neural
networks enables those networks to better
adapt to chaos within their environment.
The work has implications for improved artificial
intelligence (AI) applications ranging from
medical diagnostics to automated drone piloting.
Neural networks are an advanced type of AI
loosely based on the way that our brains work.
Our natural neurons exchange electrical impulses
according to the strengths of their connections.
Artificial neural networks mimic this behavior
by adjusting numerical weights and biases
during training sessions to minimize the difference
between their actual and desired outputs.
For example, a neural network can be trained
to identify photos of dogs by sifting through
a large number of photos, guessing whether
each photo shows a dog, measuring how far
off each guess is, and then adjusting its weights
and biases until its outputs are closer to reality.
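A minimal sketch of that training loop, using a single artificial neuron and synthetic data (the "dog" labels here are invented stand-ins, not real image data):

```python
import numpy as np

# One artificial neuron guesses, measures how far off it is, and
# nudges its weights and bias to shrink that error -- the loop
# described above, stripped to its essentials.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))               # 100 examples, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy "is it a dog?" labels

w = np.zeros(2)   # weights, adjusted during training
b = 0.0           # bias
lr = 0.5          # learning rate: how big each adjustment is

for step in range(200):
    guess = 1 / (1 + np.exp(-(X @ w + b)))  # sigmoid output in (0, 1)
    error = guess - y                       # how far off each guess is
    w -= lr * (X.T @ error) / len(y)        # adjust weights...
    b -= lr * error.mean()                  # ...and bias toward reality

accuracy = ((guess > 0.5) == y).mean()
```

After a few hundred adjustments the neuron's guesses match the labels almost perfectly, which is all "training" means here.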
The drawback to this neural network training
is something called “chaos blindness”
– an inability to predict or respond to
chaos in a system.
Conventional AI is chaos blind.
But researchers from NC State’s Nonlinear
Artificial Intelligence Laboratory (NAIL)
have found that incorporating a Hamiltonian
function into neural networks better enables
them to “see” chaos within a system and
adapt accordingly.
Simply put, the Hamiltonian embodies the complete
information about a dynamic physical system
– the total of all the energies present,
kinetic and potential.
Picture a swinging pendulum, moving back and
forth in space over time.
Now look at a snapshot of that pendulum.
The snapshot cannot tell you where that pendulum
is in its arc or where it is going next.
Conventional neural networks operate from
a snapshot of the pendulum.
Neural networks familiar with the Hamiltonian
flow understand the entirety of the pendulum’s
movement – where it is, where it will or
could be, and the energies involved in its
movement.
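For the pendulum, that complete description can be written down directly. The sketch below assumes an idealized pendulum with unit mass, length, and gravity; its Hamiltonian is H(θ, p) = p²/2 + (1 − cos θ), and Hamilton's equations turn that single scalar into the full motion:

```python
import numpy as np

# H(theta, p): kinetic energy plus potential energy of an idealized
# pendulum (unit mass, length, and gravity assumed for simplicity).
def H(theta, p):
    return 0.5 * p**2 + (1.0 - np.cos(theta))

# One leapfrog step of Hamilton's equations:
#   dtheta/dt = dH/dp = p,   dp/dt = -dH/dtheta = -sin(theta)
def step(theta, p, dt=0.01):
    p -= 0.5 * dt * np.sin(theta)
    theta += dt * p
    p -= 0.5 * dt * np.sin(theta)
    return theta, p

theta, p = 1.0, 0.0        # a single "snapshot" plus its momentum
E0 = H(theta, p)
for _ in range(10_000):
    theta, p = step(theta, p)
drift = abs(H(theta, p) - E0)   # total energy stays (nearly) constant
```

Knowing H at one instant fixes where the pendulum is, where it is going, and the energies involved for all time: that is the information a snapshot of position alone lacks.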
In a proof-of-concept project, the NAIL team
incorporated Hamiltonian structure into neural
networks, then applied them to a known model
of stellar and molecular dynamics called the
Hénon-Heiles model.
The Hamiltonian neural network accurately
predicted the dynamics of the system, even
as it moved between order and chaos.
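The Hénon-Heiles system itself is compact enough to write out. Its Hamiltonian is H = (px² + py²)/2 + (x² + y²)/2 + x²y − y³/3; at low energies the orbits are regular, and as the energy approaches the escape value of 1/6 the motion becomes increasingly chaotic. This sketch simply integrates the model and checks that the Hamiltonian is conserved along the trajectory:

```python
import numpy as np

# Hénon-Heiles Hamiltonian: two coupled oscillators plus a cubic term.
def H(x, y, px, py):
    return (0.5 * (px**2 + py**2) + 0.5 * (x**2 + y**2)
            + x**2 * y - y**3 / 3)

# Forces from Hamilton's equations: (-dH/dx, -dH/dy)
def accel(x, y):
    return -(x + 2 * x * y), -(y + x**2 - y**2)

x, y, px, py = 0.0, 0.1, 0.5, 0.0   # an arbitrary low-energy start
E0 = H(x, y, px, py)
dt = 0.001
for _ in range(20_000):             # leapfrog integration
    ax, ay = accel(x, y)
    px += 0.5 * dt * ax; py += 0.5 * dt * ay
    x += dt * px; y += dt * py
    ax, ay = accel(x, y)
    px += 0.5 * dt * ax; py += 0.5 * dt * ay
energy_drift = abs(H(x, y, px, py) - E0)
```

Whether a given orbit is regular or chaotic, the Hamiltonian stays constant along it, which is exactly the structure the NAIL team's networks exploit.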
The Hamiltonian is really the ‘special sauce’
that gives neural networks the ability to
learn order and chaos.
With the Hamiltonian, the neural network understands
underlying dynamics in a way that a conventional
network cannot.
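The core idea can be sketched in a few lines. This is a hypothetical toy architecture, not the NAIL team's code: instead of predicting the next state directly, the network outputs a single scalar H(q, p), and the predicted motion comes from Hamilton's equations, dq/dt = ∂H/∂p and dp/dt = −∂H/∂q (estimated here by finite differences):

```python
import numpy as np

# A tiny untrained network whose only output is a scalar "energy".
rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.5, size=(2, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))

def H_net(q, p):
    h = np.tanh(np.array([q, p]) @ W1)    # hidden layer
    return float(h @ W2)                  # learned Hamiltonian H(q, p)

# Hamilton's equations via central finite differences of H_net.
def dynamics(q, p, eps=1e-5):
    dHdq = (H_net(q + eps, p) - H_net(q - eps, p)) / (2 * eps)
    dHdp = (H_net(q, p + eps) - H_net(q, p - eps)) / (2 * eps)
    return dHdp, -dHdq                    # (dq/dt, dp/dt)

q, p = 0.3, -0.2
dq_dt, dp_dt = dynamics(q, p)

# By construction the flow conserves the learned energy:
# dH/dt = (dH/dq)*dq_dt + (dH/dp)*dp_dt
#       = (dH/dq)(dH/dp) - (dH/dp)(dH/dq) = 0
eps = 1e-5
dHdq = (H_net(q + eps, p) - H_net(q - eps, p)) / (2 * eps)
dHdp = (H_net(q, p + eps) - H_net(q, p - eps)) / (2 * eps)
dH_dt = dHdq * dq_dt + dHdp * dp_dt
```

Training then only has to shape the scalar H; energy conservation is built into the dynamics rather than something the network must discover on its own.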
This is a first step toward physics-savvy
neural networks that could help us solve hard
problems.
