We often talk about how traditional computing
is reaching its limit--there’s a threshold
we can’t move past without making some seriously
big changes to the way we structure computers.
One of those exciting ways is by making physical
computers a little more like human brains.
We introduced this concept in more detail
here, but a quick recap: this kind of computing
is called neuromorphic computing, which means
designing and engineering computer chips that
use the same physics of computation used by
our own nervous system.
This is different from an artificial neural
network, which is a program run on a normal
computer that mimics the logic of how a human
brain thinks.
Neuromorphic computing (the hardware version)
and neural networks (the software version)
can work together because as we make progress
in both fields, neuromorphic hardware will
probably be the best option to run neural
networks on...but for this video, we’re
going to focus on neuromorphic computing and
the really exciting strides that have been
made in this field in the past year.
See, traditional computers ‘think’ in
binary.
Everything is either a 1 or 0, a yes or a
no.
You only have two options, so the code we
use and the questions we ask these kinds of
computers must be structured in a very rigid
way.
Neuromorphic computing works a little more
flexibly.
Instead of using an electric signal to mean
one or zero, designers of these new chips
want to make their computer’s neurons talk
to each other the way biological neurons do.
To do this, you need a precisely controlled
current of ions flowing across a synapse,
the space between neurons.
Depending on the number and kind of ions that
arrive, the receiving computer neuron is
activated in different ways, giving you many
more computational options than just a basic
yes or no.
This ability to transmit a graded signal
from neuron to neuron, and to have them all
working together simultaneously, means that
neuromorphic chips could eventually be more
energy efficient than our normal computers,
especially for really complicated tasks.
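One common way to picture this graded, spike-based signaling in software is a leaky integrate-and-fire neuron. This is only an illustrative sketch with made-up parameters, not a model of any particular neuromorphic chip:

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential leaks,
# integrates weighted input, and emits a spike when it crosses a threshold.
# Illustrative sketch only; parameter values are arbitrary.

def simulate_lif(inputs, threshold=1.0, leak=0.9, weight=0.3):
    """Return the output spike train (0s and 1s) for a list of input spikes."""
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = potential * leak + weight * x  # leak, then integrate input
        if potential >= threshold:
            spikes.append(1)       # fire a spike...
            potential = 0.0        # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A steady input makes the neuron fire rhythmically rather than giving a
# single yes/no answer: the timing of the spikes carries information.
print(simulate_lif([1] * 10))  # → [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Notice that the output isn't one bit but a pattern over time, which is what gives spiking systems a richer signaling vocabulary than plain binary.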
To realize this exciting potential, we need
new materials because what we’re using in
our computers today isn’t gonna cut it.
The physical properties of something like
silicon, for example, make it hard to control
the current between artificial neurons...it
just kind of bleeds all over the chip with
no organization.
So a new design from an MIT team uses different
materials: single-crystalline silicon and
silicon germanium, layered on top of one another.
Apply an electric field to this new device?
You get a well-controlled flow of ions.
A team in Korea is investigating other materials.
They used tantalum oxide, which gives them
precise control over the flow of ions...AND
it's even more durable. Another team in Colorado
is using magnets to precisely control
the way the computer neurons communicate.
These advances in the actual architecture
of neuromorphic systems are all working toward
getting us to a place where the neurons on
these chips can ‘learn’ as they compute.
Software neural networks have been able to
do this for a while, but it’s a new advancement
for physical neuromorphic devices--and these
experiments are showing promising results.
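"Learning as they compute" usually means the strength of each artificial synapse changes based on local activity. A classic example is a Hebbian-style rule: neurons that fire together wire together. Here's a toy sketch of that idea; real on-chip learning rules vary by device, and these parameters are hypothetical:

```python
# Toy Hebbian plasticity rule: a synapse strengthens when the two neurons
# it connects are active at the same time, and slowly decays otherwise.
# Illustrative sketch only; not the learning rule of any specific chip.

def update_weight(w, pre_active, post_active, lr=0.1, decay=0.01):
    """Return the new synaptic weight after one time step."""
    if pre_active and post_active:
        w += lr * (1.0 - w)   # correlated activity strengthens the synapse
    else:
        w -= decay * w        # uncorrelated activity slowly weakens it
    return w

# The synapse drifts upward as the two neurons repeatedly co-fire.
w = 0.5
for pre, post in [(1, 1), (1, 1), (1, 0), (0, 0), (1, 1)]:
    w = update_weight(w, pre, post)
print(round(w, 3))  # → 0.625
```

The key point is that the update depends only on the two neurons a synapse connects, which is exactly the kind of local computation that's cheap to build into hardware.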
Another leap in performance has been made
by a team at the University of Manchester,
who have taken a different approach.
Their system is called SpiNNaker, which stands
for Spiking Neural Network Architecture.
While other experiments look to change the
materials we use, the Manchester team uses
traditional digital parts, like cores and
routers, connecting and communicating with
each other in innovative ways.
UK researchers have shown that they can use
SpiNNaker to simulate the behavior of the
human cortex.
The hope is that a computer that behaves like
a brain will give us enough computing power
to simulate something as complicated as the
brain, helping us understand diseases like
Alzheimer’s.
The news is that SpiNNaker has now matched
the results we’d get from a traditional
supercomputer.
This is huge because neuromorphic systems offer
the possibility of higher speed and more complexity
for less energy cost, and with this new finding
we see that they're edging closer to the
best performance we've been able to achieve
so far.
Overall, we’re working toward having a better
understanding of how the brain works in the
first place, improving the artificial materials
we use to mimic biological systems, and creating
hardware architectures that work with and
optimize neural algorithms.
Changing computer hardware to behave more
like the human brain is one of a few options
we have for continuing to improve computer
performance, and to get computers to learn
and adapt the way humans do.
While scientists make computers that work
like brains, put your brain to use by building
your very own website!
Domain.com is awesome, affordable, reliable,
and has all the tools you need to build a
new website.
They can fulfill all your website needs.
They offer dot com and dot net domain names,
and intuitive website builders.
They have over three hundred domain extensions
to fit your needs, from dot club to dot space
to dot pizza!
Take that first step in creating an identity
online and visit domain dot com.
It looks like it’s gonna be a wild ride
ahead, you guys.
I think you should probably subscribe to Seeker
so you can always know when something new
and exciting happens as we progress along
this brain-mimicking path, and for even more
on this subject, may I suggest you check out
this video on neural networks?
Thanks for watching.
