[MUSIC PLAYING]
SPEAKER 1: Thanks for coming
to the next installment
of the Quantum AI
speaker series.
And I would say this talk is
a little bit of an experiment,
in the sense that, of course,
at Google most engineers have
a background in
computer science.
And this is a
widely-announced talk.
So you might ask the question--
why the bleep should somebody
in computer science listen
to a notable condensed matter
physicist?
And I wanted to use this
introduction to answer that.
Actually, earlier
this week there
was a White House meeting on
quantum information science.
And essentially to chart
the US government's strategy
for funding of this
area, which they consider
to be of strategic importance.
And there was a colleague of
ours, Seth Lloyd, who made
an interesting observation.
He was saying even prior to
the arrival of larger quantum
computers, quantum
information science
has already been tremendously
successful in the sense
that it has provided
a common language that is now used
by different areas of physics
and computer science.
And if you think about
it, it's kind of true.
For example, when we designed
the random circuits for the
quantum supremacy experiment,
I remember that Sergio
went to see a black hole
physicist at Caltech to
discuss those circuits.
Because as far apart as
these areas sound-- what does
a black hole have to do with a
quantum computer?-- it turns out
they use methods closely related
to those in Sergio's analysis.
And in a similar way,
condensed matter physics--
it turns out, if you
ask the question,
how does an electron
get from one place
to another in a
conducting metal?
This question is not too different,
in the methods you will apply,
from asking the question,
how does an annealer or a
quantum-enhanced optimizer
move probability through
an optimization landscape?
So in that sense I
expect quite an explosion
in insights that very
sophisticated pockets
of expertise are suddenly
being interlinked.
And we start to think in very
new ways about old problems
that, with classical methods
alone, seemed to have saturated.
But now suddenly it opens up--
you think about it differently,
because quantum information
science methods can be applied.
And in that sense,
I'm very happy that Boris
Altshuler-- who is one
of the grand masters of
condensed matter theory,
and well known in sub-areas
such as many-body localization
and many-body
delocalization-- volunteered
to give a bit of a
tutorial talk that hopefully,
to some degree, is accessible
to computer scientists as well.
So Boris has a very
distinguished career,
and I will brutally
shorten his resume.
So currently he's at Columbia
University, New York--
a professor there.
But previously he
held faculty positions
at Princeton and at MIT.
He has a long list of
prizes to his name.
He's a member of the
National Academy of Sciences
and the American Academy
of Arts and Sciences.
And he actually
comes from the school
of Russian analytical physics.
And, yeah, we're
very glad to have you
here and hear your talk.
BORIS ALTSHULER:
Ah-- oh, I have mine.
Can you hear me?
So first of all, thank you
very much for inviting me here.
I really enjoy it, and every
time I come here I learn a lot.
And second, that's right-- there
are no really disconnected parts
of physics.
Physics, and especially
quantum physics, is unique.
And it turns out that a quantum
computer is yet another quantum
system, one which has a
lot of advantages
from the point of
view of pure science.
So let me first, very quickly,
tell you what is, to my mind,
the main feature of
the quantum computer.
In the classical case,
we have these bits.
And what we are doing is
certain manipulations
with these bits, which
are dictated by the code
that we are using.
And in the end, we get an answer
in the form of a bit string.
Now, if you are dealing
with a quantum computer,
then at the end of
the day you want
to get a normal answer, which
also is coded as a bit string.
But in the middle of the
computation procedure--
in the middle of
this manipulation--
the state of the
quantum computer
is not a particular bit
string, but a kind of quantum
superposition of
these bit strings.
So in a sense, if you want
to use these bit strings
as a basis, which as I learned
is called computational basis,
then in this Hilbert space
we have a wave function
that characterizes
this quantum computer.
And our manipulation--
what we're doing is trying
to design and tailor this
wave function in a way that
it will contain the desirable
answer as its major component.
So of course, the most important
thing is what kind of manipulation--
what is the quantum
algorithm to get this answer?
But I'm not in a position
to discuss that.
There are in this
room many people
who know about the quantum
algorithms more than I do.
So I will probably play a
role of this small boy, who
is sitting on the shoulders
of serious people.
And all of them are
looking to hunt this duck.
And people are solving a serious
problem-- how to get this duck
eaten.
But this small boy
just thinks about all
his stupid
hydrodynamic equations,
and what makes the
bird fly and so on.
So let me play this role.
I will just try to put
this quantum computer
into the context of a
generic quantum system
with a large number
of degrees of freedom,
and try to kind of introduce
you to the language in which we
can discuss it.
And actually, it turns out that
almost any quantum computation
can be reduced-- maybe in a
rather sophisticated way--
to some particular
universal Hamiltonian,
where the degrees of freedom are
these quantum spins, the qubits.
And this Hamiltonian
consists of two parts.
The first part is involving
only z components of the spin.
So if it were a classical
bit, this is all we would have.
And in principle, it is written
in the form of a generalized
Ising model.
So we have the spins-- each of
them subject to some magnetic
field b in the z direction--
and there are certain
couplings between them.
And we don't have any
particular geometry in mind,
so the J_ij can be represented
by a certain graph--
any spin, in principle, can
be connected with any other
spin by this exchange.
And the second term is
a transverse field.
So this is a field
that, in principle,
can rotate our qubit.
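In symbols-- with notation assumed, since the slides are not shown-- the two-part Hamiltonian described here is the transverse-field Ising model:

```latex
H = H_0 + V, \qquad
H_0 = -\sum_i b_i\, \sigma^z_i \;-\; \sum_{i<j} J_{ij}\, \sigma^z_i \sigma^z_j, \qquad
V = -\Gamma \sum_i \sigma^x_i ,
```

where H_0 is the generalized Ising part, diagonal in the computational basis, and the transverse field Γ is the term that rotates the qubits. The signs and coupling conventions here are a common choice, not taken from the talk.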
Now, what is
important-- I hope you
will appreciate this--
is that in the absence
of the transverse field,
what do we have here?
We have a Hamiltonian such that
every Ising spin-- every bit
is an operator, an Ising spin--
commutes with the Hamiltonian.
Which means that-- looking at the
system as a quantum dynamical
system-- we actually have
n conservation laws,
because each qubit, when it
is governed by the Hamiltonian
without the transverse
field, is conserved.
And this conservation
law is violated
by the transverse field.
And you will see that this
structure of the Hamiltonian--
one part that has a huge,
macroscopic number
of conserved quantities,
of integrals of motion,
and another part that violates
them-- this type of Hamiltonian
is quite generic.
And it has a lot of
problems in common with
a whole class of different
systems, including
quantum computing.
To give you another example,
this one completely
from condensed matter:
if you have a conductor
in which there are electrons
subject to a random potential--
static, not depending on time--
and at the same time
interacting with each other,
you can argue similarly.
When you forget about the
interaction, each particular
particle is in its own state.
These states are characterized
by their occupation numbers,
and these occupation
numbers will not
change as long as
there is no interaction
between particles.
But as soon as there
is interaction,
they will collide and change
their occupation numbers.
So I want to go
even further and try
to make a comparison between
what happens in the quantum
world and what happens
in the classical world.
So imagine that
you have certain--
as people call it-- dynamical
system, classical or quantum,
which has a large number
of degrees of freedom.
And then, very roughly,
you can separate
this world of dynamical
systems into two classes.
One is integrable.
And if we are thinking
in classical terms,
it means that you can
separate the variables.
So you think you have one
problem of d degrees of freedom;
at the end of the day,
you have d problems
of one degree of freedom each.
And in classical mechanics,
this is in a sense [INAUDIBLE].
And if you indeed can
separate variables,
then you have as many
conservation laws--
as many integrals of motion--
as you have degrees of freedom.
And on the other side
of this class of models,
there are those
which we have in mind
when thinking about
thermodynamic and statistical
physics.
Then we assume that this
astronomically large number
of degrees of freedom
leads to what is called
the equipartition distribution
of probabilities.
So in a sense, we assume
as a basic principle
of statistical mechanics
that the probabilities
to find an isolated system in
two states with equal energies
are equal, regardless
of their history.
And it is interesting
to follow what
happens when you go from
the purely integrable system
to purely chaotic,
which can be used
as a subject of
statistical physics.
There is another word,
which I will pronounce
several times: ergodic.
And as I told, in
the middle there
is quite an
interesting evolution.
So first of all, if
this lambda that gives
the violation of
integrability-- the violation
of the symmetry-- is
very weak, there is
a theorem very well appreciated
in mathematics, the
Kolmogorov-Arnold-Moser
theorem, that
tells you that-- provided
lambda is small enough
and the perturbation
is smooth enough--
you don't destroy
integrability immediately.
But it turns out that even
outside this region, where
all estimations following
this theorem
show that there should
be no integrability,
we still have systems that do
not demonstrate anything
like complete chaos.
So the first system
I want to mention
is the so-called
Fermi-Pasta-Ulam system,
which was probably
the first model
to be treated numerically,
because these three gentlemen
were working in Los Alamos
and had just gotten the first
supercomputer of that time.
And they were following
Fermi's idea
that if a system is
not integrable,
it should pretty quickly
get into a chaotic state,
with all modes mixed.
They considered a chain of
connected nonlinear oscillators
and found that they simply
could not get into the situation
where all modes are mixed.
In a sense, it turned
out that the system actually
remembers its original state.
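This Fermi-Pasta-Ulam observation can be reproduced in a few lines. The following is a minimal sketch, with chain size, time step, and nonlinearity strength chosen by me rather than taken from the talk: a chain of oscillators with a weak quartic (FPU-beta) nonlinearity, started in its lowest normal mode, keeps almost all of its energy in that mode instead of spreading it over all modes.

```python
import numpy as np

def fpu_beta_force(x, beta=0.1):
    """Force on each oscillator of an FPU-beta chain with fixed ends:
    harmonic nearest-neighbor springs plus a weak quartic correction."""
    y = np.concatenate(([0.0], x, [0.0]))   # fixed boundary conditions
    dl = y[1:-1] - y[:-2]                   # stretch of the left spring
    dr = y[2:] - y[1:-1]                    # stretch of the right spring
    return (dr - dl) + beta * (dr**3 - dl**3)

def mode_energies(x, v):
    """Harmonic energy carried by each normal mode of the linear chain."""
    n = len(x)
    j = np.arange(1, n + 1)
    modes = np.sqrt(2.0 / (n + 1)) * np.sin(np.outer(j, j) * np.pi / (n + 1))
    q, p = modes @ x, modes @ v
    omega = 2.0 * np.sin(j * np.pi / (2 * (n + 1)))
    return 0.5 * (p**2 + (omega * q) ** 2)

n, dt = 32, 0.05
# start in the lowest normal mode, velocities zero
x = np.sqrt(2.0 / (n + 1)) * np.sin(np.arange(1, n + 1) * np.pi / (n + 1))
v = np.zeros(n)
for _ in range(20000):                      # velocity-Verlet integration
    a = fpu_beta_force(x)
    x = x + v * dt + 0.5 * a * dt**2
    v = v + 0.5 * (a + fpu_beta_force(x)) * dt
energies = mode_energies(x, v)
share = energies[0] / energies.sum()        # fraction of energy still in mode 1
```

At this small amplitude the system sits in the near-integrable regime, so `share` stays close to one; pushing the amplitude or `beta` up eventually does mix the modes, which is the crossover the talk is describing.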
And another system even more
familiar to us is the solar system.
So I hope there is
no need to discuss
what I'm speaking about.
There are these planets flying
in the field of the sun,
and Isaac Newton
calculated the orbits.
But in fact, he calculated
them assuming that each of them
interacts only with the sun,
and neglected the interaction
between the planets.
And it's not that
he was not bothered
by this approximation.
He was actually bothered a lot,
because for him the periodicity
of the world and the
perfectness of all the dynamics
were non-negotiable.
And at the same time,
he was smart enough
to understand that interaction
between the planets
will violate this perfectness,
and their dynamics will not
be periodic as he wanted.
And he was bothered
to the extent
that he even assumed--
imagine, Isaac Newton assuming--
that there is a God
who once in a while
intervenes and puts
everything back
onto this perfect trajectory.
And then there was a discussion,
for instance with Leibniz,
with whom he had a mutually
unpleasant relationship.
And Leibniz said that
it is a bad idea
to reduce God to
a simple miracle.
Anyway, what do we know
about the solar system?
We know that, indeed, with the
best computers of today
and smart people
working on them, you
can make algorithms and predict
the state of each planet pretty
well into the future-- but only up
to about 100 million years,
maybe a little less.
After that it
becomes unpredictable.
And the reason is that
motion of the planets
is indeed not integrable.
And this chaotic
component takes its toll.
The system starts to depend on
its initial conditions pretty
strongly-- so that, for instance,
if we go and close
the door in this room,
in a couple of billion
years the orbit of Jupiter
will change its phase
by something of order pi.
So this system is
not integrable.
You cannot come up with
exact conservation laws.
But at the same time, it is
not at all completely chaotic.
And as far as I know,
the only big
catastrophe could be--
with probability of about 1%
before the sun explodes--
a collision between
Mercury and Venus.
But nothing more than that
will really be caused
by this chaotic motion.
And the final classical
system that I want to mention
is a kind of whole class of
systems which are glasses.
And again, we know that
although a glassy state is not
an equilibrium state--
in equilibrium the system
should become a crystal--
glasses have been found
on the moon.
Which means that
a billion years
is not enough to actually
crystallize a piece of glass.
So it means that
there are systems
which are intermediate
between the very well
predictable integrable ones,
which conserve
all these integrals of motion,
and the chaotic ones, which do not.
So the way to think
about it that I propose
is the following.
Let's start with integrable systems.
We are speaking here, with H0,
only about the Ising part
of the quantum computer
Hamiltonian,
or about the interaction
between the planets and the sun.
Then there are these
integrals of motion.
And the full set of integrals
of motion determines the
so-called invariant tori.
And every trajectory-- depending
on the initial conditions--
lies on one particular torus,
the one determined by its set
of integrals of motion.
So if you are monitoring
the behavior of your system
in the space of quantum
numbers-- the space of
integrals of motion--
the system is represented by a
point which does not move with time.
And the difference between
quantum and classical
is not so big.
The only thing is that this
space of quantum numbers
is quantized.
It's not continuous space
anymore, but it is quantized.
And each site of this
lattice corresponds
to some particular state of
the integrable quantum system,
which is characterized by
a set of quantum numbers,
like in atomic physics.
And if we are dealing with
the quantum computer,
this is the set of bit strings,
and it looks like a
d-dimensional hypercube.
And for instance, if
we are monitoring
the motion of a particle inside
a rectangular billiard, where
both horizontal and vertical
momenta are conserved,
we have just a
square lattice that
corresponds to the allowed
values of the momentum.
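For a rectangular billiard with side lengths L_x and L_y-- a notation I am assuming, since the slide is not shown-- the conserved momenta are quantized as

```latex
|p_x| = \frac{\pi \hbar\, n_x}{L_x}, \qquad
|p_y| = \frac{\pi \hbar\, n_y}{L_y}, \qquad
n_x, n_y = 1, 2, 3, \dots
```

so the states of this integrable system indeed form a square lattice in the quarter-plane of allowed momenta.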
And now the question
is, what happens
if you apply a
perturbation which
does violate this integrability?
And then of course,
this point which
was not moving at all in
a purely integrable system
will start to move.
But if you are in a KAM region,
then it will not go too far.
So the KAM statement,
in this representation,
is just the fact
that this trajectory
will demonstrate
finite motion.
And what happens?
What does it mean
for quantum system?
For a quantum system, it suggests
that on this line, which
is the energy shell-- the line
that corresponds
to a given energy
of your system--
there will be only a small
part of the energy shell
in which our system lives.
So only a small number
of the original states of
the integrable system are
involved in an eigenstate
of the perturbed problem.
So the difference between
chaotic and integrable behavior
is that chaotic
behavior would mean
that this wave
function is present
all over the energy shell,
while integrable means that
it is only one point.
And we have the intermediate
situation, when it is neither,
and the wave function
occupies a small part
of the energy shell.
And this phenomenon
is a kind of analog
of Anderson localization--
only instead of localization
of the wave function of a
particular particle in real
space (three-dimensional,
two-dimensional, or
one-dimensional),
we have localization
of our whole quantum
system in the space
of the integrals of motion
of the original integrable
problem.
And just to remind
you quickly, this all
started with the discussion
of diffusion.
So you can ask yourself: if
you create a certain narrow wave
packet, or a classical
particle located at
a certain point, and it is
then subject to some
kind of random walk,
then-- there was a famous
paper of Einstein
at the beginning of
the previous century--
the probability to find
this particle at
a certain distance
is governed by a
diffusion equation.
And in particular,
the mean square distance
increases linearly with time.
And what Anderson said is
that for quantum systems
it might not be the case:
instead of increasing,
the mean squared distance
from the original point
can saturate and remain
finite, even when
time goes to infinity.
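In formulas, the contrast Anderson pointed out can be written as follows, with D the diffusion constant, d the spatial dimension, and ξ the localization length-- symbols I am supplying, not from the talk:

```latex
\text{diffusion:}\quad \langle r^2(t) \rangle = 2 d\, D\, t \;\xrightarrow[t\to\infty]{}\; \infty,
\qquad
\text{localization:}\quad \langle r^2(t) \rangle \;\xrightarrow[t\to\infty]{}\; \xi^2 < \infty .
```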
And this means that if we are
in a certain quantum state,
this state will evolve.
But if all components are
localized in the same part
of the space, you
will not go too far.
And this is just a cartoon
of how the eigenfunctions
of a particle in a random
potential can behave.
The wave function might oscillate
very rapidly in real space,
but it can have
an envelope which
decays exponentially
beyond a certain distance--
and this means localization.
And to justify this
conclusion, Anderson
used the model which
is of interest for us.
It's now called the Anderson
model, and it's basically
a tight-binding model.
So you have a lattice.
Each site of the lattice
can host a quantum particle,
and you have one particle
for the whole lattice.
And it can tunnel, or hop,
between nearest neighbors
on this lattice.
And the only difference of
this from a conventional
tight-binding model is that
the on-site energies are not
equal to each other.
Instead they are
randomly distributed
in a certain interval.
And the width of this
interval, W,
is the characteristic
of your system.
And if you fix the
tunneling amplitude
between nearest
neighbors as one,
then this W becomes the parameter
of our theoretical study.
And the claim of Anderson was that
there is a critical value of W:
if W is bigger
than its critical value--
if disorder is strong enough--
all states are localized.
If disorder is weak, the
states can be extended.
And then it was
understood, after a lot
of numerical and
theoretical work,
that the particular model
is not that important.
You can consider all kinds
of modifications of this model,
but this qualitative
conclusion is robust enough.
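This robustness is easy to see numerically. The following is a minimal sketch-- system size and disorder values are my own choices, not from the talk: diagonalize a one-dimensional Anderson chain and compare the mean inverse participation ratio, the sum of |psi_i|^4, at weak and strong disorder. Localized states concentrate on a few sites and so have a much larger IPR than states spread over the whole chain.

```python
import numpy as np

def mean_ipr(n_sites, W, seed=0):
    """Mean inverse participation ratio sum_i |psi_i|^4, averaged over all
    eigenstates of a 1D Anderson chain: hopping t = 1 between nearest
    neighbors, on-site energies uniform in [-W/2, W/2]."""
    rng = np.random.default_rng(seed)
    eps = rng.uniform(-W / 2, W / 2, n_sites)
    hop = np.ones(n_sites - 1)
    H = np.diag(eps) + np.diag(hop, 1) + np.diag(hop, -1)
    _, vecs = np.linalg.eigh(H)          # columns are normalized eigenstates
    return float(np.mean(np.sum(vecs**4, axis=0)))

weak, strong = mean_ipr(400, W=0.5), mean_ipr(400, W=8.0)
# strong disorder -> states live on O(1) sites -> IPR of order one;
# weak disorder  -> states spread over the chain -> IPR of order 1/N
```

In one dimension all states are eventually localized, so this contrast reflects the localization length shrinking with W; the genuine localization-delocalization transition Anderson predicted needs three dimensions (or, as discussed later, a hypercube).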
Now, just some intuition
about how you can
think about this localization.
To start with, let's consider
something ridiculously simple:
the Anderson model of two sites,
which is also
known as the two-well potential.
In this case we have
two states, each belonging
to one well, and some
tunneling amplitude,
and you can consider the
Hamiltonian of this system
as a two-by-two matrix.
And depending on
the parameters of this matrix,
there are two possibilities.
Either the states are
off-resonance-- the difference
between the diagonal
matrix elements is
much larger than
the off-diagonal one--
and then, basically,
the tunneling will cause only
a small perturbation
of the original wave functions.
The opposite
situation is resonance,
when the diagonal matrix
elements are almost the same.
So without the tunneling, the two
levels are almost degenerate.
And then your resulting
wave functions
will be neither the wave function
in one well nor the other,
but rather the
bonding or anti-bonding
combination of these two states.
So in a sense, in
this situation,
the particle will belong equally
to each of the two wells.
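The two-site Hamiltonian and its two regimes can be written out explicitly-- with t the tunneling amplitude and ε₁, ε₂ the on-site energies, a notation I am assuming:

```latex
H = \begin{pmatrix} \varepsilon_1 & t \\ t & \varepsilon_2 \end{pmatrix},
\qquad
E_\pm = \frac{\varepsilon_1+\varepsilon_2}{2}
\pm \sqrt{\left(\frac{\varepsilon_1-\varepsilon_2}{2}\right)^{\!2} + t^2}.
```

Off resonance, |ε₁ − ε₂| ≫ |t|, the eigenstates are the single-well states with a small admixture of order t/(ε₁ − ε₂); on resonance, ε₁ ≈ ε₂, they are the bonding and anti-bonding combinations (|1⟩ ± |2⟩)/√2.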
And now, when we go to
a bigger system, where there
are more sites with random
on-site energies: when
the randomness is
very strong, there
will be only a few resonances,
and they will be very much
separated from each other.
And then the states will be
localized on one, or maybe two,
neighboring sites.
But when you keep
reducing disorder,
you will get more
and more resonances.
And they will start to overlap.
So you will have not two-level
but three-level resonances
and so on.
And at the end of the day,
you will get extended states.
And you can estimate
the transition point.
Now, just two words
about experiment.
I always like to
show this picture.
So you make a microwave
cavity, and then you
mimic disorder by something
you're not supposed to do
in a cavity: you put some
metallic particles inside it.
And what people
did was analyze the
distribution of the intensity
of the electromagnetic field.
And they found that,
depending on the frequency,
this distribution
is either strongly
localized in some
small part of the system,
or extended-- not uniform, but
present everywhere
in the system.
And another example comes
from the field of cold atoms--
an experiment people did
in France, with an analogous
experiment in Italy.
You first prepare the atoms
in a trap, and then allow
them to move in one
direction, but you
mimic the random potential
by complicated laser
speckles.
And what you observe
is that this wave
packet doesn't want to spread.
So coming back to many-body
systems-- or systems
with many degrees of
freedom-- what you can say
is that you have this
lattice, whatever it is,
and you have the energy shell.
And if the matrix elements
of the perturbation that
violates integrability
connect only relatively close
points of this lattice,
you have the same arguments,
and you get something analogous
to Anderson localization.
So basically you can
justify the Anderson model
for almost any Hamiltonian which
consists of these two parts.
And coming back to the
quantum computer Hamiltonian,
it is almost exactly the Anderson
model on a d-dimensional--
or here, n-dimensional--
hypercube.
Because the operator
sigma-x is just
sigma-plus plus sigma-minus,
which means that what the
second term in the Hamiltonian
does is flip a spin,
either from up to down
or from down to up.
And this is exactly the same as
a transition between nearest
neighbors.
So basically you have
something like the Anderson
model, only on a
large-dimensional hypercube.
And in principle it looks like
the situation is different,
because this cube has
linear dimension one.
But actually, what
is important is
that the distance between
different points on the hypercube
can be as big as the number of
spins-- the so-called Hamming
distance-- and the number
of sites is huge.
Nowadays you can steal almost
anything from the internet:
this is a six-dimensional
hypercube, and this is a
projection of a 9-dimensional
hypercube. A much bigger
hypercube is a huge system,
even though its linear size
is equal to one.
OK, now what can we tell,
as from the point of view
of statistical physics, about
these many-body localized
states?
A state which is localized
means, by definition,
that if I start here,
I will stay here forever;
if I start there, I will
stay there forever.
Which means that there is
nothing like equipartition,
and all the arguments
of statistical physics
are not applicable.
You cannot introduce
temperature.
The only thing you know is
that you have quantum states,
and the states have
certain energies.
So, for instance, entropy
does not increase with time
if states are localized.
For localized states,
our usual approaches
are not applicable.
And the question is, what
happens when we undergo
the Anderson transition?
Does the normal picture
appear immediately
after the transition, or not?
And in fact, to have some kind
of clue of what is going on,
it's again interesting to think
about the classical system.
And I have to
admit that there is
kind of a big difference of
approaches of mathematicians
and physicists.
There is a joke
that when a physicist
is asked to calculate
the stability of a table
with four legs, he first
solves the problem of a table
with one leg, then with an
infinite number of legs,
and then spends the
rest of his life working
on the table with an
arbitrary number of legs.
And what is lacking in
most mathematical papers
is the second step.
So most of the problems
of the transition
between integrable
motion and chaotic motion
are done for systems with
two degrees of freedom.
And nobody has really
built a mathematical theory
of such systems with a high
number of dimensions.
But there was still an
important step, made by Arnold.
And his statement
was interesting;
I will try to explain
to you how it goes.
So if you have two
degrees of freedom,
and you violate integrability
but the tori still
exist-- only
deformed-- then even
if there are only a few
tori in the system,
if you start inside some torus
you will always stay inside,
and if you start outside,
you will always stay outside.
Because the dimension of
the space is two;
the dimension of the phase space
is doubled, because of
coordinates and momenta.
So the phase space is
four-dimensional, and the
energy shell is three-dimensional.
And a torus-- remember, it is
just where a particular set
of the integrals of
motion is realized--
has dimension equal
to the dimension of the space.
So we have two-dimensional tori
in a three-dimensional energy shell.
And it's clear that we
are always either inside
or outside, because a trajectory
cannot cross any torus.
Now, when you go to a
higher number of dimensions--
let's say three-- the
difference in dimensions
between a torus and the energy
shell is bigger than one.
So in a sense, what do we have?
We have not a torus in
three-dimensional space,
but something like a circle in
three-dimensional space.
And a circle doesn't
have an inside and an outside.
And what happens as a
result is that even when
this perturbation
is very, very weak--
but finite-- there
will still be certain
trajectories that go
between these surviving tori
many times.
And as a result you can
get arbitrarily far
in the energy shell.
And this process is known
as Arnold diffusion--
so that at least some
part of the probability
will be actually diffusing.
And I think this
phenomenon, which
is rather weak in
three dimensions,
becomes dominant in a very
high number of dimensions.
So here are just two systems
which I wanted to discuss,
though I don't have time to
give you more details.
One is transport
of interacting particles,
where you have a certain interval
of temperatures in which there
is no conductivity at
all; or you can also
study a Josephson array.
And you can find
that there is also
behavior which corresponds to
localization in this Hilbert
space.
And there is a part
of the phase diagram
where we have
ergodic behavior,
but in between there is a new
state-- a new type of behavior,
which we call a "bad metal,"
that corresponds to a
non-ergodic but extended state.
So this is more or less the
generic evolution of our system
as we increase the
violation of integrability
and go from the
insulator to the ergodic metal.
And as we believe
now, there is not
one transition, as was
predicted by Anderson,
but two transitions: one from
insulator to non-ergodic metal,
and another from the
non-ergodic to the ergodic one.
Now, how much time do I have?
SPEAKER 1: You have
like seven more minutes.
BORIS ALTSHULER: Ah, OK.
I think that will be enough.
So, a few words
about ergodicity.
The way of thinking
about it is the following.
If you have some
generic problem like the Anderson
model, then there
is-- as I said--
a critical value of the
disorder at which the Anderson
transition happens.
And in any finite system
there is also the usual
critical behavior,
where a system is neither
localized nor extended,
because the critical
volume of the system
diverges when we approach the
critical point in disorder.
And the way to analyze
these wave functions
is the following.
You can consider the so-called
moments of the inverse
participation ratio, which
are the sums of the wave
function to the power 2q,
where q is a parameter in our hands.
Of course this wave function
is supposed to be normalized,
so for q equal to one, we
will always get one.
But as soon as q is not
equal to one,
these quantities
should somehow
scale with the system size:
they should be proportional
to the total volume-- the total
number of sites in the lattice--
to some negative power.
And just to make simple
estimates: if everything
is ergodic, every site
is more or less equally occupied,
so since the wave function
is normalized,
its square on each
site is inversely
proportional to the
number of sites.
When you take it
to the power q,
it is the number of
sites to the power minus q,
and the sum over all sites in the
definition of this moment
of the inverse participation
ratio gives you the number of
sites to the power one minus q.
So all in all, this exponent
will be equal to q minus 1.
Now, if we have a
localized state,
it is clear that if the wave
function is present only
on a finite number
of sites, then
all these moments of the
participation ratio
are independent of the system size,
and all the tau_q
are equal to zero.
And the reason why I'm
speaking about ergodicity
is that you can show
that if tau_q is exactly
equal to q minus one,
then averaging over space
is equivalent to
averaging over an ensemble
of different
realizations of disorder.
And this resembles
the definition of ergodicity
that was given by Boltzmann,
who defined ergodicity as
the fact that the average over
space and the average over time
are equivalent.
OK, so if everything is
ergodic, these exponents
are equal to q minus one;
if everything is localized,
they are equal to zero.
And in the generic case, when
it is neither zero nor q
minus one, you can
introduce the ratio
of these exponents
to q minus one.
And this ratio is called
the fractal dimension.
So for each q, there
is a certain fractal dimension.
And the situation when all
the D_q are different
is called multifractality.
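Collecting the definitions just given-- with N the number of sites in the lattice, a symbol I am supplying:

```latex
I_q = \sum_{i=1}^{N} |\psi_i|^{2q} \propto N^{-\tau_q},
\qquad
D_q = \frac{\tau_q}{q-1},
\qquad
\tau_q =
\begin{cases}
q-1, & \text{ergodic (extended) states},\\[2pt]
0, & \text{localized states},
\end{cases}
```

so fractal dimensions strictly between 0 and 1, varying with q, signal the multifractal regime.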
Now, if we are dealing with
the ordinary three-dimensional
Anderson transition, there
are serious reasons
to believe that this
multifractality takes place
only at the critical point.
So as soon as you are out
of the critical region,
the situation becomes
either ergodic or localized.
But it is probably not the case
for many-body localization.
And the reason is that if you
think about the Hilbert
space, and about how
many states are connected
with a particular state
in the Nth order of perturbation
theory, this number increases
exponentially with N, rather
than polynomially, as it does
in a finite-dimensional space.
Because of that,
it is much more
probable to have a resonance
at a large distance.
And as a result,
the wave function
after delocalization
can still be imagined
as a certain number of more
or less separated large peaks.
And of course, in
this situation, when
we have a wave function
which consists
of some number of peaks that
are far from each other,
there is no ergodicity,
because there
are such strong fluctuations.
So I will probably skip this
story about the Anderson model
on the Bethe lattice,
which allows one to make
a kind of poor man's calculation
and demonstrate
that there is indeed
a big region in the parameters
where the wave function is
neither ergodic nor localized,
but rather multifractal.
And now the question
which we have to understand
is how to use these
types of wave functions.
From a generic
point of view,
it looks very tempting to
do quantum computation
in the region where the
wave functions of our system
are indeed multifractal
and consist
of several separated peaks.
Because this type
of wave function
actually contains a
lot more information
than a localized or
fully ergodic state.
So let me finish here.
My time is over, and thank you
very much for your attention.
SPEAKER 1: So we're
just at 2 o'clock,
but maybe one or two
questions for-- no questions?
Then I think we--
AUDIENCE: I have a question.
So what does--
SPEAKER 1: Let him use
the microphone please.
AUDIENCE: Yes. Given these phenomena that you describe, how do you think this could potentially be applied to the field of machine learning? In a very broad sense--
BORIS ALTSHULER: In a broad sense, I think it is the following-- that, if I understand correctly, in machine learning you start by giving a certain number of examples.
For instance, you show a picture, which is a set of pixels, and say: this is a dog, this is a dog, this is a dog. And all the pictures are different.
And then what you want to create is a kind of idea-- an abstract idea of a dog. And if you just have a large number of particular pictures, what you can do is form a state which is a linear combination of those. And then, by looking in some sense at the scalar products of this state with the picture which you should recognize, you can probably get some idea.
I think this is the way our brains work when they make these kinds of generalization. They take particular examples and form a kind of linear combination of them.
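As a rough sketch of that picture (my own toy example, not the speaker's): form a "prototype" as the normalized mean, a simple linear combination, of example vectors, and recognize a new input by its scalar product with the prototype. The vectors and the noise level here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "dog" direction and 20 noisy example pictures of it.
dog_template = rng.normal(size=64)
dogs = [dog_template + 0.3 * rng.normal(size=64) for _ in range(20)]

# The abstract "idea of a dog": a normalized linear combination of examples.
prototype = np.mean(dogs, axis=0)
prototype /= np.linalg.norm(prototype)

def overlap(x):
    # Normalized scalar product of the prototype with a new picture.
    return float(np.dot(prototype, x) / np.linalg.norm(x))

new_dog = dog_template + 0.3 * rng.normal(size=64)
not_dog = rng.normal(size=64)
print(overlap(new_dog))   # close to 1
print(overlap(not_dog))   # close to 0
```

Averaging suppresses the example-specific noise by roughly the square root of the number of examples, which is why the scalar product separates new dogs from unrelated inputs.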
AUDIENCE: OK, thanks.
SPEAKER 1: So maybe adding
to Vadim's question,
since he asked already
about machine learning
and this is your last
question, if you would
be so kind to
speculate, how do you
think we could use such
non-ergodic states in quantum
annealing?
BORIS ALTSHULER: Suppose we are
discussing exactly the problem
which we discussed
yesterday-- that you found
some state with low
energy, some solution,
some bit string which
corresponds to low energy.
And we want to find another one which also corresponds to low energy.
So imagine now that there is a wave function which is formed by states localized here-- near one solution, near another solution, near a third solution. And these states are close in energy.
Now, when we decrease the disorder or increase the transverse field-- what are we doing? We are hybridizing them. So the true states of our system in this region of energies will be a kind of linear combination of the solutions.
And then, if you put your
system in one of these states,
then it will be a
linear combination
of these states
with nearby energies
in this particular point.
So all states with those peaks-- symmetric, anti-symmetric-- all states which are obtained from the N nearby states will have the same form.
And if originally you start with only one peak, after you allow them to evolve and generate some phases, you will unavoidably grow another peak, and then yet another peak, and so on.
So in this sense, if you have this multi-fractal state, what will you have? You will have a continuously increasing probability to get into a new solution or a new approximate solution.
So in a sense, if we have this type of structure of wave functions, it's much better than going through an ergodic state, which contains no information about the structure. It's just distributed completely.
And of course, if we are already in a localized regime, it's also bad, because then you will never go anywhere.
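The hybridization picture can be sketched numerically. This is a minimal toy model of my own, not the speaker's: two nearly degenerate localized "solutions" coupled by a small transverse-field matrix element t. Starting with all the weight in one peak, the phases generated by time evolution move probability into the other peak, exactly the mechanism described above. The values of E0, E1, and t are invented for illustration.

```python
import numpy as np

# Two low-energy "solutions" with a small energy splitting, hybridized by t.
E0, E1, t = 0.00, 0.02, 0.05
H = np.array([[E0, t],
              [t,  E1]])

evals, evecs = np.linalg.eigh(H)       # exact eigenbasis of the 2x2 model
psi0 = np.array([1.0, 0.0])            # start localized on solution 0

def prob_peak1(time):
    # Evolve psi0 under exp(-i H time) via the eigenbasis and return the
    # probability of finding the system in the second peak.
    phases = np.exp(-1j * evals * time)
    psi_t = evecs @ (phases * (evecs.T @ psi0))
    return float(np.abs(psi_t[1]) ** 2)

print(prob_peak1(0.0))   # ~0: all weight still in the first peak
# At later times the weight oscillates into the second peak:
print(max(prob_peak1(s) for s in np.linspace(0, 200, 2000)))
```

For a two-level system the maximum transferred weight is t**2 / (t**2 + ((E1 - E0) / 2)**2), so the closer the solutions are in energy relative to the hybridization, the more completely the second peak grows.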
AUDIENCE: I have
one more question.
If you think of quantum
computation as a quantum state
evolving with a [INAUDIBLE]
the Hamiltonian,
and you look at the state
which is [INAUDIBLE] up
in the next step that you're
preparing in the computation,
do you think this
will be multi-fractal
or localized-- delocalized?
BORIS ALTSHULER: No, it depends on the transverse field.
So as I told you, if your transverse field is large-- not infinitely large, then it dominates, but large enough-- you might in principle have-- I think I should show you this phase diagram.
So if your transverse field is very large-- and here it's vice versa, the transverse field is equal to one, but the z field is increasing-- this W. So if W is very, very small, you are in the red region, which is ergodic.
And only at the edges of the band, you start to have this non-ergodic state. But when you increase the disorder or decrease the transverse field, you will get into the green region, where you have a non-ergodic state.
And the further you go up, the smaller and smaller the fractal dimensions of these states become. And if you go too far, you will get into the localized region.
So depending on this transverse field and on your energy, you can deal with different states.
And I think this is generic for
more or less any Hamiltonian.
AUDIENCE: I think you were talking more about quantum annealing, and I was thinking it wasn't very clear for gate-model quantum computation, which I think, if you express it as a Hamiltonian, will also have a time-dependent transverse field and--
BORIS ALTSHULER: Oh yeah.
But what I learned-- maybe now it is disproved, but there is some kind of mathematical paper with a large number of authors-- is that any quantum algorithm, even a circuit one, can be reduced to quantum annealing of this type. So I think this way of thinking is even more generic than it looks.
SPEAKER 1: OK.
Then we should conclude here.
So, thanks Boris for the
very informative talk.
