All right. There's one more class of randomized optimization
algorithms that is really attractive. It's very interesting to
people, and it has proven itself time and time again. And it's
this notion of genetic algorithms.
the main insight that's exploited in the genetic algorithm setting
is this. So let's imagine we've got a two dimensional
space. And, it's you know, hard for me to actually
draw a, a fitness surface over a two dimensional space.
So just kind of think of this as being one of
those maps. Those contour maps. And so imagine that we've
got, one dimension that X now comes in, in these
two different dimensions. And What we're trying to do is find
the peak, which happens to be there. So what if,
we actually evaluate 3 different points, so these green points
here, we actually check what the values are at these
points. So what we find is that, from this initial point,
this green point here. If we increase our dimension 2, we
get a better value. But, it's also the case. It's starting from
that point. If we increase on dimension 1, we get a
better value. So, maybe, what we, we ought to do, is take
kind of elements of these 2 solutions, these 2 inputs and combine
them together, and move out on dimension 1 and dimension 2, and
maybe that will actually give us a good score as well.
And in this particular case, it puts us in the base interaction
of the local maxima. So, this turned out to
be useful in many spaces, especially spaces that can be
specified combinatorially like this. Where there is this separate dimensions
that contribute in various ways to the overall fitness value.
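As a rough illustration of that insight in code (the fitness function and the specific points here are made up, standing in for the lecture's contour map, not taken from it):

```python
def fitness(x1, x2):
    # Hypothetical unimodal surface peaked at (5, 5).
    return -((x1 - 5) ** 2 + (x2 - 5) ** 2)

start = (2, 2)
moved_dim2 = (2, 4)   # improved by increasing dimension 2
moved_dim1 = (4, 2)   # improved by increasing dimension 1

# Combine the good attribute of each: dimension 1 from one point,
# dimension 2 from the other.
combined = (moved_dim1[0], moved_dim2[1])

for p in (start, moved_dim2, moved_dim1, combined):
    print(p, fitness(*p))
# The combined point (4, 4) scores better than either parent,
# landing in the basin of attraction of the maximum at (5, 5).
```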
>> Okay, Michael, that sort of makes sense. But what does
this have to do with genetics, or algorithms for that matter?
>> Well, it's an algorithm in that it's an optimization
algorithm, and the genetic part is because what we're going
to do is build an analogy with biological evolution.
>> Mm, analogies.
>> In particular, instead of thinking about these input
points, these little green dots, we're going to think of
each input point as being like an individual, and
a group of them taken together as a population.
>> Mm-hm. It's really the same idea, but
we're just giving it a different name. Okay.
>> The idea of local search, where you make little changes
to an input, we're going to now call mutation. All right?
It's where you take an individual and tweak it a little bit.
>> Oh, like we did in the example we did before, where we
defined the neighborhood as every one-bit difference, a
change in any single bit.
>> That's right. So the mutations can happen along
neighborhoods. It's the same kind of concept as that.
>> Okay.
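For the bit-string setting just mentioned, mutation might look like this minimal Python sketch (the flip probability is an arbitrary choice):

```python
import random

def mutate(bits, p_flip=0.1):
    # Flip each bit independently with small probability. A single
    # flip moves the individual to one of its one-bit neighbors,
    # exactly the neighborhood notion from before.
    return [b ^ 1 if random.random() < p_flip else b for b in bits]

individual = [1, 0, 1, 1, 0, 0, 1, 0]
print(mutate(individual))   # e.g. [1, 0, 1, 1, 0, 1, 1, 0]
```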
>> And then, you know, you can see that there are
mutations happening over X. So I assume that you get X-Men.
>> [LAUGH] I think that's a fair point. I like your science.
>> Yeah. [LAUGH] Yeah, that's right. It
is science. Those are all concepts
that we were already using when we
were doing these other randomized optimization algorithms.
One thing that's different, though, is the
notion of crossover. What crossover does
is take different points, like this green point and this green point,
and instead of moving them each within their own neighborhood, it
gives you a way of combining their attributes together, with
the hope of creating something even better. So that is
where it starts to actually deviate from the
standard notion of local search or randomized optimization and gets
us into something that feels a little more like
evolution. So this is kind of like, dare I say,
sexual reproduction, where the two parents can actually form a
new kind of offspring that, if you're lucky, has
all the positive attributes of both of the parents. Like my children.
>> Uh-huh, and if you're unlucky, it has the
worst attributes of the parents, like other people's children.
>> [LAUGH]
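Here's a minimal sketch of one common way to realize this, one-point crossover over bit strings; the split-point scheme is an illustrative choice, not the only one:

```python
import random

def one_point_crossover(parent_a, parent_b):
    # Pick a split point, then exchange tails: each offspring keeps
    # one parent's attributes on one side and the other parent's on
    # the other, combining pieces of both solutions.
    point = random.randrange(1, len(parent_a))
    child_1 = parent_a[:point] + parent_b[point:]
    child_2 = parent_b[:point] + parent_a[point:]
    return child_1, child_2

a = [1, 1, 1, 1, 0, 0, 0, 0]
b = [0, 0, 0, 0, 1, 1, 1, 1]
print(one_point_crossover(a, b))
```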
Exactly. And finally, what we were calling
iteration before, in the context of genetic algorithms we
can call a generation, because we're going to take a
population of individuals and mate them together
to create a new population of individuals, which we
hope improves generation by generation.
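As a rough sketch of that loop, here's one generation in Python, assuming a made-up "count the ones" bit-string fitness and arbitrary parameter choices; the keep-the-fitter-half selection used here is just one common scheme:

```python
import random

def fitness(bits):
    # Hypothetical fitness: count the 1 bits ("one-max").
    return sum(bits)

def mutate(bits, p_flip=0.05):
    # Small random tweak to an individual.
    return [b ^ 1 if random.random() < p_flip else b for b in bits]

def crossover(a, b):
    # One-point crossover, as sketched above.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def next_generation(population):
    # Keep the fitter half as parents, then mate random pairs
    # (with mutation) to refill the population.
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[: len(ranked) // 2]
    children = []
    while len(children) < len(population):
        mom, dad = random.sample(parents, 2)
        children.append(mutate(crossover(mom, dad)))
    return children

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(50):
    population = next_generation(population)
print(max(fitness(ind) for ind in population))
```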
>> Okay, that makes sense. So, if I can just push
on this for a minute, it seems like, if it weren't for crossover,
this is really like doing random restarts, except instead of doing the restarts
sequentially, you just do them all at once because we have parallel computers.
>> Yeah, I think that's fair. I think that's quite fair, actually.
>> Okay. So then the thing that makes it more than that is crossover:
that somehow these parallel random searches are bleeding
information back and forth. Though maybe bleeding is the wrong word.
>> [LAUGH] Yeah, you don't want to get too biological about this.
>> Yes, right. Well, so they're
sharing fluids, metaphorically, with one another.
>> [LAUGH]
>> And conveying information that way, just the
way genes do. Right. And so that's
the sort of interesting concept: now we have
information not just in the individual, where we're moving
that one individual around and trying to find better and
better solutions, but in the population as a whole, which represents a
distribution over individuals. And that distribution might actually be
a useful thing to guide the search for higher-scoring individuals.
>> Okay. That, that makes sense.
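One simple way to use the population as a distribution is fitness-proportional selection; here's a minimal sketch, with the same kind of made-up bit-string fitness as above:

```python
import random

def fitness(bits):
    # Same hypothetical "count the ones" fitness as before.
    return sum(bits)

population = [[random.randint(0, 1) for _ in range(10)] for _ in range(8)]

# Treat the population as a distribution: sample parents with
# probability proportional to fitness (+1 keeps every weight
# positive), so fitter regions of the space get searched more often.
weights = [fitness(ind) + 1 for ind in population]
parents = random.choices(population, weights=weights, k=2)
print(parents)
```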
