This is the prebuilt application, so if I go ahead and run it you have three options: train, run, or exit. If we run, we have a big difference between the actual result and the target result, so let's go ahead and do some training and now run it again. Here we go: the actual result and the target result are now very close. What we have here is a neural network of five neurons: two in the input layer, two in the hidden layer, and one in the output layer. The i indicates the input layer, the h indicates the hidden layer, and the o indicates the output layer. Those are the two weights in this neuron, and this is the threshold for
this neuron. So each neuron has two inputs and a threshold. Here the adjusted error is smaller than the current error, so we go ahead and proceed; this is the same case, and this is the same case. Here the adjusted error is bigger than the current error, so we take a look at the probability number and the random number: the random number is bigger than the probability number, so we hold, and we keep the same weights and threshold. Here again the adjusted error is bigger than the current error, so we look at the probability number and the random number; this time the random number is smaller than the probability number, so we go ahead and proceed.
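In Java, that proceed-or-hold rule might be sketched like this (the class and method names are mine, not from the application):

```java
import java.util.Random;

public class AcceptanceDemo {
    // Decide whether to accept an adjusted solution, simulated-annealing style.
    // An improvement is always accepted; a worse result is accepted only when
    // the random number falls below the probability number.
    static boolean shouldProceed(double currentError, double adjustedError,
                                 double probability, Random random) {
        if (adjustedError < currentError) {
            return true; // always proceed on improvement
        }
        return random.nextDouble() < probability; // sometimes proceed anyway
    }

    public static void main(String[] args) {
        Random random = new Random();
        // An improvement is accepted even with probability 0.
        System.out.println(shouldProceed(0.5, 0.3, 0.0, random)); // true
        // A worse result with probability 0 is always held.
        System.out.println(shouldProceed(0.3, 0.5, 0.0, random)); // false
    }
}
```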
And so on, until we arrive at those weights and thresholds for the neural network. We also have this temperature, and it is used in the formula that decides whether we proceed or hold when the adjusted error is bigger than the current error: when the temperature is high we can proceed more often, and as it gets lower and lower we do more holding. This is the same idea as in annealing: when the metal is hot it is easier to manipulate, and when it gets colder it becomes much harder to manipulate. I will start by creating a new project,
and here I will have a NeuralNetwork class, and it will be in this package. The neurons in this neural network will be in either the input layer, the hidden layer, or the output layer: two neurons in the input layer, two neurons in the hidden layer, and one neuron in the output layer, so the total number of neurons will be five. We will also have a Neuron class. A neuron will have a threshold, two weights, and an output, and it will belong to one of three layers: input, hidden, or output. We will have get and set methods for the threshold, the weights, and the output, and a get method for the layer type. We will also have a SimulatedAnnealing class, where the simulated annealing logic will live, and an application driver class with a main method.
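A sketch of that Neuron class might look like this; the field and method names here are my own guesses, not necessarily the ones used in the video:

```java
// A Neuron as described so far: a threshold, two weights, an output,
// and the layer it belongs to, with getters and setters for the first
// three and only a getter for the layer type.
public class Neuron {
    public enum LayerType { INPUT, HIDDEN, OUTPUT }

    private double threshold;
    private double[] weights = new double[2];
    private double output;
    private final LayerType layerType;

    public Neuron(LayerType layerType) {
        this.layerType = layerType;
    }

    public double getThreshold() { return threshold; }
    public void setThreshold(double threshold) { this.threshold = threshold; }

    public double[] getWeights() { return weights; }
    public void setWeights(double[] weights) { this.weights = weights; }

    public double getOutput() { return output; }
    public void setOutput(double output) { this.output = output; }

    // Only a getter: a neuron never changes layers after construction.
    public LayerType getLayerType() { return layerType; }
}
```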
Now, the Neuron class will have two constructors: one that instantiates a new neuron given the layer it will belong to, and another that instantiates a new neuron given a threshold value, the two weights, and the layer it belongs to. We will be using the log-sigmoid transfer function: 1 over 1 plus the exponential of minus 1 times the weighted sum that is passed in here. We will also have this method that instantiates a new neuron, populates it with the adjusted weights and threshold of this neuron, and returns that new neuron. That should do it for this class.
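The log-sigmoid transfer function just described is short to write down; this is a minimal version with the same 1 / (1 + e^(-x)) shape, squashing any weighted sum into the range (0, 1):

```java
public class Activation {
    // Log-sigmoid transfer function: 1 / (1 + e^(-1 * weightedSum)).
    static double applyActivation(double weightedSum) {
        return 1.0 / (1.0 + Math.exp(-1.0 * weightedSum));
    }

    public static void main(String[] args) {
        System.out.println(applyActivation(0.0));   // 0.5
        System.out.println(applyActivation(10.0));  // close to 1
        System.out.println(applyActivation(-10.0)); // close to 0
    }
}
```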
Now let's go to the NeuralNetwork class. Here we will have an array holding all the neurons, the input, hidden, and output neurons, with get and set methods. This class will have two constructors: one that instantiates a new neural network given a passed-in array of neurons, which is used to populate this array of neurons, and a second that instantiates a new neural network with two input neurons, two hidden neurons, and one output neuron. So here, indices 0 to the number of input neurons, and then from the number of input neurons to the number of input neurons plus the number of hidden neurons; each neuron will be initialized with a random threshold and two random weights.
And I will have the XOR training data defined here: if the inputs are 0 and 0 or 1 and 1, the target result will be 0, and if the inputs are 0 and 1 or 1 and 0, the target result will be 1. Here let me define this run method; let's return 0 for now. It takes in the input coming from here.
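The XOR training data can be laid out, for example, as rows of two inputs plus a target column; the exact layout used in the video may differ:

```java
public class XorData {
    // XOR truth table: (0,0) and (1,1) map to 0; (0,1) and (1,0) map to 1.
    // The last column of each row is the target result.
    static final double[][] TRAINING_DATA = {
        {0, 0, 0},
        {0, 1, 1},
        {1, 0, 1},
        {1, 1, 0},
    };

    public static void main(String[] args) {
        for (double[] row : TRAINING_DATA) {
            System.out.println((int) row[0] + " XOR " + (int) row[1]
                    + " = " + (int) row[2]);
        }
    }
}
```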
And I will have this calculateRootMeanSquareError method that calculates that error and returns it; run here returns the actual result. We go through all the training data, subtract the actual result from the target result, square that, and add all of those squares up across the training data, ending up with the local errors. Then we divide the local errors by the training data length and take the square root of that.
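That error calculation — sum the squared differences, divide by the number of rows, take the square root — might look like this, with the method name borrowed from the description:

```java
public class Rmse {
    // Root mean square error: for each training row, square the difference
    // between target and actual, sum the squares, divide by the number of
    // rows, then take the square root.
    static double calculateRootMeanSquareError(double[] targets, double[] actuals) {
        double localErrors = 0.0;
        for (int i = 0; i < targets.length; i++) {
            double diff = targets[i] - actuals[i];
            localErrors += diff * diff;
        }
        return Math.sqrt(localErrors / targets.length);
    }

    public static void main(String[] args) {
        // A perfect match gives an error of 0.
        System.out.println(calculateRootMeanSquareError(
                new double[]{0, 1, 1, 0}, new double[]{0, 1, 1, 0}));
        // Being off by 0.5 on every row gives an RMSE of 0.5.
        System.out.println(calculateRootMeanSquareError(
                new double[]{0, 1, 1, 0}, new double[]{0.5, 0.5, 1.5, 0.5}));
    }
}
```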
For the run method, we go through all the neurons and check whether each one is an input, a hidden, or an output neuron. If it is an input neuron, we set that neuron's output to what's coming in as input from the training data. If it is a hidden neuron, we add that neuron's threshold to the weighted sum, then go through all the neurons looking for input neurons, picking up each input neuron's output one at a time and multiplying it by the corresponding weight of the neuron we're working on; we end up with a weighted sum that we pass to the apply-activation function. If we have an output neuron, we do the same thing, except this time we go through all the neurons checking for hidden neurons, pick up each hidden neuron's output, and multiply it by the corresponding weight of the neuron we're looping through; again we end up with a weighted sum that we pass to the apply-activation function. The run method returns as its result the output of the output neuron, at index number of neurons minus 1.
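Putting that feed-forward pass together, here is a sketch using plain arrays instead of the Neuron objects, assuming the 2-2-1 layout described above (neurons 0–1 input, 2–3 hidden, 4 output); the real run method walks Neuron instances, but the arithmetic is the same:

```java
public class FeedForwardDemo {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // thresholds[i] and weights[i] belong to neuron i (unused for inputs).
    static double run(double[] input, double[] thresholds, double[][] weights) {
        double[] outputs = new double[5];
        // Input neurons just pass the training inputs through.
        outputs[0] = input[0];
        outputs[1] = input[1];
        // Hidden neurons: threshold plus weighted sum of the input outputs.
        for (int h = 2; h <= 3; h++) {
            double weightedSum = thresholds[h]
                    + weights[h][0] * outputs[0]
                    + weights[h][1] * outputs[1];
            outputs[h] = sigmoid(weightedSum);
        }
        // Output neuron: threshold plus weighted sum of the hidden outputs.
        double weightedSum = thresholds[4]
                + weights[4][0] * outputs[2]
                + weights[4][1] * outputs[3];
        outputs[4] = sigmoid(weightedSum);
        return outputs[4]; // the actual result: output of the last neuron
    }

    public static void main(String[] args) {
        double[] thresholds = {0, 0, 0, 0, 0};
        double[][] weights = {{}, {}, {1, 1}, {1, 1}, {1, 1}};
        // With zero thresholds and all-ones weights, input (0,0) gives
        // hidden outputs of 0.5 each, so the result is sigmoid(1).
        System.out.println(run(new double[]{0, 0}, thresholds, weights));
    }
}
```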
Now, going to the SimulatedAnnealing class: let me define this rate of cooling and this initial temperature, and I'll set the starting temperature to be the initial temperature. We'll also define this minimum temperature. Let me define those two printing methods: this one prints the heading, and this one prints the rows below that heading. I will have a public anneal method that is called in order to perform simulated annealing on this neural network, and a private adjustNeurons method that takes in a neural network, goes through each neuron in that network, calls the adjust method on it, puts those adjusted neurons in this neuron array, does a setNeurons on the neural network, and for convenience returns the neural network that was passed in. In anneal, I will first print the heading and then go into a while loop: while the temperature is bigger than this minimum temperature, starting from the initial temperature, we use this formula to decrease the temperature: the new temperature equals the old temperature times one minus the rate of cooling.
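That cooling formula gives a geometric decay; a quick sketch shows how many passes the while loop makes before reaching the minimum temperature (the concrete numbers here are illustrative, not taken from the video):

```java
public class CoolingDemo {
    // Count how many cooling steps a geometric schedule
    // temperature *= (1 - rateOfCooling) takes to reach minTemperature.
    static int stepsToCool(double temperature, double minTemperature,
                           double rateOfCooling) {
        int iterations = 0;
        while (temperature > minTemperature) {
            temperature = temperature * (1 - rateOfCooling);
            iterations++;
        }
        return iterations;
    }

    public static void main(String[] args) {
        // Cooling from 10,000 down to 1 at 5% per step.
        System.out.println(stepsToCool(10_000, 1, 0.05) + " cooling steps");
    }
}
```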
Inside this while loop, I will start by picking up the current error by calling calculateRootMeanSquareError on the neural network that is passed in, and I will also pick up the adjusted error by calling this adjustNeurons method and then calling calculateRootMeanSquareError on the adjusted neural network that is returned. If the adjusted error is smaller than the current error, we will go ahead and proceed: let me define this StringBuffer and append the proceed message to it. Here let me pick up the neurons on the passed-in neural network, since I want to keep track of the current network: the neurons will be what's returned from neuralNetwork.getNeurons(), especially since we called adjustNeurons on the neural network, and the current error will be the adjusted error. Let me also define the method that is used in case the adjusted error is bigger than or equal to the current error, in the else case here; for now let's return false. So I will do an else statement here: if this method returns true, then we will proceed, so the neurons will be what's returned from neuralNetwork.getNeurons() and the current error will be the adjusted error. Then I set the neurons and print the current row on each pass through the while loop. That will be the logic for this method. This is the
acceptance probability formula that I am using here, and this number is for convenience. The idea is the same as in the annealing process: as the temperature decreases, it becomes more difficult to manipulate the metal, so here, as the temperature decreases, we will increasingly do more holding than proceeding. We get a number from the formula, which is the probability number, and we come up with a random number. If the probability is bigger than the random number, we set the flag that we will be returning to true, and if it is true we print out the proceed message, the random number being smaller than or equal to the probability; otherwise it defaults to a decision of hold, the random number being bigger than the probability. We end up appending the decision to the passed-in StringBuffer and returning the acceptance flag. That should do it for this class.
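A commonly used acceptance probability has the form e^(-(adjustedError - currentError) * k / temperature); the exact convenience constant from the video isn't visible here, so the k below is illustrative. At high temperature the probability stays near 1, so we proceed often; as it cools, the probability collapses and we mostly hold:

```java
import java.util.Random;

public class AcceptanceProbabilityDemo {
    // Illustrative scaling constant, not the video's exact number.
    static final double K = 100_000;

    // Proceed when the random number falls below the probability; hold otherwise.
    static boolean proceedOrHold(double currentError, double adjustedError,
                                 double temperature, Random random) {
        double probability =
                Math.exp(-(adjustedError - currentError) * K / temperature);
        return random.nextDouble() < probability;
    }

    public static void main(String[] args) {
        Random random = new Random();
        // Same error increase, two temperatures: hot accepts far more often.
        int hotAccepts = 0, coldAccepts = 0;
        for (int i = 0; i < 10_000; i++) {
            if (proceedOrHold(0.50, 0.51, 10_000, random)) hotAccepts++;
            if (proceedOrHold(0.50, 0.51, 100, random)) coldAccepts++;
        }
        System.out.println("hot accepts:  " + hotAccepts);
        System.out.println("cold accepts: " + coldAccepts);
    }
}
```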
Next I will go to the driver class. Let's start by defining this printing method; it will print the result of running the neural network. I will be using a BufferedReader to pick up what the user enters on the command line. We will have an always-true while loop, and we start by prompting the user: they can enter train, run, or exit, and we read what the user entered. If they entered exit, we break out of the while loop and exit the application. Otherwise, if they entered train, I will instantiate a SimulatedAnnealing instance; let me define a new neural network here, then instantiate a new SimulatedAnnealing instance and call the anneal method on it, passing in this neural network. And if the user wants to run the neural network, I will define this result array that will contain the result of running the network, and then pass the result to the printResult method. Actually, instead of four here, let's use trainingData.length, so we find the result for each one of those inputs. And this should do it for this class.
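The command loop can be sketched like this; the train and run branches are placeholders standing in for the SimulatedAnnealing and NeuralNetwork calls described above:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class Driver {
    // Dispatch one command line; returns false when the loop should end.
    static boolean handleCommand(String command) {
        if (command == null || command.equals("exit")) {
            return false; // break out of the while loop
        } else if (command.equals("train")) {
            System.out.println("training..."); // anneal the network here
        } else if (command.equals("run")) {
            System.out.println("running...");  // evaluate the network here
        }
        return true;
    }

    public static void main(String[] args) throws IOException {
        BufferedReader reader = new BufferedReader(new InputStreamReader(System.in));
        while (true) {
            System.out.print("Enter train, run, or exit: ");
            // readLine returns null at end of stream, which also ends the loop.
            if (!handleCommand(reader.readLine())) {
                break;
            }
        }
    }
}
```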
Next let me go ahead and test run the application. Let's try run first: we have a big difference between the actual result and the target result, so let's try to do some training and run it again. And here we go: the actual result and the target result are now very close.
