In this lesson, 
we will see a first application
of algorithmic information dynamics 
to an abstract area of mathematics -
that of graph theory -
and also to a major area in science -
that of network reconstruction 
or network reverse engineering.
We will use these concepts,
and at least three papers
that are directly connected
to these applications.
But obviously, all other papers
about CTM and BDM are also relevant,
including those on two-dimensional
complexity and graph complexity.
So, you may remember what
the graph spectrum of a graph is.
It is the list of eigenvalues
of its adjacency matrix,
sorted from largest to smallest.
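As a quick illustration, the spectrum can be computed directly from the adjacency matrix. Here is a minimal sketch in Python with NumPy; the choice of a 4-cycle is just a hypothetical example:

```python
import numpy as np

# Adjacency matrix of a 4-cycle (C4), a small illustrative graph
A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
])

# The graph spectrum: eigenvalues of the adjacency matrix,
# sorted from largest to smallest
spectrum = np.sort(np.linalg.eigvalsh(A))[::-1]
print(spectrum)  # the spectrum of C4 is 2, 0, 0, -2
```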
Something very interesting is that
the area has very few analytical results;
much of what is known is empirical.
There are some obvious eigenvalues 
related to very simple cases,
such as in the case 
of the complete graph.
In a complete graph, 
the largest eigenvalue
is proportional to 
the size of the graph.
So, it basically provides 
the size of the network,
because all the off-diagonal entries
in the adjacency matrix
of a complete graph are 1.
So, it is not difficult to see,
and even to prove,
that the largest eigenvalue
is related to the number of nodes
of a complete graph.
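A minimal numerical check of this fact - for the complete graph K_n, the largest eigenvalue is exactly n - 1 (the choice of n below is arbitrary):

```python
import numpy as np

# Complete graph K_n: every off-diagonal entry of the
# adjacency matrix is 1, the diagonal is 0
n = 6
A = np.ones((n, n)) - np.eye(n)

# The largest eigenvalue equals n - 1, so it grows with graph size
largest = np.linalg.eigvalsh(A)[-1]
print(largest)  # 5.0 for K_6 (up to numerical precision)
```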
But, for non-trivial graphs,
very little is known about
the behaviour of the eigenvalues
across different classes of graphs.
Usually, the largest eigenvalue is 
considered to be the most informative.
And, that's why it is traditionally
placed first in the graph spectrum.
But, not even that 
is known or true in general,
and traditionally, the rest of
the eigenvalues are ignored -
even though the first one is traditionally
associated only with the graph size.
And so, the rest may encode more
interesting features of the graph,
but we just disregard them.
So, one project that we have started,
and which is far from closed,
is an investigation of the informative
value of different eigenvalues
in graph spectra
for different graph types.
Notice this requires 
an objective measure,
because we don't know what an eigenvalue
is measuring in different graphs.
So, it is not possible to define
a cost function,
because no feature of interest
in the graph or in the eigenvalue
is known or identified - let alone
the association or relationship
between them.
So, this is an obvious application 
of algorithmic information dynamics,
because we want to compare 
each eigenvalue
to the algorithmic content 
of the graph,
in order to find which one is capturing
most of the graph properties.
And, to do this we can evolve graphs.
So, as I said, 
in graph spectral theory -
traditionally, only the first eigenvalue 
is considered or kept.
Another option,
which is also very frequent, is
to take all eigenvalues as having the same
informative value, or the same weight.
However, we know that they capture
different features for different graphs.
One way to study eigenvalues is,
therefore, to plot them in a space,
then make the graph evolve,
and take the distance
from those eigenvalues to those of
different mutated versions of the same
graph - comparing all eigenvalues
to see why and how they change,
and whether they follow the loss or gain
in information content
after applying a perturbation to the
original graph.
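As a rough sketch of that perturbation loop - with lossless compression standing in for the algorithmic-content estimate, which is an assumption of this example (the lecture's actual measure is BDM), and a ring graph as an arbitrary starting point:

```python
import itertools
import zlib

import numpy as np

def spectrum(M):
    # Eigenvalues of the adjacency matrix, sorted largest first
    return np.sort(np.linalg.eigvalsh(M))[::-1]

def complexity_proxy(M):
    # Crude stand-in for the algorithmic-content estimate
    # (an assumption here; the lecture's measure is BDM)
    return len(zlib.compress(M.astype(np.uint8).tobytes(), 9))

# Hypothetical starting graph: a ring of 12 nodes
n = 12
A = np.zeros((n, n), dtype=int)
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

base_spec, base_c = spectrum(A), complexity_proxy(A)

# Apply every single-edge perturbation (flip one entry and its mirror)
eig_shifts, c_shifts = [], []
for i, j in itertools.combinations(range(n), 2):
    P = A.copy()
    P[i, j] = P[j, i] = 1 - P[i, j]
    eig_shifts.append(np.abs(spectrum(P) - base_spec))
    c_shifts.append(complexity_proxy(P) - base_c)

eig_shifts = np.array(eig_shifts)

# Which eigenvalue position moves most, on average, under
# perturbation, alongside the change in estimated content
print("mean |shift| per eigenvalue:", eig_shifts.mean(axis=0).round(2))
print("mean complexity change:", float(np.mean(c_shifts)))
```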
So, these are some of the very first 
results we have found.
First, that the largest eigenvalue
is far from being the only one highly
correlated with the graph's complexity -
meaning that, indeed,
the other eigenvalues
encode important content
of the original graphs.
And so, they shouldn't be disregarded
in principle or a priori.
Each dot here on the screen 
is a different graph class.
A class can be, for example,
Cayley graphs,
which are graphs with particular
algebraic properties
that have implications for the size
of the automorphism group
to which they belong.
And, having no symmetries,
they tend to be on their own
in their own automorphism groups.
Another type of graph 
can be the set of lattices.
Or, another type of graph
can be planar graphs - that is,
graphs whose edges do not cross each
other in some embedding
in two dimensions - and so on.
So, the way we identify
the most informative eigenvalue
is by performing a perturbation
analysis on a graph,
thereby emulating the changes
that a graph can be subject to.
And then, we track 
the eigenvalue behaviour
against the changes 
of algorithmic probability of the graph,
and match the eigenvalue
that best describes
the direction of the graph's
evolving trajectory.
More interestingly, we can do the same
for different measures,
such as Shannon entropy,
and even select a feature of interest
and see which eigenvalue matches the
Shannon entropy change for that feature,
relative to the change in the algorithmic
content of the graph, and so on -
maximising the ways to characterise 
a graph by different measures,
relative to the behaviour 
of different eigenvalues.
So, this approach is very rich,
because it puts together many areas -
just as we like to do with
algorithmic information dynamics.
In particular, it combines algebraic 
and spectral graph theory,
perturbation analysis, 
algorithmic complexity,
information theory 
and dynamical systems.
Another application
of algorithmic information dynamics
to graphs and networks - different from
the study of graph spectra -
is the evaluation of the methods
that are used to reconstruct networks
representing events happening
among objects, such as genes,
in order to reconstruct
genetic regulatory networks.
The area of network reconstruction 
is very large,
and thousands of methods exist -
but little has been done
to find ways to evaluate these methods
with anything other than
very simplistic measures.
Here is, for example, 
a pipeline that we suggested
to evaluate how much of the original 
algorithmic information content
of a network, 
represented by data,
was preserved by different algorithms 
reconstructing those networks.
And it shows how the different
techniques compare to each other -
including reduction methods
such as graph spectra itself
and motif profile analysis -
using various assessment methods,
including, for example,
lossless compression and,
of course, BDM.
One approach suggested 
by algorithmic information dynamics
is simply to evaluate how good a network
reconstruction algorithm can be
at capturing the algorithmic information
content of the original network.
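A toy version of such an evaluation might look as follows. The "reconstruction" here is just the original network with noise added, and compression length is a crude stand-in for the algorithmic information content - both are assumptions of this sketch, not the lecture's actual pipeline:

```python
import zlib

import numpy as np

def complexity_proxy(M):
    # Compression length as a rough stand-in for algorithmic
    # content (an assumption; the lecture uses BDM)
    return len(zlib.compress(np.packbits(M.astype(np.uint8)).tobytes(), 9))

rng = np.random.default_rng(0)

# Hypothetical "ground-truth" network: a sparse random graph
n = 32
A = np.triu((rng.random((n, n)) < 0.1).astype(int), 1)
A = A + A.T

# Mock output of a reconstruction algorithm: the original
# with some entries flipped (standing in for errors)
noise = np.triu((rng.random((n, n)) < 0.05).astype(int), 1)
R = A ^ (noise + noise.T)

# Score: how close the reconstruction's estimated algorithmic
# content is to the original's (1 means fully preserved)
cA, cR = complexity_proxy(A), complexity_proxy(R)
score = 1 - abs(cA - cR) / cA
print(f"original={cA}, reconstruction={cR}, score={score:.2f}")
```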
So, here we did it for five of the best 
and most popular algorithms
for network reconstruction 
on different graph topologies.
And, one can also see 
which parameters are best,
or can be optimised,
when reconstructing a network,
in order to keep or lose information -
such as edges -
and then find an optimal cut-off value,
which is sometimes, or even always,
necessary when using popular network
reconstruction algorithms.
So, a similar assessment,
based on algorithmic complexity, 
can be performed.
In the next lesson,
we will see how to actually analyse
these types of gene regulatory networks,
which are often the result of applying
these reconstruction methods,
but which can also be derived
or validated
by actually performing
material experiments.
[ end of audio ]
