First of all, static vs. dynamic verification.
We're familiar with the static methods. They
utilize search and analysis techniques to
check for design failures under all possible
test cases. Examples would be CDC, RDC, and
formal.
On the other hand, dynamic methods compute
design behavior dynamically, and check that
computed behavior for failures under user-specified
test cases. Examples of dynamic methods would
be simulation and emulation.
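To make the contrast concrete, here is a toy Python sketch. Everything in it (the counter "design", the legal-state check, the tests) is hypothetical; the point is that the dynamic path only checks the behavior produced by the user-supplied stimulus, while the static path searches every reachable state.

```python
from collections import deque

# Toy "design": a counter that is supposed to stay in 0..4,
# but a bug lets it reach 5 and 6 before wrapping to 0.
def next_state(state, enable):
    if not enable:
        return state
    return 0 if state >= 6 else state + 1  # bug: should wrap after 4

def is_failure(state):
    return state > 4  # spec: counter must stay in 0..4

# Dynamic method: run user-specified test cases and check each one.
def simulate(tests):
    failures = []
    for stimulus in tests:
        state = 0
        for enable in stimulus:
            state = next_state(state, enable)
            if is_failure(state):
                failures.append((stimulus, state))
    return failures

# Static method: exhaustively explore every reachable state.
def exhaustive_check():
    seen, queue, failures = {0}, deque([0]), []
    while queue:
        state = queue.popleft()
        if is_failure(state):
            failures.append(state)
        for enable in (0, 1):
            nxt = next_state(state, enable)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return failures

print(simulate([[1, 1, 1], [1, 0, 1]]))  # short tests: no failure found
print(exhaustive_check())                # exhaustive search: finds 5 and 6
```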
So, what are some of the characteristics of
various design methodologies?
Today I just want to talk about three methodologies: simulation, static sign-off, and formal.
If you look at formal and simulation, they are both generic applications. That means the user must build the error checking. You can buy VIPs or apps to facilitate that, but the error checking remains the user's responsibility. And because these are generic products, the debug is also generic.
Formal and static sign-off are both static
methods; however, static sign-off includes
complete and customized error checking in
the context of the tool; as a result, the
debugging is also customized to the application
at hand.
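A hypothetical Python sketch of that difference (the protocol check, signal names, and the CDC rule are all invented for illustration): the generic engine does nothing until the user hands it a checker, while the sign-off-style tool ships with its check built in.

```python
# Generic engine (simulation/formal flavor): the tool only computes
# behavior; the USER supplies the error checking, e.g. as an
# assertion-style callback (or buys it packaged as a VIP or app).
def run_generic(trace, user_checker):
    return [cycle for cycle, signals in enumerate(trace)
            if not user_checker(signals)]

# User-built check: grant must never be asserted while busy is high.
def my_protocol_check(signals):
    return not (signals["grant"] and signals["busy"])

# Static sign-off flavor: the check is complete and built into the
# tool itself; the user just runs it on the design description.
def run_signoff(design):
    violations = []
    for flop in design["flops"]:
        if flop["clock_domain"] != flop["data_source_domain"]:
            violations.append(("potential CDC crossing", flop["name"]))
    return violations

trace = [{"grant": 1, "busy": 0}, {"grant": 1, "busy": 1}]
print(run_generic(trace, my_protocol_check))  # user-defined check fires at cycle 1

design = {"flops": [{"name": "sync_ff", "clock_domain": "clkA",
                     "data_source_domain": "clkB"}]}
print(run_signoff(design))                    # built-in check fires
```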
Here's a very simplified model of how we can
evaluate the merits of each of these three
approaches.
The first metric is that the analysis always finishes, which means that the method completes in a practical time frame.
The second metric says that all the violations flagged by the analysis are definite design failures.
The third metric says that, for the targeted
checks, 100% of the failures are found. This
is another way of saying that you can prove
the absence of any errors in the design.
These are the three dimensions on which we
can evaluate the various methodologies.
This is the ideal scenario -- a verification
methodology that rates very high on each of
these metrics.
Unfortunately, there is no ideal methodology.
So, let's look at the characteristics of the
various methodologies.
Let's look at simulation first.
We know that simulation finishes. When you create a testbench, you have some idea of how long it will take to run, and the simulation will finish. And we also know that for simulation, the failures that are flagged are definite failures.
However, simulation is the weakest on the third metric; it cannot confirm that 100% of the failures have been detected.
The weakness of a methodology is where the
engineering effort must be spent to overcome
that deficiency.
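One way to see the gap is through coverage. This hypothetical sketch (same toy counter as before) measures how many of the reachable states the user tests actually exercised; any bug in the unexercised states goes undetected.

```python
from collections import deque

def next_state(state, enable):  # same toy counter, bug included
    if not enable:
        return state
    return 0 if state >= 6 else state + 1

# All states reachable from reset, found by exhaustive search.
def reachable_states():
    seen, queue = {0}, deque([0])
    while queue:
        s = queue.popleft()
        for enable in (0, 1):
            n = next_state(s, enable)
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return seen

# States actually visited by the user-specified tests.
def visited_states(tests):
    visited = {0}
    for stimulus in tests:
        s = 0
        for enable in stimulus:
            s = next_state(s, enable)
            visited.add(s)
    return visited

covered = visited_states([[1, 1, 1], [1, 0, 1, 1]])
total = reachable_states()
print(f"state coverage: {len(covered)}/{len(total)}")  # 4/7
print("never exercised:", sorted(total - covered))     # bugs may hide here
```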
Next, let's look at the characteristics of
formal methods.
We know that in formal analysis, if a failure
is flagged, it's a definite failure. At the
same time formal analysis is very capable
of finding 100% of the failures; that is to
say, it is capable of proving correctness.
But it falls short on the first dimension, the performance dimension: the formal analysis may not finish within the project timeline. As a result, all the engineering effort goes into figuring out how to improve the completion rate of the formal analysis. This is the weakest dimension for formal analysis.
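The underlying reason is state-space explosion, which a few lines illustrate. This is a rough, illustrative bound, not a measurement of any particular tool: with n state bits, a proof engine may have to account for up to 2^n states.

```python
# Why formal analysis may not finish: the state space grows
# exponentially with design size, so proof effort can blow up
# long before the design looks "large" to a simulator.
for n_flops in (10, 20, 40, 80):
    print(f"{n_flops:3d} flops -> up to {2**n_flops:.3e} states")
```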
If you look at static sign-off, it's a very different kind of methodology.
Static sign-off analysis finishes. Static
sign-off also finds 100% of the failures that
are targeted by the checks.
However, static sign-off does not flag only definite failures; it flags potential failures, the so-called "noise". The noise level in static sign-off tools will vary from tool to tool; in other words, the accuracy will vary from product to product. So the static sign-off methodology makes its trade-off on the second dimension.
The idea is that if you can iterate and clean out all the potential violations that are reported, then you have sign-off.
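Here is a hypothetical sketch of that iteration in Python (the violation names and their classifications are invented): real violations get fixed, noise gets waived with a documented reason, and sign-off is the point where the filtered report comes back empty.

```python
# Hypothetical report from a static sign-off tool: some violations
# are real failures, some are noise that a human must review.
violations = {
    "cdc_1": "real",   # genuine unsynchronized crossing
    "cdc_2": "noise",  # crossing is quasi-static, safe by design
    "rdc_1": "noise",  # reset ordering guaranteed at board level
}
waivers = {}
fixed = set()

iteration = 0
while True:
    iteration += 1
    open_items = [v for v in violations
                  if v not in fixed and v not in waivers]
    if not open_items:
        print(f"clean after {iteration} iteration(s): sign-off")
        break
    for v in open_items:           # user reviews each potential failure
        if violations[v] == "real":
            fixed.add(v)           # fix the RTL and re-run the tool
        else:
            waivers[v] = "reviewed: not a functional failure"
```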
So this is the characterization of the three
methodologies.
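As a recap, here is that characterization expressed as a small Python table; this is just my summary of the three methodologies against the three metrics, with False marking the weak dimension where the engineering effort goes.

```python
# Three methodologies scored on the three metrics discussed above.
methodologies = {
    #                   finishes  definite_failures  finds_100_percent
    "simulation":      (True,     True,              False),
    "formal":          (False,    True,              True),
    "static sign-off": (True,     False,             True),
}
for name, (finishes, definite, complete) in methodologies.items():
    print(f"{name:16s} finishes={finishes!s:5s} "
          f"definite={definite!s:5s} complete={complete!s}")
```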
