this can be very time-consuming, taking hours or days. On the positive side, it always returns the true worst-case corner(s).

Designer Best Guess: Here, the designer makes guesses (based on experience) about what the worst-case corners may be, e.g. 1-2 worst-case corners per output. The advantage of this approach is speed. The disadvantage is lack of reliability: a wrong guess can mean failure in testing or in the field. Reliability is strongly dependent on the designer's skill level, familiarity with the design, familiarity with the process, and whether the designer has adequate time to make a qualified guess. In practice, it is difficult to consistently have all of these qualities, which makes the "best guess" approach inherently risky.

Sensitivity Analysis (Orthogonal Sampling, Linear Model): In this approach, each variable is perturbed one at a time, and the circuit simulated, to construct an overall linear response surface model (RSM). The worst-case PVT corners are chosen as the ones for which the linear model predicts worst-case output values. This is fast, as it only requires n+1 simulations if there are n variables (the +1 is for nominal). However, reliability is poor: it can easily miss the worst-case corners because it assumes a linear response (no interactions or other nonlinearities) from PVT variables to output; this is often not the case.

Quadratic Model (traditional DOE): In this approach, the first step is to draw n*(n-1)/2 samples in PVT space using fractional-factorial Design-of-Experiments (DOE) [1], then simulate them. The next step constructs a quadratic response surface model (RSM) from this input/output data. Finally, the worst-case PVT corners are the ones that the quadratic model predicts to have worst-case output values. While this approach takes a few more simulations than the linear approach, it is still relatively fast because the number of input PVT variables n is relatively small. However, reliability is still not great, because circuits may have mappings from PVT variables to output that are more nonlinear than simple quadratic. Also, these methods typically require considerable expertise and effort from the designer, and therefore fail the easy-to-use criterion.

3 FAST PVT METHOD

3.1 Overview of Fast PVT Approach

Fast PVT uses an advanced modeling approach called Gaussian Process Models (GPMs) [2]. GPMs are arbitrarily nonlinear, making no assumptions about the mapping from PVT variables to outputs. Fast PVT is adaptive, to make maximum use of the simulations so far. As we will see, these core characteristics enable Fast PVT to be rapid, reliable, and user-friendly.

The Fast PVT application supports two distinct tasks, corner extraction and verification:

Corner extraction is for finding corners that the designer can subsequently design against. Corner extraction runs an initial DOE, predicts all values using advanced modeling, then simulates the predicted worst-case corners for each output. There is no adaptive component.

Verification keeps running where corner extraction would have stopped. It loops, adaptively testing candidate worst-case corners while updating the model and improving the predictions of worst-case. It stops when it is confident it has found the worst-case. Verification takes more simulations than corner extraction, but is more accurate in finding the worst-case corners.

Figure 1 shows the Fast PVT algorithm. Both Fast PVT corner extraction and verification start by drawing a set of initial samples, X (i.e. corners), then simulating them, y. A model mapping X→y is constructed.

After that, corner extraction simply returns the predicted worst-case points. Verification proceeds by iteratively choosing new samples via advanced modeling, and then simulating the new samples. It repeats the modeling / simulating loop until the model is 95% confident that worse output values are no longer possible. When choosing new samples, it accounts for both the model's prediction of the worst-case and the model's uncertainty, in order to account for model "blind spots".

…needed. Solido tools support this flow.
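To make the two tasks concrete, here is a minimal, self-contained sketch of an adaptive verification loop in the spirit of the algorithm above. It is not Solido's implementation: simulate() is a toy stand-in for SPICE, the "model" is a nearest-neighbor predictor with a distance-based uncertainty term rather than a GPM, and the corner values are invented.

```python
import itertools
import math

def simulate(corner):
    """Toy stand-in for a SPICE run; assume worst case = minimum output."""
    p, v, t = corner
    return v - 0.004 * t + 0.05 * p * v  # hypothetical output, e.g. a margin

# Candidate PVT corners: coded process (slow/typ/fast), supply, temperature.
P, V, T = [-1, 0, 1], [1.62, 1.8, 1.98], [-40, 25, 125]
candidates = list(itertools.product(P, V, T))  # 27 corners

def norm(c):
    """Normalize so distances are comparable across variables."""
    p, v, t = c
    return (p, (v - 1.8) / 0.18, (t - 25) / 82.5)

def verification(tolerance=0.0, k=1.0):
    """Adaptive loop: simulate the predicted-worst remaining corner until no
    remaining corner could plausibly be worse than the incumbent."""
    X = [(-1, 1.62, -40), (1, 1.98, 125), (0, 1.8, 25)]  # tiny "initial DOE"
    y = {c: simulate(c) for c in X}
    while True:
        left = [c for c in candidates if c not in y]
        if not left:
            break
        # Stand-in for GPM prediction minus uncertainty: nearest simulated
        # value, minus a term that grows with distance ("blind spot" bonus).
        def lcb(c):
            d, yn = min((math.dist(norm(c), norm(s)), y[s]) for s in y)
            return yn - k * d
        best = min(left, key=lcb)
        incumbent = min(y.values())
        if lcb(best) >= incumbent - tolerance:
            break  # confident: no remaining corner can be meaningfully worse
        y[best] = simulate(best)
    worst = min(y, key=y.get)
    return worst, y[worst], len(y)

worst_corner, worst_value, n_sims = verification()
```

Corner extraction corresponds to stopping after the initial DOE and model-prediction step; verification is the loop above. With tolerance=0 the loop may end up simulating most candidates, while a nonzero tolerance lets it declare convergence earlier.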
Table 1: Fast PVT Design / Verification Flow

…tolerance=0 may end up having to test most candidate corners; but with a tolerance it can declare convergence much earlier.

3.5 Fast PVT "Corner Extraction" Limitations

Recall that the "corner extraction" task of Fast PVT simply runs the initial DOE, builds a model, predicts worst-case corners, and stops. It is not adaptive. Therefore its main limitation is that it is not as accurate as Fast PVT "verification" at finding the actual worst-case corners (a trade-off made for the benefit of speed and simplicity).

3.6 Fast PVT "Verification" Limitations

Stopping criteria and model accuracy govern the speed and reliability of Fast PVT verification. The most conservative stopping criterion would lead to all simulations being run, at no benefit compared to full factorial, while one that is too aggressive will stop before the worst-case is found. Fast PVT verification strikes a balance by stopping as soon as the model is confident it has found the worst-case. The limitation is that the model may be overly optimistic, because it has missed a dramatically different region. To avoid hitting this limitation, section 3.7 provides guidelines on measurements, which directly affect model quality.

Model construction becomes too slow beyond 1000 simulations. In practice, if Fast PVT has not converged by 1000 simulations, it probably will not converge, so it simply simulates the remaining corners. Appendix B has details.

Fast PVT performs best when there are fewer input PVT variables. For >20 variables, modeling becomes increasingly difficult and starts to require >1000 simulations. For these reasons, we do not recommend using Fast PVT with >20 input variables. Appendix B has details.

Fast PVT speedup compared to Full Factorial depends on the number of candidate corners: the more candidate corners, the higher the speedup. This also means that if there is a small number of candidate corners (e.g. 50 or fewer), then the speedup will usually not be significant (e.g. <2x, and perhaps even 1x).

In summary, Fast PVT's limitations are noteworthy, but do not preclude its use with production designs.

3.7 Guidelines on Measurements

Fast PVT's speed and accuracy depend on how accurate a model it can construct mapping PVT input variables to outputs. The more accurate the model, the faster the convergence.

In designing measurements / choosing targets, the user should be aware of these guidelines. Outputs:

- Can be binary, and more generally, can have discontinuities. These do typically take more simulations to model accurately, however.
- Can contain measurement failures, as long as those measurement failures correspond to extreme values of one of the outputs being targeted.
- Should not be constant. If you know that an output is constant, you should not specify for Fast PVT to aim for this output, as it will waste simulation effort trying to find worse values, and have difficulty converging. Alternatively, set a tolerance so that convergence is easier.
- Cannot have simulator noise as the primary cause of output variation. If this is an issue, it usually means there is a defect in the measurement. If needed, a partial workaround is to set a nonzero tolerance.

4 FAST PVT VERIFICATION: BENCHMARK RESULTS

4.1 Experimental Setup

This section shows Fast PVT verification benchmark results on a suite of problems: 13 circuits with a total of 108 outputs, based on industrial circuit designs and PVT settings. The circuits include a shift register, two-stage bucket charge pump, two-stage opamp, sense amp, second-order active filter, three-stage mux, switched-capacitor amplifier, active bias generator, buffer chain, and SRAM bitcell. The number of candidate PVT corners ranged from 120 to 2025. All circuits had reasonable device sizings. The device models were from modern industrial 65nm to 28nm processes.
The benchmarking methodology is as follows. For each of the 13 circuit problems:

- We simulated all candidate PVT corners, and recorded the worst-case value seen at each output. These form the "golden" results.
- We ran Fast PVT "verification" on the circuit (each circuit had ≥1 target outputs) and recorded the worst-case values that it found, and how many simulations it took.

…corners or more; for fewer corners, the speedup can vary more greatly.

A case of 1.0x speedup simply means that Fast PVT did not have enough confidence in its models to stop sooner than running all possible corners.

Fast PVT speedup can be significantly higher if tolerances are nonzero or if "corner extraction" mode is used (at the cost of some accuracy, of course).
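Under this methodology, each output yields two numbers: a speedup (candidate corners that full factorial would simulate, divided by the simulations Fast PVT actually ran) and an accuracy check (did Fast PVT find the golden worst-case). A small sketch of that bookkeeping, with invented numbers purely for illustration:

```python
import math

# Illustrative per-output records: (candidate corners, Fast PVT simulations,
# golden worst-case value, worst-case value found by Fast PVT).
runs = [
    (2025, 310, 1.21e-9, 1.21e-9),
    (729,   95, 43.2,    43.2),
    (120,  120, 0.87,    0.87),   # 1.0x speedup: model never confident enough
]

def speedup(n_candidates, n_sims):
    """Full factorial needs n_candidates simulations; Fast PVT used n_sims."""
    return n_candidates / n_sims

def matches_golden(golden, found, rel_tol=1e-9):
    """Accuracy check: did Fast PVT find the same worst-case as full factorial?"""
    return math.isclose(golden, found, rel_tol=rel_tol)

speedups = [speedup(n_cand, n_sims) for n_cand, n_sims, _, _ in runs]
accuracy = all(matches_golden(g, f) for _, _, g, f in runs)
```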
…challenge is to solve the global optimization problem with as few evaluations of f(x) as possible (a minimal number of simulations), yet reliably find the x* returning the global minimum (true worst-case corner).

Fast PVT approaches this optimization problem with an advanced model-building optimization approach that explicitly leverages modeling error. We now detail the steps in the approach.

(Step 1) Draw initial samples: Fast PVT generates a set of initial samples X = Xinit in PVT space using Design of Experiments (DOE) [1]. Specifically: the full set of PVT corners is bounded by a hypercube; DOE selects a fraction of the corners of the hypercube in a structured fashion.

Simulate initial samples: This is simply SPICE running on the initial samples: y = yinit = f(Xinit).

(Step 2) Construct model mapping X→y: Here, Fast PVT constructs a regressor (an RSM) mapping the PVT input variables to the SPICE-simulated output value. The choice of regressor is crucial: recall that a linear or quadratic model makes unreasonably strong assumptions about the nature of the mapping. We do not want to make any such assumptions – the model should be able to handle arbitrary nonlinear mappings. Furthermore, the regressor must not only predict an output value for unseen input PVT points, it must be able to report its confidence in that prediction. Confidence should approach 100% (0% error) at points that have previously been simulated, and uncertainty should increase with distance from simulated points.

An approach that fits these criteria is Gaussian Process Models (GPMs, a.k.a. kriging) [3]. GPMs exploit the relative distances among training points, and the distance from the input point to training points, when predicting output values and the uncertainty of those predictions.

Figure 4 shows an example of a GPM modeling the function f(x) = x * sin(x), demonstrating how it meets the target regressor properties. First, note the dots; these are the training input points X and corresponding output points y where the exact value of f(x) is known. Second, note the dark line passing through the dots; this is the predicted output value of the GPM for all possible input values. Whereas most regressors do not guarantee to pass through the training data, GPMs do. Third, note the 95% confidence intervals (shaded region), having zero width at training points and bulging as they get farther away from training points.

Figure 4: Example of a Gaussian Process Model (GPM) modeling x * sin(x). (From http://scikit-learn.sourceforge.net/modules/gaussian_process.html)

To recap, GPMs meet our criteria for Fast PVT modeling. For further details on GPMs, we refer the reader to Appendix B.

(Step 3) Choose new sample xnew: Once the model is constructed, we use it to choose the next PVT corner xnew from the remaining candidate corners Xleft = Xall \ X. One approach might be to simply choose the x that gives the minimum predicted output value g(x):

xnew = argmin(g(x)) subject to x in Xleft

However, this is problematic: while such an approach optimizes f(x) in regions near worst-case values that it has seen, there may be other regions with few samples whose behavior is very different from the predictions. These are model "blind spots", and if such a region contained the true worst-case value then this simple approach would fail.
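The GPM behavior described for Figure 4 is easy to reproduce. The sketch below fits a minimal GP regressor (RBF kernel, unit prior variance, noise-free up to numerical jitter) to x·sin(x) with NumPy; the training points and kernel length-scale are assumptions chosen for illustration, not taken from the figure. It also contrasts the naive argmin of the predicted mean g(x) with an uncertainty-aware choice g(x) − 1.96·σ(x), one standard way to account for model blind spots.

```python
import numpy as np

def f(x):
    return x * np.sin(x)

# Training points (the "dots"): chosen for illustration.
X = np.array([1.0, 3.0, 5.0, 6.0, 7.0, 8.0])
y = f(X)

def kernel(a, b, length=1.0):
    """Squared-exponential (RBF) kernel with unit prior variance."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

K_inv = np.linalg.inv(kernel(X, X) + 1e-10 * np.eye(len(X)))  # jitter

def predict(xs):
    """GP posterior mean and standard deviation at points xs."""
    ks = kernel(xs, X)
    mean = ks @ K_inv @ y
    var = 1.0 - np.einsum('ij,jk,ik->i', ks, K_inv, ks)
    return mean, np.sqrt(np.maximum(var, 0.0))

xs = np.linspace(0.0, 10.0, 201)
mean, std = predict(xs)

# Naive next sample: minimize the predicted mean only.
x_naive = xs[np.argmin(mean)]
# Blind-spot-aware next sample: minimize a 95% lower confidence bound.
x_lcb = xs[np.argmin(mean - 1.96 * std)]
```

At the training points the posterior mean interpolates y exactly and σ collapses to zero; far from the data σ approaches the prior standard deviation of 1, which is the "bulging" confidence interval seen in Figure 4.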
References
[1] D.C. Montgomery, Design and Analysis of
Experiments, 6th Edition, Wiley, 2004.
[2] N. Cressie, “Geostatistics,” The American
Statistician, Vol. 43, 1989, pp. 192-202.
[3] D.R. Jones, M. Schonlau, and W.J. Welch, "Efficient Global Optimization of Expensive Black-Box Functions," Journal of Global Optimization, Vol. 13, 1998, pp. 455-492.
[4] C.E. Rasmussen and C.K.I. Williams, Gaussian
Processes for Machine Learning, MIT Press,
2006.
[5] M.J. Sasena, Flexibility and Efficiency
Enhancements for Constrained Global
Optimization with Kriging Approximations, PhD
thesis, University of Michigan, 2002.