
An Overview of Static and Simulation-Based Techniques for Systems-on-Chip Verification


João Luiz Carneiro Carvalho
Departamento de Engenharia Elétrica
Universidade Federal da Bahia, Escola Politécnica
Salvador, Brazil
joao.luiz.cc@gmail.com

Wagner Luiz Alves de Oliveira
Departamento de Engenharia Elétrica
Universidade Federal da Bahia, Escola Politécnica
Salvador, Brazil
wlao@ufba.br

Abstract — The effort of verifying Integrated Circuits has increased progressively in the last decade, mainly due to the complexity of large integrated circuits such as Systems-on-Chip (SoC). This demand has motivated the semiconductor industry, including EDA vendors, to develop industry standards and powerful EDA tools that try to lower the difficulty of verifying complex designs. Several solutions are available that attack the problem with different strategies, but it is possible to use them in combination to increase the robustness of verification. This paper discusses the contemporary use of the most common techniques, focusing mostly on simulation-based verification.

Keywords — Functional verification; System-on-Chip; Simulation

I. INTRODUCTION
In the last decades, hardware verification has gained focus in the semiconductor industry. This emphasis is mainly fostered by market pressures and by increasing design complexity [1]. Hardware verification is placed throughout the Integrated Circuit (IC) implementation flow and includes both the comparison of the chip implementation against the documented specification or reference model and the analysis of static properties such as portability, synthesizability and manufacturing process rules. The part that verifies whether the IC has been implemented according to its intent and without functional bugs is called Functional Verification [3].

As Integrated Circuit complexity grows, the cost and complexity of functional verification also increase. Furthermore, the cost of repairing a functional bug grows significantly as the project advances towards the final phases of the design flow. At that point, the cost of an eventual re-design during finishing procedures such as tapeout and packaging may be impracticable in terms of project budget and schedule (time-to-market) [1].

On the other hand, a complex digital hardware description (RTL) has an enormous number of states and several modes of use, so that assuring hardware correctness and completeness can demand more resources than are available [4]. In this context, the semiconductor industry has recently been investing effort to make functional verification more efficient by promoting verification languages, tools, techniques and methodologies that reduce engineering costs during verification [5]. Even so, hardware verification still represents a critical part of integrated circuit design, and the discussion about next-generation tools and methodologies is far from over.

This paper presents the functional verification techniques most used for digital ICs in the semiconductor industry, based on recent literature, EDA vendors and a market study. Section II describes the scope of functional verification. Sections III and IV cover the different types of functional verification, whereas Section V presents contemporary details on the use of the described techniques.
II. FUNCTIONAL VERIFICATION

From an abstract specification to the manufactured chip, universities, research institutes and design centers follow a reasonably consolidated design flow. Each step inside this process represents an incremental approximation of the final chip, so that in later phases of IC development the hardware description contains physical information directly related to the manufacturing process. In parallel with the design flow there are verification activities, in which the outputs produced by each design step are verified against the primary hardware specification or implementation rules. Figure 1 illustrates the standard flow of digital circuit design.

Figure 1: Standard IC Design Flow

The starting point of Functional Verification is a Verification Plan. This is the most important document for the verification team because it defines the design features that will be tested and the tools and strategies to be applied. A verification plan also defines metrics to indicate the verification progress and the conclusion criteria.

In the verification domain, there are basically two types of verification techniques: Static and Simulation-based. Alternative techniques to verify SoCs include Hardware-Accelerated Simulation and Field-Programmable Gate Array (FPGA) Prototyping [10] [11] [12]. Whereas Hardware Acceleration consists of connecting the design simulator to external hardware to save CPU cycles, FPGA prototyping involves porting the HDL description at any stage of development (as long as it is synthesizable) onto an FPGA device placed on a development board or a custom printed circuit board, and then actually running it. Because these techniques make use of external hardware and this paper focuses exclusively on computer-based verification, they will not be discussed here.

So far, simulation-based verification is the most used technique (as discussed in the next sections), but this approach has limitations. First, simulation tools require an environment¹ specially built to generate the stimuli to be driven into the Design Under Verification (DUV); such an environment is also referred to as a testbench. Usually, user-defined stimuli are limited to typical and predictable scenarios, so the comprehensiveness of the tests is limited in terms of coverage of design states [2].

¹ The environment is usually written in a Hardware Verification Language (HVL), like SystemVerilog, for instance.

To overcome the weak coverage of user-defined scenarios, several test cases and test sequences are applied, resulting in a huge amount of code to be executed and countless CPU cycles to complete the test batches. Finally, the task of describing tests depends on the Verification Engineer programming the scenarios, which can be numerous and error-prone. Defining only a fraction of the possible states that the hardware can assume may lead to a weak verification [5].

Furthermore, even if the described hardware behaves correctly in typical and corner cases, a badly designed RTL can infer extraneous or inefficient circuits during logic synthesis, hence compromising the design integrity. In this context, design guidelines and good coding practices are essential in hardware development, but EDA tools that statically scan the HDL description looking for suspicious structures also play an important role in verification.
III. STATIC VERIFICATION

The purpose of static verification is to achieve high confidence in design correctness through formal analysis, without depending on user-defined and error-prone testbenches. Several tools and techniques explore static aspects of the HDL design, including syntax, semantics and functional behavior. The most common are described in the following topics.

A. Lint Checking

The name "lint" generally refers to a program that examines source code for suspicious or non-portable constructs in a software or hardware description. In the hardware domain, a lint tool checks the design for HDL quality metrics such as code consistency, portability, synthesizability, testability, etc. [6]. Lint checking is actively used in hardware design, especially in the intermediary phases of development. Due to its ease of use and testbench independence, this technique is used by any project member involved with design or verification.
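As an illustration (the fragment below is hypothetical and not taken from [6]), a typical lint finding is a combinational block with an incomplete case statement, which silently infers a latch:

    // Hypothetical fragment that a lint tool would typically flag.
    module sel_mux (input wire [1:0] sel,
                    input wire a, b, c,
                    output reg y);
      // Incomplete case with no default branch: 'y' keeps its previous
      // value when sel == 2'b11, so synthesis infers an unintended latch.
      always @(*) begin
        case (sel)
          2'b00: y = a;
          2'b01: y = b;
          2'b10: y = c;
        endcase
      end
    endmodule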
B. Formal Verification

Formal verification (FV) makes use of mathematical representations to prove the correct implementation of the design [2]. Using FV, the design is exhaustively explored to prove its properties and find bugs. An FV environment is composed of the design itself, the formal specification, a set of properties, and an element that generates valid stimuli, the FV driver.

Applying FV to complex hardware is not an easy task. The expected behavior must be formally specified, hidden design features or assumptions must be modeled, and long, detailed proofs can make the review difficult. Because of that, formal verification is not very widespread in the semiconductor industry. However, the rigor of mathematical proofs is an important aspect to be considered when verifying hardware for critical applications, such as medical equipment and aircraft.
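As a sketch of how such a property can be stated for a formal tool (the signal names clk, rst_n, req and gnt are hypothetical, not drawn from the cited references), the following SystemVerilog property asks the tool to prove that every request is granted within four clock cycles, under an assumption that constrains the environment to legal behavior:

    // Hypothetical property to be proven exhaustively by a formal tool.
    property p_req_granted;
      @(posedge clk) disable iff (!rst_n)
        req |-> ##[1:4] gnt;
    endproperty

    assert property (p_req_granted);

    // Environment assumption: a pending request stays asserted until granted.
    assume property (@(posedge clk) disable iff (!rst_n) req && !gnt |=> req);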
C. Logic Equivalence Checking

Occasionally, synthesis tools infer different logic or remove original structures from the RTL design. Generally this is induced by the optimization algorithms of the synthesizer, but in most cases the reason is bad logic in the HDL or inconsistent synthesis parameters. In these circumstances, the designer needs to verify whether the functional behavior of the synthesized logic remains intact, but simulation may not be sufficient (consider the huge size of the design state space).

The best way to check whether a post-synthesis design works exactly like its pre-synthesis version is to compare the combinational logic of both. If the RTL description is correctly mapped to the netlist, the number of registers remains the same and each combinational path between registers and ports is logically equivalent. The process of verifying the equivalence between two HDL descriptions is called Logic Equivalence Checking.
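A minimal, hypothetical sketch of the idea: the two descriptions below implement the same function, so an equivalence checker would match their single register and prove the combinational logic feeding it identical, even though one uses a conditional operator and the other uses Boolean gates.

    // RTL (pre-synthesis) description of a 2-to-1 mux feeding a register.
    module mux_rtl (input wire clk, sel, a, b, output reg q);
      always @(posedge clk)
        q <= sel ? a : b;
    endmodule

    // Netlist-style (post-synthesis) description of the same function.
    module mux_gate (input wire clk, sel, a, b, output reg q);
      wire d = (sel & a) | (~sel & b);  // logically equivalent to sel ? a : b
      always @(posedge clk)
        q <= d;
    endmodule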
IV. SIMULATION-BASED VERIFICATION

Static Verification is a powerful technique, but it generally requires a deep mathematical background to obtain effective results and often demands a huge debugging effort. On the other hand, simulation with user-defined stimuli may be less rigorous than static techniques, but its advantage is the possibility of debugging the design using internal signals and known stimuli. When simulating a design, the user is able to follow the execution of the HDL and monitor the value of any particular signal over the entire simulation time.

Recently, several techniques and methodologies have been created to strengthen simulation-based verification. Some of them are discussed in the next topics.

A. Directed Tests

Possibly the simplest and most intuitive way to verify a design is to write a program (testbench) with a sequence of fixed stimuli that verifies one feature at a time. In more sophisticated approaches, a testbench can be built to drive multiple sequences of stimuli and hence verify a set of design features. However, in both cases every fixed sequence must be directly defined by the user before running the testbench, and for that reason this technique is called a Directed Test. Designing a set of pre-defined stimuli has the advantage of giving the user full control of what is being driven. However, verification engineers usually specify directed tests that represent typical scenarios or, at best, the corner scenarios they can imagine.

Although the user can track the efficiency of the executed tests by analyzing the functional coverage and state coverage of the design (this will be discussed later), defining multiple directed tests to verify each scenario requires a large amount of testbench code to be written and, consequently, maintained. Additionally, the simulation time required to hit a corner case is greater than with the introduction of random stimuli.
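For illustration only (the adder DUV and its ports are assumed, not taken from the references), a directed testbench is simply a procedural block that drives a hand-written, fixed sequence of stimuli and checks one expected result at a time:

    // Hypothetical directed test: drive fixed operands into an adder DUV
    // and check one expected result per stimulus.
    module tb_adder_directed;
      reg  [7:0] a, b;
      wire [8:0] sum;

      adder duv (.a(a), .b(b), .sum(sum));   // assumed 8-bit adder DUV

      initial begin
        a = 8'd10; b = 8'd20; #10;
        if (sum !== 9'd30)  $error("case 1 failed: sum = %0d", sum);

        a = 8'd255; b = 8'd1; #10;           // a hand-picked corner case
        if (sum !== 9'd256) $error("case 2 failed: sum = %0d", sum);

        $finish;
      end
    endmodule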
B. Constrained-Random Simulation

The problem with generating user-defined stimuli lies in the high dependence on the scenarios identified and modeled by the verification user. To overcome this problem, an alternative approach is to generate random stimuli, increasing the chance of rapidly reaching extreme or rare states of the design, also named corner cases. On the other hand, using random stimuli can lead to invalid and unrealistic design states, which may generate a false bug or an unnecessarily verified scenario.

As expected, hardware in normal operation usually has constraints on its states, events and signals. For example, if a configuration register is 8 bits wide but its minimum and maximum values are, respectively, 1 and 99, any value outside this range is considered illegal. The constraint of such a register is, then, to assume only the valid values.

These constraints can be modeled by the user in the object that randomly generates the stimulus. During simulation, the simulator generates randomized data based on the designed constraints, choosing values that strictly fit the user-defined criteria. The SystemVerilog language contains structures to define signal constraints, as shown in the algorithm below.

Algorithm 1. Example of a SystemVerilog constraint.
    // data field
    rand bit [7:0] cfg_reg;
    // valid config. values must be between 1 and 99, inclusive
    constraint valid_cfg { cfg_reg inside {[1:99]}; }
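Algorithm 1 shows only the random field and its constraint; in practice they live inside a class, and the testbench calls randomize() to obtain each new value. A minimal sketch of that usage follows (the class and module names are illustrative):

    // Sketch of how the constrained field is typically used in a testbench.
    class Config;
      rand bit [7:0] cfg_reg;
      // valid config. values must be between 1 and 99, inclusive
      constraint valid_cfg { cfg_reg inside {[1:99]}; }
    endclass

    module tb;
      initial begin
        Config cfg = new();
        repeat (5) begin
          // randomize() picks a value that satisfies every constraint
          if (!cfg.randomize()) $error("randomization failed");
          $display("driving cfg_reg = %0d", cfg.cfg_reg);
        end
      end
    endmodule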
C. Coverage-Oriented Verification

Inside the verification environment, coverage metrics are an important parameter to assess verification progress. The main motivations for using coverage metrics in Functional Verification are the need to identify visited states and the need to define an end-of-verification point, where the verification is considered done when the coverage reaches 90%, for example. Full coverage (100%) is very challenging to achieve because of the large number of states and the particularity of some corner states, which are difficult to reach in simulation.

Algorithm 2 demonstrates a structure to collect coverage data, named a coverpoint. In this example, an object representing a transmission packet has fields for source and destination identification. The value of each source and destination ID, as well as the combination (cross) of these values, is captured by the coverpoints. Every time a packet is captured, the covergroup is sampled and the coverage report is updated.

Algorithm 2. An example of a covergroup in SystemVerilog.
    Packet pkt;
    ...
    covergroup cov1;
      s: coverpoint pkt.src_id {
        bins src[8] = {[0:7]};
      }
      d: coverpoint pkt.dst_id {
        bins dst[8] = {[0:7]};
      }
      x: cross s, d;
    endgroup : cov1
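The covergroup of Algorithm 2 still has to be instantiated and sampled. A common pattern, sketched below with an illustrative collect task (not part of the original listing), is to call sample() whenever the monitor captures a packet:

    // Sketch: the covergroup type declared in Algorithm 2 is instantiated
    // once and sampled for every packet captured by the monitor.
    cov1 cov = new();

    task collect(Packet captured);
      pkt = captured;   // update the object the coverpoints read
      cov.sample();     // record src_id, dst_id and their cross
    endtask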
D. Assertion-Based Verification

An assertion is a positive statement about a property of the design. If the property does not remain true throughout the simulation, an error indication is issued for the assertion [8]. An expression defines the assertion by using a combination of signals or a sequence of events that represents one design feature, checking the consistency of that feature on the fly. Therefore, assertions verify the hardware, but they depend on valid stimuli generated by the testbench.

The placement of an assertion does not affect its behavior. For example, Algorithm 3 shows an assertion placed inside the module m. Obviously, the assertion cannot be synthesized together with the module logic, but its presence inside the instance gives full visibility of the module's internal signals and enables the RTL designer to use assertions inside his/her module during its development, improving the reliability of the design.

Algorithm 3. A simple assertion inside a Verilog module.
    module m (input wire c, clk);
      reg a, b;

      always @(posedge clk) begin
        a <= c;
        b <= !c;
      end

      assert property (@(posedge clk) a != b)
        else $error("a and b are equal");
    endmodule

As mentioned before, assertions depend on external stimuli (from the testbench) to activate their continuous verification. Assertion-Based Verification relies on using assertions to support the verification of design features, without monitoring the input and output stimuli [8].
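Assertions can also describe multi-cycle behavior. The following concurrent assertion (a hypothetical example with illustrative valid/ready/data handshake signals) checks that the data bus remains stable while a transfer is stalled, and reports an error whenever the testbench stimuli expose a violation:

    // Hypothetical multi-cycle assertion for a valid/ready handshake.
    property p_data_stable;
      @(posedge clk) disable iff (!rst_n)
        (valid && !ready) |=> $stable(data);
    endproperty

    a_data_stable: assert property (p_data_stable)
      else $error("data changed while the transfer was stalled");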
V. RECENT SCENARIO

HDL simulators are widely used in IC design centers [7]. The advantages discussed in the previous sections encourage engineers to use simulation throughout the project. However, simple simulations based on directed tests are no longer sufficient to fully cover the scenarios and metrics contained in the Verification Plan.

In the Functional Verification context, the previously presented approaches are distinct techniques to address the problem of verification complexity. Fortunately, many of them can be used in a combined fashion, which enhances the robustness of the verification environment. Modern EDA tools support and encourage the mixed use of different verification techniques such as constrained-random stimulus generation, code and functional coverage, and assertions.

A study carried out by Wilson Research Group and Mentor Graphics Corp. [7] demonstrates the adoption of the verification techniques presented in this paper. The study compares different surveys conducted between 2007 and 2012, allowing an analysis of the evolution of hardware verification in the enterprise environment. The adoption of advanced techniques such as constrained-random simulation and functional coverage is increasing significantly, as illustrated in Fig. 2.

Figure 2. Simulation-based Verification Techniques in 2007 and 2012.

In addition to the previously mentioned techniques, verification teams have realized that successful functional verification also depends on methodology. Verification elements, such as bus drivers, monitors and transactors, should be developed in a form that allows code reuse and standardization.

In the last decades, many verification methodologies have emerged as international standards (Accellera UVM, Mentor AVM, Synopsys VMM, Cadence eRM, etc.), and describing each one in this paper would be too extensive. However, one verification methodology has been gaining ground considerably in recent years: the Accellera Universal Verification Methodology (UVM). Figure 3 shows that the adoption of UVM by the study participants grew almost 600% in two years.

Figure 3. Standard Methodologies adoption in 2010 and 2012.

The UVM is a standard methodology that provides an open-source framework which integrates the previously discussed techniques and many others, such as Transaction-Level Modeling (TLM), Object-Oriented Programming (OOP) and verification component reuse. Due to the abundance of features in UVM, its complexity and learning curve have increased substantially. A detailed explanation of UVM resources can be found in [9].
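To give a flavor of the framework, the sketch below shows a minimal, generic UVM test (it is not a complete environment; sequences, drivers and monitors are omitted). The test class extends uvm_test, its phases are executed by the UVM core, and run_test() selects it through the UVM factory:

    // Minimal sketch of a UVM test (component hierarchy omitted for brevity).
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class smoke_test extends uvm_test;
      `uvm_component_utils(smoke_test)

      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction

      task run_phase(uvm_phase phase);
        phase.raise_objection(this);   // keep the run phase alive
        `uvm_info("SMOKE", "running a minimal UVM test", UVM_LOW)
        #100;                          // placeholder for driving real sequences
        phase.drop_objection(this);
      endtask
    endclass

    module top;
      initial run_test("smoke_test"); // the UVM factory creates and runs the test
    endmodule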
VI. CONCLUSION

As design complexity increases, several verification tools and techniques have emerged, either as an evolution of older ones or as new approaches to be adopted. Although simulation-based verification is widely used across different industry domains, other approaches have been progressively applied to meet the demand for better use of time and resources. As a result, tools and methodologies have been evolving to take the best of all available techniques, and the semiconductor industry and EDA vendors have been working to standardize the use of these techniques and verification methodologies.

The UVM is the greatest example of a successful methodology that integrates advanced techniques to attack the complexity of SoC verification. The advent of an open standard also enabled cooperative contributions from the international community, which allowed small companies to make use of the most advanced verification methodologies.

REFERENCES

[1] B. Wile, J. C. Goss and W. Roesner, "Verification in the Chip Design Process," in Comprehensive Functional Verification: The Complete Industry Cycle, San Francisco, CA: Elsevier Inc., 2005, pp. 5-31.
[2] B. Wile, J. C. Goss and W. Roesner, "Introduction to Formal Verification," in Comprehensive Functional Verification: The Complete Industry Cycle, San Francisco, CA: Elsevier Inc., 2005, pp. 439-486.
[3] M. Mintz and R. Ekendahl, Hardware Verification With SystemVerilog: An Object-Oriented Framework, New York, NY: Springer, 2007.
[4] J. Bergeron, E. Cerny, A. Hunter and A. Nightingale, Verification Methodology Manual for SystemVerilog, New York, NY: Synopsys Inc. and ARM Limited, printed by Springer, 2006.
[5] A. Molina and O. Cadenas, "Functional Verification: Approaches and Challenges," Latin American Applied Research, vol. 37, pp. 65-69, 2007.
[6] HAL User Guide, Product Version 14.1, Cadence Design Systems Inc., San Jose, CA, 2014, pp. 14-23.
[7] H. Foster, "2012 Functional Verification Study," Wilson Research Group and Mentor Graphics Corp., Bristol, UK, 2013.
[8] E. Cerny et al., The Power of Assertions in SystemVerilog, New York: Springer, 2010.
[9] Universal Verification Methodology (UVM) 1.1 User's Guide, Accellera Systems Initiative, 2011.
[10] M. Gschwind et al., "FPGA prototyping of a RISC processor core for embedded applications," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 9, no. 2, pp. 241-250, Apr. 2001.
[11] A. Jain et al., "Accelerating SystemVerilog UVM Based VIP to Improve Methodology for Verification of Image Signal Processing Designs Using HW Emulator," International Journal of VLSI Design & Communication Systems (VLSICS), vol. 4, no. 6, Dec. 2013.
[12] E. Glocker et al., "Emulated ASIC Power and Temperature Monitor System for FPGA Prototyping of an Invasive MPSoC Computing Architecture," Workshop on Resource Awareness and Adaptivity in Multi-Core Computing, Paderborn, Germany, May 2014.