
ABSTRACT

The primary objective of the study is to thoroughly analyze, comprehend, and validate
the operational capabilities of a five-stage pipelined MIPS processor. The design
under test implements 16 instructions, which further have 49 variants, along with 5
pipeline stages and a hazard unit. The verification of the design is performed using
Constrained Random Verification techniques to test the functionality of each
instruction. These verification techniques are implemented using the
SystemVerilog-based Universal Verification Methodology (UVM).

The Universal Verification Methodology (UVM) is a comprehensive class library in
SystemVerilog that offers a wide range of built-in features for efficient verification.
UVM has seen significant advancements, including support for reusability,
transaction-level communication, simplified maintenance, configurability,
automation, and memory management.

The layered testbench is built using components and objects from the UVM
environment, utilizing object-oriented programming (OOP) for class constructs.
Constrained randomization is employed to control the values generated during
randomization, ensuring compliance with declared conditions. Interconnectedness
among components is established through virtual interfaces. The verification process
incorporates the concept of pipelining and automation, enabling parallel organization
of activities for increased efficiency.

The objective of this project is to verify the functionality of a 16-bit, five-stage
pipelined processor with 16 instructions and 49 variants. The verification process is
conducted in two steps: first, individual blocks of each stage are verified, and then
the overall design functionality is validated. Automation is utilized to streamline the
verification process, using a single testing system with interfaces to all internal and
input-output blocks. This approach enables highly automated, high-speed, and
accurate verification.
CHAPTER 1
INTRODUCTION

1.1 Problem Background

With the increasing complexity of RTL designs due to advancements in VLSI
technology, the verification process has been impacted, necessitating efficient
techniques to address this complexity. In today's scenario, verification needs to be
time-efficient and accurate. Therefore, standardized verification methodologies are
employed to ensure functionality of testbenches. The Universal Verification
Methodology (UVM) is widely used, leveraging its reusability features to configure
components in various ways and enhance flexibility. UVM is a System Verilog (SV)
based class library that provides built-in features for effective verification.

1.2 Problem Statement

The growing complexity of RTL designs resulting from advancements in VLSI
technology has impacted the verification process, requiring efficient techniques to
manage this complexity. In the current landscape, verification must be both time-
efficient and accurate. As a result, standardized verification methodologies are utilized
to ensure the functionality of testbenches. The widely adopted Universal Verification
Methodology (UVM), based on System Verilog (SV) class library, is renowned for its
reusability features that enable flexible configuration of components. UVM provides
built-in features that facilitate effective verification.

1.3 Project Goals and Objectives

The primary objective of this study is to analyze, comprehend, and verify the
functionality of a MIPS processor with a five-stage pipelined architecture. The design
includes 16 instructions with 49 variants, 5 pipeline stages, and a hazard unit, which
are all subjected to verification in this project. The Constrained Random Verification
technique is employed to test the functionality of each instruction. The implementation
of these verification techniques is carried out using the SystemVerilog-based
Universal Verification Methodology (UVM).

The MIPS processor design is a foundational concept in modern processor designs,
incorporating essential elements such as pipelining, data dependency handling, and
forwarding to enhance processing capabilities and speed.

1.4 Driver in the UVM architecture

The role of the UVM driver is to accept discrete sequence item transactions from the
UVM sequence and drive them onto the DUV interface by means of a virtual
interface connection. It extends the uvm_driver base class, initiates new
transaction requests, and drives signals to lower-level components to perform
pin-level signal activity.
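As a minimal sketch, such a driver might look like the following; the class and interface names (mips_txn, mips_if, instr) are illustrative assumptions, not taken from the report:

```systemverilog
// Illustrative sketch only: mips_txn and mips_if are assumed names.
class mips_driver extends uvm_driver #(mips_txn);
  `uvm_component_utils(mips_driver)

  virtual mips_if vif;  // virtual interface handle to the DUV pins

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  task run_phase(uvm_phase phase);
    forever begin
      seq_item_port.get_next_item(req); // pull the next transaction from the sequencer
      vif.instr <= req.instr;           // drive it onto the DUV pins
      @(posedge vif.clk);
      seq_item_port.item_done();        // report completion back to the sequencer
    end
  endtask
endclass
```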

1.5 MIPS Architecture

MIPS follows a register-register architecture, commonly known as a load/store
architecture, where instructions primarily operate on registers, except for load/store
instructions that are used to access memory.
A Load/Store Architecture:
• Load: Read a value from a memory address into a register
• Store: Write a value from a register into a memory location

As a general practice, the instruction execution process can be categorized into five
stages, namely fetch, decode, execute, memory access, and write back, denoted by Fi,
Di, Ei, Mi, and Wi. These stages encompass various operations such as instruction
fetch from memory (IF), instruction decode and register read (ID), execution of
operation or calculation of address (EX), access to memory operand (MEM), and
writing the result back to the register (WB).
CHAPTER 2
LITERATURE REVIEW

2.1 Introduction

The objective of this article is to provide a synopsis of how to construct a reusable
RTL verification environment utilizing the Universal Verification Methodology
(UVM). The principles and best practices of UVM have gained widespread acceptance
in the field. The paper provides a summary of UVM's characteristics, including its
benefits, drawbacks, difficulties, and prospects. Furthermore, it offers step-by-step
guidance on creating an efficient and effective verification environment and validating
an IP. To illustrate the contrast between conventional verification and UVM-based
verification, an SoC case study is also included [1].

The study presents a hierarchical verification platform using the Universal Verification
Methodology (UVM) for a RISC-V System-on-Chip (SoC) at both module and system levels.
At the module level, constrained random stimuli are generated to support module function
testing. The article discusses the functional verification of the RISC-V SoC and provides a
UVM verification platform for system and module-level verification [2]. By vertically reusing
the module-level verification environment, the platform implements a bottom-up hierarchical
verification approach at the system level. This UVM-based verification platform addresses the
drawbacks of FPGA verification, such as longer cycle times, lower efficiency, limited
reusability, and difficulty in measuring coverage. It enhances the effectiveness of verification
and achieves the functional coverage objective.

Modern electronics is built on integration, which demands the design of accurate
microprocessors. As the scale of a processor expands, its model grows increasingly
complicated, making verification a necessity. The design must be verified [1] against
all the requirements set for it, and testing confirms that it works as anticipated. The
perpetual growth and rapid development of chip design and microelectronics
technology have outgrown conventional chip verification [2] methods. SystemVerilog
is the first language used to verify a design after the Register Transfer Level, but a
plain SystemVerilog testbench offers little built-in support for reuse. UVM promises
reusability [3] of test cases, and the methodology is supported by all major simulator
vendors. The Universal Verification Methodology brings more automation into testing
and is built on a Base Class Library (BCL).
It allows verification at any level of abstraction rather than only through fixed
instantiation. By contrast, traditional verification methodologies do not guarantee
that simulating a design with a testbench will find all the bugs, nor pinpoint through
the log file where the errors occur [4]. The architecture of the UVM testbench [5]
describes the hierarchy of the testbench and its structure in a systematic way. The
top module includes components such as the program, test, and environment; the
environment in turn contains the sequence, scoreboard, DUV, an active agent that is
dynamic in transactions, and a passive agent that is static in terms of transactions.

The driver module extends the uvm_driver base class. The driver receives
transactions from the sequencer through the get_next_item() and item_done() calls,
which fetch data and mark the fetch complete. The driver [6] collects the signals as
input and presents them to the DUT as transactions, and to the scoreboard for
comparison. The interfacing model [7] transports data between the static components
and the dynamic modules and vice versa. Debugging the design becomes simpler with
constrained random verification [8]: the digital inputs to the DUT are chosen randomly
and are constrained to check whether the specifications are met.

The architecture of a Microprocessor without Interlocked Pipeline Stages (MIPS) [9,
10, 11] is generally a Reduced Instruction Set Computer (RISC) design that focuses
on load and store operations. The Single-Cycle MIPS Processor [12] illustrates a
processor that completes each instruction sequentially in one long cycle, a time
penalty that the pipelined MIPS processor removes. Specific instructions are included
for each processor to perform its operations. The instruction set acts as the channel
between the software and the proper functioning of the hardware modules. The
instruction set architecture is mainly concerned with data operations, data transfer,
and sequencing [13]. These instructions are executed in a single datapath by
overlapping multiple operations in parallel, a process named 5-stage pipelining [14],
which consists of fetching the instruction, decoding the instruction, executing,
accessing the memory, and writing back [15].

Functional coverage and assertions [16] are two major concepts to consider while
constructing a SystemVerilog and UVM environment. Functional coverage depicts
whether the functionality of the design has been exercised, and assertions check how
the design behaves. Rerunning the test cases is necessary each time the design is
modified. Regression [17] is the process of determining whether previously passing
test cases fail again after a rerun, so it is crucial to execute regression whenever the
design code is modified. To avoid the problem of interlocking, the pipeline stages are
stalled. To reduce stalls and avoid data hazards, the ALU forwarding unit [18] is
employed, which forwards the ALU result to upcoming instructions instead of waiting
for the write-back to the register. The processor contains an array of registers, called
the register file, which holds the operands needed by the instructions executing in the
current cycle. Data hazards [19, 20] occur when an instruction depends on a prior
instruction's outcome before that result has been computed, or when the same storage
location is used by two separate instructions, which must nevertheless appear to be
accessed in program order.
2.2 Limitations & Research Gap(s)

Identifying and extracting relevant information from the project perspective has been
challenging due to limited recent papers/articles in the specific project area.
Analyzing and interpreting the appropriate approach to address this issue has been
time-consuming, and additional research work has been considered to align with
current industry standards for the verification process.
CHAPTER 3
RESEARCH METHODOLOGY

3.1 Introduction

3.1.1 Universal Verification Methodology (UVM)

UVM provides users with an automated and flexible verification environment that can
be configured and reused. Verification components can be easily reused across
different environments and hierarchies, and the code can be modified without
disrupting the UVM base classes. The UVM class library includes base classes, macros,
and utilities, which can be extended to enhance the SystemVerilog environment. UVM
has three major base class types: uvm_object, uvm_component, and
uvm_transaction. UVM phases are essential to ensure synchronization and ordered
flow of testbench execution, including build_phase(), connect_phase(),
end_of_elaboration_phase(), start_of_simulation_phase(), run_phase(), extract_phase(),
check_phase(), report_phase(), and final_phase(), which are executed sequentially
throughout the simulation.
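As a hedged sketch of how a component typically distributes its work across these phases (the class name mips_env is an assumed example, not from the design):

```systemverilog
class mips_env extends uvm_env;
  `uvm_component_utils(mips_env)

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    // child components (agent, scoreboard) are created here via the factory
  endfunction

  function void connect_phase(uvm_phase phase);
    // TLM ports and exports are wired together here, after everything is built
  endfunction

  task run_phase(uvm_phase phase);
    // the only time-consuming phase: stimulus runs during simulation time
  endtask
endclass
```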

The field macros in UVM contribute to the automation mechanism for implementing
compare, copy, and print operations on objects. These macros operate on class
properties and allow users to register a component or an object with the factory,
enabling the requested object type to be returned when needed without implementing
custom functions for each class. The UVM factory allows the creation of objects
without specifying a particular class, thus allowing for the override of default objects
and data items in the testbench.
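For illustration, registering a transaction with the factory and automating its field operations might look as follows; mips_txn and its instr field are assumed names:

```systemverilog
class mips_txn extends uvm_sequence_item;
  rand bit [15:0] instr;  // 16-bit instruction word

  // field macro registers instr for automated copy/compare/print
  `uvm_object_utils_begin(mips_txn)
    `uvm_field_int(instr, UVM_ALL_ON)
  `uvm_object_utils_end

  function new(string name = "mips_txn");
    super.new(name);
  endfunction
endclass

// Factory creation (instead of calling new() directly) allows the test
// to override the returned type without changing the testbench code:
// mips_txn t = mips_txn::type_id::create("t");
```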

3.1.2 UVM Testbench Architecture

The UVM testbench architecture comprises various modules, including the Top
Module, Test, Environment, Scoreboard, Agent, Sequencer, Sequence, Monitor, and
Driver. The environment module bundles higher-level blocks, such as the agent and
scoreboard, while the UVM agent groups the UVM blocks related to a specific
interface. The sequence item defines the signals that are generated and driven to the
DUT (Design Under Test) through the driver. The UVM driver drives the signals from
the sequence item into the DUT. The UVM sequence determines the order in which
the stimulus should be generated and sent to the driver, and the UVM sequencer
transfers the items generated by the sequence to the driver.
Fig.1 UVM Testbench Architecture

3.2 Proposed Method

3.2.1 MIPS Pipeline Stages

Pipelining is an effective approach for organizing parallel activity in a computer
system, often likened to an assembly line operation. In a pipeline, new inputs are
accepted at one end, while previously accepted inputs appear as outputs at the other
end. To implement instruction execution using pipelining, the instruction execution
process needs to be divided into different tasks, each to be executed in separate
processing elements of the CPU. For example, the instruction fetch and instruction
execution phases are two distinct phases of instruction execution.

In a pipelined system, the second instruction's fetch can occur simultaneously with the
first instruction's decode. This allows for multiple activities to take place in parallel,
with up to five instructions being processed at any given time. To achieve this, five
separate hardware units are required, each capable of performing its task concurrently
without interference. Information is passed from one unit to the next through storage
buffers, ensuring that all the necessary information is available to downstream stages as
the instruction progresses through the pipeline. It's important to note that if the stages
of the pipeline are not balanced, the speed-up may be less effective, as increased
throughput does not necessarily result in decreased latency (time for each instruction).
Interface registers, also known as latches or buffers, are used to hold intermediate
outputs between two stages of the pipeline.
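As a small sketch of such an interface register, an IF/ID buffer between the fetch and decode stages might be coded as below; the module and signal names are assumptions for illustration only:

```systemverilog
// Hypothetical IF/ID pipeline register between the fetch and decode stages.
module if_id_reg (
  input  logic        clk,
  input  logic        stall,     // hazard unit freezes the register
  input  logic        flush,     // control hazard clears the register
  input  logic [15:0] instr_in,  // instruction from the fetch stage
  output logic [15:0] instr_out  // instruction presented to the decode stage
);
  always_ff @(posedge clk) begin
    if (flush)
      instr_out <= '0;           // insert a bubble into the pipeline
    else if (!stall)
      instr_out <= instr_in;     // normal pipeline advance
  end
endmodule
```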
Fig.2 MIPS Architecture

3.3 Research Activities

3.3.1 Object-Oriented Programming

Object-oriented programming (OOP) is a well-established methodology for creating
abstract, reusable, and maintainable software code. Classes are utilized to model
reusable verification environments and define the abstract data and methods that
operate on them. Inheritance enables code reuse by allowing properties and methods
of a base (or super) class to be inherited by a newly created class, known as an
extended (or derived) class.

Polymorphism is another fundamental principle of OOP, which allows code to behave
differently based on the type of object it is dealing with. Polymorphism can be achieved
in System Verilog through two different methods: static polymorphism at compile time
using parameterized classes, and dynamic polymorphism at run-time using virtual
methods. It's important to note that once a method is declared as virtual, it remains
virtual in all derived classes, and it cannot be overridden to make it non-virtual. The
prototype of a virtual method is fixed from the perspective of the base class variable
making the call.

A common use of polymorphism, involving virtual and non-virtual methods, combines
inheritance with the concept of deep copy() and the creation of a new object, commonly
referred to as a clone(). The clone() method, being virtual, returns a handle to a new
object that is a deep copy of the calling object, without the caller needing to know
whether it is dealing with a base class object or one of its derivatives. The only
difference among the clone() overrides is the type of object each constructs. Each
copy() override replicates its local properties and then calls super.copy() to copy the
properties of the class from which it was derived. The copy() method is made
non-virtual to allow its argument to remain a local class type, enabling direct access to
local properties. The virtual nature of the clone() method ensures that the correct
derived copy() method is called, depending on the class variable type used to invoke it.
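This pattern can be sketched as follows; base_txn and ext_txn are hypothetical class names used only to illustrate the virtual clone() / non-virtual copy() pairing:

```systemverilog
class base_txn;
  int addr;

  virtual function base_txn clone();
    base_txn t = new();
    t.copy(this);                    // deep-copy the base-class properties
    return t;
  endfunction

  function void copy(base_txn rhs); // non-virtual: argument keeps the local type
    addr = rhs.addr;
  endfunction
endclass

class ext_txn extends base_txn;
  int data;

  virtual function base_txn clone();
    ext_txn t = new();               // the only difference: the type constructed
    t.copy(this);
    return t;
  endfunction

  function void copy(ext_txn rhs);
    super.copy(rhs);                 // copy inherited properties first
    data = rhs.data;                 // then the local ones
  endfunction
endclass
```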

3.3.2 Interfacing

SystemVerilog introduces the interface construct, which serves as a means of
communication between blocks. An interface is a collection of signals or nets that
facilitate communication between a testbench and a design. A virtual interface, on the
other hand, is a variable that represents an instance of an interface. In this section, we
will discuss the concept of interfaces, their advantages over traditional methods, and
virtual interfaces.

The interface construct is used to establish connections between a design and testbench.
It defines a named bundle of wires that encapsulates communication, and specifies
directional information such as module ports and timing information like clocking
blocks. An interface can contain parameters, constants, variables, functions, and tasks.
It can be instantiated hierarchically similar to a module, with or without ports.

One of the key advantages of using interfaces over traditional connections is the ability
to group multiple signals together and represent them as a single port. This allows for
passing a single port handle instead of dealing with multiple individual signals or ports.
Additionally, interface declaration is done once, and the handle can be passed across
modules or components, making addition or deletion of signals easier to manage.
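A brief illustrative interface, with assumed signal names, might be declared as:

```systemverilog
interface mips_if (input logic clk);
  logic [15:0] instr;   // instruction driven into the design
  logic [15:0] result;  // result observed from the design
  logic        valid;

  // clocking block captures the timing information for the testbench
  clocking cb @(posedge clk);
    output instr;
    input  result, valid;
  endclocking
endinterface
```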

3.3.2.1 Virtual interface

A virtual interface is a variable that serves as a representation of an instance of an
interface. Before using a virtual interface, it must be properly initialized, typically by
connecting or pointing it to the actual interface. Attempting to access an uninitialized
virtual interface will result in a runtime fatal error. Virtual interfaces can be declared as
class properties and can be initialized either procedurally or through an argument to the
"new()" function. They can also be passed as arguments to tasks, functions, or methods.
Through a virtual interface handle, all variables and methods of the interface can be
accessed, allowing a single virtual interface variable to represent different interface
instances at different times during the simulation.
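A common way to initialize the virtual interface property is through the UVM configuration database in build_phase; this sketch assumes the names mips_if, mips_monitor, and the "vif" lookup key:

```systemverilog
class mips_monitor extends uvm_monitor;
  `uvm_component_utils(mips_monitor)

  virtual mips_if vif;  // virtual interface declared as a class property

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    // point the virtual interface at the actual interface instance;
    // accessing it while uninitialized would be a runtime fatal error
    if (!uvm_config_db#(virtual mips_if)::get(this, "", "vif", vif))
      `uvm_fatal("NOVIF", "virtual interface was not set for this monitor")
  endfunction
endclass
```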

3.3.3 Constrained Randomization

Random variables are designed to receive random values during randomization. However,
in some cases, it may be necessary to exert control over the values assigned during
randomization. This can be achieved by using constraints. Constraints are written for
random variables in the form of constraint blocks, which are class members similar to tasks,
functions, and variables. Constraint blocks have unique names within a class and are
enclosed in curly braces {}. Constraints are expressions or conditions that constrain or
regulate the values of a random variable. They can be specified either within the class or
externally, similar to extern methods. An externally defined constraint block is referred to as
an extern constraint block.
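For illustration, a transaction class with an in-class constraint block and an extern constraint block might be written as follows (class, variable, and constraint names are assumed):

```systemverilog
class instr_txn;
  rand bit [3:0]  opcode;  // 16 instructions -> 4-bit opcode
  rand bit [15:0] addr;

  // in-class constraint block, enclosed in curly braces
  constraint c_opcode { opcode inside {[0:15]}; }

  // extern constraint block: declared here, defined outside the class
  constraint c_addr;
endclass

constraint instr_txn::c_addr { addr[0] == 1'b0; }  // e.g. even (aligned) address
```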
3.3.4 MIPS Instruction set

There are three fundamental types of instructions in computer programming: (a)
Arithmetic/bitwise logic instructions, which encompass operations such as addition,
left-shift, bitwise negation, and XOR; (b) Data transfer instructions that involve moving
data between registers and memory; and (c) Control flow instructions, which determine
the flow of program execution.

Fig.3 ALU flow chart

3.3.4.1 CISC and RISC

CISC (Complex Instruction Set Computer) architecture is characterized by a large
number of complex instructions, while RISC (Reduced Instruction Set Computer)
architecture features a smaller number of simple instructions. The CPU's architecture
is designed on either RISC or CISC principles, which determine how the CPU
operates in terms of its instruction set. In CISC architecture, a single instruction can
perform multiple low-level operations, such as memory storage, loading, and
arithmetic operations. RISC architecture, on the other hand, is based on the idea that a
reduced instruction set can lead to better performance when combined with a
microprocessor design that executes each instruction in a few cycles. MIPS, as noted
earlier, follows the RISC approach.

3.3.5 Sequencer to driver communication

In UVM (Universal Verification Methodology), there is a predefined mechanism for
the sequencer to transmit transactions to the driver, which furnishes stimuli to the DUT
(Design Under Test). Sequences in UVM encapsulate the intelligence for generating
different types of transactions, and sequencers are used as the physical components for
executing these sequences. A particular sequence is directed to run on a sequencer,
which further breaks down into a series of transaction items. These transaction items
need to be transferred to the driver, where they are converted into cycle-based signal or
pin-level transitions. Transactions are implemented as classes containing data members,
control knobs, and constraints information, while transaction items are objects of the
transaction class.

Fig.4 Sequencer to Driver Communication

A bidirectional transaction-level modeling (TLM) interface is utilized in the UVM
methodology to enable communication between the sequencer and the driver,
transferring REQ (request) and RSP (response) sequence items. The driver is furnished
with a uvm_seq_item_pull_port that is linked to the uvm_seq_item_pull_export of the
corresponding sequencer. This TLM interface supplies an API for retrieving REQ items
and returning RSP items. The uvm_seq_item_pull_port and uvm_seq_item_pull_export
classes are parameterized by the REQ and RSP sequence item types. Because the TLM
connection between the driver and sequencer is one-to-one, multiple sequencers cannot
be connected to a single driver, nor multiple drivers to a single sequencer.
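The one-to-one link described above is typically made in the agent's connect phase; the sketch below uses assumed component names (mips_agent, mips_driver, mips_txn):

```systemverilog
class mips_agent extends uvm_agent;
  `uvm_component_utils(mips_agent)

  mips_driver               drv;  // illustrative driver class
  uvm_sequencer #(mips_txn) sqr;  // sequencer parameterized by the REQ item type

  function new(string name, uvm_component parent);
    super.new(name, parent);
  endfunction

  function void build_phase(uvm_phase phase);
    super.build_phase(phase);
    drv = mips_driver::type_id::create("drv", this);
    sqr = uvm_sequencer#(mips_txn)::type_id::create("sqr", this);
  endfunction

  function void connect_phase(uvm_phase phase);
    // one driver port connects to exactly one sequencer export
    drv.seq_item_port.connect(sqr.seq_item_export);
  endfunction
endclass
```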

3.3.5.1 TLM interface

It is crucial to manage most of the verification tasks, such as stimulus generation and
coverage data collection, at the transaction level, which aligns with the natural way of
thinking for verification engineers when validating a system. At the transaction level,
UVM offers a comprehensive array of communication channels and interfaces that
assist in interconnecting components. TLM models operate at a higher level of
abstraction, closely matching the level of abstraction at which verification engineers and
design engineers conceptualize the intended functionality. This simplifies model
development and enhances understanding for other engineers. In Transaction Level
Modeling (TLM), interaction between diverse modules or components is realized via
transaction objects. A TLM port characterizes a collection of methods (API) for a
specific connection, whereas the concrete realization of these methods is identified as
TLM exports.

Establishing a connection between a TLM port and an export enables communication
between the two components. Analysis ports/FIFOs constitute an additional
transactional communication medium that enables a component to broadcast or
disseminate transactions to numerous other components. TLM ports (put/get) facilitate
one-to-one connections, where one component puts the transaction and only one
component gets the transaction. On the other hand, analysis_port enables one-to-many
connections, where one component can broadcast transactions to multiple components.
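A minimal sketch of this one-to-many broadcasting through an analysis port (the class and transaction names are assumed):

```systemverilog
class txn_publisher extends uvm_component;  // hypothetical publishing component
  `uvm_component_utils(txn_publisher)

  uvm_analysis_port #(mips_txn) ap;

  function new(string name, uvm_component parent);
    super.new(name, parent);
    ap = new("ap", this);
  endfunction

  function void publish(mips_txn t);
    // write() broadcasts the transaction to every connected subscriber
    // (scoreboard, coverage collector, ...), unlike a one-to-one put/get port
    ap.write(t);
  endfunction
endclass
```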

3.4 Tool(s) & Platform(s)

3.4.1 EDA Playground

EDA Playground gives engineers immediate hands-on exposure to simulating and
synthesizing SystemVerilog, Verilog, VHDL, C++/SystemC, and other HDLs. All that
is needed is a web browser: with a simple click, code runs and the console output
appears in real time. Waveforms can be viewed with the EPWave browser-based wave
viewer, code snippets can be saved as "Playgrounds", and code and simulation results
can be shared through a web link.

3.5 Chapter Summary

 Literature survey on driver verification of a processor based on the Universal
Verification Methodology (UVM).
 Data verification and identifying appropriate methods and approaches for
meeting requirements.
 Identifying and analyzing the base paper; interpretation of the MIPS pipelined
concept (5 stages).
 Analysis of the UVM environment architecture; understanding and analysis of
the driver module in the UVM architecture.
 Interpretation and analysis of sequencer-driver communication, TLM ports, and
the analysis port (communication).
CHAPTER 4
SUSTAINABILITY OF THE PROPOSED WORK

4.1 Sustainable Development Goal indicator(s)


The 17 sustainable development goals (SDGs) to transform our world:

 GOAL 1: No Poverty

 GOAL 2: Zero Hunger

 GOAL 3: Good Health and Well-being

 GOAL 4: Quality Education

 GOAL 5: Gender Equality

 GOAL 6: Clean Water and Sanitation

 GOAL 7: Affordable and Clean Energy

 GOAL 8: Decent Work and Economic Growth

 GOAL 9: Industry, Innovation and Infrastructure

 GOAL 10: Reduced Inequality

 GOAL 11: Sustainable Cities and Communities

 GOAL 12: Responsible Consumption and Production

 GOAL 13: Climate Action

 GOAL 14: Life Below Water

 GOAL 15: Life on Land

 GOAL 16: Peace and Justice Strong Institutions

 GOAL 17: Partnerships to achieve the Goal

4.1.1 Goal 9: Industry, Innovation, and Infrastructure


The goal is to develop high-quality, reliable, sustainable, and resilient infrastructure,
including regional and transborder infrastructure, in order to support economic
development and improve human well-being. This includes ensuring affordable and
equitable access for all. Another objective is to promote inclusive and sustainable
industrialization, aiming to increase the industry's share of employment and gross
domestic product (GDP) by 2030, taking into account national circumstances, and
doubling its share in the least developed countries. To achieve this, efforts will be
made to enhance scientific research, upgrade technological capabilities in industrial
sectors of all countries, particularly in developing countries. This includes encouraging
innovation, increasing the number of research and development workers per 1 million
people, and boosting public and private spending on research and development by
2030. Support will also be provided for domestic technology development, research,
and innovation in developing countries, including creating a favorable policy
environment for industrial diversification and value addition to commodities.
Furthermore, efforts will be made to significantly increase access to information and
communications technology (ICT) and strive towards universal and affordable Internet
access.

4.2 Relevancy of the SDG to the proposed work


Hence, the proposed idea and project aim to develop an efficient processor driver
verification module for a five-stage pipelined MIPS processor using the Universal
Verification Methodology (UVM) in the testbench. The primary objective is to reduce
time consumption and achieve high accuracy, meeting industrial standards in
real-time. Given the increasing design complexity and advanced technology requirements,
a systematic methodology is necessary. UVM serves as an approach to the
verification process, aligning with Goal 9 of the Sustainable Development Goals,
which emphasizes industry, innovation, and infrastructure, and supports the project
theme in the proposed area.
CHAPTER 5
SOFTWARE IMPLEMENTATION

The UVM processor driver sends a request for a sequence item to the sequencer
through get_next_item(); once the sequencer grants the sequence, it transmits the
sequence item to the driver logic, which subsequently drives the sequence item into
the DUV. The sequencer is connected to the driver in the UVM connect phase. When
a response is expected by the sequence/sequencer, the item_done(rsp) call is invoked,
which updates the "rsp" (response) object handle in the sequencer's response FIFO.
The response from the DUV is put into the driver, and get_response(rsp) retrieves it
in the UVM sequence through the sequencer. The communication from the sequencer
to the driver is done by TLM ports, so that communication need not go through a
component handle; an intermediate object carries the data packets between UVM
components.
Fig.5 Driver Module Code Implementation
CHAPTER 6
RESULTS & DISCUSSION

Subsequently, when the design part of the processor code is run with the testbench,
the driver part is successfully verified, as shown by the UVM report summary with
severity counts of UVM_INFO (858), UVM_WARNING (1), UVM_ERROR (8), and
UVM_FATAL (0). All the constrained random verification test cases are executed
and pass, as confirmed by the report counts by ID. It is observed that the actual
instruction, calculation, memory address, and register address match the respective
expected results. The overall coverage report after simulation gives a cumulative
summary of the covergroup, with design-based coverage and a weighted average of
100 percent (%).

Fig.6 Report Analysis


Fig.7 Overall coverage report
CHAPTER 7
CONCLUSION

The eventual aim of the approach is to perform standardized verification that
facilitates the verification process with improved speed, automation, and minimal
time consumption. The Universal Verification Methodology (UVM) provides one
such environment for generating constrained random stimulus on execution, which is
a good verification approach for checking and testing the functionality of each
instruction involved. Functional coverage ensures that the design meets the
specifications, while code coverage ensures that all lines of code in the design have
been exercised. Together, they provide a comprehensive measure of the quality of
the verification process. The testbench component hierarchy is available for reuse
and can be reconfigured. The interfacing concept of using a virtual interface links
the internal blocks of the testbench in a single step, which makes the verification
automated. Hence, we have carried out the five-stage MIPS processor driver
verification using UVM, with the actual and expected results matching, as
summarized in the coverage report.
