
1a) What is the role of testing in the VLSI design flow? (4 Marks)
Ans:
Role of testing in VLSI design flow :
If you design a product, fabricate and test it, and it fails the test, then there must be a
cause for the failure. Either (1) the test was wrong, or (2) the fabrication process was faulty, or
(3) the design was incorrect, or (4) the specification had a problem. Anything can go wrong. The
role of testing is to detect whether something went wrong and the role of diagnosis is to
determine exactly what went wrong, and where the process needs to be altered. Therefore, the
correctness and effectiveness of testing are most important for quality products (another name for
perfect products).
If the test procedure is good and the product fails, then we suspect the fabrication
process, the design, or the specification. If all students in a class fail, then it is often considered
the teacher's failure. If only some fail, we assume that the teacher is competent, but some
students are having difficulty. To select students likely to succeed, teachers may use prerequisites
or admission tests for screening. Distributed testing along a product realization process catches
the defect-producing causes as soon as they become active, and before they have done much
damage. A well thought out test strategy is crucial to economical realization of products.
The benefits of testing are quality and economy. These two attributes are not independent
and neither can be defined without the other. Quality means satisfying the user's needs at a
minimum cost. A good test process can weed out all bad products before they reach the user.
However, if too many bad items are being produced then the cost of those bad items will have to
be recovered from the price charged for the few good items that are produced. It will be
impossible for an engineer to design a quality product without a profound understanding of the
physical principles underlying the processes of manufacturing and test.

1b) What are the various problems associated with simulation-based design
verification? (4 Marks)
Ans:
Simulation serves two distinct purposes in electronic design. First, it is used to verify the
correctness of the design and, second, it verifies the tests. The first form of simulation is
illustrated in the following figure. The process of realizing an electronic system begins with its
specification, which describes the input/output electrical behavior (logical, analog, and timing)
and other characteristics (physical, environmental, etc.). This simulation-based design
verification method has strengths and weaknesses. Its strength lies in the details of the circuit
behavior that can be simulated.

The weakness of this method is its dependence on the designer's heuristics used in
generating the input stimuli. To contain the complexity, these stimuli are non-exhaustive and,
therefore, a guarantee of conformance to the specification is impossible. Such a guarantee is possible
with a formal verification method, which mathematically proves the correctness of the design. A
restricted form of formal verification, known as model checking, verifies finite-state concurrent
systems by an exhaustive search of the state space. It verifies whether a given specification is
true. An efficiently implemented model-checking procedure will always terminate with a yes/no
answer and can be run on moderate-sized machines, though not on an average desktop computer.
Thus, the high complexity of formal methods allows their use only at the higher behavior level.
In spite of the incompleteness, simulation provides a better check on the manufacturability of the
design. An ideal system of design verification should combine the behavior-level formal
verification with the logic and circuit-level simulation.

1c) Define Controllability and Observability. Discuss the SCOAP numerical
measures of a signal. (4 Marks)
Ans:
Controllability: Controllability for a digital circuit is defined as the difficulty of setting a
particular logic signal to a 0 or a 1.
Observability: Observability for a digital circuit is defined as the difficulty of observing the
state of a logic signal.
These measures are important for circuit testing because, while there are methods of
observing the internal signals of a circuit, they are prohibitively expensive.
SCOAP consists of six numerical measures for each signal l in the circuit (a small computation sketch follows the list):
1. Combinational 0-controllability, CC0 (l)
2. Combinational 1-controllability, CC1 (l)
3. Combinational observability, CO (l)
4. Sequential 0-controllability, SC0 (l)
5. Sequential 1-controllability, SC1 (l)
6. Sequential observability, SO (l)
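As an illustration (not part of the original answer key), the sketch below applies the standard SCOAP combinational controllability rules to a tiny netlist; the netlist format and gate set are assumptions made for the example. Primary inputs start at CC0 = CC1 = 1, and every gate adds 1 to the controllability of its output.

```python
# Sketch of SCOAP combinational controllability (CC0, CC1) computation.
# The netlist format and gate set here are illustrative assumptions.

def scoap_cc(netlist, primary_inputs):
    """Return {signal: (CC0, CC1)} for a topologically ordered netlist."""
    cc = {pi: (1, 1) for pi in primary_inputs}  # PIs: CC0 = CC1 = 1
    for out, gate, ins in netlist:              # netlist is topologically sorted
        if gate == "AND":
            cc0 = min(cc[i][0] for i in ins) + 1    # any one input at 0
            cc1 = sum(cc[i][1] for i in ins) + 1    # all inputs at 1
        elif gate == "OR":
            cc0 = sum(cc[i][0] for i in ins) + 1    # all inputs at 0
            cc1 = min(cc[i][1] for i in ins) + 1    # any one input at 1
        elif gate == "NOT":
            cc0, cc1 = cc[ins[0]][1] + 1, cc[ins[0]][0] + 1
        cc[out] = (cc0, cc1)
    return cc

# Example: f = NOT(AND(a, b))
netlist = [("n1", "AND", ["a", "b"]), ("f", "NOT", ["n1"])]
print(scoap_cc(netlist, ["a", "b"]))
# {'a': (1, 1), 'b': (1, 1), 'n1': (2, 3), 'f': (4, 3)}
```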

1d) Discuss the Output Compression Technique in BIST. (4 Marks)
Ans: Output compression technique --- 4 marks
1e) What are the various chip-level testability problems of the late 1990s? (4 Marks)
Ans:
The various chip-level testability problems of the late 1990s are:
1. There is an extremely high and still increasing logic-to-pin ratio on the chip. This makes it
increasingly difficult to accurately observe signals on the device, which is essential for testing.
2. VLSI devices are increasingly dense and faster with sub-micron feature sizes.
3. There are increasingly long test-pattern generation and test application times.
4. Prohibitive amounts of test data must be stored in the automatic test equipment (ATE).

5. There is increasing difficulty in performing at-speed (rated clock) testing using external ATE.
For clock rates approaching 1 GHz, at-speed testing with an ATE is very expensive due to pin
inductance and high tester pin costs.
6. Designers are unfamiliar with the gate-level structure of their designs, since logic is now
automatically synthesized from the VHDL or Verilog hardware description languages. This
compounds the problem of testability insertion.
7. There is a lack of skilled test engineers.

Part B
Answer one question from each unit. Each question carries 8 marks.
2a) What are the differences between testing and verification? (4 Marks)
Ans:
Verification: Predictive analysis to ensure that the synthesized design, when manufactured, will
perform the given I/O function.
Test: A manufacturing step that ensures that the physical device, manufactured from the
synthesized design, has no manufacturing defect.
Verification vs. Testing
Verification:
1. Verifies the correctness of the design.
2. Performed by simulation, hardware emulation, or formal methods.
3. Performed once, prior to manufacturing.
Testing:
1. Verifies the correctness of the hardware.
2. Has two parts:
   (i) Test generation: a software process executed once during design.
   (ii) Test application: electrical tests applied to hardware.
3. Test application is performed on EVERY manufactured device.

2b) Explain the terms defect, fault, error and diagnosis with suitable examples. (4 Marks)
Ans:
Defect: A defect in an electronic system is the unintended difference between the implemented
hardware and its intended design.
Defects occur either during manufacture or during the use of devices. Repeated occurrence of
the same defect indicates the need for improvements in the manufacturing process or the design
of the device.
Error: A wrong output signal produced by a defective system is called an error. An error is an
effect whose cause is some defect.

Fault: A representation of a defect at the abstracted function level is called a fault.


The difference between a defect and a fault is rather subtle. They are the
imperfections in the hardware and in the function, respectively.
Example: Consider a digital system consisting of two inputs a and b, one output c, and one
two-input AND gate. The system is assembled by connecting a wire between the terminal a and the
first input of the AND gate. The output of the gate is connected to c. But the connection between
b and the gate is incorrectly made: b is left unconnected and the second input of the gate is
grounded. The functional output of this system, as implemented, is c = 0 instead of the correct
output c = ab.
For this system, we have:
Defect: a short to ground.
Fault: signal b stuck at logic 0.
Error: a = 1, b = 1, output c = 0 (correct output c = 1). Notice that the error is not permanent. As
long as at least one input is 0, there is no error in the output.
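A short sketch (an illustration, not part of the original answer) enumerating the truth table of this example confirms that the stuck-at-0 fault on b produces an error only for the input combination a = 1, b = 1:

```python
# Faulty circuit of the example: second AND input grounded (b stuck-at-0).
good   = lambda a, b: a & b        # intended function c = ab
faulty = lambda a, b: a & 0        # implemented function, always c = 0

for a in (0, 1):
    for b in (0, 1):
        err = good(a, b) != faulty(a, b)
        print(f"a={a} b={b} good={good(a, b)} faulty={faulty(a, b)} error={err}")
# Only a=1, b=1 produces an error, so the fault is detected
# exactly by the test vector (a, b) = (1, 1).
```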
Fault diagnosis: The process of locating the fault (and ultimately the defect) responsible for an observed error, so that the failing part or process step can be identified and corrected.
3a) What is a fault model? What are the characteristics of a good fault model? Why is the
stuck-at model widely accepted?
Ans:
Fault model-Def-----------1mark
Characteristics of a good fault model-----1mark
Reasons for the wide acceptance of the stuck-at model --- 2 marks

3b) Classify different types of testing based on their type and attribute. (4 Marks)
Ans:
Types of Testing
VLSI testing can be classified into four types depending upon the specific purpose it
accomplishes.
Characterization
Also known as design debug or verification testing, this form of testing is performed on a
new design before it is sent to production. The purpose is to verify that the design is correct and
the device will meet all specifications. Functional tests are run and comprehensive AC and DC
measurements are made. Probing of internal nodes of the chip, commonly not done in production
testing, may also be required during characterization. Use of specialized tools such as scanning
electron microscopes (SEM) and electron beam testers, and techniques such as artificial
intelligence (AI) and expert systems, can be effective. A characterization test determines the
exact limits of device operating values.
Production
Every fabricated chip is subjected to production tests, which are less comprehensive than
characterization tests yet they must enforce the quality requirements by determining whether the
device meets specifications. The vectors may not cover all possible functions and data patterns
but must have a high coverage of modeled faults. The main driver is cost, since every device
must be tested. Test time (and therefore cost) must be absolutely minimized. Fault diagnosis is
not attempted and only a go/no-go decision is made. Production tests are typically short but

verify all relevant specifications of the device. It is an outgoing inspection test of each device,
and is not repetitive.
Burn-in
Devices that pass production tests are not all identical. When put to actual use, some will
fail very quickly while others will function for a long time. Burn-in ensures the reliability of tested
devices by testing, either continuously or periodically, over a long period of time, and by causing
the bad devices to actually fail. Correlation studies show that the occurrence of potential failures
can be accelerated at elevated temperatures. Briefly, two types of failures are isolated by burn-in:
Infant mortality failures, often caused by a combination of sensitive design and process variation,
may be screened out by a short-term burn-in (10-30 hours) in a normal or slightly accelerated
working environment. Freak failures, i.e., the devices having the same failure mechanisms as the
reliable devices, require a long burn-in time (100-1,000 hours) in an accelerated environment.
During burn-in, we subject the chips to a combination of production tests, high temperature, and
over-voltage power supply.
Incoming Inspection
System manufacturers perform incoming inspection on the purchased devices before
integrating them into the system. Depending upon the context, this testing can be either similar to
production testing, or more comprehensive than production testing, or even tuned to the specific
system's application. Also, the incoming inspection may be done for a random sample, with the
sample size depending on the device quality and the system requirement. The most important
purpose of this testing is to avoid placing a defective device in a system assembly where the cost
of diagnosis may far exceed the cost of incoming inspection.
Types of Tests. Actual test selection depends upon the manufacturing level (processing, wafer,
or package) being tested. Although some testing is done during device fabrication to assess the
integrity of the process itself, most device testing is performed after the wafers have been
fabricated. The first test, known as wafer sort or probe, differentiates potentially good devices
from defective ones. After this, the wafer is scribed and cut, and the potentially good devices are
packaged. Also, during wafer sort, a test site characterization is performed. Specially designed
tests are applied to certain test sites containing specific test patterns. These are designed to
characterize the processing technology through measurement of parameters such as gate
threshold, polysilicon field threshold, bypass, metal field threshold, poly and metal sheet
resistances, contact resistance, etc.
In general, each chip is subjected to two types of tests:
(1) Parametric Tests. DC parametric tests include shorts test, opens test, maximum current test,
leakage test, output drive current test, and threshold levels test. AC parametric tests include
propagation delay test, setup and hold test, functional speed test, access time test, refresh and
pause time test, and rise and fall time test. These tests are usually technology-dependent. CMOS
voltage output measurements are done with no load, while TTL devices require a current load.
(2) Functional Tests. These consist of the input vectors and the corresponding responses. They
check for proper operation of a verified design by testing the internal chip nodes. Functional tests
cover a very high percentage of modeled (e.g., stuck type) faults in logic circuits and their
generation is the main topic of this tutorial. Often, functional vectors are understood as
verification vectors, which are used to verify whether the hardware actually matches its
specification. However, in the ATE world, any vectors applied are understood to be functional
fault-coverage vectors applied during manufacturing test. These two types of functional tests may
or may not be the same.

4a) Discuss the applications of fault simulation.


Ans:
Fault simulation--Def---1 mark
Applications- at least 5-----3 marks

4b) Explain the fault simulation process.


Ans:
Block diagram----2 marks
Explanation of Process---2 marks

5) An ATPG system is used to generate tests for nine faults f1, f2, f3, f4, f5, f6, f7, f8, f9 in a
combinational circuit. It generates six tests t1, t2, t3, t4, t5, t6 to detect the first eight faults and
identifies the ninth fault, f9, as redundant. Next, a fault simulator is used to determine all faults
detected by each test (without fault dropping), and it finds:
Test t1 detects faults f3 and f5.
Test t2 detects faults f2 and f7.
Test t3 detects faults f2, f3 and f7.
Test t4 detects faults f1, f2 and f7.
Test t5 detects faults f4 and f6.
Test t6 detects faults f1, f4, f6 and f8.
a) Fault dropping method ------ 2 marks
b) What is the compact test set obtained by the above method? List the size of the test set and the tests.
Ans: Compact test-{t1, t3, t6}---1mark
Size of test set---3---1mark
Tests-----t1, t3, t6---1mark
c) What is the fault coverage of the compacted test set?
Ans: Faults f1-f8 are detected, i.e., fault coverage = 8/9 ≈ 89% (100% of the detectable faults, since f9 is redundant) --- 2 marks
d) How do you improve the speed of testing?
Explanation-------1 mark
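The compact set {t1, t3, t6} can be reproduced with a greedy set-cover pass over the fault-detection table. The sketch below is an illustration only (the greedy selection is an assumed method, not necessarily the one intended by the marking scheme):

```python
# Greedy set-cover sketch for static test compaction.
detects = {
    "t1": {"f3", "f5"},
    "t2": {"f2", "f7"},
    "t3": {"f2", "f3", "f7"},
    "t4": {"f1", "f2", "f7"},
    "t5": {"f4", "f6"},
    "t6": {"f1", "f4", "f6", "f8"},
}
uncovered = set().union(*detects.values())  # f1..f8 (f9 is redundant)
compact = []
while uncovered:
    # Pick the test covering the most still-undetected faults.
    best = max(detects, key=lambda t: len(detects[t] & uncovered))
    compact.append(best)
    uncovered -= detects[best]
print(sorted(compact))              # ['t1', 't3', 't6']
print(len(compact))                 # 3
coverage = 8 / 9                    # 8 of 9 faults detected; f9 redundant
print(f"fault coverage = {coverage:.1%}")
```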

6a) How do you test a sequential circuit? (4 Marks)


Ans: Explanation-------4Marks

6b) Explain the 5-valued algebra and the 9-valued algebra for sequential circuit
testing. (4 Marks)
Ans: 5-valued algebra: D-calculus explanation ----- 2 marks
9-valued algebra: D-calculus explanation ----- 2 marks

7) Problem --- solution (8 Marks)

8a) Draw and explain the basic architecture of BIST. (4 Marks)
Ans:
BIST Architecture: (2 Marks)

The above figure shows the BIST system hierarchy and all three levels of packaging
mentioned earlier. The system has several PCBs, each of which, in turn, has multiple chips. The
system Test Controller can activate self-test simultaneously on all PCBs. Each Test Controller on
each PCB can activate self-test on all chips on the PCB. The Test Controller on a chip executes
self-test for that chip, and then transmits the result to the PCB Test Controller, which
accumulates test results from all chips on the board and sends the results to the system Test
Controller. The system Test Controller uses all of these results to isolate faulty chips and boards.
System diagnosis is effective only if the self-test procedures are thorough. For BIST, fault
coverage is a major issue. Other issues are chip area overhead, its impact on chip yield, the cost
of the additional chip pins required for test, the performance penalty in terms of added circuit
delay, and extra power requirements. For BIST, the test engineer frequently, but not always,
modifies the chip logic to make all latches and flip-flops controllable, perhaps by using the scan
technique.
BIST Implementations (2 Marks)
The following figure shows typical BIST hardware in more detail. Note that the wires
from the PIs to the Input MUX and the wires from the circuit outputs to the primary outputs (POs)
cannot be tested by BIST. These wires, instead, require another testing method, such as an external
ATE or JTAG Boundary Scan hardware. The following figure also shows how a comparator compares the
signature produced by the data compacter with a reference signature stored in a ROM during
BIST.

This comparator and ROM hardware can frequently be implemented with a single logic
gate with 32 or fewer inputs. This is acceptable only when the comparison can occur at
extremely low rates of circuit operation, since this logic gate is exceedingly slow.
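A tiny sketch (an illustration; the reference value is hypothetical) of this signature comparison: the computed signature is XORed bit-wise with the stored reference, and any mismatch flags a faulty circuit.

```python
# Signature comparison sketch: a 32-bit signature vs. a ROM reference.
REFERENCE = 0x5A5A_3C3C          # hypothetical good-circuit signature

def bist_pass(signature: int) -> bool:
    # Equality check; in hardware this reduces to per-bit XORs feeding
    # one wide NOR gate (mismatch if any XOR output is 1).
    return (signature ^ REFERENCE) == 0

print(bist_pass(0x5A5A_3C3C))    # True  -> chip passes
print(bist_pass(0x5A5A_3C3D))    # False -> chip fails
```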

8b) Draw the block diagram of BILBO and explain its operation. (4 Marks)
Ans:
BILBO is a rather unfortunate acronym for Built-In Logic Block Observation; it
implements the signature analysis idea in practice. The memory elements in the system are
connected in a Scan Path as shown in the following figure.
Each BILBO can act as:
A Scan Path shift register.
An LFSR generating random patterns.
A multi-input signature analyzer.
Provided that the start state is known and a known number of clock cycles is applied,
the final state will be a known pattern.
Test Procedure with BILBOs
Testing using BILBOs is carried out as follows.
For logic block 1:
1. BILBO 1 is initialized to a non-zero initial state by shifting in a pattern through Scan_In.
2. BILBO 1 is configured as a PRBS generator and BILBO 2 as a multi-input signature analyzer.
3. N clock pulses are applied.
4. BILBO 2 is configured as a Scan Path, and the result is shifted out through Scan_Out.
To test logic block 2, BILBO 2 becomes the sequence generator, and BILBO 1 the
signature analyzer.
The quality of the tests generated (fault coverage) must be determined by prior fault
simulation. The final signature may be determined by checking a known good part (dangerous!)
or by logic simulation.
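A behavioral sketch of the two BILBO test modes follows (an illustration: the 4-bit width, the feedback taps, and the stand-in circuit under test are all hypothetical). It shows an LFSR producing pseudo-random patterns and a MISR compacting the circuit responses into a signature.

```python
# Behavioral sketch of BILBO test modes (4-bit registers; taps chosen
# from the primitive polynomial x^4 + x^3 + 1, an assumption here).
TAPS = (3, 2)                    # feedback from bit positions 3 and 2

def lfsr_step(state):
    """PRBS mode: autonomous LFSR, feedback bit shifted in."""
    fb = (state >> TAPS[0] & 1) ^ (state >> TAPS[1] & 1)
    return ((state << 1) | fb) & 0xF

def misr_step(state, response):
    """Signature mode: LFSR step XORed with the 4-bit circuit response."""
    return lfsr_step(state) ^ (response & 0xF)

# BILBO 1 generates patterns; the CUT here is a stand-in function.
cut = lambda x: (x + 3) & 0xF         # hypothetical circuit under test
pattern, signature = 0b0001, 0b0000   # non-zero seed for the PRBS
for _ in range(15):                   # N clock pulses
    pattern = lfsr_step(pattern)
    signature = misr_step(signature, cut(pattern))
print(f"signature = {signature:04b}")  # compared against a known-good value
```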

9a) Explain the Circular Self-Test Path system. (4 Marks)

Ans: Circular Self-Test Path System
The above figure shows the circular self-test path (CSTP) BIST configuration. In this
testing system, the hardware pattern generator and response compacter are combined into a
single hardware device, which is the entire circular flip-flop path. Therefore, this is a non-linear
mathematical BIST system, so superposition no longer holds. Some of the flip-flops are

converted into self-test cells (see Figure), where in TEST mode, the cell XORs its D input with
the state from the immediately prior flip-flop in the CSTP chain. After initialization of the
registers, in the TEST mode, the circuit runs for a number of clock cycles and then the signature
is read out of the circular register path. The entire path can be regarded as a MISR with
characteristic polynomial f(x) = x^n + 1. However, the non-linear nature of this system makes it
difficult to compute the fault coverage.
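A minimal sketch of the CSTP update rule described above (the 4-bit ring and the stand-in circuit are illustrative assumptions): in TEST mode each self-test cell XORs its D input with the previous flip-flop's state, so the ring acts as both pattern generator and response compacter.

```python
# Circular self-test path sketch: n flip-flops in a ring; cell i captures
# D_input[i] XOR state[i-1] each clock (indices wrap around the circle).
def cstp_step(state, d_inputs):
    n = len(state)
    return [d_inputs[i] ^ state[(i - 1) % n] for i in range(n)]

# Hypothetical 4-bit example: circuit responses fed back into the ring.
cut = lambda s: [s[0] & s[1], s[1] | s[2], s[2] ^ s[3], s[3]]  # stand-in CUT
state = [1, 0, 0, 0]                       # initialized register state
for _ in range(8):                         # run TEST mode for 8 clocks
    state = cstp_step(state, cut(state))
print("signature:", state)                 # read out as the signature
```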

9b) Write short notes on Signature Analysis. (4 Marks)


Ans:
Signature analysis is a data compression technique: it takes very long sequences of bits
from a unit under test and compresses them into an N-bit signature that represents the
circuit. A good circuit will have a characteristic signature, and a faulty one will deviate from it.
Signature analysis is based on linear feedback shift registers (LFSRs): the
memory elements in the system are reconfigured in test mode to form an LFSR, as shown in the
following figure.

The summation unit (+) performs modulo-2 addition (according to the rules of addition in
GF(2), i.e., XOR gates) on the incoming bit stream and the taps coming back from the LFSR. A bit
stream is fed into the register; after N clock pulses, the register will contain the signature of the
data stream. Hewlett-Packard, among others, makes signature analyzers for this purpose.
Such a machine can trap all 1-bit errors; it is, however, possible that 2 or more errors will
mask each other. The probability of two different data streams yielding the same signature is

P(err) = (2^(n-m) - 1) / (2^n - 1)

where m is the length of the LFSR and n is the length of the sequence. As n tends to infinity,
this tends to

P(err) ≈ 2^(-m)

So by making m large, the probability of a bad sequence being masked is small. Hewlett-Packard
use m = 16, giving P(err) ≈ 1.5E-5, and have not found this error probability to pose a
problem in practice.
LFSRs corresponding to primitive polynomials over GF(2) make good sources of
pseudo-random binary sequences (PRBS). As an example, consider figure 19, where the sequence length
will be 2^16 - 1 distinct patterns (the all-zeros state is not allowed).
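A runnable sketch of this (the tap positions are an assumption: [16, 15, 13, 4] is one published maximal-length tap set for a 16-bit Fibonacci LFSR, not necessarily the polynomial in figure 19), verifying the 2^16 - 1 period:

```python
# 16-bit maximal-length LFSR sketch; taps are an assumed primitive set.
TAPS = (16, 15, 13, 4)           # 1-indexed tap positions

def step(state):
    fb = 0
    for t in TAPS:
        fb ^= (state >> (t - 1)) & 1
    return ((state << 1) | fb) & 0xFFFF

state = seed = 1
period = 0
while True:
    state = step(state)
    period += 1
    if state == seed:
        break
print(period)                    # 65535 == 2**16 - 1 (all-zeros excluded)
```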

To perform in-situ testing of a logic network, we could place one of these registers at its
input, and some signature analysis circuitry at the output. The LFSR generates random binary
sequences which are fed through the network under test, and analyzed by the signature analyzers.

10) Discuss Boundary Scan test instructions. (8 Marks)


Boundary Scan Test Instructions:
The various JTAG TAP Controller test instructions are described below. The IDCODE and USERCODE
instructions, as well as some of the other instructions, can be useful in normal system-mode
operation, and not just in test-mode operation.
SAMPLE/PRELOAD Instruction: The purpose of the SAMPLE/PRELOAD instruction is to obtain a
snapshot of the normal component input and output signals and store them in the first of the two
master-slave flip-flops in the boundary scan ring.
EXTEST Instruction: The purpose of the EXTEST instruction is to test off-chip circuits and
boardlevel interconnections independently of the chip. This is achieved by capturing the signals
coming into the chip in the boundary scan register, and also by driving the signals coming out of
the chip from the boundary scan register.
INTEST Instruction: The purpose of the optional INTEST instruction is to conduct a test of the
on-chip system logic when the chip is assembled onto the PCB/MCM, by the use of
externally-applied test vectors shifted into the chip through the boundary scan register. This instruction
also facilitates shifting the response of the on-chip system logic to the vector out through the
boundary scan register.
RUNBIST Instruction: The purpose of the optional RUNBIST instruction is to issue BIST
commands to a component through the JTAG hardware. The test logic can control the state of the

component output pins, which can be determined by the pin boundary scan cell, or the output pin
can be forced into the high-impedance state.
CLAMP Instruction: The purpose of the optional CLAMP instruction is to force component
output pin signals to be driven by the boundary-scan register. This instruction bypasses the
boundary scan chain between TDI and TDO by using the one-bit bypass register instead. One
may have to reset the on-chip system hardware to prevent circuit damage caused by shorting
zeroes and ones simultaneously onto internal busses after the CLAMP instruction has been used.
IDCODE Instruction: The purpose of the IDCODE instruction is to connect the component
device identification register serially between the TDI and TDO pins in the Shift-DR TAP
Controller state. This allows a board-level test controller or external tester to read out the JEDEC
component ID. This JTAG instruction is required whenever a JEDEC identification register is
included in the chip design.
USERCODE Instruction: The USERCODE instruction is intended for user-programmable
components, such as field-programmable gate arrays (FPGAs) and electrically erasable and
programmable ROMs (EEPROMs). The USERCODE instruction allows an external tester to
determine the user programming of a programmable component.
HIGHZ Instruction: The optional HIGHZ instruction puts all component output pin signals into
the High-impedance (Z) state. This prevents damage to logic on this particular chip and to other
components in the PCB/MCM when the various JTAG test instructions are used.
BYPASS Instruction: The purpose of the BYPASS instruction is to bypass the boundary scan
chain with a one-bit bypass register. This is useful in PCBs/MCMs where all components have
their boundary scan chains connected serially, but only one component is being tested. The
BYPASS instruction makes all other components appear to have a boundary scan register that is
only one bit long.
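A small sketch of why BYPASS matters on a board (the chip names and register lengths are hypothetical): each non-targeted chip contributes only its 1-bit bypass register to the serial chain between TDI and TDO.

```python
# Effective scan-chain length on a board: the targeted chip contributes
# its full boundary register; every other chip in BYPASS contributes a
# single bit (per the IEEE 1149.1 bypass register).
boundary_bits = {"u1": 128, "u2": 64, "u3": 256}   # hypothetical chips

def chain_length(target):
    return sum(bits if chip == target else 1
               for chip, bits in boundary_bits.items())

print(chain_length("u3"))            # 256 + 1 + 1 = 258 bits
print(sum(boundary_bits.values()))   # 448 bits if nothing is bypassed
```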

11) Discuss system configurations with boundary scan. (8 Marks)

System Configuration with Boundary Scan

The above Figure shows an integrated circuit that is compliant with the 1149.1 boundary
scan standard. Note that on each pin of the chip, there is internal hardware that provides a
register at that pin position. The serial connection of these registers around the periphery of the
chip at the pins is known as the boundary register. Input pins can drive the internal system circuit
through this internal pin hardware, or the boundary register cell for the particular input pin can
be loaded by serially shifting a pattern into the boundary register, and the value at that pin can be
used to drive the system circuitry. Similarly, the output of the system circuitry can directly drive
an output pin, or the output of the system circuitry can be caught in the boundary register cell for
that pin, and then serially shifted out of the chip. The TDI pin is the serial input to the boundary
register, and the TDO pin is the serial output from the boundary register. Between TDI and TDO,
a number of registers provided by the boundary scan hardware can be connected, depending on
the current mode of the test hardware.
TAP Controller and Port:
Several boundary scan data registers, including the boundary register, the Device ID
register, and the bypass register, can be connected serially between TDI and TDO. Also, the
instruction register can be connected serially between TDI and TDO. The Device ID register
provides the device identification. The bypass register bypasses the boundary register for this
component. This is useful when all boundary registers of all components on the PCB are chained
together into one long shift register, and it is desired to reduce the length of the register by
ignoring hardware on components that are not involved in the current test.
The instruction register can be loaded with an instruction, which enables various different
operation modes of the test hardware. Several instruction modes are mandatory, others are
optional, and user-defined instructions can be added, subject to the constraints of the JTAG
standard. The TCK pin provides the test clock for the boundary scan hardware, and must be
capable of operating at an independent clock rate from the system clock rate, asynchronously
from the system circuitry. The TMS pin provides the test mode select signal, which causes the
testing hardware to enter various testing modes. Finally, the optional TRST* signal provides an
asynchronous reset capability for the boundary scan hardware. The TDI, TDO, TCK, TMS, and
TRST* pins form the Test Access Port (TAP), and may not be shared with any other system
function.
