
Qualcomm Interview Questions

1. Coverage Improvement.
2. AU fault list and explain what kind of faults will be seen in the AU fault list.
Ans. Under the AU (ATPG untestable) fault list we find:
a) Black boxes
b) Pin constraints
c) Tied cells
d) Unclassified

3. Difference between Stuck-at and At-speed testing.


Ans. Stuck-at testing works at the scan frequency (slow clock), whereas at-speed testing works at functional frequencies.
Stuck-at: A stuck-at fault is a particular fault model used by fault simulators and automatic test pattern generation (ATPG) tools to mimic a manufacturing defect within an integrated circuit. Individual signals and pins are assumed to be stuck at logical '1', '0' or 'X'. For example, an output is tied to a logical 1 state during test generation to ensure that a manufacturing defect with that type of behaviour can be found with a specific test pattern. Likewise, the output could be tied to a logical 0 to model the behaviour of a defective circuit that cannot switch its output pin.
Transition fault model: This can be considered a stuck-at fault model within a time window. The value of the node changes, but not within the time at which it should change. To detect such faults we use two vectors per pattern: one to launch the transition and the other to capture it. The time between launch and capture is supposed to equal the time at which the chip would normally function, which is why this is also called an at-speed test.
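To make the stuck-at model concrete, here is a small, purely illustrative Python sketch (not from the original answer; the toy circuit and net names are invented). It injects a stuck-at-0 fault on an internal net of a tiny AND-OR circuit and searches for an input pattern whose output differs from the fault-free circuit, i.e. a pattern that detects the fault.

Code:
from itertools import product

def circuit(a, b, c, stuck=None):
    """Toy circuit: y = (a AND b) OR c. 'stuck' optionally forces the
    internal net n1 = a AND b to a fixed value (the stuck-at fault)."""
    n1 = a & b
    if stuck is not None:        # inject the stuck-at fault on n1
        n1 = stuck
    return n1 | c

# Exhaustively look for a test pattern that detects "n1 stuck-at-0":
# the faulty output must differ from the fault-free output.
for a, b, c in product([0, 1], repeat=3):
    good = circuit(a, b, c)
    bad = circuit(a, b, c, stuck=0)
    if good != bad:
        print(f"pattern a={a} b={b} c={c} detects n1 stuck-at-0")

The only detecting pattern is a=1, b=1, c=0: it controls n1 to the value opposite the fault and keeps c=0 so that the difference propagates to the output, which is exactly the controllability/observability requirement discussed in question 7.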
4. How many capture pulses are needed in at-speed and in stuck-at testing?
5. Difference between LOC and LOS: which method is better, and why not the other?
Ans. For Q4 and Q5:

An AC scan test is a 2-pattern test: the first pattern launches a transition at the source flip-flop(s), and the second captures the transition at the destination flip-flop(s). Hence, we need two clock pulses for each AC test.
There are two ways to achieve this. The first way is to add another clock pulse during
scan capture (while your scan enable is inactive). This is called Launch on Capture
(LOC).

Code:
Launch on Capture
                              2 at-speed pulses
                                  L   C
          ___     ___             _   _              ___     ___
clock  __|   |___|   |___________| |_| |____________|   |___|   |___
         ________________                   ________________________
scan_en                  |_________________|
The other way is to rely on the last shift clock (while scan enable is still active) to launch
the transition (1st pattern), and use 1 clock pulse in the capture cycle as the 2nd pattern
to capture the transition. This is called Launch on Shift (LOS).

Code:
Launch on Shift
                   L      C
          ___     ___     _                     ___     ___
clock  __|   |___|   |___| |___________________|   |___|   |___
         ______________          ______________________________
scan_en                |________|

In general, to make sure the transition reaches the capture flip-flop in time, the delay
between the launch and capture cycles should be your cycle time (or the actual path
delay, for those who run transition tests faster than at-speed).

As you can see, to run LOS at-speed, your scan enable must also switch at-speed. This is
usually problematic in layout, since you need to either treat the scan enable signal as
a clock net (requiring clock tree synthesis with accurate delays/skews), or pipeline the scan
enable signal, which increases the area/complexity of your scan-enable network.

I have seen publications that claim either LOS gives you higher transition fault coverage
than LOC, or vice versa. I believe this is design dependent, and it depends on the
complexity of the logic cone(s) driving the source flip-flop(s). If the logic cone(s) are
simple, it gives ATPG a greater degree of freedom to generate the appropriate 2nd
pattern in LOC. Notice that the 2nd pattern in LOS is always 1 bit shifted from the 1st
pattern. On the other hand, if the cone is complex, it may be hard to generate the
appropriate 2nd pattern through the logic, making LOS coverage numbers more
attractive.
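One way to see the "one-bit shift" constraint concretely: in LOS the second (launch) state of the chain is literally the first state shifted by one position, whereas in LOC the second state is whatever the functional logic produces from the first, giving ATPG more freedom. A tiny illustrative Python fragment follows (all values and the next-state function are hypothetical):

Code:
# First pattern: the state scanned into a 5-bit chain.
first_pattern = [1, 0, 1, 1, 0]
scan_in_bit = 0                 # the bit entering the chain on the last shift

# LOS: the launch state is forced to be the first pattern shifted by one bit.
los_second_pattern = [scan_in_bit] + first_pattern[:-1]
print(los_second_pattern)       # [0, 1, 0, 1, 1] -- ATPG has little freedom here

# LOC: the launch state is the functional next-state of the first pattern,
# so ATPG can shape it through the logic cones (placeholder function below).
def next_state(state):          # stand-in for the design's combinational logic
    return [state[-1] ^ state[0]] + state[:-1]

print(next_state(first_pattern))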

6. What is a lockup latch and where is it used?


Ans.

Lockup latch – principle, application and timing


What are lock-up latches: A lock-up latch is an important element in scan-based designs, especially
for hold timing closure of shift modes. Lock-up latches are necessary to avoid skew problems during
the shift phase of scan-based testing. A lock-up latch is nothing more than a transparent latch used
intelligently in places where the clock skew is very large and meeting hold timing is a challenge due
to a large uncommon clock path. That is why lockup latches are used to connect two flops in a scan
chain that have excessive clock skew or uncommon clock paths, as the probability of hold failure is high in
such cases. For instance, the launching and capturing flops may belong to two different domains (as
shown in the figure below). Functionally, they might not be interacting, so the clocks of these two
domains will not be balanced and will have a large uncommon path. But in scan-shift mode, these
flops do interact, shifting data in and out. Had there been no lockup latches, it would have been very
difficult for the STA engineer to close timing on a scan chain crossing domains. Also, the probability of chip
failure would have been high, as the large uncommon path between the clocks of the two flops
leads to large on-chip variations. That is why lockup latches can be referred to as the soul mate of
scan-based designs.

Figure 1 : Lockup latches - the soul mate of scan-based designs


Where to use a lock-up latch: As mentioned above, a lock-up latch is used where there is a high
probability of hold failure in scan-shift modes. So, possible scenarios where lockup latches are to be
inserted are:

• Scan chains from different clock domains: In this case, since the two domains do not
interact functionally, both the clock skew and the uncommon clock path will be large.
• Flops within the same domain, but at remote places: Flops within a scan chain which are at
remote places are likely to have a larger uncommon clock path.

In both of the above cases, there is a great chance that the skew between the launch and
capture clocks will be high; either the launch or the capture clock may have the greater
latency. If the capture clock has greater latency than the launch clock, the hold check will be as
shown in the timing diagram in figure 3. If the skew is large, it will be a tough task to meet
hold timing without lockup latches.

Figure 2: A path crossing from domain 1 to domain 2 (scope for a lock-up latch insertion)

Figure 3: Timing diagram showing setup and hold checks for path crossing from domain 1 to domain
2
Positive or negative level latch? It depends on the path into which you are inserting the lock-up latch. Since
lock-up latches are inserted for hold timing, they are not needed where the path starts at a positive
edge-triggered flop and ends at a negative edge-triggered flop. (It is to be noted that you will never
find scan paths originating at a positive edge-triggered flop and ending at a negative edge-triggered flop,
due to DFT-specific reasons.) Similarly, they are not needed where the path starts at a negative edge-
triggered flop and ends at a positive edge-triggered flop. For the remaining two kinds of flop-to-flop paths,
lockup latches are required. The polarity of the lockup latch needs to be such that it remains open
during the inactive phase of the clock (a small behavioral sketch follows this list). Hence,

• For flops triggering on the positive edge of the clock, you need the latch to be transparent when
the clock is low (negative level-sensitive lockup latch)
• For flops triggering on the negative edge of the clock, you need the latch to be transparent when
the clock is high (positive level-sensitive lockup latch)
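As a rough illustration only (not part of the original answer), the Python sketch below models a negative level-sensitive lockup latch sitting between two positive edge-triggered scan flops. Because the latch is opaque while the clock is high, data launched at a rising edge reaches the capturing flop's input only half a cycle later, which is what buys the hold margin.

Code:
class PosEdgeFlop:
    """Behavioral model of a positive edge-triggered flip-flop."""
    def __init__(self):
        self.q = 0
    def tick(self, clk_prev, clk_now, d):
        if clk_prev == 0 and clk_now == 1:    # rising edge
            self.q = d
        return self.q

class NegLevelLatch:
    """Negative level-sensitive (transparent-low) lockup latch."""
    def __init__(self):
        self.q = 0
    def eval(self, clk, d):
        if clk == 0:                          # transparent while the clock is low
            self.q = d
        return self.q                         # holds its value while the clock is high

launch, latch, capture = PosEdgeFlop(), NegLevelLatch(), PosEdgeFlop()
scan_in = 1
clk_seq = [0, 1, 0, 1, 0]
for prev, now in zip(clk_seq, clk_seq[1:]):
    q1 = launch.tick(prev, now, scan_in)      # launch flop takes the new data at the posedge
    lq = latch.eval(now, q1)                  # latch passes it on only once the clock goes low
    q2 = capture.tick(prev, now, lq)          # capture flop still sees the old value at that posedge
    print(f"clk={now}  launch.q={q1}  latch.q={lq}  capture.q={q2}")

Without the latch, whether the capture flop sees the old or the new value at the shared edge depends on clock skew; the latch removes that race by construction.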

Who inserts a lock-up latch: These days, tools exist that automatically add lockup latches where a
scan chain crosses domains. However, where a lockup latch is to be inserted in an intra-
domain scan chain (i.e. for flops having an uncommon clock path), it has to be inserted during physical
implementation itself, as physical information is not available during scan chain implementation (scan
chain implementation is carried out at the synthesis stage itself).

Which clock should be connected to the lock-up latch: There are two possible ways in which we can
connect the clock pin of the inserted lockup latch: it can have the same clock as either the launching flop or the
capturing flop. Connecting the clock pin of the lockup latch to the clock of the capturing flop will not solve the
problem, as discussed below.
• Lock-up latch and capturing flop having the same clock (will not solve the problem): In this
case, the setup and hold checks will be as shown in figure 5. As is apparent from the waveforms,
the hold check between the domain-1 flop and the lockup latch is still the same as it was between the domain-1
flop and the domain-2 flop before. So, this is not the correct way to insert a lockup latch.
Figure 4: Lock-up latch clock pin connected to clock of capturing flop

Figure 5: Timing diagrams for figure 4

• Lock-up latch and launching flop having the same clock: As shown in figure 7, connecting
the lockup latch to the launch flop's clock reduces the skew between the domain-1 flop and the
lockup latch. This hold check can be easily met, as both the skew and the uncommon clock path are small. The
hold check between the lockup latch and the domain-2 flop is already relaxed, as it is a half-cycle check
(a rough slack calculation is sketched after the figures below). So, we can say that the correct way to insert a
lockup latch is to insert it close to the launching flop and connect the launch-domain clock to its clock pin.
Figure 6: Lock-up latch clock pin connected to clock of launch flop

Figure 7: Waveforms for figure 6
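To put rough numbers on the half-cycle relaxation mentioned above, here is an illustrative back-of-the-envelope calculation in Python. All values are made up for the example; the point is only that the latch's half-cycle offset turns a negative hold slack into a comfortably positive one.

Code:
period = 10.0            # ns, scan-shift clock period (example value)
launch_latency = 2.0     # ns, clock insertion delay to the domain-1 (launch) flop
capture_latency = 4.5    # ns, clock insertion delay to the domain-2 (capture) flop
data_delay = 0.8         # ns, clk->Q plus scan routing delay
t_hold = 0.2             # ns, hold requirement at the capturing element

# Without a lockup latch: a same-edge hold check across the two domains. The new
# data must not arrive before (capture edge + hold), so the large skew between
# the two clock trees eats directly into the slack.
slack_plain = (launch_latency + data_delay) - (capture_latency + t_hold)
print(f"hold slack, flop -> flop : {slack_plain:+.2f} ns")    # negative: violation

# With a negative-level lockup latch on the launch-domain clock, the new data
# appears at the latch output only when the launch clock falls, roughly half a
# cycle later, so the check at the domain-2 flop gains period/2 of margin.
slack_latched = (launch_latency + period / 2 + data_delay) - (capture_latency + t_hold)
print(f"hold slack, latch -> flop: {slack_latched:+.2f} ns")

The hold check from the domain-1 flop to the latch itself is easy to meet because both share the launch clock, which is the argument made in the bullet above.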

Why don’t we just add buffers: If the clock skew is large, it can take a large number of buffers to
meet the hold requirement. In the normal scenario, the number of buffers becomes so large that it
becomes a concern for power and area. Also, since the skew/uncommon clock path is large, the variation
due to OCV will be high, so a bigger hold margin is recommended while signing off
timing. A lock-up latch provides an area- and power-efficient solution for what a large number of buffers
together would not be able to achieve.

Advantages of inserting lockup latches:


• Inserting lock-up latches helps in easier hold timing closure for scan-shift mode
• Robust method of hold timing closure where uncommon path is high between launch and
capture flops
• Power efficient and area efficient
• It improves yield as it enables the device to handle more variations
7. If you have controllability but no observability at a node, how do you resolve it? (And vice
versa.)
Ans.

What DFT is meant for: Design for Testability (DFT) is basically meant to
provide a method for testing each and every node in the design for structural
and other faults. The higher the number of nodes that can be tested through the
targeted number of patterns, the greater the test coverage of the design. For
this to be possible, every node in the design has to be controllable and
observable. But what are controllability and observability? We can consider
these the two basic principles of DFT, which are to be followed in order to
have the maximum possible test coverage with the minimum number of
patterns. Let us discuss these.
Controllability: By controllability, from a DFT point of view, we mean whether both ‘0’
and ‘1’ can be propagated to each and every node within the target
patterns. A point is said to be controllable if both ‘0’ and ‘1’ can be propagated
to it through scan patterns.
What if a node is not controllable: To achieve DFT coverage for a node, it
needs to be controllable. If a node is not controllable, it cannot be tested. For
production devices, it is necessary to have a certain minimum percentage
of nodes controllable to ensure reliable devices for the customers. So, fewer
controllable nodes mean less DFT coverage and, hence, less reliable devices.

Figure 1 : Figure showing a node that is not controllable within targeted number of
patterns
Inserting control points (enhancing controllability): A node can be made
controllable by inserting control points. If the test coverage target is not being
met within the target number of patterns, control points are inserted to increase
the test coverage. A control point is an alternate path supplied to a node to let
a particular value propagate to it. As shown in figure 1, NODEA is not
controllable within the targeted number of patterns. We need to insert a control
point in order to increase its controllability. As shown in figure 2, an alternate
path can be provided to NODEA, controlled by an enable signal. Adding a control
point at NODEA not only improves the controllability of NODEA, it also
improves the controllability of the fanout cone of NODEA.

Figure 2 : Inserting control point to enhance controllability

There can be three types of control points:

• ‘0’ control points – Suppose we were earlier able to propagate only ‘1’ to
NODEA and we just need to let ‘0’ propagate. In that case, we can add a ‘0’
control point.
• ‘1’ control points – Suppose we were earlier able to propagate only ‘0’ to
NODEA and we just need to let ‘1’ propagate. In that case, we can add a ‘1’
control point.
• Flop control points – In order to propagate both ‘0’ and ‘1’ through
control points, flops need to be inserted.

In the case of ‘0’ and ‘1’ control points, the control input of the mux can be tied to ‘0’
or ‘1’, so the area overhead is only a mux. On the other hand, flop control
points come with the overhead of an extra flop.

Observability: By observability, we mean our ability to measure the state of a
logic signal. When we say that a node is observable, we mean that the value
at the node can be shifted out through scan patterns and observed at the
scan-out ports.

What if a node is not observable: If a node is not observable, it means we
cannot see the value at that node through the targeted number of patterns. To
have scan coverage for the node, it is necessary for it to be observable. As in
the case of fewer controllable nodes, fewer observable nodes mean less scan
coverage, and hence less reliable devices. There are many situations in which
we cannot observe a node. The most common is a node connected to the
inputs of an analog block that has no scan chain inside. Since analog blocks
do not have scan chains, the input nodes cannot be observed in the normal
scenario, which renders the entire fanin logic unobserved.

Figure 3: Figure showing a non-observable node

How a node can be made observable: To improve the observability of nodes
in the design, observe points are inserted. An observe point is a flip-flop
inserted in the design that observes the value at one or more points. It has no
functional purpose. It is inserted into the scan chain and shifts the observed
data to the scan-out ports through scan shifting. A flip-flop inserted as an observe
point can be used to observe a large number of hard-to-detect faults, which
results in a significant reduction in pattern count (a conceptual sketch follows
the figure below).

Figure 4 : Inserting observe point to enhance observability
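The fragment below is only a conceptual illustration (the structure and names are my own, not from the text): a mux-based control point that overrides a node in test mode, and an observe point modeled as an extra scan flop that simply samples a hard-to-observe node so its value can be shifted out.

Code:
def control_point(functional_value, test_value, test_enable):
    """Mux-style control point: in test mode the node is driven from a
    controllable source instead of its normal functional driver."""
    return test_value if test_enable else functional_value

class ObservePoint:
    """Extra scan flop whose only job is to sample an otherwise
    unobservable node; its content is shifted out with the scan chain."""
    def __init__(self):
        self.q = 0
    def capture(self, node_value):
        self.q = node_value                  # sampled on the capture pulse
    def shift(self, scan_in):
        out, self.q = self.q, scan_in        # shifted out during scan shift
        return out

# Tiny usage example: force NODEA to 0 in test mode and observe it.
node_a = control_point(functional_value=1, test_value=0, test_enable=True)
op = ObservePoint()
op.capture(node_a)
print(op.shift(scan_in=0))                   # -> 0, the captured value heads to scan-out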


8. EDT Compressor and Explain every Block.
Ans.
9. Wrapper Cells and types of Wrapper cells.
Ans.

Test Core Wrapper


For DFT, each core can be tested separately before SoC-level
integration. During integration, when the cores are configured in
internal test mode, each core’s internal logic can be tested separately or
in a group. When configured in external test mode, the logic
surrounding the core can be tested. The main concern in doing so is to
divide the SoC test into different configurations, to greatly reduce the
pattern generation effort and, in turn, the test time.

Wrapper Cell Structure

Test wrapper modes


Inward-facing or INTEST mode

In INTEST mode, we test the partition by driving its inputs from the input
wrapper cells, and the outputs are captured through the output wrapper
cells. This is accomplished by disabling the scan chains outside the core,
and it facilitates isolated testing of the partitioned core with ATPG.
During capture, the input wrapper cells are shifted with a separate input
wrapper scan-enable signal, which avoids capturing X’s from outside
the partition, whereas the output wrapper cells capture the internal state
of the partition.
(Figure [2]: Inward Facing (Intest) Mode)

Outward-facing or EXTEST mode

In EXTEST mode, the wrappers are enabled and configured to drive and
capture data outside of the design. The internal chains are essentially
disabled by bypassing them in this mode, which also reduces ATPG test
time. We can use this mode to test the top-level logic between the
partition and unwrapped logic. During the capture stage, values arriving
from outside the partition are captured by the input wrapper cells, and
the output wrapper cells shift during capture to avoid capturing X’s
from inside the partition’s undriven internal scan chains (a simplified
wrapper cell model is sketched after the figure below).

      
(Figure [3]: Outward Facing (Extest) Mode)
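As a loose illustration (my own simplification, not taken from the text), a wrapper input cell can be thought of as a scan flop plus a mux that either passes the functional signal through or substitutes the value held in the wrapper chain, depending on the wrapper mode:

Code:
class WrapperInputCell:
    """Simplified behavioral model of a dedicated wrapper input cell."""
    def __init__(self):
        self.q = 0                           # value held in the wrapper scan chain

    def to_core(self, func_in, intest):
        # In INTEST the core input is driven from the wrapper flop
        # (controlled by ATPG); otherwise the functional value passes through.
        return self.q if intest else func_in

    def capture(self, func_in, extest):
        # In EXTEST the cell captures the value arriving from outside the
        # core, so the surrounding (top-level) logic can be observed.
        if extest:
            self.q = func_in

    def shift(self, scan_in):
        out, self.q = self.q, scan_in        # normal shift through the wrapper chain
        return out

# Example: in INTEST the core sees the wrapper flop's value, not the SoC signal.
cell = WrapperInputCell()
cell.shift(scan_in=1)                        # load the wrapper chain with a test value
print(cell.to_core(func_in=0, intest=True))  # -> 1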

10. How do you toggle the reset pin?


Ans. Keep a mux between the reset pin and the scan flip-flop: the asynchronous (functional)
reset drives one mux input and a synchronized/controllable reset drives the other; the mux
output goes to the SFF, and the select is TM = scan_en. In test mode the reset is therefore
controllable and can be toggled by the patterns.
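A minimal sketch of that reset bypass mux, purely illustrative:

Code:
def flop_reset(async_reset, controllable_reset, test_mode):
    """Reset bypass mux: in test mode (TM = scan_en) the flop's reset comes
    from a controllable source so the patterns can toggle it; functionally
    the original asynchronous reset is used."""
    return controllable_reset if test_mode else async_reset

print(flop_reset(async_reset=0, controllable_reset=1, test_mode=True))   # -> 1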
11. What kinds of issues do you face if two different clock domains are given to one
pattern?
Ans.
a) Power issues
b) Timing issues
a. Power: This is the biggest concern nowadays. As the requirement to
decrease power increases, DFT gets more complex. It is part of DFT to
make sure that DFT coverage is not impacted by all the logic added to
save power, the different power domains, and the logic that retains
values during power down. Apart from that, power consumption during
ATPG can also lead to the generation of patterns for each power
domain/clock domain separately.
b. Hold timing issues: Simply put, data should be held stable for some time (the hold
time) after the clock edge, so data changing within the hold window can cause a
violation. In general, hold time is fixed during back-end work (during PnR)
while building the clock tree. If you are a front-end designer, concentrate on
fixing setup violations rather than hold violations.
12. What is CGC?
Ans. CGC stands for clock gating cell. Clock gating is one of the power-saving techniques
used in many synchronous circuits, including the Pentium 4 processor. To save power, clock
gating refers to adding additional logic to a circuit to prune the clock tree, thus
disabling portions of the circuitry where the flip-flops do not change state. Although
asynchronous circuits by definition do not have a "clock", the term "perfect clock
gating" is used to illustrate how various clock gating techniques are simply
approximations of the data-dependent behaviour exhibited by asynchronous
circuitry, and that as the granularity at which you gate the clock of a
synchronous circuit approaches zero, the power consumption of that circuit
approaches that of an asynchronous circuit.
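A common implementation is the latch-based integrated clock gating cell: the enable is sampled by a transparent-low latch so that it cannot glitch or truncate the gated clock. The sketch below is a generic behavioral model (my own, not tied to any particular library cell):

Code:
class ClockGatingCell:
    """Latch-based ICG: gated_clk = clk AND (enable latched while clk is low)."""
    def __init__(self):
        self.latched_en = 0

    def eval(self, clk, enable):
        if clk == 0:                      # enable latch is transparent while the clock is low
            self.latched_en = enable
        return clk & self.latched_en      # AND gate produces the gated clock

# The enable dropping while the clock is high cannot truncate the pulse:
cgc = ClockGatingCell()
for clk, en in [(0, 1), (1, 1), (1, 0), (0, 0), (1, 0)]:
    print(f"clk={clk} en={en} gated_clk={cgc.eval(clk, en)}")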
