
Jump Threading Optimization

1.
At the highest level, jump threading's major goal is to reduce the number of
dynamically executed jumps on different paths through the program's control flow
graph. Often this results in improved performance due to the reduction of
conditionals, which in turn enables further optimizations. Typically, for every runtime
branch eliminated by jump threading, two or three other runtime instructions are
eliminated.
The classic jump threading example is a simple jump-to-jump optimization. For instance,
it can transform the following:

if (a > 5)
  goto j;
stuff ();
stuff ();
j:
goto somewhere;

into the more optimized sequence below:

if (a > 5)
  goto somewhere;
stuff ();
stuff ();
j:
goto somewhere;

However, jump threading can also thread two partial conditions that are known to
overlap:

void foo(int a, int b, int c)
{
  if (a && b)
    foo ();
  if (b || c)
    bar ();
}

The above is transformed into:


void foo(int a, int b, int c)
{
  if (a && b) {
    foo ();
    goto skip;
  }
  if (b || c) {
skip:
    bar ();
  }
}

An even more interesting sequence is when jump threading duplicates blocks to avoid
branching. Consider a slightly tweaked version of the above:

void foo(int a, int b, int c)
{
  if (a && b)
    foo ();
  tweak ();
  if (b || c)
    bar ();
}

The compiler cannot easily thread the above, unless it duplicates tweak(), making
the resulting code larger:

void foo(int a, int b, int c)
{
  if (a && b) {
    foo ();
    tweak ();
    goto skip;
  }
  tweak ();
  if (b || c) {
skip:
    bar ();
  }
}

Thanks to the code duplication, the compiler is able to join the two overlapping
conditionals with no change in semantics.
This, by the way, is the ultimate goal of jump threading: avoiding expensive
conditional branches, even if that comes at the expense of more code.

2.
Jump threading tries to find distinct threads of control flow running through a basic
block. It looks at blocks that have multiple predecessors and multiple successors. If
one or more of the predecessors of the block can be proven to always cause a jump to
one of the successors, the edge from that predecessor is forwarded to the successor by
duplicating the contents of the block.
With this optimization, conditional branches are turned into unconditional ones on
certain paths, at the expense of code size.
Here is a simple example of the transformation. It depicts a control flow graph in
which the code of some basic blocks is shown.

The central basic block contains a conditional jump (if). If the block is reached from
the right side, ‘x’ is false and we always branch into the blue block. Hence, jump
threading rewires the control flow to circumvent the ‘if’ (the green block). However,
the central block contains more code (the call to foo()), which must be duplicated on
the new path. If control flow comes from the left, we do not know the value of ‘x’, and
hence whether the central block branches to the blue block or not, so the conditional
jump must be preserved.
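
Rendered as plain C with gotos, the graph described above looks roughly like the sketch
below. The wrapper functions, the from_left flag, and the label names are only
illustrative stand-ins for the blocks in the figure:

extern void foo (void);

/* Before threading: both predecessors funnel into the central block. */
void sketch (int x, int from_left)
{
  if (from_left)
    goto left_pred;

  /* Right predecessor: 'x' is known to be false on this path. */
  x = 0;
  goto central;

left_pred:
  /* Left predecessor: the value of 'x' is unknown. */
  ;

central:
  foo ();                /* extra code that must be duplicated          */
  if (x)
    goto taken;          /* only possible when coming from the left     */
  goto blue_block;       /* 'x' is false: always the case on the right  */

taken:
  return;
blue_block:
  return;
}

/* After threading: the right predecessor gets its own copy of foo ()
   and branches straight to the blue block, bypassing the 'if'.        */
void sketch_threaded (int x, int from_left)
{
  if (from_left)
    goto left_pred;

  /* Right predecessor, now threaded. */
  x = 0;
  foo ();
  goto blue_block;

left_pred:
  /* Left predecessor: 'x' unknown, so the conditional jump remains. */
  foo ();
  if (x)
    goto taken;
  goto blue_block;

taken:
  return;
blue_block:
  return;
}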
Jump threading can be more complex if loops are involved. The following example
illustrates that.
While the ‘if’ has only one control flow predecessor, there are actually two paths we
must consider: loop entry and loop body. If we assume that the predecessor block of the
‘if’ (i.e. the initial block of the loop body) does not change the value of ‘x’, then we
can be sure that in the first iteration the false branch is always taken. The
transformation duplicates the loop body for the first iteration and then enters the
loop. Effectively, we perform loop peeling here.
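
The same situation can be sketched in C. The names peel, body(), on_true(), and
update() are assumptions, as is the exact loop shape; what matters is that ‘x’ is false
on loop entry and is not changed by the block that precedes the ‘if’:

extern void body (void);
extern void on_true (void);
extern int  update (int);

/* Before: 'x' is false on loop entry, so in the first iteration the
   'if' can never be taken.                                            */
void peel (int n)
{
  int x = 0;
  for (int i = 0; i < n; i++)
    {
      body ();           /* does not change 'x'                        */
      if (x)
        on_true ();
      x = update (i);    /* later iterations may set 'x'               */
    }
}

/* After: the first iteration is duplicated in front of the loop with
   the 'if' removed; the remaining iterations keep the conditional.    */
void peel_threaded (int n)
{
  int x = 0;
  if (0 < n)
    {
      body ();           /* peeled first iteration, no 'if' needed     */
      x = update (0);
      for (int i = 1; i < n; i++)
        {
          body ();
          if (x)
            on_true ();
          x = update (i);
        }
    }
}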
Also, note the block with the red border. It was inserted to avoid creating a critical
edge, which is usually desirable during optimization.

Critical Edge in CFG


A critical edge in a control flow graph is an edge from a block with more than one
successor to another with more than one predecessor. Consider this if-then statement:

if (foo) {
  bar = 1;
}
...

and its basic block representation:


In the above CFG, the red edge is a critical edge, since its source block has two
outgoing edges and its target block has two incoming edges. Most compilers will split
the critical edge by inserting an empty basic block:
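
Written out as labelled blocks in C (the label names and the wrapper functions are
illustrative), the split looks roughly like this:

/* Before splitting: block A ends in a two-way branch, and block C (the
   join, i.e. the "..." after the if) has two predecessors, so the direct
   A -> C edge is critical.                                              */
int split_example (int foo)
{
  int bar = 0;
  /* block A */
  if (foo)
    goto B;
  goto C;                /* the critical edge A -> C                     */
B:                       /* block B */
  bar = 1;
  goto C;
C:                       /* block C: the join ("...")                    */
  return bar;
}

/* After splitting: a new, initially empty block D sits on the former
   A -> C edge.                                                          */
int split_example_split (int foo)
{
  int bar = 0;
  /* block A */
  if (foo)
    goto B;
  goto D;
B:
  bar = 1;
  goto C;
D:                       /* the new block: runs only on the A -> C edge  */
  goto C;
C:
  return bar;
}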

Justification:
For a number of reasons, we usually want to split critical edges in the control-flow
graph. We sometimes need to insert some code whenever the program follows a critical
edge: e.g., the register allocator may need to “fix up” the machine state, moving values
around in registers as expected by the target block. Consider where we might insert
such code: we can’t insert it prior to the jump, because this would execute no matter
what out-edge of the source block is taken. Similarly, we can’t insert it at the target of
the jump, because this would execute for any entry into the target block, not just
transfers over the particular edge.
The solution is to “split” the critical edge: that is, create a new basic block, edit the
branch to point to this new block, and then add an unconditional branch from the new
block to the original target block. This new block is a place where we can insert
whatever fixup code we need, and it will execute only when control flow transfers from
the one specific block to the other.
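
As a purely hypothetical illustration, suppose the join block expects its value in ‘t’,
while the path that skips the then-branch still holds it in ‘bar’. The copy can only
live in the split block, where it runs exactly when control transfers over the former
critical edge:

int split_example_fixup (int foo)
{
  int bar = 0, t = 0;
  /* block A */
  if (foo)
    goto B;
  goto D;
B:
  t = 1;                 /* this path already produces the value in 't'  */
  goto C;
D:
  t = bar;               /* edge-specific fixup: move 'bar' into 't'     */
  goto C;
C:
  return t;              /* the join block reads 't'                     */
}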

❖ Critical Edge Splitting for Partial Redundancy Elimination:
