
Here in lecture 8.5, we're going to
complete our treatment of don't cares in multilevel logic and how they arise
implicitly from the structure of Boolean logic network models.
The final class of don't cares are the observability don't cares.
Observability don't cares are patterns that mask the output of a node so that no
matter what happens, the output of the overall boolean logic network is
insensitive to your node when this value shows up.
So these are different from the controllability don't cares, which kind of
focus on the nodes in front of a node. The observability don't cares focus on
the nodes in back of a node, between you and the output.
But it turns out that the same computational Boolean algebra techniques
we learned at the beginning of class will let us come up with a recipe to extract
the don't cares. So let's go look at the observability
don't cares. So here we are finally at the last kind
of implicit don't cares. We've done the satisfiability don't
cares, and the controllability don't cares.
And now we want to talk about the observability don't cares.
These are patterns input to a node that make the network output insensitive to
the output of the node, or patterns that mask the output from affecting anything.
Let's see how we compute these. So, I've got a new benchmark here, got a
new example. It's got 3 primary inputs, abc, and 1
output, Z. The node F, that we want to optimize is
now in the middle of the network; it's interior to the network.
It goes through a node, Z equals ab plus Fc bar, plus F bar b bar, in order to get
out. So observability don't cares are patterns
that mask a node's output, and in our case this is F at the network output,
which in this case is Z. So, here's a really important point:
these are not impossible. These patterns can occur, but these
patterns make the node's output not observable at the network output.
And what that means, not observable, is that the Boolean value of the node, in
this case F, does not affect any network output, which in this case is Z.
So, it doesn't matter what F is. Z doesn't care.
So it's a real don't care, in a different kind of way.
So, you know, just to be clear, the observability don't care for F is
going to be patterns of a and b, because those are the inputs to F, that make Z
insensitive to F's value. Let's just go talk a little bit about
that insensitive thing. When is the network output Z insensitive
to the internal variable F? And note that we've got two nodes here in
my kind of cartoon. There's an F equals stuff node with some
arrows into it. And that has an arrow labeled F that goes
into a Z node that says Z depends on F and Z has some other arrows going into
it. What it means for Z to be insensitive to
F is that Z is independent of the value of F, given some other inputs to Z.
We can be clear about that with some little examples.
So for example, if Z were just an AND gate, and any other input were a 0, Z is
insensitive to F, because if there's a 0 going into the gate, Z is just a 0.
Similarly, if Z was an OR gate, Z would be insensitive to F if any other input
was a 1 because if any other input is a 1, Z is just a 1.
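These two gate-level cases can be checked by brute force. This is my own illustrative Python, not code from the lecture; the variable names are mine:

```python
# Z modeled as a single two-input gate; F is the node output we care
# about, g is the other (side) input into the gate.
Z_and = lambda F, g: F & g   # AND gate
Z_or = lambda F, g: F | g    # OR gate

# Z is insensitive to F under a side input g exactly when the two
# cofactors (plug in F = 0, then F = 1) agree.
assert Z_and(0, 0) == Z_and(1, 0)  # g = 0 masks F in the AND gate
assert Z_or(0, 1) == Z_or(1, 1)    # g = 1 masks F in the OR gate
assert Z_and(0, 1) != Z_and(1, 1)  # g = 1 does NOT mask F in the AND gate
```

The same cofactor-comparison idea is exactly what the general formula below mechanizes.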
What we need is a general formula. We know that Z depends on F.
How do we compute in the general case when Z is insensitive to F?
Now it turns out, we've almost solved this before. We solved the flipped
problem: what makes F sensitive to X? Now note the change of variable names
here. This is a blast from the past: this is slide 21 of lecture 2.6, which
talked about the Boolean difference.
And what that said was that any pattern making ∂F/∂X equal 1 makes X observable
at output F. So, interpreting the Boolean difference,
we have a box that says a big blob of logic.
And it has an output that says F is a function of a, b, c, ..., w, and X.
And there are arrows going into the big blob of logic labeled a, b, c, ..., w, and X.
And what the slide says is that any pattern of the other variables that are
not X that makes ∂F/∂X 1 makes F sensitive to X.
And the words say: when ∂F/∂X is a 1, it means that if you apply a pattern
of the other inputs, a, b, c up to w, that makes ∂F/∂X 1, any change in X
will force a change in the output of F. So we know what makes some output
sensitive to some input; we just want the opposite.
We want the output to be insensitive to the input, which is going to turn out to
be easy. When is the Z output, which is the output
node of the diagram here, insensitive to the F input?
Because F is an internal node. Well, you know, the slide title gives it
away: it's when the Boolean difference, ∂Z/∂F,
complemented, is equal to 1. That makes sense.
Let's talk about that in a little more detail.
When is the network output Z insensitive to internal variable F?
Here's a good mathematical definition: when you can find some inputs to Z that
make the cofactor Z with F set to 0 equal to the cofactor Z with F set to 1.
So if ZF equals ZF bar, the Shannon cofactors, then Z is not dependent on F.
Any inputs to the Z function that make ZF equal ZF bar make Z independent of F.
If the cofactors are the same when you set F to 0 and F to 1, Z does not depend
on F. That makes sense.
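One way to see this condition concretely is to enumerate it on the example network's node, Z = ab + F c̄ + F̄ b̄. This is my own brute-force Python sketch, not code from the lecture:

```python
from itertools import product

def Z(a, b, c, F):
    # The example node: Z = ab + F c' + F' b'  (' means complement)
    return (a & b) | (F & (1 - c)) | ((1 - F) & (1 - b))

# Find every input pattern where the two Shannon cofactors agree,
# i.e. where Z with F = 0 equals Z with F = 1. On those patterns,
# Z cannot depend on F.
masking = [(a, b, c)
           for a, b, c in product((0, 1), repeat=3)
           if Z(a, b, c, 0) == Z(a, b, c, 1)]
print(masking)  # the (a, b, c) patterns where Z does not depend on F
```

These are the raw masking patterns; the recipe below turns them into the don't care set for F by quantifying away the variables F doesn't see.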
How do you find when the cofactors are the same?
Well, you solve a little equation that says: I'm going to put an equality gate,
and that's an exclusive NOR, in between the two cofactors, ZF equals 0 and ZF
equals 1, and I'm going to solve for when that's a 1.
Now note that ZF equals 0 exclusive NORed with ZF equals 1 is just
the same thing as ZF equals 0 exclusive ORed with ZF equals 1, with the
whole thing complemented. So that's nothing complicated.
That's just the difference between an XNOR and an XOR gate, where you just
put an inverter after the XOR gate. But the reason I wrote it like that was
that this is sort of familiar. The thing under the big bar, ZF equals 0
exclusive ORed with ZF equals 1, is just the Boolean difference ∂Z/∂F.
And so, I finally get the result I want: compute ∂Z/∂F,
then complement it, and then find a pattern that makes it a 1.
And then take that pattern, or those patterns, and apply them to the other
inputs to Z that are not the F input.
Any pattern that makes that thing, the complement of ∂Z/∂F, a 1 makes Z
insensitive to F. Those patterns are the raw material of
the don't care that I need to go compute. So, I have a recipe again.
It looks a lot like the previous recipes we've seen for the don't cares.
What do I start with? I start by computing ∂Z/∂F
complemented. Any patterns that make that thing a 1
mask F from Z; they make Z not depend on F. And then I
universally quantify away the variables that are not inputs to F,
because, just like with the controllability don't cares, there are a lot more
variables happening here, and the patterns of behavior that I'm interested
in must stay true for all values of the variables that I get rid of.
And the result will be the observability don't care, a Boolean function that
makes a 1 for these masking patterns. So here's the equation, and
again showing you the same network. F is ab bar plus a bar b, and it feeds Z,
which is ab plus Fc bar plus F bar b bar. Z is the primary output; a, b, c are the primary inputs.
Circling the inputs to the Z node: the observability don't care
is something that we compute starting with this raw material.
It's the universal quantification of the variables not used in F,
applied to the complement of ∂Z/∂F. And what that's going to do is
it's going to let me actually go over and get some patterns that are don't
cares for the F node. So using this recipe for the little
network F equals ab bar plus a bar b, Z is ab plus Fc bar plus F bar b bar, what
do we get? We look at all the variables going into
Z, we calculate the complement of ∂Z/∂F, and then we calculate the universal
quantification of the variables not used in F, and we get some patterns at the
input of F. That's going to turn out to be: for all c,
because c is the variable that is not used in F,
a is in F, b is in F, but c is not in F,
of the complement of the Boolean difference. So the Boolean difference is
just ab plus F c bar plus F bar b bar, cofactored with F equals 1,
exclusive ORed with the same thing cofactored with F equals 0. That's the difference; then you
complement it. And then you quantify out the c.
If you do that, you will be quantifying the c out of the boolean equation ab plus
ac bar plus b bar c bar plus a bar bc. And when you're done, you get something
useful. You get some patterns that you can
actually apply to the inputs of the node that you are trying to simplify.
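The whole recipe for this little example fits in a few lines of Python. This is my own illustration following the lecture's steps, not the lecture's code: compute the complement of ∂Z/∂F, then universally quantify away c, since c is not an input of F:

```python
from itertools import product

def Z(a, b, c, F):
    # Z = ab + F c' + F' b'
    return (a & b) | (F & (1 - c)) | ((1 - F) & (1 - b))

def masks(a, b, c):
    # Complement of the Boolean difference dZ/dF: it is 1 exactly
    # when the F = 0 and F = 1 cofactors of Z agree.
    return 1 - (Z(a, b, c, 0) ^ Z(a, b, c, 1))

# Universal quantification over c: keep a pattern (a, b) only if
# the masking holds for BOTH c = 0 and c = 1.
odc = [(a, b)
       for a, b in product((0, 1), repeat=2)
       if masks(a, b, 0) and masks(a, b, 1)]
print(odc)  # [(1, 1)]: the ODC for F is the single pattern a = 1, b = 1
```

That single surviving pattern is exactly the don't care the lecture sanity-checks next.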
So that's the computational recipe so far for the output don't care.
Now, as we did in the previous CDC example, it's good to just ask the
question: does this make sense? If I tell you that the ODC of F, which is a
function of a and b, is ab, that means that we think a equals 1, b equals 1
masks F from affecting the Z output.
Is that the case? Well, if a is 1 and b is 1, then the ab
term in Z turns into a 1. And very clearly it forces the Z output
itself to be a 1, and once you force the Z output to be a 1 with a equals 1, b
equals 1, Z doesn't depend on F anymore. So yes, in fact a equals 1, b equals 1 is
a don't care for node F. It all works because a equals 1, b equals
1 forces Z to be a 1. And if you actually look at F what you
can see is that F is actually ab bar plus a bar b.
F is actually an exclusive OR gate, before we did any of the don't cares.
And after we find this don't care pattern, which is a equals 1, b equals 1,
what we're actually going to find is that we could, if we choose, replace F
with an OR gate, because now 1 1 is a don't care pattern; we
can circle it if we like. If we decide that an OR gate is simpler
than an exclusive OR gate, which in actual silicon is probably true,
then this is probably a good simplification.
So that's what you actually get from this particular don't care pattern.
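We can double-check the simplification by brute force (again, my own sketch): swapping the XOR node for an OR gate changes Z on no input at all, because the two functions only disagree on the don't care pattern a = 1, b = 1:

```python
from itertools import product

def Z(a, b, c, F):
    # Z = ab + F c' + F' b'
    return (a & b) | (F & (1 - c)) | ((1 - F) & (1 - b))

F_xor = lambda a, b: a ^ b  # original node: F = a b' + a' b
F_or = lambda a, b: a | b   # simplified node, legal thanks to the ODC

# The two node implementations differ only at a = b = 1, and there
# the ab term already forces Z to 1, so Z never sees the difference.
for a, b, c in product((0, 1), repeat=3):
    assert Z(a, b, c, F_xor(a, b)) == Z(a, b, c, F_or(a, b))
print("Z agrees on all 8 input patterns")
```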
Now we're not quite done yet, we have a new problem.
What if F passes through to many outputs, and is an input to many Z nodes, Z1,
Z2, ..., ZN, each of which makes a primary output?
Well, the big idea is that only the patterns that are unobservable at all of
the outputs can be observability don't cares.
Because as soon as F affects even one of those outputs, it's not masked, and
I can't make it a don't care; I have to be careful that F
computes to the right value. So I get a formula that looks much more
like the controllability don't cares. So the general recipe for the general ODC
is again: you quantify away all the variables that are not inputs to F, right?
Because that's what we're working on over here.
But what you quantify is the Boolean AND of all of the complemented Boolean
differences. So, you take all the Zs.
You calculate the Boolean difference. You calculate the complement of the
Boolean difference. You AND them all together.
And you universally quantify away the variables that don't show up in F.
And then you solve for the 1s in this.
Any pattern that makes that thing a 1 is an observability don't care.
It makes F unobservable at all the Z outputs.
And, that's the recipe. So, where are we with the ODCs?
ODCs give patterns input to node F that mask F at the network output.
They are not impossible. They can occur.
Why are they don't cares? Because the network output doesn't care
what F is; F does not affect the network output for these patterns.
The ODCs can be computed mechanically from the Boolean differences of all the
outputs that are connected to F. Together, the CDCs and the ODCs give you
the full don't care set you can use to simplify an internal node.
With these patterns, you can call something like espresso and simplify the
insides of the sum of product form for the node.
Now a really good question at this point would be, are we done?
I talked about satisfiability don't cares, I talked about controllability
don't cares, I talked about observability don't cares and the answer is yes but
only if your networks look just like this.
And just like this means you have one layer of inputs in front of F, X1 to XM
and one layer of outputs in back of F, Z1 to ZN, that generate primary outputs.
So if you only want to get the controllability don't cares from the nodes
immediately before you, and you only want to get observability don't cares for one
layer of nodes between you and the output, then all of the math and all of
the recipes I've given you are all you need.
Unfortunately, this is what a real network looks like.
And so here on this slide, in general, I'm showing you X which is in the middle
of a large network. And so there's several ranks of nodes
with many nodes at every rank feeding X and several ranks of nodes with many
nodes at every rank between X and some number of primary outputs.
And the problem is that controllability don't cares are actually a function of
all of the nodes in front of X, potentially.
And observability don't cares are a function of all of the nodes between X
and any outputs that it might influence. And in general, this is so complicated
that you can never get all of the don't cares for node X in a really big boolean
logic network. But even just representing all this stuff
can be explosively large, even for BDDs. So, what do people actually do?
Well, they generally don't get all of the don't cares.
There are an enormous number of tricks that trade off some effort in time and
computer memory for quality, that is, for how many don't cares you find.
So, for example, there are the formulas I gave you for the controllability
don't cares, the so-called local controllability don't cares, which require
that you just look at the outputs of the nodes in front of you.
The immediate antecedent vertices, the nodes in front of you, and you compute
those satisfiability don't care patterns. That's a really common trick, and that
works pretty well. There are also incremental node by node
algorithms that walk the network from the inputs to your F node to compute the
controllability don't cares and from your F node through layers of nodes to the
output Zs to compute the observability don't cares.
But they're just more complex, and I don't have enough time to talk about those
things. The references I gave earlier in the
lecture are a really good source for those algorithms.
But, for us, just knowing these limited don't care recipes is actually
sufficient. You know understand why don't cares are
implicit, you understand why you must find them.
You understand how you can find them for a, a nice pretty realistic case.
And you know the big difference between the satisfiability, controllability,
and observability don't cares. This is great.
You're really in a very good position now to understand pretty much
most of what happens in a core version of conventional logic
synthesis. So congratulations.
Lots and lots of progress. What we're going to be doing next is
moving on toward more physical layout kind of problems.
Because remember, the title of the course is From Logic to Layout.
We're almost done doing all the logic stuff, we're about to move up to the
layout stuff.
