ALGEBRAIC STRUCTURE THEORY OF SEQUENTIAL MACHINES

J. HARTMANIS, Research Mathematician, General Electric Research and Development Center; Professor of Computer Science, Cornell University

R. E. STEARNS, Research Mathematician, General Electric Research and Development Center

PRENTICE-HALL, INC., Englewood Cliffs, N.J.
PRENTICE-HALL INTERNATIONAL, INC., London; PRENTICE-HALL OF AUSTRALIA, PTY. LTD., Sydney; PRENTICE-HALL OF CANADA, LTD., Toronto; PRENTICE-HALL OF INDIA (PRIVATE), LTD., New Delhi; PRENTICE-HALL OF JAPAN, INC., Tokyo

© 1966 by Prentice-Hall, Inc., Englewood Cliffs, N.J. All rights reserved. No part of this book may be reproduced in any form, by mimeograph or any other means, without permission in writing from the publisher. Library of Congress Catalog Card No. 66-14360. Printed in the United States of America.

PREFACE

The explosive development of information-processing technology during the last two decades has stimulated the vigorous growth of an information science. This new science is primarily concerned with the study of information and the laws which govern its processing and transmission. A very active part of this science is the study of sequential machines or finite automata, which are abstract models of digital computers. The aim of this research is to provide a basic theoretical background for the study of digital computers and to contribute to a deeper understanding of discrete or finite information-processing devices. This area of research was started around 1954 by D. A. Huffman and E. F. Moore and has since undergone considerable growth in several diverse directions. In the period from 1960 to 1965, a body of results we call "structure theory" was created and developed to a considerable degree of completeness and unity. This book is an exposition of the foundations, techniques, and applications of this theory.
By a structure theory for sequential machines, we mean an organized body of techniques and results which deal with the problems of how sequential machines can be realized from sets of smaller component machines, how these component machines have to be interconnected, and how "information" flows in and between these machines when they operate. The importance of machine structure theory lies in the fact that it provides a direct link between algebraic relationships and physical realizations of machines. Many structure results describe the organization of physical devices (or component machines) from which a given machine can be synthesized. Stated differently, the structure theory describes the patterns of possible realizations of a machine from smaller units. It should be stressed, however, that although many structure theory results describe possible physical realizations of machines, the theory itself is independent of the particular physical components or technology used in the realization. More specifically, this theory is concerned with logical or functional dependence in machines and studies the information flow of the machine independently of how the information is represented and how the logical functions are to be implemented.

The mathematical foundations of this structure theory rest on an algebraization of the concept of "information" in a machine and supply the algebraic formalism necessary to study problems about the flow of this information in machines as they operate. The formal techniques and results are very closely related to modern algebra. Many of its results show considerable similarity with results in universal algebra, and some can be directly derived from such considerations. Nevertheless, the engineering motivation demands that this theory go its own way and raises many problems which require new mathematical techniques to be invented that have no counterpart in the development of algebra.
Thus, this theory has a characteristic flavor and mathematical identity of its own. It has, we believe, an abstract beauty combined with the challenge and excitement of physical interpretation and application. It falls squarely in the interdisciplinary area of applied algebra which is becoming a part of engineering mathematics.

This book is intended for people interested in information science who have either an engineering or mathematical background. It can be read by anyone who has either some mathematical maturity, achieved through formal study, or engineering intuition developed through work in switching theory or experience in practical computer design. Enough concepts of machine theory and machine design are introduced in the first chapter so that a mathematician may read the book without any experience with computers or switching theory. A preliminary chapter on basic algebraic concepts supplies enough mathematics to make the book self-contained for a non-mathematician. A good number of examples are given to supply the engineer with an interpretation or application of the mathematics.

J. HARTMANIS
R. E. STEARNS

CONTENTS

0 INTRODUCTION TO ALGEBRA
  0.1 Sets and Functions
  0.2 Partially Ordered Sets and Lattices
  0.3 Groups and Semigroups

1 MACHINES
  1.1 Definitions
  1.2 Equivalence
  1.3 Realizations
  1.4 State Assignment Problem
  1.5 "Don't Care" Conditions

2 PARTITIONS AND THE SUBSTITUTION PROPERTY
  2.1 The Substitution Property
  2.2 Serial Decompositions
  2.3 Parallel Decompositions
  2.4 Computation of S. P. Partitions
  2.5 State Reduction

3 PARTITION PAIRS AND PAIR ALGEBRA
  3.1 Partition Pairs
  3.2 Pair Algebra
  3.3 Partition Analyses
  3.4 Partition Pairs and State Assignment
  3.5 Abstract Networks
  3.6 Don't Care Conditions, First Approach
  3.7 Don't Care Conditions, Second Approach
  3.8 Component Machines and Local Machines

4 LOOP-FREE STRUCTURE OF MACHINES
  4.1 Loop-Free Networks
  4.2 Obtaining Loop-Free Realizations
  4.3 Implications of the S. P. Lattice
  4.4 Properties of a Tail Machine
  4.5 Clocks in Sequential Machines

5 STATE SPLITTING
  5.1 Structure and State Reduction
  5.2 The State Splitting Problem
  5.3 Set Systems
  5.4 Set System Decompositions
  5.5 Don't Care Conditions, Third Approach

6 FEEDBACK AND ERRORS
  6.1 Feedback Defined by Partitions
  6.2 Feedback
  6.3 Feedback-Free Machines
  6.4 Decompositions with Feedback-Free Components
  6.5 State Errors
  6.6 Input Errors

7 SEMIGROUPS AND MACHINES
  7.1 The Semigroup of a Machine
  7.2 Realization by Semigroups
  7.3 The Structure of Group Accumulators
  7.4 Behavior Considerations
  7.5 Decomposition into Simple Components
  7.6 The Complete Construction

REFERENCES
INDEX

0 INTRODUCTION TO ALGEBRA

In this preliminary chapter we state the standard mathematical concepts and their basic properties that are used in the book. This is intended more as a review and a statement of notation than as a true introduction. Nevertheless, this chapter is self-contained, and a reader with some previous exposure to set-theoretic notation should be able to pick up the remaining concepts.

0.1 SETS AND FUNCTIONS

We start with a discussion of some set-theoretic notation. The set S consisting of all the elements that have the property W is written as

S = {s | s has property W}.

Thus the set of all even numbers can be written as

S = {i | i = 2k, k = 0, 1, 2, ...}.
If s is an element of S, then we write

s ∈ S or s in S,

and if s ∈ S implies that s ∈ T, then S is a subset of T and we write

S ⊆ T.

Two sets S and T are equal, S = T, if and only if S ⊆ T and T ⊆ S. The set containing no elements, the empty, or void, set, is denoted by ∅.

The intersection of S and T is the set consisting of all the elements in both S and T, and we write

S ∩ T = {s | s ∈ S and s ∈ T}.

The union of S and T is the set consisting of the elements in either S or T. Symbolically,

S ∪ T = {s | s ∈ S or s ∈ T}.

The set operations extend naturally to families of sets:

∪_{a∈A} S_a = {s | s ∈ S_a for some a in A}
∩_{a∈A} S_a = {s | s ∈ S_a for all a in A}.

Two sets S and T are disjoint if

S ∩ T = ∅,

and the family of sets {S_a | a ∈ A} is disjoint if the sets are pairwise disjoint.

The number of elements in a set S is denoted by |S|.

Let S and T be nonvoid sets. Then a function f of S into T, written

f: S → T,

assigns to every element s in S an element t in T, written

f(s) = t.

The function f is sometimes called a map or mapping from S to T. If

T = {t | t = f(s), s in S},

then f is an onto function. The function f is one-to-one if s₁ ≠ s₂ implies that f(s₁) ≠ f(s₂).

Let f: S → T and U be a subset of S; then a function g: U → T is the restriction of f to U if

f(s) = g(s)

for every s in U. Conversely, if f: S → T, S ⊆ U, and T ⊆ V, then a function g: U → V is an extension of f if

f(s) = g(s)

for all s in S.

The Cartesian product of the sequence of sets S₁, S₂, ..., Sₙ is the set of all n-tuples (s₁, s₂, ..., sₙ) with sᵢ ∈ Sᵢ, and we write

S₁ × S₂ × ⋯ × Sₙ = {(s₁, s₂, ..., sₙ) | sᵢ ∈ Sᵢ}.

When notation becomes too cumbersome, the element (s₁, s₂, ..., sₙ) in ×ᵢ Sᵢ is sometimes written as (sᵢ).

Next we turn to the description of relations which can exist between elements of two sets. Because the nonmathematician may not have had previous exposure to the formulation given here, we illustrate it first with a more specific example.
If f: S → T and f(s) = t, then we may define a relation "s is mapped by f onto t." We can write this s R t, where R now designates "is mapped by f onto." Clearly, the function f is characterized by the set of all pairs

{(s, t) | f(s) = t} = {(s, t) | s R t},

which is a subset of S × T. If we imagine that the function f is plotted in a plane with an s-axis and t-axis, then this characteristic set is just the graph of f on the plane. Thus this relation defined by f can be considered to be a subset of S × T. With this point of view in mind, we proceed to make our definitions.

A relation between a set S and a set T is a subset R of S × T; and for (s, t) in R we write s R t. Thus

R = {(s, t) | s R t}.

A relation R on S × S (sometimes called simply a relation on S) is:

reflexive if, for all s, s R s;
symmetric if s R t implies t R s;
transitive if s R t and t R u implies s R u.

A relation R on S is an equivalence relation on S if R is reflexive, symmetric, and transitive. If R is an equivalence relation on S, then for every s in S, the set

B_R(s) = {t | s R t}

is an equivalence class (i.e., the equivalence class defined by s).

A partition π on S is a collection of disjoint subsets of S whose set union is S, i.e.,

π = {B_a} such that B_a ∩ B_b = ∅ for a ≠ b and ∪{B_a} = S.

We refer to the sets of π as blocks of π and designate the block which contains s by B_π(s).

When we write out a partition, we distinguish blocks with bars and semicolons rather than with set brackets. For example, if S = {1, 2, 3, 4, 5, 6, 7, 8} and partition π on S has blocks {1, 3, 4, 5}, {2, 6}, and {7, 8}, then we write

π = {1,3,4,5; 2,6; 7,8}

instead of

π = {{1, 3, 4, 5}, {2, 6}, {7, 8}}.

Nevertheless, the reader should not forget that π is a set and has elements, just like any other set. Finally, we write

s ≡ t (π)

if and only if s and t are contained in the same block of π, i.e.,

s ≡ t (π) if and only if B_π(s) = B_π(t).
If R is an equivalence relation on S, then the set of equivalence classes defines a partition π on S, and conversely every partition π on S defines an equivalence relation R on S whose equivalence classes are the blocks of π. Thus if R defines π, then s R t if and only if s ≡ t (π).

We now describe how partitions on a set can be "multiplied" and "added." These operations on partitions and the subsequently defined ordering of partitions play a central role in the structure theory of sequential machines and form a basic link between machine concepts and algebra.

If π₁ and π₂ are partitions on S, then:

(i) π₁·π₂ is the partition on S such that s ≡ t (π₁·π₂) if and only if s ≡ t (π₁) and s ≡ t (π₂).

(ii) π₁ + π₂ is the partition on S such that s ≡ t (π₁ + π₂) if and only if there exists a sequence in S,

s = s₀, s₁, ..., sₙ = t,

for which either sᵢ ≡ sᵢ₊₁ (π₁) or sᵢ ≡ sᵢ₊₁ (π₂), 0 ≤ i ≤ n − 1.

Thus the blocks of π₁·π₂ are the nonvoid intersections of the blocks of π₁ with the blocks of π₂. The block of π₁ + π₂ containing s can be computed iteratively: let B₁(s) = B_{π₁}(s), and for i ≥ 1 let

Bᵢ₊₁(s) = Bᵢ(s) ∪ ∪{B | B is a block of π₁ or π₂ and B ∩ Bᵢ(s) ≠ ∅}.

Then B_{π₁+π₂}(s) = Bᵢ(s) for any i such that Bᵢ₊₁(s) = Bᵢ(s).

To illustrate this, let S = {1, 2, 3, 4, 5, 6, 7, 8, 9},

π₁ = {1,2; 3,4; 5,6; 7,8,9} and π₂ = {1,6; 2,3; 4,5; 7,8; 9}.

Then

π₁·π₂ = {1; 2; 3; 4; 5; 6; 7,8; 9}

and

π₁ + π₂ = {1,2,3,4,5,6; 7,8,9}.

Repeated multiplication and addition are represented by the notation Π πᵢ and Σ πᵢ.

For π₁ and π₂ on S, we say that π₂ is larger than or equal to π₁, and write

π₁ ≤ π₂,

if and only if every block of π₁ is contained in a block of π₂. Thus

π₁ ≤ π₂ if and only if π₁·π₂ = π₁, if and only if π₁ + π₂ = π₂.

If π and τ are partitions on S and π ≥ τ, then π defines a partition π̄ on the set of blocks of τ if we let

B_τ(s) ≡ B_τ(t) (π̄) if and only if s ≡ t (π).

Thus two blocks of τ are identified by π̄ if and only if they are contained in the same block of π. We refer to π̄ as the quotient partition of π with respect to τ. For example, if

π = {1,2,5,6; 3,4,7} and τ = {1,2; 3,4; 5,6; 7},

then π ≥ τ and the quotient partition is

π̄ = {B₁,B₃; B₂,B₄},

where B₁ = {1, 2}, B₂ = {3, 4}, B₃ = {5, 6}, and B₄ = {7}.
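Both partition operations can be sketched in Python, representing a partition as a set of frozenset blocks (the representation and function names are ours, not the book's):

```python
def product(p1, p2):
    """Partition product: blocks are the nonempty pairwise intersections."""
    return {b1 & b2 for b1 in p1 for b2 in p2 if b1 & b2}

def psum(p1, p2):
    """Partition sum: merge blocks of p1 and p2 that chain together."""
    blocks = [set(b) for b in p1 | p2]
    merged = True
    while merged:
        merged = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if blocks[i] & blocks[j]:
                    blocks[i] |= blocks.pop(j)
                    merged = True
                    break
            if merged:
                break
    return {frozenset(b) for b in blocks}

# The example from the text: S = {1, ..., 9}
p1 = {frozenset(b) for b in [{1, 2}, {3, 4}, {5, 6}, {7, 8, 9}]}
p2 = {frozenset(b) for b in [{1, 6}, {2, 3}, {4, 5}, {7, 8}, {9}]}
```

On these inputs, `product(p1, p2)` and `psum(p1, p2)` reproduce the π₁·π₂ and π₁ + π₂ computed above.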
In other words, π̄ is just π reinterpreted to be over the blocks of τ. If π₁ ≥ τ and π₂ ≥ τ, then the quotient partitions with respect to τ satisfy:

(i) π₁ ≥ π₂ if and only if π̄₁ ≥ π̄₂;
(ii) the quotient of π₁·π₂ is π̄₁·π̄₂;
(iii) the quotient of π₁ + π₂ is π̄₁ + π̄₂.

Thus the quotient partitions behave exactly as the original partitions.

0.2 PARTIALLY ORDERED SETS AND LATTICES

A binary relation R on S is a partial ordering of S if and only if R is

(i) reflexive: s R s for all s in S;
(ii) antisymmetric: s R t and t R s implies t = s;
(iii) transitive: s R t, t R u implies s R u.

We refer to a set S with a given partial ordering R as a partially ordered set. When a relation R is a partial ordering, we use the more suggestive symbol "≤" instead of R, and the partially ordered set is represented by the pair (S, ≤). The set of all partitions on S with the previously defined ordering is seen to be a partially ordered set. Another example is provided by the set of all subsets of S, which is a partially ordered set under the ordering of set inclusion.

Let (S, ≤) be a partially ordered set and T be a subset of S. Then s (in S) is the least upper bound (l.u.b.) of T if and only if

(i) s ≥ t for all t in T;
(ii) s′ ≥ t for all t in T implies that s′ ≥ s.

Dually, s is the greatest lower bound (g.l.b.) of T if and only if

(i) s ≤ t for all t in T;
(ii) s′ ≤ t for all t in T implies that s′ ≤ s.

A lattice is a partially ordered set L = (S, ≤) in which every pair of elements has a l.u.b. and a g.l.b.; we write x + y for l.u.b. (x, y) and x·y for g.l.b. (x, y). A lattice may also be characterized algebraically as a system L = (S, +, ·) in which both operations are idempotent, commutative, and associative and satisfy the absorption laws x·(x + y) = x and x + x·y = x; the ordering is then recovered by defining x ≥ y if and only if x·y = y.

This ordering is reflexive, since x·x = x. If x ≥ y and y ≥ x, then

y = x·y = y·x = x,

and thus x = y, which shows that ≤ is antisymmetric. Finally,

x ≥ y and y ≥ z

imply that

x·y = y and y·z = z,

and consequently

x·z = x·(y·z) = (x·y)·z = y·z = z.

Therefore x ≥ z and the ordering is transitive. To show that

x·y = g.l.b. (x, y),

note that

x·(x·y) = (x·x)·y = x·y,

and thus x·y ≤ x; similarly x·y ≤ y, and any z below both x and y is below x·y. Dually, x + y = l.u.b. (x, y).

A homomorphism between lattices L₁ = (S₁, +, ·) and L₂ = (S₂, +, ·) is a function h: S₁ → S₂ such that

h(x·y) = h(x)·h(y) and h(x + y) = h(x) + h(y).

In general, any operation-preserving function from one algebraic system onto another is called a "homomorphism." In this instance, the function h is a "lattice homomorphism." The lattice L₂ intuitively represents a simplified version or a coarse imitation of L₁. In Chap. 2, we discuss how a "machine homomorphism" isolates a "subcalculation" of a machine's behavior.

A lattice L is distributive if and only if for all x, y, z in L,

x·(y + z) = x·y + x·z  (D)

and dually

x + y·z = (x + y)·(x + z).  (D′)

For example, the lattice of all partitions on a set S, |S| ≥ 3, is not distributive. To show this, consider Fig. 0.1, which shows the lattice of partitions on S = {1, 2, 3}. We indicate the ordering relations in Fig. 0.1 by descending lines between the lattice elements. A simple calculation shows that

π₁ = π₁·(π₂ + π₃) ≠ π₁·π₂ + π₁·π₃ = 0,

and so this lattice is not distributive. Any partition lattice on a larger set contains this lattice as a sublattice and thus it also fails to be distributive. We shall see later in this book that this lack of distributivity in the partition lattice has some interesting implications for machine theory.

Fig. 0.1. Lattice of partitions on S = {1, 2, 3}. [Greatest element I = {1,2,3}; atoms π₁ = {1,2; 3}, π₂ = {1; 2,3}, π₃ = {1,3; 2}; least element 0 = {1; 2; 3}.]

Fig. 0.2. Lattice of subsets of {1, 2, 3}.

An example of a distributive lattice is provided by the lattice of all subsets of a set S. Figure 0.2 shows the lattice of all subsets of S = {1, 2, 3}.

The reader with a little bit of skill at manipulating the lattice laws will have a distinct advantage in following the proofs and making applications. In order to further familiarize the reader with lattice concepts and give him a chance to practice the lattice properties, we offer several exercises.

EXERCISES

1. Show that in a lattice L = (S, +, ·) the complements are not unique.

6. Show that the lattice of all partitions on S ∪ {θ}, θ ∉ S, contains a sublattice isomorphic to the lattice of all subsets of S.

0.3 GROUPS AND SEMIGROUPS

We now define semigroups and groups and discuss some elementary properties of groups. These concepts are used only in Chap. 7, where we investigate the semigroup of a sequential machine and relate its algebraic properties to machine structure.
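The non-distributivity calculation for the three-element partition lattice can be checked mechanically; a small Python sketch (the representation and helper names are ours):

```python
from itertools import combinations

def meet(p1, p2):
    # g.l.b. in the partition lattice: nonempty pairwise block intersections
    return {b1 & b2 for b1 in p1 for b2 in p2 if b1 & b2}

def join(p1, p2):
    # l.u.b. in the partition lattice: merge overlapping blocks to a fixpoint
    blocks = [set(b) for b in p1 | p2]
    changed = True
    while changed:
        changed = False
        for a, b in combinations(blocks, 2):
            if a & b:
                a |= b
                blocks.remove(b)
                changed = True
                break
    return {frozenset(b) for b in blocks}

P = lambda *blocks: {frozenset(b) for b in blocks}
p1, p2, p3 = P({1, 2}, {3}), P({1}, {2, 3}), P({1, 3}, {2})

left = meet(p1, join(p2, p3))                # p1 · (p2 + p3) = p1
right = join(meet(p1, p2), meet(p1, p3))     # p1·p2 + p1·p3 = 0
```

Here `left` comes out as π₁ itself while `right` is the zero partition, so the distributive law (D) fails, as in the text.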
Unless the reader has a firm grasp of the previous material, we recommend he skip this section for now and begin reading Chap. 1.

A semigroup is just a set with an associative rule of combination, such as numbers under the rule of multiplication or functions of a set into itself under the rule of composition. More precisely: a semigroup G is a pair

G = (S, ·)

where S is a nonempty set and "·" is a binary operation such that

(i) x·y is in S for all x and y in S;
(ii) x·(y·z) = (x·y)·z for all x, y, and z in S.

Condition (i) is called the closure property and (ii) the associative property. When the interpretation is clear, we often leave out the "·" and write xy instead of x·y.

A group G is a pair

G = (S, ·)

such that

(i) (S, ·) is a semigroup;
(ii) there is an identity element e in S for which e·x = x·e = x for all x in S;
(iii) for each x in S, there is an inverse x⁻¹ in S such that x·x⁻¹ = x⁻¹·x = e.

An example of a group is provided in Fig. 0.3, where the table entry in row x and column y defines the group product xy. It is seen that 0 is the identity element, 1 and 2 are inverses of each other, and the other elements are their own inverses. This group is commonly called S₃, the symmetric or permutation group on three objects.

      0 1 2 3 4 5
    0 0 1 2 3 4 5
    1 1 2 0 5 3 4
    2 2 0 1 4 5 3
    3 3 4 5 0 1 2
    4 4 5 3 2 0 1
    5 5 3 4 1 2 0

Fig. 0.3. Group S₃.

A subgroup (subsemigroup) of a group G is any nonvoid subset of G which is a group (semigroup) under the same operation. The subgroups of S₃ are {0}, {0, 1, 2}, {0, 3}, {0, 4}, {0, 5}, and S₃. If H is a subgroup of G, then the sets

Ha = {x | x = ya for y in H}, a in G,

are called right cosets of H. Similarly, we define left cosets aH for a in G. We illustrate with H = {0, 3}:

Right cosets: H = H0 = H3 = {0, 3}; H1 = H4 = {1, 4}; H2 = H5 = {2, 5}.
Left cosets: H = 0H = 3H = {0, 3}; 1H = 5H = {1, 5}; 2H = 4H = {2, 4}.

In a group G, two right (left) cosets Ha and Hb are either identical or disjoint, and |Ha| = |H| for all a in G.
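The coset computation can be sketched in Python. `MUL` below is our rendering of the S₃ multiplication table of Fig. 0.3 (treat it as an assumption of this sketch), indexed so that `MUL[x][y]` is the product xy:

```python
# S3 multiplication table: MUL[x][y] = x.y, elements 0..5, 0 = identity
MUL = [
    [0, 1, 2, 3, 4, 5],
    [1, 2, 0, 5, 3, 4],
    [2, 0, 1, 4, 5, 3],
    [3, 4, 5, 0, 1, 2],
    [4, 5, 3, 2, 0, 1],
    [5, 3, 4, 1, 2, 0],
]

def right_coset(H, a):
    # Ha = {ya | y in H}
    return {MUL[y][a] for y in H}

def left_coset(H, a):
    # aH = {ay | y in H}
    return {MUL[a][y] for y in H}

H = {0, 3}
```

With H = {0, 3} this reproduces the right cosets {1, 4}, {2, 5} and the left cosets {1, 5}, {2, 4} listed in the text, showing that H is not normal.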
To see this, observe that if

Ha ∩ Hb ≠ ∅,

then for some x, y in H,

xa = yb.

But then for any z in H, zx⁻¹y is in H, and therefore

za = zx⁻¹xa = zx⁻¹yb

is in Hb. Thus Ha ⊆ Hb. Similarly, we show that Ha ⊇ Hb. Since x ≠ y if and only if xa ≠ ya, we conclude that |H| = |Ha|.

A subgroup H of G is a normal subgroup if the left and right cosets are equal, aH = Ha. In this case we refer simply to cosets. The subgroup {0, 1, 2} is a normal subgroup of S₃, and {0, 1, 2} and {3, 4, 5} are its cosets.

A homomorphism h of G₁ = (S₁, ·) onto G₂ = (S₂, ·) is a mapping h: S₁ → S₂ such that

h(x·y) = h(x)·h(y).

If h is a one-to-one mapping, then it is an isomorphism between G₁ and G₂, and we write G₁ ≅ G₂.

A congruence relation R on a group G is an equivalence relation such that

x₁ R y₁ and x₂ R y₂ implies x₁x₂ R y₁y₂.

The equivalence classes of the congruence relation R on G define a partition π on the elements of G. In terms of this partition π, the congruence condition may be written

x₁ ≡ y₁ (π) and x₂ ≡ y₂ (π) implies x₁x₂ ≡ y₁y₂ (π),

or equivalently

B_π(x₁) = B_π(y₁) and B_π(x₂) = B_π(y₂) implies B_π(x₁x₂) = B_π(y₁y₂).

This last implication makes it possible to think of the blocks of π as forming a group with operation "·" defined by the equation

B_π(x)·B_π(y) = B_π(xy),

as the block B_π(xy) is seen to be determined by the blocks B_π(x) and B_π(y) independently of the particular elements x and y chosen from these blocks. We denote this group by G/π and refer to it as the quotient group of G with respect to the congruence relation given by π.

The three group concepts of a homomorphism, a normal subgroup, and a congruence relation can be made to correspond to each other in a natural one-to-one way. They are in effect three points of view on the same phenomenon. We now investigate this correspondence in more detail.
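The well-definedness of the block product, and the resulting quotient group, can be checked by exhaustion. The sketch below takes π = {0,1,2; 3,4,5}, the cosets of the normal subgroup {0, 1, 2} of S₃; `MUL` is our rendering of the Fig. 0.3 table and is an assumption of this sketch:

```python
# Quotient group S3/pi for pi = {0,1,2; 3,4,5}
MUL = [
    [0, 1, 2, 3, 4, 5],
    [1, 2, 0, 5, 3, 4],
    [2, 0, 1, 4, 5, 3],
    [3, 4, 5, 0, 1, 2],
    [4, 5, 3, 2, 0, 1],
    [5, 3, 4, 1, 2, 0],
]
blocks = [frozenset({0, 1, 2}), frozenset({3, 4, 5})]

def block_of(x):
    return next(i for i, b in enumerate(blocks) if x in b)

# Congruence check: the block of x.y depends only on the blocks of x and y,
# so the block product B(x).B(y) = B(xy) is well defined.
table = {}
for i, bi in enumerate(blocks):
    for j, bj in enumerate(blocks):
        prods = {block_of(MUL[x][y]) for x in bi for y in bj}
        assert len(prods) == 1
        table[(i, j)] = prods.pop()
```

The resulting `table` is the two-element group: block 0 (the subgroup) acts as identity, and block 1 is its own inverse.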
If R is a congruence relation on the group G, then

H = {x in G | x R e}

is a normal subgroup, and the equivalence classes of R are the cosets of H. Conversely, the cosets of a normal subgroup H of G define a congruence relation R on G.

To prove that H = {x ∈ G | x R e} is a subgroup, let x and y be in H. By definition,

x R e and y R e,

and therefore

xy R ee or xy R e,

and thus H is closed under the group operation. It is easy to verify that H contains the identity element e and inverses for all its elements. Therefore H is a group. To show that H is normal, we have to show that for all a in G, aH = Ha. If x is in H, then there exists a y in G such that

ax = ya or axa⁻¹ = y.

Since x R e, we know that axa⁻¹ R aea⁻¹ = e, and therefore y R e. Thus y is in H and therefore aH ⊆ Ha. Similarly, we can show that aH ⊇ Ha, and therefore H is normal.

To show that the cosets of a normal subgroup H define the equivalence classes of a congruence relation, let x₁ and x₂ be in aH and y₁ and y₂ be in bH. Then for some x₁′, x₂′, y₁′, and y₂′ in H, we must have the equations

x₁ = ax₁′, y₁ = by₁′, x₂ = ax₂′, y₂ = by₂′,

which combine to give

x₁y₁ = ax₁′by₁′ and x₂y₂ = ax₂′by₂′.

Since H is normal, there are elements x₁″ and x₂″ in H such that

x₁′b = bx₁″ and x₂′b = bx₂″,

and substituting these two equations into the previous equations, we obtain

x₁y₁ = abx₁″y₁′ and x₂y₂ = abx₂″y₂′.

Thus x₁y₁ and x₂y₂ are both in the coset abH, which means that

x₁y₁ R x₂y₂,

and the proof is completed. This result shows that every quotient group G/π is (isomorphic to) a quotient group with respect to a normal subgroup H, which is written G/H.

Finally, a homomorphism h of G₁ onto G₂ defines a congruence relation,

x R y if and only if h(x) = h(y),

and this congruence relation defines a normal subgroup

H = {x | h(x) = e}

of G₁ such that G₁/H ≅ G₂.

EXERCISE. Let {π_H} be the set of partitions defined by the right cosets of the subgroups of G. Show that {π_H} is a sublattice of the lattice of all partitions on G.
Thus, if H₁ and H₂ are subgroups and π₁ and π₂ are the corresponding sets of cosets, then the (right) cosets of the largest subgroup of G contained in H₁ and H₂ are given by π₁·π₂. Similarly, the (right) cosets of the smallest subgroup of G containing H₁ and H₂ are given by π₁ + π₂.

EXERCISE. Let {π_R} be the set of partitions defined by the congruence relations on a group G. Show that {π_R} is a sublattice of the lattice of partitions on G.

NOTES

A good basic reference on algebra is G. Birkhoff and S. MacLane [4], and a comprehensive treatment of lattice theory is given by G. Birkhoff [3].

1 MACHINES

1.1 DEFINITIONS

We begin with an abstract definition of a sequential machine which provides a mathematical model for discrete, deterministic computing devices with finite memory. Among the many physical devices modeled by sequential machines are digital computers, digital control units, and electronic circuits with synchronized delay elements. All these devices have the following common properties which are abstracted in the definition of a sequential machine.

1. A finite set of inputs which can be applied to the device in a sequential order.

2. A finite set of internal configurations or states the device can be in. These configurations correspond in the physical devices to various combinations of bits in memory, pulses in delay lines, or flip-flop settings and are usually reached at the end of some basic clock period.

3. The present internal configuration and the input uniquely determine the next configuration the device achieves. Thus, the state of the device is a function of the state the device was started in and the sequence of inputs which has been applied to it.

4. A finite set of outputs which are determined either by the configuration of the device or by the transition from one configuration to the next.

Thus, these devices map inputs into outputs, and the present output depends not only on the present input but also on the past history of inputs.
With this interpretation in mind, we present the two classic machine models and derive some of their elementary properties.

DEFINITION 1.1A. A Moore type sequential machine is a quintuple

M = (S, I, O, δ, λ)

where

(i) S is a finite nonempty set of states;
(ii) I is a finite nonempty set of inputs;
(iii) O is a finite nonempty set of outputs;
(iv) δ: S × I → S is called the transition (or next state) function;
(v) λ: S → O is called the output function.

DEFINITION 1.1B. A Mealy type sequential machine is a quintuple

M = (S, I, O, δ, λ)

where

(i) S is a finite nonempty set of states;
(ii) I is a finite nonempty set of inputs;
(iii) O is a finite nonempty set of outputs;
(iv) δ: S × I → S is called the transition function;
(v) λ: S × I → O is called the output function.

In those few cases when we want to emphasize the use of a Mealy machine, we designate the output function by β. Thus M = (S, I, O, δ, λ) can be either type of machine, but M = (S, I, O, δ, β) is a Mealy machine.

Notice that these definitions are the same except for part (v). Mathematically speaking, the Moore machine is a special case of the Mealy machine because we may take

λ(s, x) = λ(s)

for x in I and s in S. There is, however, a subtle difference that this formalism glosses over: the outputs of a Moore machine are thought of as occurring while the internal configuration indicates a state, whereas the outputs of a Mealy machine are thought of as occurring while the machine is going through a transition between states. This difference of interpretation is important, because a later definition does introduce a mathematical difference.

The two most common representations of a Moore machine M = (S, I, O, δ, λ) are illustrated in Fig. 1.1, where S = {r, s, t}, I = {a, b}, and O = {0, 1}.

Fig. 1.1. Representation of a Moore machine by a flow table and by a state graph. [The flow table has a row for each state r, s, t, a column for each input a, b, and an output column; in particular δ(r, a) = s, λ(r) = λ(s) = 0, and λ(t) = 1.]
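A Moore machine is conveniently prototyped as a pair of dictionaries. The sketch below keeps δ(r, a) = s and the outputs λ(r) = λ(s) = 0, λ(t) = 1 from Fig. 1.1; the remaining transition entries are our own illustrative choices, not the book's:

```python
# A Moore machine M = (S, I, O, delta, lam) as Python dictionaries.
# Only delta('r','a') = 's' and the output column are taken from Fig. 1.1;
# the other delta entries are illustrative assumptions.
S, I, O = {'r', 's', 't'}, {'a', 'b'}, {0, 1}
delta = {('r', 'a'): 's', ('r', 'b'): 'r',
         ('s', 'a'): 't', ('s', 'b'): 'r',
         ('t', 'a'): 't', ('t', 'b'): 's'}
lam = {'r': 0, 's': 0, 't': 1}

def run(state, inputs):
    """Feed an input sequence and collect the Moore output after each step."""
    outs = []
    for x in inputs:
        state = delta[(state, x)]
        outs.append(lam[state])
    return state, outs
```

For instance, starting in r and applying the sequence aab drives the machine r → s → t → s, emitting 0, 1, 0.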
On the left is a flow table, with rows corresponding to each state, columns corresponding to each input, and table entries to indicate each transition. For example,

δ(r, a) = s.

Furthermore, there is an output column listing the value of λ for each state of M; for example,

λ(r) = λ(s) = 0 and λ(t) = 1.

On the right, the same information is placed on a state graph. The arrow labeled a pointing from the r node to the s node indicates that

δ(r, a) = s,

and the 0 at the r node indicates that

λ(r) = 0.

Similarly, there are two common representations of a Mealy machine, as illustrated in Fig. 1.2. The difference in the flow table is that an output column must be included for each input.

Fig. 1.2. Representation of a Mealy machine by a flow table and by a state graph.

The a/0 label on the arrow between the r and s nodes in the state graph indicates that

δ(r, a) = s and β(r, a) = 0.

Notice how the state graphs emphasize when the output is thought to occur. Although the state graph has proven to be a valuable research tool, the flow table is superior for most of the concepts developed here, and it is used almost exclusively throughout the remainder of this book.

In many discussions we are not directly concerned with the output of the machine, but are primarily interested in the properties of the state transitions. In order to study state transitions separately, we make the following definition.

DEFINITION 1.2. A state machine is a triplet

M = (S, I, δ)

where

(i) S is a finite nonempty set of states;
(ii) I is a finite nonempty set of inputs;
(iii) δ: S × I → S is called the transition function.

Given a machine description, we imagine a "device" which is built along the general format of Fig. 1.3, with "wires" to carry information in the direction of the arrows, "combinational logic" to compute the functions δ and λ, and a "storage element" to remember the present state of the machine.

Fig. 1.3. Schematic circuit for M = (S, I, O, δ, λ).
In the Moore case, the wire from the input source to the output logic is omitted. In the case of a state machine, the output logic is also omitted.

We have found that it is often useful to think in terms of a little machine chugging away from state to state rather than in terms of abstract sets and mappings. For this reason, we now incorporate this view into our formalism.

DEFINITION 1.3. Given a machine M, we say that state s goes into state t under input x if and only if t = δ(s, x). For a subset B of S we define

δ(B, x) = {s | s = δ(t, x), t in B},

and we say that the subset B goes into the set B′ under input x if and only if δ(B, x) ⊆ B′.

There are a few more concepts we wish to introduce at this time.

DEFINITION 1.4. A machine

M′ = (S′, I′, O′, δ′, λ′)

is a submachine of the machine

M = (S, I, O, δ, λ)

if and only if

S′ ⊆ S, I′ ⊆ I, O′ ⊆ O,
δ′: S′ × I′ → S′ and δ′ = δ restricted to S′ × I′, and
λ′: S′ × I′ → O′ and λ′ = λ restricted to S′ × I′

(and similarly for state machines).

DEFINITION 1.5. Machine M is strongly connected if and only if, for all submachines M′ of M, I′ = I implies S′ = S. Thus, a machine is strongly connected if and only if no proper subset of S of M goes into itself under all inputs or, equivalently, if and only if each state can be sent into any other state by a properly chosen sequence of inputs.

DEFINITION 1.6. Two machines of the same type,

M = (S, I, O, δ, λ) and M′ = (S′, I′, O′, δ′, λ′),

are isomorphic if and only if there exist three one-to-one onto mappings

f₁: S → S′, f₂: I → I′, f₃: O → O′

such that

f₁[δ(s, x)] = δ′[f₁(s), f₂(x)]
f₃[λ(s, x)] = λ′[f₁(s), f₂(x)].

We refer to the triple of mappings (f₁, f₂, f₃) as an isomorphism between M and M′. Thus two sequential machines are isomorphic if and only if they are identical except for a renaming of the states, inputs, and outputs.
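The second formulation of Definition 1.5 (every state reachable from every state) suggests a direct mechanical check; the two-state transition table here is our own small example, not from the text:

```python
# Strong-connectedness check via reachability (Def. 1.5, second formulation).
# The transition table is an illustrative example.
delta = {(1, 'a'): 2, (1, 'b'): 1, (2, 'a'): 1, (2, 'b'): 2}

def reachable(start, delta):
    """States reachable from `start` by some (possibly empty) input sequence."""
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for (t, x), u in delta.items():
            if t == s and u not in seen:
                seen.add(u)
                frontier.append(u)
    return seen

states = {s for (s, x) in delta}
strongly_connected = all(reachable(s, delta) == states for s in states)
```

Input a swaps the two states, so each state reaches the other and the machine is strongly connected.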
Machine isomorphism is the most elementary case of two machines imitating each other through the use of "combinational circuits." The term combinational circuit refers to a device which performs the same fixed mapping of inputs into outputs, regardless of the past input history. Thus a combinational circuit may be thought of as a one-state sequential machine. If we have a machine M′ that is isomorphic to M, then by just placing a combinational circuit in front of the machine M′ and a combinational circuit in back of the machine, we can convert it to a machine that behaves like M. The combinational circuit in front computes f₂, mapping I into I′, and the circuit in the back computes f₃⁻¹, mapping O′ into O. The schematic representation of this conversion of M′ to M by use of two combinational circuits is shown in Figs. 1.4 and 1.5. The resemblance between M and M′ is so strong that we often think of them as being two names for the "same" machine.

Fig. 1.4. Schematic representation of machine M′.

Later in this chapter, we define the most general concept of when one machine can simulate (realize) another machine by means of two combinational circuits placed in front and back of the machine. The next and final concept of this section serves as an intermediate realization concept and is the key idea behind the elementary decompositions introduced later.

DEFINITION 1.7. A sequential machine

M′ = (S′, I′, O′, δ′, λ′)

is a homomorphic image of the machine

M = (S, I, O, δ, λ)

if and only if there exist three onto mappings

h₁: S → S′, h₂: I → I′, h₃: O → O′

such that

h₁[δ(s, a)] = δ′[h₁(s), h₂(a)]
h₃[λ(s, a)] = λ′[h₁(s), h₂(a)].

We refer to the triple (h₁, h₂, h₃) of mappings as a homomorphism of M onto M′.

To illustrate these ideas, consider machines A and A′ shown in Figs. 1.6 and 1.7. It is easily checked that the following three mappings define a homomorphism of A onto A′.
[Figs. 1.6 and 1.7: state tables of machine A (states 1–6, inputs a, b, c) and machine A' (states p, q, r, s, inputs 0, 1); the tables are not reproduced here.]

h₁: 1 → p, 2 → p, 3 → q, 4 → r, 5 → s, 6 → s
h₂: a → 0, b → 0, c → 1
h₃: 0 → z₀, 1 → z₁

For example,

h₁[δ(1, a)] = h₁(5) = s = δ'[h₁(1), h₂(a)] = δ'(p, 0) = s

and

h₃[λ(1, a)] = z₀ = λ'[h₁(1), h₂(a)] = λ'(p, 0).

DEFINITION 1.8. A state machine M' = (S', I', δ') is a homomorphic image of the state machine M if and only if there exist two onto mappings

h₁: S → S', h₂: I → I'

such that

h₁[δ(s, x)] = δ'[h₁(s), h₂(x)].

In applications we are often interested in homomorphisms between machines which have the same input and output alphabets. In such cases, if h₂ and h₃ are identity mappings, we refer to this homomorphism as a state homomorphism. If M' is a state machine, then only h₂ has to be an identity mapping to have a state homomorphism.

If M' is a homomorphic image of M, then by using two combinational circuits, M can be used to simulate M'. The schematic representation of this simulation is shown in Fig. 1.8. If h₂ does not have a unique inverse, then h₂⁻¹(a) is interpreted to be any input symbol which is mapped onto a by h₂. Intuitively speaking, the machine M does more than M' can, but it can be modified by attaching combinational circuits to imitate its homomorphic image M'. It will be seen that this simple interpretation provides the basis for motivating and understanding many later results.

[Fig. 1.8. Simulation of the homomorphic image M' of M.]

EXERCISE. Show that the homomorphic image of a strongly connected machine is strongly connected.
EXERCISE. Show that if M₂ is a homomorphic image of M₁ and M₃ is a homomorphic image of M₂, then M₃ is a homomorphic image of M₁.

1.2 EQUIVALENCE

Any notion of equivalence between machines, be they both Moore, both Mealy, or one of each type, must be based on some precise concept as to when two machines can do the "same thing." Intuitively, we think of machines doing the same thing if the same inputs give the same outputs. In this section we explore equivalence from the ground up, beginning with input sequences and states.

NOTATION. If I is an input set, we let 𝓘 represent the set of all finite non-null input sequences and let 𝓘_Λ represent the set of all finite sequences, including the null (or length zero) sequence, which we represent by the symbol Λ. Symbolically,

𝓘_Λ = 𝓘 ∪ {Λ}.

We generally represent an input sequence by an input symbol with a bar over it or as a string of input symbols without punctuation. For example, we might write x̄ = x₁x₂ ... xₙ for xᵢ in I, and we would say that x̄ has length n.

We now extend the next-state function δ to input sequences in the natural manner and expand λ in a way suitable for the study of equivalence.

DEFINITION 1.9. Let δ̄ be the function δ of Def. 1.1 extended inductively over the set S × 𝓘_Λ as follows:

(i) δ̄(s, Λ) = s for all s in S;
(ii) if δ̄ is defined for sequences of length k ≥ 0 and x̄ = x̄'x where x̄' is of length k and x is in I, let

δ̄(s, x̄) = δ(δ̄(s, x̄'), x).

Let

(Moore case) λ̄(s, x̄) = λ(δ̄(s, x̄)) for x̄ in 𝓘_Λ,

and

(Mealy case) λ̄(s, x̄x) = λ(δ̄(s, x̄), x) for x̄ in 𝓘_Λ and x in I.

In plain English, δ̄(s, x̄) is computed by starting the machine in state s, feeding in the input sequence x̄, and looking at the final state. In the Moore case, λ̄(s, x̄) is the output associated with this last state. In the Mealy case, λ̄(s, x̄) is associated with the final transition. Since there are no transitions associated with the null sequence, λ̄ is only defined for x̄ ∈ 𝓘.
DEFINITION 1.10. If M₁ = (S₁, I, O, δ₁, λ₁) and M₂ = (S₂, I, O, δ₂, λ₂) are two Moore (Mealy) machines with the same input and output alphabets, then states s₁ in S₁ and s₂ in S₂ are said to be equivalent if and only if

λ̄₁(s₁, x̄) = λ̄₂(s₂, x̄)

for all x̄ in 𝓘_Λ (Moore case) or all x̄ in 𝓘 (Mealy case).

Notice that this definition also applies to the special case where M₁ = M₂. It is easily verified that this is an equivalence relation among the states of all machines with the input set I and output set O.

LEMMA 1.1. Equivalent states s₁ and s₂ of machines M₁ and M₂ go into equivalent states under each input.

Proof. Let x̄ be an arbitrary element of 𝓘_Λ. Then by Defs. 1.9 and 1.10

λ̄₁[δ₁(s₁, x), x̄] = λ̄₁(s₁, xx̄) = λ̄₂(s₂, xx̄) = λ̄₂[δ₂(s₂, x), x̄]

and thus δ₁(s₁, x) is equivalent to δ₂(s₂, x).

DEFINITION 1.11. Two machines M₁ and M₂ (of the same type) are equivalent if and only if each s₁ in S₁ has an equivalent state s₂ in S₂ and each s₂ in S₂ has an equivalent state s₁ in S₁.

One example of equivalent machines is shown in Fig. 1.9. Here states r, s, and u are equivalent and states t, v, and w are equivalent.

[Fig. 1.9. Two equivalent machines B' and B''; the tables are not reproduced here.]

DEFINITION 1.12. A machine M is reduced if and only if state s₁ equivalent to state s₂ implies that s₁ = s₂.

The importance of this definition is brought home by the following:

THEOREM 1.1. Given a machine M₁, there is a reduced machine M equivalent to M₁. Furthermore, if M₂ is any machine equivalent to M₁, then there exists a state homomorphism which maps M₂ onto M.

Proof. We divide S₁ into classes Bᵢ of equivalent states and take S = {Bᵢ}. Let δ(Bᵢ, x) be the class Bⱼ such that δ₁(s, x) is in Bⱼ for some s in Bᵢ. By Lemma 1.1, all choices of s give the same Bⱼ. Take λ(Bᵢ) = λ₁(s) or λ(Bᵢ, x) = λ₁(s, x) for any s in Bᵢ. By definition of equivalent states, any choice of s gives the same value. Thus we have defined a reduced machine M = (S, I, O, δ, λ) equivalent to M₁.
Let M₂ be equivalent to M₁. Then by the definition of equivalent machines, each s in S₂ must be equivalent to the states in some Bᵢ, and there can only be one such Bᵢ because states in other classes Bⱼ are not equivalent to the states in Bᵢ. We may thus set

h₁(s) = Bᵢ,

which defines the state homomorphism of M₂ onto M.

One illustration of the proof is given in Fig. 1.10, which shows a reduced machine B for those of Fig. 1.9. The state homomorphism map

h₁: {u, v, w} → {B₁, B₂}

is given by h₁(u) = B₁ and h₁(v) = h₁(w) = B₂.

[Fig. 1.10. Reduced machine B for B' and B''.]

COROLLARY 1.1.1. If two reduced machines are equivalent, then they are isomorphic. We are therefore justified in referring to the reduced machine.

Proof. The desired isomorphism (f₁, f₂, f₃) between two equivalent reduced machines is given by f₁ = h₁ (of Theorem 1.1) and f₂ and f₃ equal to identity mappings.

These results show that, for any set of equivalent machines, there exists a unique equivalent reduced machine, and this machine has the smallest number of states among all the machines equivalent to it, because any other equivalent machine with the same number of states must be isomorphic to this reduced machine.

We are now ready to discuss "equivalence" between Moore and Mealy machines. We prefer to use the term "similar" since there is no true equivalence relation.

DEFINITION 1.13. A Moore machine M₁ = (S₁, I, O, δ₁, λ₁) and a Mealy machine M₂ = (S₂, I, O, δ₂, λ₂) defined on the same input and output sets are said to be similar if and only if for each state s₁ in S₁ (or s₂ in S₂), there is a state s₂ in S₂ (s₁ in S₁) such that

λ̄₁(s₁, x̄) = λ̄₂(s₂, x̄) for all x̄ ∈ 𝓘.

Note that nothing is said about λ̄₁(s₁, Λ) = λ₁(s₁). Thus the output of a given state may not be compared with any Mealy output, and consequently several nonequivalent Moore machines may be similar to a single Mealy machine. Figure 1.11 provides a simple example of this.
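The extended functions δ̄ and λ̄ of Definition 1.9, on which state equivalence (Definition 1.10) rests, can be sketched directly. This is our own Python encoding, using an invented mod-3 counter as the example machine; the empty tuple plays the role of the null sequence Λ.

```python
# Definition 1.9 made executable: delta_bar runs the machine over a whole
# input sequence; lam_bar gives the extended output in each convention.
def delta_bar(delta, s, xs):
    for x in xs:
        s = delta[(s, x)]
    return s

def lam_bar_moore(delta, lam, s, xs):
    # output attached to the final state; defined even for the null sequence
    return lam[delta_bar(delta, s, xs)]

def lam_bar_mealy(delta, lam, s, xs):
    # output attached to the final transition; xs must be non-null
    *prefix, last = xs
    return lam[(delta_bar(delta, s, prefix), last)]

# A mod-3 counter: the state is the running sum of the inputs modulo 3.
delta = {(s, x): (s + x) % 3 for s in range(3) for x in (0, 1)}
lam_moore = {s: s for s in range(3)}
lam_mealy = {(s, x): (s + x) % 3 for s in range(3) for x in (0, 1)}
```

For instance, `delta_bar(delta, 0, (1, 1))` is 2, and both extended output functions agree with it on that sequence, matching the intuition that this Moore machine and its Mealy rewriting are similar.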
[Fig. 1.11. A Mealy machine M₁ similar to two non-equivalent Moore machines M₂ and M₃.]

THEOREM 1.2. All the Mealy machines similar to a given Moore machine M = (S, I, O, δ, λ) are equivalent. One such machine is given by M* = (S, I, O, δ, λ*) where λ*(s, x) = λ(δ(s, x)).

Proof. Let M₁ and M₂ be two Mealy machines similar to M. Let s₁ be an arbitrary element of S₁. Let s be the state in S such that

λ̄₁(s₁, x̄) = λ̄(s, x̄) for all x̄ in 𝓘,

and let s₂ in S₂ be such that

λ̄₂(s₂, x̄) = λ̄(s, x̄) for all x̄ in 𝓘.

But then

λ̄₁(s₁, x̄) = λ̄₂(s₂, x̄) for all x̄ in 𝓘,

and s₁ and s₂ are equivalent. By a similar argument, it follows that for every state in S₂ there is an equivalent state in S₁, and thus M₁ and M₂ are equivalent machines. The second statement is immediate from Definitions 1.10 and 1.11.

Once again, some subtleties of the Moore–Mealy contrast come to the surface. To consider Definition 1.1A as a special case of 1.1B, we took

λ(s, x) = λ(s);

but to construct a Mealy machine similar to a Moore machine, we see that we must take

λ(s, x) = λ(δ(s, x)).

Thus Definition 1.13 introduces a second way of viewing a Moore machine as a Mealy machine and exposes the limitation of regarding the Moore machine as a special case of the Mealy machine.

Next we take the state point of view and consider the advantages of converting from a Moore to a Mealy machine.

THEOREM 1.3. Let M = (S, I, O, δ, λ) be a reduced Moore machine and M' be a similar reduced Mealy machine. Let {Bᵢ} be the equivalence classes on S such that s₁ and s₂ are in the same Bᵢ if and only if

δ(s₁, x) = δ(s₂, x) for all x in I.

Then M' has one state corresponding to each Bᵢ.

Proof. We construct M' using {Bᵢ} for states. By Theorem 1.2, M* = (S, I, O, δ, λ* = λδ) is a Mealy machine similar to M, and if we show that the Bᵢ are the classes of equivalent states on M*, then the construction in the proof of Theorem 1.1 will give the desired representation of M' and the map of S into {Bᵢ}. Suppose that

δ(s₁, x) = δ(s₂, x) for all x in I.
Then obviously

δ̄(s₁, x̄) = δ̄(s₂, x̄) for all x̄ in 𝓘,

and by definition of M*,

λ̄*(s₁, x̄) = λ(δ̄(s₁, x̄)) = λ(δ̄(s₂, x̄)) = λ̄*(s₂, x̄)

for all x̄ in 𝓘; hence s₁ and s₂ are equivalent states in M*. On the other hand, if

t₁ = δ(s₁, x), t₂ = δ(s₂, x), and t₁ ≠ t₂,

then, since M is reduced, there exists an x̄ in 𝓘_Λ such that

λ̄(t₁, x̄) ≠ λ̄(t₂, x̄).

But then

λ̄*(s₁, xx̄) = λ(δ̄(s₁, xx̄)) = λ̄(t₁, x̄) ≠ λ̄(t₂, x̄) = λ(δ̄(s₂, xx̄)) = λ̄*(s₂, xx̄)

and xx̄ is in 𝓘; hence s₁ and s₂ are not equivalent. Thus the Bᵢ are the classes of equivalent states of M*, as was to be shown.

COROLLARY 1.3.1. A reduced Moore machine has the same number of states as the corresponding reduced Mealy machine if and only if all the next-state rows in its state table are distinct.

Thus, we have an easy test for deciding if the Mealy machine gives fewer states. To illustrate this result, consider the reduced Moore machine C of Fig. 1.12. For this machine states 1 and 3 have identical next-state rows, and thus it can be replaced by a similar five-state Mealy machine, which is shown in Fig. 1.13. Note that for this machine, states 4 and 5 have identical next-state rows, but these states are not equivalent, and it is easily seen that this is a reduced machine.

[Fig. 1.12. Machine C. Fig. 1.13. Mealy machine similar to C; the tables are not reproduced here.]

1.3 REALIZATIONS

So far, we have treated machines on an abstract level and have discussed the case in which two flow tables specify the same machine from the input-output point of view. The purpose of this section is to define and elucidate the relation between these ideas and physical circuits, which so far have only been mentioned informally. It is through such physical interpretation that the theoretical results become interesting and useful to people interested in physical applications.
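Before turning to realizations, the construction in the proof of Theorem 1.1 can be carried out mechanically. The sketch below (our own encoding, for Moore machines) is a partition refinement: start from the partition of states by output and split blocks until every input maps each block into a single block; the surviving blocks Bᵢ are the states of the reduced machine.

```python
def reduce_moore(states, inputs, delta, lam):
    # initial split: states with equal output share a block label
    block_of = {s: lam[s] for s in states}
    while True:
        sig = {s: (block_of[s],
                   tuple(block_of[delta[(s, x)]] for x in inputs))
               for s in states}
        if len(set(sig.values())) == len(set(block_of.values())):
            break                               # refinement has stabilized
        block_of = sig                          # split blocks by signature
    blocks = {}
    for s in states:
        blocks.setdefault(block_of[s], []).append(s)
    return list(blocks.values())

# Six states tracking a sum mod 6, but only its parity is ever output,
# so states congruent mod 2 are equivalent.
states, inputs = list(range(6)), (0, 1)
delta = {(s, x): (s + x) % 6 for s in states for x in inputs}
lam = {s: s % 2 for s in states}
blocks = reduce_moore(states, inputs, delta, lam)
```

Here the procedure returns the two blocks {0, 2, 4} and {1, 3, 5}, which are the states of the reduced machine promised by Theorem 1.1.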
The next two definitions describe how a machine M', which is not necessarily equivalent to M, can be used to imitate M after a renaming of the inputs and outputs. Again it is helpful to think of the mappings performed by ι and ζ of this definition as combinational circuits placed in front and back of the machine M' to make it behave like M.

DEFINITION 1.14. If M and M' are two machines, then the triple (α, ι, ζ) is said to be an assignment of M into M' if and only if

α is a mapping of S into nonvoid subsets of S',
ι is a mapping of I into I',
ζ is a mapping from O' into O,

and these mappings satisfy the following relations:

(i) δ'[α(s), ι(x)] ⊆ α[δ(s, x)] for all s in S and x in I,
(ii) ζ[λ'(s')] = λ(s) for all s' in α(s) (Moore case), or
(ii') ζ[λ'(s', ι(x))] = λ(s, x) for all s' in α(s) and x in I (Mealy case).

DEFINITION 1.15. A machine M' is said to be a realization of machine M if and only if there is an assignment (α, ι, ζ) of M into M'. If M and M' are state machines, then we require an α and ι satisfying condition (i) and such that α maps S into disjoint subsets of S'.

The next result states that if M' is a realization of M, then M', started in a state of α(s), behaves like M under the interpretation of ι and ζ when M is started in s.

THEOREM 1.4. If M' is a realization of M through the assignment (α, ι, ζ), then for s' in α(s) and x̄ in 𝓘,

λ̄(s, x̄) = ζ[λ̄'(s', ῑ(x̄))],

where ῑ(x̄) is the sequence obtained by applying ι to each symbol of x̄.

Proof. Left as an exercise.

THEOREM 1.5. If M and M' are equivalent machines, then M is realized by M' and M' is realized by M.

Proof. To prove this we show that there exists an assignment of M into M' and vice versa. Let ι and ζ be identity mappings and let

α(s) = {s' | s' is equivalent to s}.

Then

δ'[α(s), x] ⊆ α[δ(s, x)]

since, by Lemma 1.1, equivalent states go into equivalent states. Since s' and s are equivalent,

λ̄'(s', x̄) = λ̄(s, x̄).

Thus M' realizes M. A similar argument shows that M realizes M'.

EXERCISE. Show that if M₁ realizes M₂ and M₂ realizes M₃, then M₁ realizes M₃.
EXERCISE. Show that if M' realizes M and s' is in α(s₁) and s' is in α(s₂), then s₁ and s₂ are equivalent states of M. Thus, if M' realizes a reduced machine, then for any s' there exists at most one s of M such that s' is in α(s).

It should be noted that if M' realizes M, then these two machines do not necessarily have to be isomorphic or related by homomorphisms. There is, though, as shown in the next theorem, a homomorphism which relates M' to the reduced machine equivalent to M in the case when ι is a one-to-one mapping.

THEOREM 1.6. Machine M' is a realization of machine M such that ι is a one-to-one mapping if and only if the reduced form of M is a homomorphic image of a submachine M'' of M'.

Proof. We can assume that M is a reduced machine. Since a machine realizes its reduced form (by Theorem 1.5) and since the relation "realized by" is transitive, M' realizes the reduced form of M. Furthermore, if the mapping ι is one-to-one in the realization of M by M', then it is one-to-one in the realization of the reduced form of M by M'. Thus we have to show that there exists a submachine M'' of M' which can be mapped homomorphically onto M. Let

S'' = ∪ {α(s) | s ∈ S of M}
I'' = ∪ {ι(x) | x ∈ I of M}
O'' = O'
δ'' = δ' restricted to S'' × I''
λ'' = λ' restricted to S'' × I''.

Then M'' = (S'', I'', O'', δ'', λ'') is a submachine of M'. By definition of (α, ι, ζ), s' in S'' implies that

δ'[α(s), ι(x)] ⊆ α[δ(s, x)]

and therefore

δ''[α(s), ι(x)] ⊆ S''.

To see that M'' can be mapped homomorphically onto M, define:

h₁(s') = s if and only if s' is in α(s),
h₂ = ι⁻¹,
h₃ = ζ.

Since, for all s' in α(s), δ''[s', ι(x)] is in α[δ(s, x)] and ζ[λ''(s', ι(x))] = λ(s, x), we obtain

h₁[δ''(s', ι(x))] = δ(s, x) = δ[h₁(s'), h₂(ι(x))]

and

h₃[λ''(s', ι(x))] = λ(s, x) = λ[h₁(s'), h₂(ι(x))].

Thus (h₁, h₂, h₃) is a homomorphism.

Conversely, if M is a homomorphic image of M'', then the homomorphism h = (h₁, h₂, h₃) defines an assignment of M into M' if we let

α(s) = {s' | h₁(s') = s},
ι(x) = some x' such that h₂(x') = x,
ζ = h₃.

If s' is in α(s), then by definition of α, h₁(s') = s. Therefore,

h₁[δ''(s', ι(x))] = δ[h₁(s'), h₂(ι(x))] = δ(s, x)

and thus δ''[s', ι(x)] is in α[δ(s, x)]. Therefore

δ''[α(s), ι(x)] ⊆ α[δ(s, x)].

Finally,

ζ[λ''(s', ι(x))] = λ(s, x)

and (α, ι, ζ) is an assignment of M into M''; but then M' realizes M.

EXERCISE. Show that Theorem 1.6 does not hold when ι is not one-to-one.

We now wish to name a special case of Definition 1.15.

DEFINITION 1.16. Machine M' is said to realize the state behavior of machine M if and only if M' realizes M with an assignment (α, ι, ζ) such that α maps each state of M onto a single state of M' and α is one-to-one. In this case, the relations of Definition 1.14 reduce to the following equations:

(i) δ'[α(s), ι(x)] = α[δ(s, x)] for all s in S and x in I,
(ii) ζ[λ'(α(s))] = λ(s) for s in S (Moore case),
(ii') ζ[λ'(α(s), ι(x))] = λ(s, x) for s in S and x in I (Mealy case).

We shall see that for some applications a complete analysis can be made for realizations of this type, whereas the analysis for other realizations can be done only by more cumbersome techniques or by good guesswork. Further, most of the realizations of the general type have too many states and require extra memory to build. Considerable attention will be focused on such matters in later sections.

We show two realizations for machine D of Fig. 1.14. The more general type is illustrated in Fig. 1.15, and a state behavior realization is given in Fig. 1.16. The states, inputs, and outputs of machines D' and D'' are expressed in terms of binary variables, and the mappings α, ι, and ζ explain how these are interpreted.

[Fig. 1.14. Machine D. Fig. 1.15. Machine D', a realization of machine D, with α(1) = {00, 10} and α(3) = {01}. Fig. 1.16. Machine D'', a state behavior realization of machine D. The tables are not reproduced here.]
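Definition 1.14 can be turned into a mechanical check. The sketch below is our own encoding, with machines and mappings invented for illustration: α maps each state of M to a nonvoid set of states of M', ι renames inputs, and ζ decodes outputs (Mealy case).

```python
def is_assignment(alpha, iota, zeta, M, Mprime):
    states, inputs, delta, lam = M
    delta_p, lam_p = Mprime
    for s in states:
        for x in inputs:
            # condition (i): delta'[alpha(s), iota(x)] inside alpha[delta(s, x)]
            image = {delta_p[(sp, iota[x])] for sp in alpha[s]}
            if not image <= alpha[delta[(s, x)]]:
                return False
            # condition (ii'): zeta decodes every M' output to the M output
            if any(zeta[lam_p[(sp, iota[x])]] != lam[(s, x)] for sp in alpha[s]):
                return False
    return True

# M flips a bit and outputs the new value; M' does the same on the low bit of
# a two-bit state, so each state of M is represented by two states of M'.
delta = {(s, 'x'): 1 - s for s in (0, 1)}
lam = dict(delta)
delta_p = {((a, b), 'x'): (a, 1 - b) for a in (0, 1) for b in (0, 1)}
lam_p = {((a, b), 'x'): 1 - b for a in (0, 1) for b in (0, 1)}
alpha = {s: {(a, s) for a in (0, 1)} for s in (0, 1)}
iota, zeta = {'x': 'x'}, {0: 0, 1: 1}
M = ([0, 1], ['x'], delta, lam)
Mprime = (delta_p, lam_p)
```

With these mappings `is_assignment` returns `True`; corrupting ζ (say, swapping 0 and 1) makes condition (ii') fail, so the same M' no longer realizes M under that interpretation.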
After the next definition, we show how these can be expressed in function form and synthesized by a physical device.

DEFINITION 1.17. Next-state functions

Yᵢ: {0, 1}ⁿ × {0, 1}ᵐ → {0, 1}, 1 ≤ i ≤ n,

and output functions

Zⱼ: {0, 1}ⁿ → {0, 1} (Moore case) or Zⱼ: {0, 1}ⁿ × {0, 1}ᵐ → {0, 1} (Mealy case), 1 ≤ j ≤ p,

are said to define the following machine:

(i) S = {(y₁, ..., yₙ)}, the set of all n-tuples on {0, 1};
(ii) I = {(x₁, ..., xₘ)}, the set of all m-tuples on {0, 1};
(iii) O = {(z₁, ..., z_p)}, the set of all p-tuples on {0, 1};
(iv) δ(ȳ, x̄) = (Y₁(ȳ, x̄), ..., Yₙ(ȳ, x̄));
(v) λ(ȳ) = (Z₁(ȳ), ..., Z_p(ȳ)) or λ(ȳ, x̄) = (Z₁(ȳ, x̄), ..., Z_p(ȳ, x̄)).

Thus, machines D' and D'' are each defined by a set of Boolean equations giving Y₁, Y₂, and z in terms of y₁, y₂, and x, with D'' requiring the simpler set. We say that a set of such equations realizes a machine M if it defines a machine M' that realizes M; the equations for D' and for D'' both realize machine D. The term "realize" is used because the logical equations determine a schematic circuit diagram from which an engineer can build a physical device to behave like the machine. The equations for D' lead to the diagram of Fig. 1.17.

[Fig. 1.17. Circuit diagram for machine D', built from unit delays, "or" gates, "and" gates, and inverters.]

The interpretation of Fig. 1.17 is that, at each time pulse (or end of fixed time intervals), the contents of delays y₁ and y₂ are released, combined with the input to compute the next values, and the new values are stored in the delays.

1.4 STATE ASSIGNMENT PROBLEM

One of the central problems in the physical realization of sequential machines is the selection of "desirable" (binary) codes to represent the internal states of the machine. Most often "desirable" means the fewest number of components in the resulting realization, though other criteria can be imposed and will become more prevalent with changing technical demands. A traditional measure of complexity is the number of diodes used in the realization, where one diode is required for each wire entering an "and" or "or" gate. Thus the circuit of Fig. 1.17 is said to require seven diodes.
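A machine in the sense of Definition 1.17 can be simulated directly from its next-state and output functions. The sketch below is our own example (a 2-bit shift register), not the equations for machine D'; the Mealy output is computed from the present state before the delays are updated, mirroring the timing interpretation of Fig. 1.17.

```python
def run(Y_funcs, Z_func, y0, xs):
    """Clock the machine through the input bits xs; return (final state, outputs)."""
    y, outputs = y0, []
    for x in xs:
        outputs.append(Z_func(y, x))           # Mealy output from present state
        y = tuple(Y(y, x) for Y in Y_funcs)    # then store the next values
    return y, outputs

# Y1 = x, Y2 = y1: the input shifts in; Z = y2: the bit shifted out.
Y = (lambda y, x: x, lambda y, x: y[0])
Z = lambda y, x: y[1]
```

Feeding the input stream 1, 1, 0, 0 from state (0, 0) shifts the two 1s through the register, so they emerge at the output two steps later.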
To illustrate the large variation in complexity that different state assignments can have on machine realization, consider machine E of Fig. 1.18. Two state assignments for E are shown in Fig. 1.19:

First assignment: 1 → 000, 2 → 001, 3 → 010, 4 → 011, 5 → 100, 6 → 101.
Second assignment: 1 → 110, 2 → 101, 3 → 100, 4 → 000, 5 → 001, 6 → 010.

[Fig. 1.18. Machine E. Fig. 1.19. Two state assignments for machine E.]

The logical equations which define the machine obtained by means of the first assignment are considerably longer than those obtained from the second. (In both cases, there was some latitude in assigning values of Yᵢ over those values of the yᵢ which do not represent a state of the machine. Using standard Venn diagram techniques, we assigned these values so as to simplify the equations. In the terminology of the next section, we filled the "don't care conditions" judiciously.) The contrast between these equations is obvious, and in a physical realization, the first assignment requires about three times as many diodes as the realization using the second assignment.

In this example, the second state assignment leads to simpler logical equations because the functional dependence between the state variables has been reduced. It is seen that Y₁ does not depend on y₂ and y₃, and Y₂ and Y₃ do not depend on y₁. The corresponding schematic realization in Fig. 1.20 shows that this realization actually consists of two independent machines operating in parallel (these concepts are made precise in the next chapter).

[Fig. 1.20. Realization of machine E using the second assignment.]

This example, which is not an isolated one, shows that machine decomposition can lead to economical state assignments.
Furthermore, one feels intuitively that the reduction of the number of state variables and inputs on which the state variables depend should simplify the logical circuits in the corresponding realization. Because the structure theory for sequential machines is concerned with realizations of machines from smaller component machines and with the general understanding of logical dependence or information flow, structure theory can be viewed as an approach to the state assignment problem. Thus, this problem supplies additional motivation for the development of a structure theory. Explicit applications of structure theory to the state assignment problem are discussed later.

1.5 "DON'T CARE" CONDITIONS

In applying machine theory to engineering problems, it sometimes happens that the problem requirements do not completely specify a machine table. The machine designer may thereby be left with several alternatives for completing the machine table. The most common option is the so-called "don't care" (or d.c.) condition. These are generally represented by dashes in the flow table, such as in Fig. 1.21. The dashes are interpreted to mean that the engineer doesn't care what transition or output occurs when the corresponding input and state combination occurs. Such conditions arise when the machine, in its normal operation as applied to the particular problem, is not expected to be confronted with that particular input-state combination, or when certain outputs are to be ignored anyway. Certain later sections of this book discuss methods of extending the structure theorems to machines with d.c. conditions.

[Fig. 1.21. Machine with d.c. conditions.]

There is one aspect of completely specified machines which leads to d.c. conditions of a special type. Consider machine F and its realization shown in Fig. 1.22.

[Fig. 1.22. Machine F and realization F'.]

Notice that even though the machine is completely specified, the
realization in binary variables is not. The realization has a fictitious input which is never used and a state which is never entered. The result is a column and a row of d.c. conditions. We refer to such conditions as incidental d.c. conditions.

In Fig. 1.23 the incidental d.c. conditions of Fig. 1.22 have been filled with zeros to obtain realization F', and the corresponding equations require thirty diodes. Contrast this to the realization F'' of Fig. 1.24, which was obtained by filling the d.c.'s judiciously. Here only twenty diodes are required, and y₁ has been made independent of y₂.

[Fig. 1.23. Machine F' with d.c. conditions filled with zeros. Fig. 1.24. Machine F'' with d.c. conditions judiciously filled.]

Techniques for filling d.c. conditions to simplify equations are well known and are not elaborated here. There are, however, two things we wish to point out. First, we always fill the d.c. conditions carefully and to our advantage, often without saying so (as in Fig. 1.24). Secondly, to obtain the reduced dependence realizations promised by our later theorems, the reader has to fill d.c. conditions properly.

NOTES

The study of sequential machines was started by D. A. Huffman [19] and E. F. Moore [25]. Huffman published the first systematic study of sequential circuits, whereas Moore gave a more abstract formulation and started the formal study of sequential machines. In Moore's formulation, the output of a machine depended only on the state of the machine. This was generalized by G. H. Mealy [24], which led to the differentiation between Moore and Mealy machines.
A number of the original papers on sequential machines have been reprinted in a book edited by E. F. Moore [26]. Further results on sequential machines can be found in the books by S. Ginsburg [9] and A. Gill [8].

The structure theory developed in this book gives one approach to the state assignment problem. This was first discussed by the authors in [12, 28] and further developed by C. H. Curtis [5, 6], R. M. Karp [20], and Z. Kohavi [21]. Other approaches to the state assignment problem are discussed by D. B. Armstrong [1, 2], T. A. Dolotta and E. J. McCluskey [7], and D. R. Haring [10].

PARTITIONS AND THE SUBSTITUTION PROPERTY

2.1 THE SUBSTITUTION PROPERTY

In this chapter we begin to develop some mathematical tools and theorems which are fundamental to the structure theory of sequential machines. These tools yield results which show how and when complex sequential machines can be realized from interconnected sets of simpler machines, how these simpler machines are related to the machine under consideration, and what the laws are which govern their interconnections. We are interested in those results which characterize possible circuit layouts, but which do not depend on which actual components will be used to realize the computing device. Thus we are primarily concerned with the nature of the computing process, and seek to determine the simpler operations from which this process is composed.

The study of machine structure is begun in this section with a formal study of the intuitive concept of a "subcomputation." Imagine a man who is adding a series of integers and a second man who is keeping track of whether the sum is even or odd. Since the result of the second man's computation is evident from the result of the first man's computation, we say that the second man is doing a subcomputation of the first man's computation.
In a similar way, we want to think of a small machine as doing a subcomputation of a large machine if the state or output of the large machine determines the state or output of the small machine. The small machine then gives partial information about the large machine. One can see that we are actually close to the concept of a homomorphism. Since a machine M can be used to realize its homomorphic image M', one can say informally that M' does a part or a subcomputation of the computation performed by M. The results of this chapter give some elementary structure and decomposition results obtainable from the homomorphism concept.

The previously given definition of a homomorphism is not ideally suited for computational purposes and tests, for it involves two machines and the operation-preserving mappings between these machines. The next definition leads to a different characterization of state homomorphisms that involves only one machine and leads to direct computational methods.

DEFINITION 2.1. A partition π on the set of states of the machine M = (S, I, O, δ, λ) is said to have the substitution property if and only if

s ≡ t (π) implies that δ(s, a) ≡ δ(t, a) (π)

for all a in I. We refer sometimes to partitions with the substitution property as S.P. partitions.

It follows from this definition that the partition π on S of M has the substitution property if and only if each input maps blocks of π into blocks of π. That is, for a in I and Bᵢ in π, there exists a (unique) Bⱼ in π such that

δ(Bᵢ, a) ⊆ Bⱼ.

Since the operation of M determines unique block-to-block transformations on an S.P. partition π, we can think of these blocks as the states of a new state machine defined by π and M. This justifies our next definition.

DEFINITION 2.2. Let π be an S.P. partition on the set of states of the machine M = (S, I, O, δ, λ). Then the π-image of M is the state machine

M_π = ({Bᵢ}, I, δ_π)

with

δ_π(Bᵢ, x) = Bⱼ if and only if δ(Bᵢ, x) ⊆ Bⱼ.
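Definitions 2.1 and 2.2 lend themselves to direct computation. The sketch below is our own encoding (partitions as lists of frozenset blocks; the mod-4 counter is an invented example): a partition has S.P. when each input sends every block inside a single block, and in that case the image machine on the blocks is well defined.

```python
def block_of(partition, s):
    return next(B for B in partition if s in B)

def has_sp(partition, inputs, delta):
    # Definition 2.1: each input maps every block into exactly one block
    return all(
        len({block_of(partition, delta[(s, x)]) for s in B}) == 1
        for B in partition for x in inputs
    )

def image_machine(partition, inputs, delta):
    """Definition 2.2: delta_pi(B, x) = the unique block containing delta(B, x)."""
    return {(B, x): block_of(partition, delta[(next(iter(B)), x)])
            for B in partition for x in inputs}

# A mod-4 counter; the parity partition has S.P. (adding 1 flips parity).
delta = {(s, x): (s + x) % 4 for s in range(4) for x in (0, 1)}
pi = [frozenset({0, 2}), frozenset({1, 3})]
```

By contrast, the partition {0, 1; 2, 3} fails the test on this machine: under input 1 the block {0, 1} goes into {1, 2}, which straddles two blocks, so the "uncertainty" spreads.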
We can think of M_π as a machine which does only part of the computation performed by M, since it only keeps track of which block of π contains the state of M.

This can also be thought about in terms of the uncertainty or ignorance which can exist about the state of M. If π has S.P. on M and we know the block of π which contains the state of M, then we can compute the block of π to which this state of M is transformed by any input sequence. As a matter of fact, M_π performs this computation. On the other hand, if a partition π does not have S.P., then this computation is not possible for some input sequence and initial block. Thus we can say informally that the S.P. partitions define a sort of uncertainty about the state of M which does not spread as the machine operates.

To illustrate these ideas, consider machine A of Fig. 2.1. It is easily seen that the partitions

π₁ = {1, 2, 3; 4, 5, 6} and π₂ = {1, 6; 2, 5; 3, 4}

have the substitution property on A. The corresponding image machines A_π₁ and A_π₂ are shown in Fig. 2.2.

[Fig. 2.1. Machine A. Fig. 2.2. The image machines A_π₁ and A_π₂; the tables are not reproduced here.]

To get a preview of how these concepts are used later for machine decomposition, consider what happens when machines A_π₁ and A_π₂ are operated side by side. Each machine separately does only part of the computation performed by A, since it only determines the block of πᵢ which contains the state of A. Operating jointly, these two machines determine the exact state of A. This is because every block of π₁ has exactly one state of A in common with every block of π₂. Thus, the states of A_π₁ and A_π₂ uniquely determine a state of A, and so, operating in parallel, these two machines compute the state transitions of A. These ideas are made precise and investigated in the two sections which follow.
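The parallel-operation argument can be checked concretely for the two partitions quoted for machine A. In our own encoding, every block of π₁ meets every block of π₂ in exactly one state, so the pair of blocks tracked by A_π₁ and A_π₂ pins down the state of A:

```python
# The partitions pi1 = {1,2,3; 4,5,6} and pi2 = {1,6; 2,5; 3,4} of machine A.
pi1 = [frozenset({1, 2, 3}), frozenset({4, 5, 6})]
pi2 = [frozenset({1, 6}), frozenset({2, 5}), frozenset({3, 4})]

def joint_state(B1, B2):
    (s,) = B1 & B2        # the intersection holds exactly one state
    return s
```

For example, knowing that A is in block {1, 2, 3} of π₁ and block {2, 5} of π₂ forces its state to be 2; this is exactly why A_π₁ and A_π₂, run in parallel, compute the state transitions of A.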
The next result shows that there exists a one-to-one correspondence between state homomorphisms and S.P. partitions.

THEOREM 2.1. Let h = (h₁, e) be a state homomorphism of M = (S, I, O, δ, λ) onto M' = (S', I, δ') such that e is the identity map. Then the partition π defined by

s ≡ t (π) if and only if h₁(s) = h₁(t)

has S.P. on M, and M' is isomorphic to M_π. Conversely, if π has S.P. on M, then M_π is a homomorphic image of M, and the homomorphism h = (h₁, e) is given by

h₁(s) = Bᵢ if and only if s is in Bᵢ.

Proof. Since h is a homomorphism, h₁(s) = h₁(t) implies

h₁[δ(s, a)] = δ'[h₁(s), a] = δ'[h₁(t), a] = h₁[δ(t, a)].

Therefore

δ(s, a) ≡ δ(t, a) (π)

and π is an S.P. partition. The isomorphism (f₁, e) of M_π onto M' is given by

f₁(Bᵢ) = s' if h₁(s) = s' for s in Bᵢ.

To prove the second part, let s be in Bᵢ and δ(s, a) be in Bⱼ. Then

h₁(s) = Bᵢ and h₁[δ(s, a)] = Bⱼ

and thus

h₁[δ(s, a)] = δ_π[h₁(s), a].

But then h is a homomorphism of M onto M_π.

The next result shows how one can combine S.P. partitions on M to obtain new S.P. partitions.

THEOREM 2.2. If π₁ and π₂ are S.P. partitions on the set of states of a sequential machine M, then so are the partitions π₁·π₂ and π₁ + π₂.

Proof. Assume that

s ≡ t (π₁) and s ≡ t (π₂).

Then by definition, s ≡ t (π₁·π₂). For any input a in I,

δ(s, a) ≡ δ(t, a) (π₁) and δ(s, a) ≡ δ(t, a) (π₂)

by the hypotheses of the theorem, but then clearly

δ(s, a) ≡ δ(t, a) (π₁·π₂)

by the definition of π₁·π₂. To show that π₁ + π₂ has S.P., we recall that s ≡ t (π₁ + π₂) implies that there exists a chain

s = s₀, s₁, s₂, ..., sₘ = t

such that

sᵢ ≡ sᵢ₊₁ (π₁) or sᵢ ≡ sᵢ₊₁ (π₂), i = 0, 1, 2, ..., m − 1.

We now use this to show that for every input a in I,

δ(s, a) ≡ δ(t, a) (π₁ + π₂).

Since π₁ and π₂ have S.P.,

δ(s₀, a) ≡ δ(s₁, a) (π₁) or δ(s₀, a) ≡ δ(s₁, a) (π₂),

and since π₁ and π₂ are both finer than π₁ + π₂, we conclude that

δ(s₀, a) ≡ δ(s₁, a) (π₁ + π₂).

Similarly,

δ(s₁, a) ≡ δ(s₂, a) (π₁ + π₂)
δ(s₂, a) ≡ δ(s₃, a) (π₁ + π₂)
. . .
δ(sₘ₋₁, a) ≡ δ(t, a) (π₁ + π₂).
Thus, since the equivalence relation is transitive, we have

    δ(s,a) ≡ δ(t,a) (π₁ + π₂),

as was to be shown.

THEOREM 2.3. The set of all S.P. partitions on the set of states of a sequential machine M forms a lattice L_M under the natural partition ordering. Furthermore, L_M contains the trivial partitions 0 and I.

Proof. From the previous theorem, we know that the set of S.P. partitions on S of M is closed under the "·" and "+" operations. Thus, the set of all S.P. partitions forms a sublattice of the lattice of all partitions on S, and therefore a lattice in the natural ordering of partitions. The last statement of the theorem is obvious.

Two applications of the lattice L_M are brought out in later chapters. First, it will be seen that the plot of L_M permits a simple visualization of all the important multiple series-parallel state behavior realizations. Thus, the lattice is, in a sense, a picture of some machine structure. Secondly, algebraic properties (modularity, distributivity, complementation, etc.) of the lattice L_M are reflected in machine properties and vice versa, which supplies a means of classifying sequential machines according to their L_M lattices.

To practice some of the ideas of this section, consider machine B shown in Fig. 2.3. This machine has eight S.P. partitions, among them

    π₄ = {1; 2; 3,6; 4,5; 7; 8}  and  I = {1,2,3,4,5,6,7,8}.

These partitions may be found by the constructive method discussed later in Section 2.4. The lattice L_B is shown in Fig. 2.4.

EXERCISE. Construct the image machines B_π₁, B_π₂, B_π₃ and exhibit the homomorphisms which map B onto these machines.

EXERCISE. Show that L_B is not distributive.

EXERCISE. Find all S.P. partitions for machine C of Fig. 2.5, plot L_C, and determine the corresponding image machines.

[Fig. 2.3. Machine B.]
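The two lattice operations behind Theorems 2.2 and 2.3 are easy to compute. The sketch below forms the partition product π₁·π₂ (intersect blocks) and the partition sum π₁ + π₂ (transitive closure of overlapping blocks):

```python
# Sketch of the lattice operations on partitions of a state set.

def product(p1, p2):
    # pi1 . pi2: nonempty pairwise intersections of blocks
    blocks = {frozenset(b1 & b2) for b1 in p1 for b2 in p2}
    return {b for b in blocks if b}

def psum(p1, p2):
    # pi1 + pi2: merge blocks that share a state until disjoint
    blocks = [set(b) for b in p1] + [set(b) for b in p2]
    merged = True
    while merged:
        merged = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if blocks[i] & blocks[j]:
                    blocks[i] |= blocks.pop(j)
                    merged = True
                    break
            if merged:
                break
    return {frozenset(b) for b in blocks}

pi1 = {frozenset({1,2}), frozenset({3,4}), frozenset({5,6})}
pi2 = {frozenset({1,3,5}), frozenset({2,4,6})}
print(product(pi1, pi2))  # the zero partition (all singletons)
print(psum(pi1, pi2))     # the identity partition I (one block)
```

These two partitions are exactly the pair whose sum is I in the machine-J example of Section 2.4.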
[Fig. 2.4. Lattice L_B.    Fig. 2.5. Machine C.]

2.2 SERIAL DECOMPOSITIONS

We now make precise what we mean by a serial connection of two machines.

DEFINITION 2.3. The serial connection of two machines

    M₁ = (S₁, I₁, O₁, δ₁, λ₁)  and  M₂ = (S₂, I₂, O₂, δ₂, λ₂)

for which O₁ = I₂ is the machine

    M = M₁ ⊕ M₂ = (S₁ × S₂, I₁, O₂, δ, λ)

where

    δ[(s,t), x] = (δ₁(s,x), δ₂(t, λ₁(s,x)))

and

    λ[(s,t), x] = λ₂(t, λ₁(s,x)).

A schematic representation of a serial connection is shown in Fig. 2.6. The next definition describes how state machines can be interconnected.

[Fig. 2.6. Serial connection of M₁ and M₂.]

DEFINITION 2.4. Given the state machines

    M₁ = (S₁, I₁, δ₁)  and  M₂ = (S₂, I₂, δ₂)

with

    I₂ = S₁ × I₁,

a set of output symbols O, and an output function

    λ: S₁ × S₂ × I₁ → O,

the serial connection of M₁ and M₂ with the output function λ is the machine

    M = (S₁ × S₂, I₁, O, δ, λ)

where

    δ[(s,t), x] = (δ₁(s,x), δ₂(t, (s,x))).

A schematic representation of this realization is shown in Fig. 2.7.

[Fig. 2.7. Serial connection of the state machines M₁ and M₂ with output function λ.]

It should be noted that there are two different ways in which serially connected machines can operate. If two Mealy machines are serially connected as prescribed by Definition 2.3, then the first machine has to compute its output before the second machine can compute its next state and output. Thus, if we assume that each machine computation requires a certain time interval, the output of the serial connection appears after two time intervals. This time delay grows with the number of serially connected machines and may be undesirable in some practical applications. Further, such connections of Mealy machines can lead to difficult or impossible timing problems in more general series-parallel connections.
The situation is quite different when we consider the serial connection of state machines (Definition 2.4). (This may also be regarded as the Moore case.) When an input x is applied to the connection specified by Definition 2.4, both machines can operate simultaneously to compute their next states. This is possible since the input to M₂ is the present state of M₁ and the external input (x in I₁). When this type of connection is generalized to more than two machines, the time delay for the output does not grow with the number of serially connected machines as in the previous case. Note that the machine obtained by such a connection of concurrently operating machines can be either a Moore or a Mealy machine, depending only on whether λ is a function of the inputs.

To illustrate these points with actual machines, note that the serial connection of machines D₁ and D₂ of Fig. 2.8 results in the machine D shown in Fig. 2.9. For example,

    δ(1, 1) = δ[(A,a), 1] = [δ₁(A,1), δ₂(a, λ₁(A,1))] = (B, b) = 4,
    λ[(A,a), 1] = λ₂[a, λ₁(A,1)] = 0.

[Fig. 2.8. Machines D₁ and D₂.    Fig. 2.9. Machine D.]

Similarly, the serial connection of the state machines E₁ and E₂ and the output function λ shown in Fig. 2.10 results in the machine E shown in Fig. 2.11. For example,

    δ(1, 0) = δ[(A,a), 0] = [δ₁(A,0), δ₂(a, (A,0))] = (B, a) = 3,
    λ[(A,a), 0] = 0.

[Fig. 2.10. Machines E₁, E₂ and output function λ.    Fig. 2.11. Machine E.]

We now turn our attention to the realization of a machine M by a serial connection of two other machines.

DEFINITION 2.5.
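Definition 2.3 can be sketched as a short program that builds the tables δ and λ of M₁ ⊕ M₂ from the component tables. The two component machines below are hypothetical stand-ins, not D₁ and D₂ of Fig. 2.8:

```python
# A minimal sketch of the serial (cascade) connection M1 (+) M2 of
# Definition 2.3, where M2 reads M1's output as its input.

def serial(d1, l1, d2, l2):
    """Return delta and lambda of M1 (+) M2 on state pairs (s, t)."""
    delta, lam = {}, {}
    for (s, x) in d1:
        for t in {tt for (tt, _) in d2}:
            y = l1[(s, x)]                 # M1's output drives M2
            delta[((s, t), x)] = (d1[(s, x)], d2[(t, y)])
            lam[((s, t), x)] = l2[(t, y)]
    return delta, lam

# M1: states {A, B}, inputs {0, 1}, outputs {p, q} (= M2's inputs)
d1 = {('A',0):'A', ('A',1):'B', ('B',0):'B', ('B',1):'A'}
l1 = {('A',0):'p', ('A',1):'q', ('B',0):'q', ('B',1):'p'}
# M2: states {a, b}, inputs {p, q}, outputs {0, 1}
d2 = {('a','p'):'a', ('a','q'):'b', ('b','p'):'b', ('b','q'):'a'}
l2 = {('a','p'):0, ('a','q'):1, ('b','p'):0, ('b','q'):1}

delta, lam = serial(d1, l1, d2, l2)
print(delta[(('A','a'), 1)], lam[(('A','a'), 1)])   # -> ('B', 'b') 1
```

Note that the output of the connection is computed only after M₁'s output is available, which is precisely the timing issue discussed above for Mealy machines.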
The machine M₁ ⊕ M₂ is a serial decomposition of M if and only if M₁ ⊕ M₂ realizes M. If M₁ ⊕ M₂ is a state behavior realization of M, then we say that M₁ ⊕ M₂ is a serial decomposition of the state behavior of M.

For the present, we restrict ourselves to the study of state behavior decompositions. In Chapter 5, after the full structural value of the S.P. concept is explored in Chapter 4, we consider relaxing the state behavior condition.

NOTATION. We say that the state behavior decomposition of M into M₁ ⊕ M₂ is nontrivial if and only if M₁ and M₂ have fewer states than M.

THEOREM 2.4. The sequential machine M has a nontrivial serial decomposition of its state behavior if and only if there exists a nontrivial S.P. partition π on the set of states S of M.

Proof. Assume the state behavior of M is realized by M₁ ⊕ M₂ and let (α, ι, ζ) be the assignment map, where α is a one-to-one map

    α: S → S₁ × S₂.

The mapping α induces an equivalence relation or partition π on S if we consider all those states equivalent whose first components under α are identical; that is,

    s ≡ t (π)  if and only if  s₁ = t₁,

where α(s) = (s₁, s₂) and α(t) = (t₁, t₂). To show that π is an S.P. partition, note that by definition

    α[δ(s,a)] = [δ₁(s₁, ι(a)), δ₂(s₂, (s₁, ι(a)))]
    α[δ(t,a)] = [δ₁(t₁, ι(a)), δ₂(t₂, (t₁, ι(a)))],

and so if s ≡ t (π), then s₁ = t₁ and thus

    δ₁(s₁, ι(a)) = δ₁(t₁, ι(a)).

This means that

    δ(s,a) ≡ δ(t,a) (π)

and so π has S.P. Finally, we cannot have π = 0 because M₁ has fewer states than M, and we cannot have π = I because M₂ has fewer states than M; thus, π has to be a nontrivial partition on S. This completes the proof going one way.

Now we show that if there exists a nontrivial S.P. partition π on S of M, then M has a nontrivial serial realization of its state behavior. We assume that π has l blocks and that the largest block has k states.
Since π is nontrivial, n > k and n > l. Let τ be a k-block partition on S such that

    π·τ = 0.

(Such a τ can be constructed by labeling the states of each block Bᵢ of π by 1, 2, ..., k and placing in the j-th block of τ all the states labeled j.) A state of M is then identified by the pair (block of π, block of τ), and a serial connection realizing the state behavior of M is obtained by taking M₁ = M_π and letting M₂, whose states are the k blocks of τ, compute the block of τ of the next state of M from its present block of τ and its input, the pair (block of π, input of M). Since π has S.P., M_π computes the block of π of the next state without any information from M₂. Since l < n and k < n, the decomposition is nontrivial.

2.3 PARALLEL DECOMPOSITIONS

We now define the parallel connection of two machines.

DEFINITION 2.6. The parallel connection of two machines

    M₁ = (S₁, I, O₁, δ₁, λ₁)  and  M₂ = (S₂, I, O₂, δ₂, λ₂)

is the machine

    M = M₁ ∥ M₂ = (S₁ × S₂, I, O₁ × O₂, δ, λ)

where

    δ[(s₁,s₂), x] = (δ₁(s₁,x), δ₂(s₂,x))

and

    λ[(s₁,s₂), x] = (λ₁(s₁,x), λ₂(s₂,x)).

DEFINITION 2.7. The machine M₁ ∥ M₂ is a parallel decomposition of M if and only if M₁ ∥ M₂ realizes M. If M₁ ∥ M₂ is a state behavior realization of M, then we refer to M₁ ∥ M₂ as a parallel decomposition of the state behavior of M. We say that the state behavior decomposition of M into M₁ ∥ M₂ is nontrivial if and only if M₁ and M₂ each have fewer states than M.

For the present, we restrict ourselves to the study of parallel state behavior decompositions.

THEOREM 2.5. A sequential machine M has a nontrivial parallel decomposition of its state behavior if and only if there exist two nontrivial S.P. partitions π₁ and π₂ on M such that

    π₁·π₂ = 0.

Proof. Let the state behavior of M be realized by M₁ ∥ M₂. Let (α, ι, ζ) be the assignment map such that α: S → S₁ × S₂ is one-to-one. The mapping α defines two equivalence relations π₁ and π₂ on S as follows:

    s ≡ t (π₁)  if and only if  s₁ = t₁,  where α(s) = (s₁,s₂) and α(t) = (t₁,t₂);
    s ≡ t (π₂)  if and only if  s₂ = t₂,  where α(s) = (s₁,s₂) and α(t) = (t₁,t₂).

Since the mapping α is one-to-one, we know that s ≡ t (π₁·π₂) implies s = t and thus

    π₁·π₂ = 0.

To see that π₁ and π₂ have S.P., note that if

    s ≡ t (π₁),

then

    α(s) = (s₁, s₂)  and  α(t) = (s₁, t₂).

But then

    α[δ(s,x)] = [δ₁(s₁, ι(x)), δ₂(s₂, ι(x))]
    α[δ(t,x)] = [δ₁(s₁, ι(x)), δ₂(t₂, ι(x))],

and therefore the first components of the next states are again identical under α and we have

    δ(s,x) ≡ δ(t,x) (π₁).

The same argument shows that π₂ also has S.P. Finally, if M₁ ∥ M₂ is a nontrivial realization, then

    |S| > |S₁|,  |S| > |S₂|,  |S₁|·|S₂| ≥ |S|.

Thus π₁ and π₂ have fewer than |S| blocks and more than one block and are therefore nontrivial partitions.

To show the converse, assume that π₁ and π₂ are nontrivial S.P.
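The construction of τ in the proof of Theorem 2.4 can be sketched directly: label the states inside each block of π by 1, 2, ..., k; grouping equal labels gives a partition τ with π·τ = 0, so a state of M is recovered from the pair (block of π, block of τ):

```python
# Sketch of the tau construction: number the states within each block
# of pi and collect equal labels into the blocks of tau.

def label_partition(pi):
    tau = {}
    for block in pi:
        for i, s in enumerate(sorted(block)):
            tau.setdefault(i, set()).add(s)   # label i -> block of tau
    return [frozenset(b) for b in tau.values()]

pi = [frozenset({1,2}), frozenset({3,4}), frozenset({5,6})]
tau = label_partition(pi)
print(sorted(sorted(b) for b in tau))   # -> [[1, 3, 5], [2, 4, 6]]

# check pi . tau = 0: every intersection contains at most one state
assert all(len(set(b) & set(c)) <= 1 for b in pi for c in tau)
```

The particular labeling is immaterial; any assignment of labels within blocks yields a suitable τ.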
partitions on M such that π₁·π₂ = 0. To construct M₁ and M₂, we take the image machines M_π₁ and M_π₂ and add outputs:

    M₁ = ({B_π₁}, I, {B_π₁} × I, δ_π₁, λ₁)
    M₂ = ({B_π₂}, I, {B_π₂} × I, δ_π₂, λ₂),

where the output of each machine is its present state together with the present input. Obviously, since π₁·π₂ = 0, the output of M₁ ∥ M₂ determines a unique pair in S × I, and thus there is a mapping ζ which maps O₁ × O₂ onto O so that M₁ ∥ M₂ realizes M. More precisely, the realization is given by the mappings

    α(s) = (B_π₁(s), B_π₂(s)),
    ι(x) = x,
    ζ[(B_π₁, x), (B_π₂, x)] = λ(B_π₁ ∩ B_π₂, x).

Since π₁ and π₂ are nontrivial partitions, the decomposition is also nontrivial.

Consider machine G of Fig. 2.16. For this machine the partitions

    π₁ = {0,1,2; 3,4,5}  and  π₂ = {0,5; 1,4; 2,3}

have S.P. and, since π₁·π₂ = 0, they define a parallel decomposition of G. The two image machines (with outputs added) are shown in Figs. 2.17 and 2.18. Since G is a Moore machine, the component machines can be Moore machines. We have chosen outputs so that

    λ(s) = λ₁(s₁)·λ₂(s₂).

This parallel decomposition is shown schematically in Fig. 2.19.

[Fig. 2.16. Machine G.  Fig. 2.17. Machine G₁.  Fig. 2.18. Machine G₂.  Fig. 2.19. The parallel decomposition of G.]

For another illustration, consider machine H of Fig. 2.20. The partitions

    π₁ = {1,3,5; 2,4,6}  and  π₂ = {1,2; 3,4; 5,6}

both have S.P. and, since π₁·π₂ = 0, they define a parallel decomposition of H. The two component machines, namely H₁ and H₂ with suitable outputs, are shown in Figs. 2.21 and 2.22. Note that H₁ is actually an input-independent machine, and thus our realization has the form shown in Fig. 2.23.

[Fig. 2.20. Machine H.  Fig. 2.21. Machine H₁.  Fig. 2.22. Machine H₂.]
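How a parallel decomposition tracks the state of M can be shown concretely. The 6-state machine below is hypothetical (machine G's table appears only in Fig. 2.16); it has two S.P. partitions with π₁·π₂ = 0, so the pair of blocks always pins down the exact state:

```python
# Sketch of a parallel state-behavior realization: the image machines
# M_pi1 and M_pi2 run side by side on the same input, and because
# pi1 . pi2 = 0 the intersection of their blocks is a single state.

def block_of(partition, s):
    for b in partition:
        if s in b:
            return b

def step_image(delta, partition, block, a):
    s = min(block)           # any representative works: partition has S.P.
    return block_of(partition, delta[(s, a)])

delta = {(1,'a'): 4, (2,'a'): 5, (3,'a'): 6,
         (4,'a'): 1, (5,'a'): 2, (6,'a'): 3,
         (1,'b'): 2, (2,'b'): 3, (3,'b'): 1,
         (4,'b'): 5, (5,'b'): 6, (6,'b'): 4}
pi1 = [frozenset({1,2,3}), frozenset({4,5,6})]                 # S.P.
pi2 = [frozenset({1,4}), frozenset({2,5}), frozenset({3,6})]   # S.P.

s, b1, b2 = 1, block_of(pi1, 1), block_of(pi2, 1)
for a in ['a', 'b', 'b', 'a']:
    s = delta[(s, a)]                        # the "real" machine
    b1 = step_image(delta, pi1, b1, a)       # component 1
    b2 = step_image(delta, pi2, b2, a)       # component 2
    (joint,) = set(b1) & set(b2)             # pi1 . pi2 = 0: unique state
    assert joint == s
print("state tracked in parallel:", s)       # -> 3
```

Each component machine has fewer states than the whole (2 and 3 against 6), which is the point of the decomposition.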
A study of input-independent machines (or clocks) and their detection is included in Chapter 4.

[Fig. 2.23. The parallel decomposition of H.]

2.4 COMPUTATION OF S.P. PARTITIONS

The results of the two previous sections indicate the important role which S.P. partitions play in the study of sequential machines. In this section we show how S.P. partitions can be efficiently and systematically computed for any sequential machine.

There are basically two ways of obtaining S.P. partitions. The first way is to work with the machine description until a partition is found which satisfies Definition 2.1. The second way is to combine previously obtained S.P. partitions using the sum and product operations. Since the second way is much easier than the first, the most efficient procedures are those which rely most heavily on the second way. Of course, some use must be made of the first way, since no nontrivial partitions are known a priori to have S.P. In view of these observations, we suggest a general procedure following two steps:

(1) For every pair of states s and t, compute the smallest S.P. partition, π_{s,t}, which identifies the pair.

(2) Find all possible sums of the π_{s,t}. These sums constitute all the S.P. partitions.

To show that this procedure gives all the S.P. partitions, it must be shown that every S.P. partition is the sum of some subset of {π_{s,t}}. The reader may verify this by doing the next exercise.

EXERCISE. If π is an S.P. partition on M, show that

    π = Σ {π_{s,t} | s ≡ t (π)}.

Before we illustrate this procedure, a couple of remarks are in order.

1. Note that we did not say exactly how the π_{s,t} are to be computed. Several similar ways of doing this can be specified to the last detail, all amounting to looking ahead (in the state table) and identifying only those states which have to be identified.
After a little practice, a person should find that he is not following one set procedure, but is instead following little shortcuts dictated by his intuition. Thus hand calculations are easier than might appear at first glance.

2. Note that step 2 requires that one find all possible sums, not take all possible sums. Many of the sums are equal, and it is not necessary to perform all possible derivations of the same partition. This economy is obtained by taking pairs of π_{s,t} to obtain "second-generation" partitions and then using only new "second-generation" partitions to find "third-generation" partitions, and so on.

EXAMPLE. To illustrate the procedure and some bookkeeping techniques, consider machine J of Fig. 2.24.

[Fig. 2.24. Machine J.]

We start by computing the S.P. partitions obtained by identifying a pair of states. If π_{1,2} identifies states 1 and 2, then

    δ(1,a) = 6,  δ(2,a) = 5  and  δ(1,b) = 3,  δ(2,b) = 4

imply that states 5, 6 and 3, 4 must also be identified by π_{1,2}. Therefore

    π_{1,2} ≥ {1,2; 3,4; 5,6}.

Since

    δ(3,a) = 2,  δ(4,a) = 1,  δ(3,b) = 5,  δ(4,b) = 6,  etc.,

we see that

    π_{1,2} = {1,2; 3,4; 5,6}.

In the actual computation, we just look at the flow table and check onto what states the state pair 1,2 is mapped by different inputs. All that we actually write is

    1,2 → 5,6 3,4 1,2.

Since these pairs do not overlap and are mapped back into themselves, we know that we have computed the smallest S.P. partition which identifies the states 1 and 2.

Next consider states 1 and 3. In our shorthand,

    1,3 → 2,6 3,5 2,4.

Since the pairs 1,3; 3,5 and 2,6; 2,4 overlap, we use the transitive law and obtain

    π_{1,3} ≥ {1,3,5; 2,4,6}.

Since these blocks are mapped by all inputs into themselves, we note that

    π_{1,3} = {1,3,5; 2,4,6}.

In our shorthand we write all this computation as

    1,3 → 2,6 3,5 2,4 → 1,3,5; 2,4,6

and underline the last partition to show that it has S.P.
Next we obtain the following:

    *1,4 → 1,6 3,6 2,3 → ... → 1,2,3,4,5,6.

Thus π_{1,4} is the trivial partition on S. We indicate this for easy reference by placing an asterisk in front of this computation. As the computation proceeds it becomes much faster, since we now can consult the previous results. For example,

    1,5 → 4,6 1,3 ...

Since π_{1,5} must identify states 1 and 3, we know that

    π_{1,5} ≥ π_{1,3},

and since 1 and 5 are identified by π_{1,3}, we have that

    π_{1,5} = π_{1,3}.

Proceeding this way we obtain

    *1,6 → 3,6 2,3 2,5 → ... → 1,2,3,4,5,6.

Next,

    *2,3 → 4,5 1,4 ...;

since 1,4 is starred in our list, we know without further computation that π_{2,3} is also the trivial partition. All the computations of the first step are summarized below. There are only two nontrivial S.P. partitions in this list, and their sum is

    {1,2; 3,4; 5,6} + {1,3,5; 2,4,6} = I.

Thus the second step does not yield any new partitions, and we have computed all S.P. partitions on machine J.

To show how rapid the computations of the first step can become, consider machine K of Fig. 2.25. Part of these computations are summarized below:

    1,2 → 3,5 4,8 → I  (using π_{3,5})
    *3,7 → 1,5 ...
    *5,7 → ...,  etc.

[Fig. 2.25. Machine K.]

In this way we obtain that there are four nontrivial S.P. partitions on machine K. Again the sums do not yield any new partitions, and thus we have computed all the S.P. partitions on K. The corresponding lattice L_K is shown in Fig. 2.26.

[Fig. 2.26. Lattice L_K.]

EXERCISE. Compute all S.P. partitions for machines D, E, G, and H.

EXERCISE. Construct the parallel realization of machine K defined by π₁ and π₂. Construct the serial realization of K by using π = π₁ and a suitable τ.

2.5 STATE REDUCTION

We have seen how elementary (or two-machine) serial and parallel decompositions can be obtained from the S.P. partitions.
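Step (1) of the procedure amounts to a closure computation: merge s and t, then keep merging the blocks into which a common block is mapped, until the partition is closed under the next-state function. The sketch below uses a hypothetical 6-state machine (machine J's own table is in Fig. 2.24):

```python
# Sketch: compute pi_{s,t}, the smallest S.P. partition identifying
# states s and t, by "looking ahead" in the state table.  Implemented
# with a simple union-find over the states.

def smallest_sp(states, inputs, delta, s, t):
    parent = {q: q for q in states}
    def find(q):
        while parent[q] != q:
            q = parent[q]
        return q
    def union(p, q):
        parent[find(p)] = find(q)
    union(s, t)
    changed = True
    while changed:
        changed = False
        for p in states:
            for q in states:
                if find(p) == find(q):
                    for a in inputs:
                        ip, iq = delta[(p, a)], delta[(q, a)]
                        if find(ip) != find(iq):
                            union(ip, iq)   # images must share a block
                            changed = True
    blocks = {}
    for q in states:
        blocks.setdefault(find(q), set()).add(q)
    return sorted(sorted(b) for b in blocks.values())

delta = {(1,'a'): 4, (2,'a'): 5, (3,'a'): 6,
         (4,'a'): 1, (5,'a'): 2, (6,'a'): 3,
         (1,'b'): 2, (2,'b'): 3, (3,'b'): 1,
         (4,'b'): 5, (5,'b'): 6, (6,'b'): 4}
print(smallest_sp([1,2,3,4,5,6], ['a','b'], delta, 1, 2))
# -> [[1, 2, 3], [4, 5, 6]]
```

Running this for every pair of states and then forming sums (step 2) yields the whole S.P. lattice.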
Here we describe another application: the use of partitions with S.P. to find the reduced machine equivalent to a given machine M (see Theorem 1.1 and Corollary 1.1.1). Other aspects of state reduction are covered in Chapter 5.

The first step is to relate state reduction to the substitution property. To do this, we define a special partition.

DEFINITION 2.8. If M is a machine, we define π_R to be the partition on the states of M such that

    s ≡ t (π_R)  if and only if  state s is equivalent to state t.

The subscript R stands for "reduction," as suggested by our next result which links S.P. partitions and state reduction.

THEOREM 2.6. If M is a sequential machine, then π_R has S.P., and M_π_R, with the output

    λ_π(B_π_R(s), x) = λ(s, x)  for s in B_π_R,

is the reduced machine equivalent to M.

Proof. Lemma 1.1 asserts that π_R has S.P. The output is well defined because states in B_π_R are equivalent and all have the same output. The proof of Theorem 1.1 shows that M_π_R is the reduced machine.

Once the S.P. lattice of a machine is determined, finding the reduced machine is just a matter of deciding which S.P. partition is π_R. Next we work up to Theorem 2.7, which gives an easy test to determine π_R from the S.P. lattice.

DEFINITION 2.9. A partition π on the states of a machine M is output consistent if and only if s ≡ t (π) implies

    λ(s) = λ(t)  or  λ(s, x) = λ(t, x)  for all inputs x.

LEMMA 2.1. A partition π with S.P. for a machine M is output consistent if and only if s ≡ t (π) implies s and t are equivalent.

Proof. Left as an exercise.

We now characterize π_R.

THEOREM 2.7. If M is a sequential machine, then π_R is the maximal output-consistent partition with S.P. Furthermore, an S.P. partition π is output consistent if and only if π ≤ π_R.

Proof. Any π ≤ π_R is a refinement of the relation of state equivalence (i.e., π_R) and hence π is output consistent by Lemma 2.1. On the other hand, any output-consistent S.P. partition π is a refinement of π_R by Lemma 2.1, and so π ≤ π_R.
Therefore π_R is the maximal output-consistent S.P. partition.

To illustrate, consider machine L. The S.P. lattice is shown in Fig. 2.28 with the output-consistent partitions circled. Notice how they form a sublattice. The reduced machine L_π_R is shown in Fig. 2.29; here π_R = {1,2; 3; 4; 5,6}. Note that L_π_R has only trivial S.P. partitions. The effect of state reduction on machine structure is investigated in Chapter 5.

[Fig. 2.28. S.P. lattice and output-consistent sublattice for machine L.    Fig. 2.29. Reduced machine L_π_R.]

NOTES

The use of partitions with the substitution property or congruence relations for machine decomposition was first discussed by one of the authors in [11]. The results of this chapter are contained in [14, 16], some of which were obtained independently by M. Yoeli [30, 32].

3 PARTITION PAIRS AND PAIR ALGEBRA

3.1 PARTITION PAIRS

In the previous chapter, we derived the elementary structure theory of serial-parallel realizations. This was achieved through state partitions which represented self-dependent information. These concepts of information and information dependence are very basic and underlie all the structure results. In this chapter we wish to consider more powerful and more general mathematical tools for describing the two concepts of information and information dependence. The principles of this chapter are the hard-core ideas from which the applications of Chapters 4, 5, and 6 are obtained.

We recall that if a partition π on the set of states of a machine M has the substitution property, then as long as we know the block of π which contains a given state of M, we can compute the block of π to which that state is transformed by any given input sequence. Intuitively we say that the "ignorance" about the given state (as specified by the partition π) does not spread as the machine operates. Therefore, the S.P.
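Theorem 2.7 suggests a direct computation of π_R: start from the coarsest output-consistent partition and refine it until it has S.P. The following sketch does this on a hypothetical 4-state machine:

```python
# Sketch: pi_R as the coarsest output-consistent partition closed
# under the next-state function, computed by standard refinement.
# The machine and its outputs are hypothetical.

def reduce_partition(states, inputs, delta, lam):
    # start from output consistency ...
    part = {}
    for s in states:
        key = tuple(lam[(s, a)] for a in inputs)
        part.setdefault(key, set()).add(s)
    blocks = list(part.values())
    # ... then refine until every block maps into single blocks
    while True:
        def sig(s):
            return tuple(next(i for i, b in enumerate(blocks)
                              if delta[(s, a)] in b) for a in inputs)
        new = []
        for b in blocks:
            groups = {}
            for s in b:
                groups.setdefault(sig(s), set()).add(s)
            new.extend(groups.values())
        if len(new) == len(blocks):
            return sorted(sorted(b) for b in new)
        blocks = new

states = [1, 2, 3, 4]
inputs = ['a', 'b']
delta = {(1,'a'):2, (2,'a'):1, (3,'a'):4, (4,'a'):3,
         (1,'b'):3, (2,'b'):4, (3,'b'):1, (4,'b'):2}
lam = {(s, a): s % 2 for s in states for a in inputs}
print(reduce_partition(states, inputs, delta, lam))
# -> [[1, 3], [2, 4]]
```

Here states 1, 3 and 2, 4 are equivalent, so the reduced machine has two states, the blocks of π_R.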
partitions on a machine describe the cases in which the "ignorance" about the exact state of the machine does not increase as the machine operates. The concept of partition pairs is more general and is introduced to study how "ignorance spreads" or "information flows" through a sequential machine when it operates.

DEFINITION 3.1. A partition pair (π, π') on the machine M = (S, I, O, δ, λ) is an ordered pair of partitions on S such that

    s ≡ t (π)  implies  δ(s, x) ≡ δ(t, x) (π')

for all x in I.

Thus (π, π') is a partition pair (p.p.) on M if and only if the blocks of π are mapped into the blocks of π' by M. That is, for every x in I and B in π, there exists a B' in π' such that

    δ(B, x) ⊆ B'.

In other words, if we only know the block of π which contains the state of M, then we can compute for every input the block of π' to which this state is transferred by M. Clearly, a partition π has S.P. if and only if (π, π) is a p.p. on M.

EXAMPLE. Consider machine A of Fig. 3.1. For this machine,

    (π₁, π₂) = ({1,2; 3,4}, {1,3; 2,4})
    (π₂, π₁) = ({1,3; 2,4}, {1,2; 3,4})
    (π₃, π₃) = ({1,4; 2,3}, {1,4; 2,3})

are partition pairs. It is also seen that π₃ is an S.P. partition on this machine.

To show a use of partition pairs, which is developed in more detail later in this chapter, let us assign binary variables to the states and inputs of A as shown in Fig. 3.2. The variable y₁ is assigned according to the partition π₁ (that is, y₁ = 0 for the states in the first block of π₁ and y₁ = 1 for the states in the second block) and y₂ is assigned according to π₂. Since π₁·π₂ = 0, we have assigned a different code to each state of A. Because (π₁, π₂) and (π₂, π₁) are partition pairs on A, Y₂ is not a function of y₂ and Y₁ is not a function of y₁. The actual equations are of the form

    Y₁ = f(y₂, x),  Y₂ = g(y₁, x),  z = h(y₁, y₂).

[Fig. 3.1. Machine A.    Fig. 3.2. Assignment of state and input variables for A.]
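Definition 3.1 translates into a short membership test. The single-input 4-state machine below is hypothetical (machine A's table appears only in Fig. 3.1), but it exhibits the same "cross" pairs (π₁, π₂) and (π₂, π₁) without either partition having S.P.:

```python
# Sketch of the partition-pair test: (pi, pi') is a p.p. iff every
# block of pi is mapped by every input into a single block of pi'.
# Hypothetical single-input machine.

def block_of(partition, s):
    for b in partition:
        if s in b:
            return b

def is_pair(delta, inputs, pi, pi_prime):
    for b in pi:
        for a in inputs:
            if len({block_of(pi_prime, delta[(s, a)]) for s in b}) > 1:
                return False
    return True

delta = {(1,'x'): 3, (2,'x'): 1, (3,'x'): 4, (4,'x'): 2}
pi1 = [frozenset({1, 2}), frozenset({3, 4})]
pi2 = [frozenset({1, 3}), frozenset({2, 4})]
print(is_pair(delta, ['x'], pi1, pi2))   # -> True
print(is_pair(delta, ['x'], pi1, pi1))   # -> False: pi1 alone lacks S.P.
```

Knowing the π₁-block of the present state thus determines the π₂-block of the next state, and vice versa, even though neither partition alone has S.P.; this is the "cross dependence" discussed in the text.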
The reduced logical dependence in the equations leads to a very economical assignment (few diodes). Observe how the schematic realization of A shown in Fig. 3.3 mimics the logical independence described by the partition pairs (π₁, π₂) and (π₂, π₁). We can also think of this as a decomposition of A into component machines (enclosed by the dotted lines). Such "cross decompositions" cannot be detected by S.P. partitions. These decompositions are investigated later in this chapter after some further algebraic tools are developed.

[Fig. 3.3. Realization of machine A, using modulo-2 addition (exclusive "or").]

EXERCISE. Determine the logical equations for a realization of A by the serial connection defined by π₃ and τ = {1,2; 3,4}. Compare the number of diodes of this realization to the above one.

As in the case of S.P. partitions, the partition pairs on M can be combined by means of partition operations.

LEMMA 3.1. If (π, π') and (τ, τ') are partition pairs on M, then

    (i) (π·τ, π'·τ') is a p.p. on M, and
    (ii) (π + τ, π' + τ') is a p.p. on M.

Proof. (i) If s ≡ t (π·τ), then s ≡ t (π) and s ≡ t (τ). Therefore for any input x,

    δ(s, x) ≡ δ(t, x) (π')  and  δ(s, x) ≡ δ(t, x) (τ'),

and hence

    δ(s, x) ≡ δ(t, x) (π'·τ'),

which shows that (π·τ, π'·τ') is a partition pair.

(ii) If s ≡ t (π + τ), then there exists a sequence of states

    s = s₀, s₁, ..., s_n = t

such that

    s_i ≡ s_{i+1} (π) for i even  and  s_i ≡ s_{i+1} (τ) for i odd.

Therefore, for any input x,

    δ(s_i, x) ≡ δ(s_{i+1}, x) (π')  for i even

and

    δ(s_i, x) ≡ δ(s_{i+1}, x) (τ')  for i odd;

hence

    δ(s, x) ≡ δ(t, x) (π' + τ')

by transitivity, and we conclude that (π + τ, π' + τ') is a p.p.

Next we consider the problem of determining, for a given partition π, which partitions π' can be used to make a partition pair (π, π') on M. At the same time, we consider the dual problem of finding partitions π to match a given π'. Eventually, we show that the possible choices for π' are characterized by a smallest π' such that (π, π') is a p.p.
on M, and the possible π are characterized by a largest π such that (π, π') is a p.p.

First, note that for any π on S of M, (π, I) and (0, π) are trivially partition pairs. But then, using Lemma 3.1, we know that for a given partition π, there exists a minimal partition π' such that (π, π') is a p.p. on M. The partition π' is given by

    π' = Π {τ | (π, τ) is a p.p. on M}.

Similarly, for a given π' there exists a maximal partition π such that (π, π') is a p.p. on M. This justifies our next definition.

DEFINITION 3.2. If π is a partition on S of M, let

    m(π) = Π {τ | (π, τ) is a p.p. on M}

and

    M(π) = Σ {τ | (τ, π) is a p.p. on M}.

We think of m( ) as an operator which gives the minimal second partition ("m" for minimum) and M( ) as the operator which gives the maximal front partition ("M" for maximum).

Informally speaking, for a given partition π, the partition m(π) describes the largest amount of information which we can compute about the next state of M knowing only π (i.e., the block of π which contains the present state of M). Similarly, for a given π', the partition M(π') describes the least amount of information we must have about the present state of M to compute π' for the next state. Thus these partitions give precise meaning to our intuitive concepts "how much can we find out about the next state if we know only ..." and "how much do we have to know about the present state to compute ... about the next state."

The description of m(π) and M(π') in Definition 3.2 is not well suited for their computation. The computation of m(π) is easily carried out by computing all the sets of states onto which the blocks of π are mapped and then constructing the minimal partition which contains these sets. To illustrate this, consider machine B of Fig. 3.4. Let π₁ = {1,2; 3,4; 5}. The blocks of π₁ are mapped onto the sets {4,5}, {1,4} and {2,3}. Joining the first two sets because they overlap, we obtain that

    m(π₁) = {1,4,5; 2,3}.
The computation of M(π') is also quite simple. For a given π', we have to identify all the (present) states which are mapped by all inputs into common blocks of π'. We do this for the partition π' = {1,2; 3,4,5} on machine B. In order to spot quickly those states to be identified, we replace each state in the flow table for B by a symbol representing the block of π' containing that state; Fig. 3.5 results. Now we identify states with identical modified next-state rows and obtain that

    M(π') = {1; 2,3; 4,5}.

[Fig. 3.4. Machine B.    Fig. 3.5. Table for the computation of M(π'), with a = {1,2} and b = {3,4,5}.]

3.2 PAIR ALGEBRA

In the last section, we introduced the concepts of a partition pair, an m operator, and an M operator. In this section, we want to work out some algebraic relationships these concepts satisfy and then proceed in later sections to applications. However, later in the book, we want to study other partition-pair-like systems. We want to study how input information is transformed into state information, state information to output, and input information to output. We want to investigate other concepts of "information" or "information flow." In view of these goals, and in view of other possible applications yet undiscovered, we extract the common properties of all these systems and derive the algebraic relationships in terms of these properties. Anything satisfying the fundamental properties of a partition-pair-like system will be called a "pair algebra"; the basic mathematical framework of pair algebras is to be established in this section once and for all. Later, when different interpretations of the pair algebras are given, the abstract theorems in this section translate into more practical results requiring no further proof. Thus the initial abstract approach is justified by later economies.
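Both computations just described can be sketched as short routines: m(π) joins overlapping image sets, and M(π') groups states with identical "modified next-state rows." The 5-state table is hypothetical (machine B's own table is in Fig. 3.4):

```python
# Sketch of the two operator computations of Definition 3.2:
#   m(pi)  - smallest partition containing all per-input image sets
#   M(pi') - largest partition whose blocks map into blocks of pi'
# The machine below is hypothetical.

def block_of(partition, s):
    for b in partition:
        if s in b:
            return b

def m_op(states, inputs, delta, pi):
    sets = [{delta[(s, a)] for s in b} for b in pi for a in inputs]
    sets += [{s} for s in states]        # every state must appear
    merged = True
    while merged:                        # join overlapping image sets
        merged = False
        for i in range(len(sets)):
            for j in range(i + 1, len(sets)):
                if sets[i] & sets[j]:
                    sets[i] |= sets.pop(j)
                    merged = True
                    break
            if merged:
                break
    return sorted(sorted(b) for b in sets)

def M_op(states, inputs, delta, pi_prime):
    rows = {}
    for s in states:
        row = tuple(frozenset(block_of(pi_prime, delta[(s, a)]))
                    for a in inputs)
        rows.setdefault(row, set()).add(s)   # identical modified rows
    return sorted(sorted(b) for b in rows.values())

states = [1, 2, 3, 4, 5]
delta = {(1,'a'):4, (2,'a'):5, (3,'a'):1, (4,'a'):4, (5,'a'):2,
         (1,'b'):5, (2,'b'):4, (3,'b'):4, (4,'b'):1, (5,'b'):3}
pi = [{1, 2}, {3, 4}, {5}]
print(m_op(states, ['a','b'], delta, pi))              # -> [[1, 4, 5], [2], [3]]
print(M_op(states, ['a','b'], delta, [{1,2},{3,4,5}])) # -> [[1, 2], [3, 5], [4]]
```

By construction (π, m(π)) and (M(π'), π') are partition pairs, with m(π) as fine and M(π') as coarse as possible.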
Another advantage is that this approach brings out the unifying principles behind a variety of different results.

DEFINITION 3.3. Let L₁ and L₂ be finite lattices. Then a subset Δ of L₁ × L₂ is a pair algebra on L₁ × L₂ if and only if the two following postulates hold:

    P₁. (x₁, y₁) and (x₂, y₂) in Δ implies that (x₁·x₂, y₁·y₂) and (x₁ + x₂, y₁ + y₂) are in Δ.

    P₂. For any x in L₁ and y in L₂, (x, I) and (0, y) are in Δ.

Thus a pair algebra is a binary relation on L₁ × L₂ which is closed under component-wise operations (P₁) and contains all the elements specified by P₂. Note that [because of Lemma 3.1 and the fact that (π, I) and (0, π) are p.p. on M] the set of all partition pairs on M is a pair algebra on L × L, where L is the lattice of partitions on S. Thus all the results which we now derive about a pair algebra hold for partition pairs on M.

For all the applications of pair algebra in this book, L₁ describes the ordering of the information which we can have about the machine and L₂ describes the ordering of the information to which this information can be transformed by M. The pairs in Δ then characterize some transformation of this information that transpires in the operation of M. In light of this, P₂ can be interpreted to mean that "any information is sufficient to compute total ignorance" (i.e., (x, I) ∈ Δ) and "perfect information is sufficient to compute anything" (i.e., (0, y) ∈ Δ). Property P₁ can be interpreted to mean "the combination of the information in x₁ and x₂ is sufficient to compute the combined information y₁ and y₂" (i.e., (x₁·x₂, y₁·y₂) ∈ Δ) and "the combined ignorance in x₁ and x₂ is sufficient to calculate the combined ignorance in y₁ and y₂" (i.e., (x₁ + x₂, y₁ + y₂) ∈ Δ). Except for the property (x₁ + x₂, y₁ + y₂) ∈ Δ, one would expect these properties to hold for any system describing "information flow," since the interpretations are so natural. The
property (x₁ + x₂, y₁ + y₂) ∈ Δ is more of a convenience that has been added in order to obtain stronger results. This addition is finally justified by the number of systems which do satisfy it.

LEMMA 3.2. If Δ is a pair algebra on L₁ × L₂ and (x, y) is in Δ, then x' ≤ x and y' ≥ y implies that (x', y') is in Δ.

Proof. By P₂, (x', I) and (0, y') are in Δ, so by P₁, (x·x', y + y') = (x', y') is in Δ.

DEFINITION 3.4. For a pair algebra Δ on L₁ × L₂, let

    m(x) = Π {y | (x, y) ∈ Δ}  and  M(y) = Σ {x | (x, y) ∈ Δ}.

DEFINITION 3.5. Q_Δ is the set of Mm pairs of Δ, that is, the pairs (x, y) in Δ with x = M(y) and y = m(x).

THEOREM 3.1. Let Δ be a pair algebra on L₁ × L₂. Then:

    (i) [x, m(x)] and [M(y), y] are in Δ;
    (ii) x₁ ≥ x₂ implies that m(x₁) ≥ m(x₂);
    (iii) m(x₁ + x₂) = m(x₁) + m(x₂);
    (iv) m(x₁·x₂) ≤ m(x₁)·m(x₂);
    (v) y ≥ m(x) if and only if (x, y) is in Δ;
    (vi) y₁ ≥ y₂ implies that M(y₁) ≥ M(y₂);
    (vii) M(y₁ + y₂) ≥ M(y₁) + M(y₂);
    (viii) M(y₁·y₂) = M(y₁)·M(y₂);
    (ix) x ≤ M(y) if and only if (x, y) is in Δ;
    (x) M[m(x)] ≥ x;
    (xi) m[M(y)] ≤ y;
    (xii) M{m[M(y)]} = M(y);
    (xiii) m{M[m(x)]} = m(x);
    (xiv) {M(y), m[M(y)]} and {M[m(x)], m(x)} are in Q_Δ;
    (xv) if (x₁, y₁) and (x₂, y₂) are in Q_Δ, then x₁ ≤ x₂ if and only if y₁ ≤ y₂;
    (xvi) Q_Δ is a lattice in which
        l.u.b. {(x₁, y₁), (x₂, y₂)} = [M(y₁ + y₂), y₁ + y₂],
        g.l.b. {(x₁, y₁), (x₂, y₂)} = [x₁·x₂, m(x₁·x₂)].

Proof. (i) By P₁, [x, m(x)] is the component-wise product of the pairs {(x, y) | (x, y) ∈ Δ} and is therefore in Δ; the case [M(y), y] follows by the dual argument with sums.

(ii) Since x₂ ≤ x₁, [x₁, m(x₁)] in Δ implies (Lemma 3.2) that [x₂, m(x₁)] is in Δ. Therefore (Definition 3.4), m(x₁) ≥ m(x₂).

(iii) Since x₁ ≤ x₁ + x₂ and x₂ ≤ x₁ + x₂, part (ii) implies that

    m(x₁ + x₂) ≥ m(x₁)  and  m(x₁ + x₂) ≥ m(x₂),

and thus m(x₁ + x₂) ≥ m(x₁) + m(x₂). From (i) we know that [x₁, m(x₁)] and [x₂, m(x₂)] are in Δ. Therefore (P₁), [x₁ + x₂, m(x₁) + m(x₂)] is in Δ, and thus (Definition 3.4)

    m(x₁ + x₂) ≤ m(x₁) + m(x₂).

Combining the two inequalities, we obtain m(x₁ + x₂) = m(x₁) + m(x₂).

(iv) Since x₁·x₂ ≤ x₁ and x₁·x₂ ≤ x₂, we know from (ii) that

    m(x₁·x₂) ≤ m(x₁)  and  m(x₁·x₂) ≤ m(x₂).

Therefore, m(x₁·x₂) ≤ m(x₁)·m(x₂).

(v) Since [x, m(x)] is in Δ by (i), y ≥ m(x) implies by Lemma 3.2 that (x, y) is in Δ. If (x, y) is in Δ, then by Definition 3.4, y ≥ m(x).

(vi) By (i), [M(y₁), y₁] is in Δ and hence y₂ ≤ y₁ implies (Lemma 3.2) that [M(y₁), y₂] is in Δ. Therefore (Definition 3.4), M(y₁) ≥ M(y₂).

(vii) Since y₁ ≤ y₁ + y₂ and y₂ ≤ y₁ + y₂, we know by (vi) that

    M(y₁ + y₂) ≥ M(y₁)  and  M(y₁ + y₂) ≥ M(y₂).

Therefore, M(y₁ + y₂) ≥ M(y₁) + M(y₂).

(viii) Since y₁·y₂ ≤ y₁ and y₁·y₂ ≤ y₂, we know by (vi) that

    M(y₁·y₂) ≤ M(y₁)·M(y₂).

On the other hand, [M(y₁)·M(y₂), y₁·y₂] is in Δ by (i) and P₁, so M(y₁·y₂) ≥ M(y₁)·M(y₂). Combining these inequalities, we obtain M(y₁·y₂) = M(y₁)·M(y₂).

(ix) By (i), [M(y), y] is in Δ; and if x ≤ M(y), we have, by Lemma 3.2, that (x, y) is in Δ. If (x, y) is in Δ, then by Definition 3.4, x ≤ M(y).

(x) Since [x, m(x)] is in Δ, (ix) gives x ≤ M[m(x)].

(xi) Since [M(y), y] is in Δ, (v) gives y ≥ m[M(y)].

(xii) From (xi), m[M(y)] ≤ y, so by (vi), M{m[M(y)]} ≤ M(y); from (x) applied to x = M(y), M{m[M(y)]} ≥ M(y).
Thus the equality holds.

(xiii) This equality follows by an argument similar to (xii).

(xiv) {M(y), m[M(y)]} is in Δ by (i), and since by (xii) M{m[M(y)]} = M(y), we conclude that it is an Mm pair. The second case follows by a similar argument.

(xv) The condition x₁ ≤ x₂ implies m(x₁) ≤ m(x₂) by (ii) and, therefore, y₁ ≤ y₂, since y₁ = m(x₁) and y₂ = m(x₂). Similarly, y₁ ≤ y₂ implies x₁ ≤ x₂ by (vi), since x₁ = M(y₁) and x₂ = M(y₂).

(xvi) The pair (M(y₁ + y₂), y₁ + y₂) is an Mm pair, because y₁ + y₂ = m(x₁) + m(x₂) = m(x₁ + x₂) by (iii), and hence m[M(y₁ + y₂)] = y₁ + y₂ by (xiii). It is an upper bound, since

M(y₁ + y₂) ≥ M(y₁) + M(y₂) = x₁ + x₂.

Any other upper bound (x, y) in Q_Δ must satisfy y ≥ y₁ + y₂, and so by (xv),

(x, y) ≥ (M(y₁ + y₂), y₁ + y₂),

and we have our l.u.b. By a similar argument, we conclude that

g.l.b. {(x₁, y₁), (x₂, y₂)} = (x₁·x₂, m(x₁·x₂)). ∎

The lattice Q_Δ, sometimes referred to as an "Mm lattice," takes on a central importance in the structure theory because Q_Δ characterizes Δ, as described next.

Corollary 3.1.1. Let Δ be a pair algebra on L₁ × L₂ and let x be in L₁ and y in L₂. Then (x, y) is in Δ if and only if there is an (x', y') in Q_Δ such that x ≤ x' and y ≥ y'.

Proof. If (x, y) is in Δ, take (x', y') = {M[m(x)], m(x)}, which is in Q_Δ by (xiv); then x ≤ x' by (x) and y ≥ y' by Definition 3.4. Conversely, if such an (x', y') exists, then (x', y') is in Δ and Lemma 3.2 gives (x, y) in Δ. ∎

Thus every pair in Δ is obtained from an Mm pair by discarding some of the computed information [y ≥ m(x)] or using more information than necessary [x ≤ M(y)]. In addition to being smaller than Δ, Q_Δ also has the advantage of a simplified ordering relation.

Corollary 3.1.2. If (x₁, y₁) and (x₂, y₂) are in Q_Δ, then the following three statements are equivalent:
(i) (x₁, y₁) ≤ (x₂, y₂);
(ii) x₁ ≤ x₂;
(iii) y₁ ≤ y₂.

Proof. This is a direct consequence of Definition 3.5 and condition (xv) of the theorem. ∎

Thus the order of Q_Δ is the same as the order on the first components of the pairs or as the order on the second components. The one unpleasant aspect is that Q_Δ is not a sublattice of Δ, and the l.u.b. and g.l.b. operations must be computed by the formulas in part (xvi) of Theorem 3.1.

The remaining corollaries of this section are specialized to the case where L₁ = L₂. This case is of special interest in machine theory because one often wants to study the flow of state information into state information. We are now able to give a more general definition of the substitution property.

Definition 3.7. If Δ is a pair algebra on L × L, then we say x in L has the substitution property (S.P.)
with respect to Δ if and only if (x, x) is in Δ. We write L_Δ for the set of x in L with S.P., that is, L_Δ = {x ∈ L | (x, x) ∈ Δ}.

Corollary 3.1.3. If Δ is a pair algebra on L × L, then the S.P. lattice L_Δ is a sublattice of L containing 0 and I. Furthermore,

L_Δ = {x ∈ L | x ≥ m(x)} = {x ∈ L | x ≤ M(x)}.

Proof. L_Δ is a sublattice by P₁ and contains 0 and I because (P₂) Δ contains (0, 0) and (I, I). The last statement follows directly from parts (v) and (ix) of the theorem. ∎

Thus we have characterizations of x in L_Δ in terms of the two operators:

x has S.P. if and only if x ≥ m(x) if and only if x ≤ M(x).

There is a third characterization which enables one to read off the S.P. elements from Q_Δ.

Corollary 3.1.4. If Δ is a pair algebra on L × L, then

L_Δ = {z ∈ L | x ≥ z ≥ y for some (x, y) ∈ Q_Δ}.

Proof. Given z in L_Δ, choose y = m(z) and x = M[m(z)]. We leave the details as an exercise.

In generating the S.P. lattice for a machine M, the algorithm calls for computing

min {τ ∈ L_Δ | τ ≥ π_{s,t}}.

But this must be done before the lattice L_Δ is known, and so we suggested a method of "looking ahead" to see what must be identified. The next corollary tells us in effect that this method applies to any Δ, where "looking ahead" is interpreted to mean the proper use of the m operator.

Notation. For a pair algebra Δ on L × L and x in L, we write

m⁰(x) = x  and  mⁱ(x) = m[m^{i-1}(x)]  for i ≥ 1.

Corollary 3.1.5. Given a pair algebra Δ on L × L, there exists an integer K such that for all x in L and k ≥ K,

min {y ∈ L_Δ | y ≥ x} = Σ_{i=0}^{k} mⁱ(x).

Proof. Let

y_k = Σ_{i=0}^{k} mⁱ(x).

Because of Theorem 3.1 (iii),

y_{k+1} = y_k + m^{k+1}(x) = y_k + m(y_k).

Therefore,

y_k ≤ y_{k+1},

and because L is a finite lattice, y_k = y_{k+1} for some k. Therefore

y_{k+1} = y_k = y_k + m(y_k),

and so we must have m(y_k) ≤ y_k; that is, y_k is in L_Δ. If y' is any element of L_Δ with y' ≥ x, then

y' ≥ mⁱ(y') ≥ mⁱ(x)  for 0 ≤ i ≤ k,

and hence y' ≥ y_k. Thus y_k = min {y ∈ L_Δ | y ≥ x}. Since L is finite, a single K (for instance, the length of the longest chain in L) works for all x. ∎

Corollary 3.1.6. Given a pair algebra Δ on L × L, there exists an integer K such that for all x in L and k ≥ K,

max {y ∈ L_Δ | y ≤ x} = Π_{i=0}^{k} Mⁱ(x).

Proof. Dual to the proof of Corollary 3.1.5. ∎

Corollary 3.1.7. If Δ is a pair algebra on L × L, then

m^k(x) = 0  if and only if  x ≤ M^k(0).

Proof. If m^k(x) = 0, then repeated use of the relation M[m(x)] ≥ x gives

x ≤ M^k[m^k(x)] = M^k(0).

Conversely, x ≤ M^k(0) implies by a similar argument [using Theorem 3.1 (xi)] that

m^k(x) ≤ m^k[M^k(0)] ≤ 0,

which completes the proof. ∎
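Definition 3.4 and Corollary 3.1.5 are both directly computable for the S-S pair algebra of a machine. The sketch below is our own illustration (the five-state machine and all function names are assumptions, not taken from the text): `m_op` builds m(π) by identifying successors of π-equivalent states, `M_op` builds M(τ) by grouping states whose successors agree modulo τ, and `sp_closure` iterates y ← y + m(y) exactly as in the proof of Corollary 3.1.5.

```python
# Hypothetical 5-state, 1-input machine (our own example).
S = [1, 2, 3, 4, 5]
I = ['a']
delta = {(1, 'a'): 2, (2, 'a'): 3, (3, 'a'): 3, (4, 'a'): 5, (5, 'a'): 4}

def from_pairs(pairs):
    """Partition of S generated by identifying the given pairs (union-find)."""
    parent = {s: s for s in S}
    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s
    for a, b in pairs:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    groups = {}
    for s in S:
        groups.setdefault(find(s), set()).add(s)
    return frozenset(frozenset(g) for g in groups.values())

def block(p, s):
    return next(b for b in p if s in b)

def m_op(pi):
    """m(pi): the smallest tau such that (pi, tau) is a partition pair."""
    return from_pairs([(delta[(s, x)], delta[(t, x)])
                       for b in pi for s in b for t in b for x in I])

def M_op(tau):
    """M(tau): the largest pi such that (pi, tau) is a partition pair."""
    groups = {}
    for s in S:
        key = tuple(block(tau, delta[(s, x)]) for x in I)
        groups.setdefault(key, set()).add(s)
    return frozenset(frozenset(g) for g in groups.values())

def leq(p1, p2):
    """p1 <= p2 in the partition lattice (p1 refines p2)."""
    return all(any(b1 <= b2 for b2 in p2) for b1 in p1)

def psum(p1, p2):
    return from_pairs([(min(b), s) for p in (p1, p2) for b in p for s in b])

def sp_closure(pi):
    """Smallest S.P. partition >= pi, by the iteration of Corollary 3.1.5."""
    y = pi
    while True:
        y_next = psum(y, m_op(y))
        if y_next == y:
            return y
        y = y_next

pi_45 = from_pairs([(4, 5)])
assert leq(pi_45, M_op(m_op(pi_45)))          # Theorem 3.1 (x): M[m(x)] >= x
assert sp_closure(pi_45) == pi_45             # {4,5} already has S.P. here
assert sp_closure(from_pairs([(1, 4)])) == frozenset({frozenset(S)})
```

With the same utilities, the dual chain of Corollary 3.1.6 can be computed by iterating y ← y·M(y) until it stabilizes.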
Corollary 3.1.8. If Δ is a pair algebra on L × L, then

M^k(x) = I  if and only if  x ≥ m^k(I).

Proof. Exercise.

Corollary 3.1.9. If Δ is a pair algebra on L × L, then

M^k(0) = I  if and only if  0 = m^k(I).

Proof. This is just the combination of Corollaries 3.1.7 and 3.1.8. ∎

3.3 PARTITION ANALYSES

In this section we define several (partition) pair algebras associated with a machine and discuss their computation.

Definition 3.8. For a machine M = (S, I, O, δ, λ), define:
a. (π, τ) is an S-S pair if and only if π and τ are partitions on S and, for all x in I, s ≡ t (π) implies δ(s, x) ≡ δ(t, x) (τ);
b. (ξ, τ) is an I-S pair if and only if ξ is a partition on I, τ is a partition on S, and, for all s in S, a ≡ b (ξ) implies δ(s, a) ≡ δ(s, b) (τ);
c. (π, ω) is an S-O pair if and only if π is a partition on S, ω is a partition on O, and, for all x in I, s ≡ t (π) implies λ(s, x) ≡ λ(t, x) (ω);
d. (ξ, ω) is an I-O pair if and only if ξ is a partition on I, ω is a partition on O, and, for all s in S, a ≡ b (ξ) implies λ(s, a) ≡ λ(s, b) (ω).

Note that Definition 3.8a is just a repeat of Definition 3.1. Note also that in the Moore case, any (ξ, ω) is an I-O pair, because the degenerate condition λ(s, a) ≡ λ(s, b) (ω) must always hold. In other words, given the state of a Moore machine, any information about the output can be computed regardless of the input information; or, more simply stated, the output is independent of the input.

Theorem 3.2. For a machine M = (S, I, O, δ, λ),
(a) the set of all S-S pairs, Q_{S-S},
(b) the set of all I-S pairs, Q_{I-S},
(c) the set of all S-O pairs, Q_{S-O},
(d) the set of all I-O pairs, Q_{I-O},
are all pair algebras.

Proof. (a) Because (π, I) and (0, τ) are obviously in Q_{S-S} and because of Lemma 3.1, Q_{S-S} is a pair algebra. By interchanging the roles of S and I or S and O, the same reasoning proves the other cases. ∎

EXERCISE. Work out the details in the previous proof.
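Each of the four pair types in Definition 3.8 is a mechanical check. For instance, an I-S pair asks whether knowing the input only to within a block of ξ still determines the next-state block of τ. A minimal sketch on a hypothetical machine (our own example, not from the text), in which two inputs happen to act identically:

```python
# Hypothetical 3-state machine; inputs 'a' and 'b' have identical columns.
S = [1, 2, 3]
I = ['a', 'b', 'c']
delta = {(1, 'a'): 2, (1, 'b'): 2, (1, 'c'): 3,
         (2, 'a'): 3, (2, 'b'): 3, (2, 'c'): 1,
         (3, 'a'): 1, (3, 'b'): 1, (3, 'c'): 2}

def block(p, s):
    return next(b for b in p if s in b)

def is_IS_pair(xi, tau):
    """(xi, tau) is an I-S pair iff a ≡ b (xi) implies
    delta(s, a) ≡ delta(s, b) (tau) for every state s."""
    return all(block(tau, delta[(s, a)]) == block(tau, delta[(s, b)])
               for blk in xi for a in blk for b in blk for s in S)

xi = frozenset({frozenset({'a', 'b'}), frozenset({'c'})})
zero = frozenset({frozenset({s}) for s in S})
assert is_IS_pair(xi, zero)        # a and b need not be distinguished
assert not is_IS_pair(frozenset({frozenset({'a', 'b', 'c'})}), zero)
```

The S-O and I-O checks are the same loops with λ in place of δ; by Theorem 3.2, each of the four families supports the full m and M machinery of Theorem 3.1.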
We now know that all the properties of Theorem 3.1 hold for I-S, S-O, and I-O pairs in addition to the S-S pairs. The M and m operations with respect to these new pairs are distinguished by subscripts; we write, for example, M_{I-S}(τ) or m_{S-O}(π). We may sometimes write M_{S-S}(τ) or m_{S-S}(π), although these operators are usually written without subscripts, as before.

Now we concentrate on the computation of the Mm lattice for the S-S pairs on a given machine M. We use the symbol Q_M to represent this lattice. Let π_{s,t} denote the partition which identifies the states s and t but leaves the other states in one-element blocks.

Theorem 3.3. If (π, π') is in Q_M, then

π' = Σ {m(π_{s,t}) | π_{s,t} ≤ π}.

Proof. Since π ≥ π_{s,t}, we know that (π_{s,t}, π') is a partition pair and therefore m(π_{s,t}) ≤ π'. Hence

Σ {m(π_{s,t}) | π_{s,t} ≤ π} ≤ π'.

On the other hand, π = Σ {π_{s,t} | π_{s,t} ≤ π}, and so by Theorem 3.1 (iii),

π' = m(π) = Σ {m(π_{s,t}) | π_{s,t} ≤ π}. ∎

Carrying this computation out for machine C yields its Mm lattice, from which the S.P. partitions can be read off by Corollary 3.1.4; there are two nontrivial partitions with S.P.

EXERCISE. Construct some random-looking machines with five or six states and three or four inputs. Then compute the Q_M for these machines. Machines of this size generally have a rich enough structure to acquaint one with the procedure; yet they are small enough so that the calculations are not too lengthy.

The computation of the Mm pairs in Q_{I-S}, Q_{S-O}, and Q_{I-O} proceeds similarly. This is illustrated by machine D of Fig. 3.8. The corresponding Mm pairs and four lattices are shown in Fig. 3.9. In Fig. 3.10 is shown a Moore machine E with the same state transitions as D, and in Fig. 3.11 is shown the corresponding Mm_{S-O} lattice. The Mm_{I-O} lattice must necessarily be

Fig. 3.8. Machine D (flow table).
Fig. 3.10. Machine E (flow table).
Fig. 3.9. The Mm lattices for machine D. (a) Mm_{S-S} lattice, Q_M. (b) Mm_{I-S} lattice.
(c) Mm_{S-O} lattice. (d) Mm_{I-O} lattice.

a single-element lattice, (I, 0), since any input partition with any output partition is an I-O pair for a Moore machine.

Fig. 3.11. The Mm_{S-O} lattice for machine E.

3.4 PARTITION PAIRS AND STATE ASSIGNMENT

In this section we investigate the application of the partition pair algebras on M to the state assignment problem. This application illustrates the usefulness of the partition pairs, and it also motivates the more abstract structure results developed later in this chapter. It should be pointed out that the results of this section do not make explicit claims about the economy of the assignments they describe. They only describe necessary and sufficient conditions for the existence of assignments with "reduced" functional dependence. On the other hand, experience suggests that, whenever some degree of variable independence can be achieved, the realizations with the minimal logical dependence generally compare favorably with the best possible.

The next lemma is basic to all partition pair applications. Informally, it says that if a state partition ρ contains enough information to compute the next block of τ from the input, and if input partition μ contains enough information to compute the next block of τ from the present state, then ρ and μ jointly (or ρ × μ) contain enough information to compute the next block of τ. The function f names a block of τ, and the condition

f[B_ρ(s), B_μ(x)] = B_τ[δ(s, x)]

tells us that f names the block of τ which contains the next state. (Recall that B_ρ(s) stands for the block of ρ which contains s.) The effect of this lemma is to establish that input dependence and state dependence can be reduced simultaneously and to provide a symbolic statement of this fact.

Lemma 3.3. Given a machine M = (S, I, O, δ, λ) and partitions ρ and τ on S and μ on I, then

ρ ≤ M_{S-S}(τ)  and  μ ≤ M_{I-S}(τ)

if and only if there exists a function f: ρ × μ → τ such that

f[B_ρ(s), B_μ(x)] = B_τ[δ(s, x)]

for all s in S and x in I.

The lemma generalizes to several partitions at once.

Lemma 3.4. Given a machine M = (S, I, O, δ, λ), partitions {ρ_i} and τ on S, and partitions {μ_j} on I, then

Π ρ_i ≤ M_{S-S}(τ)  and  Π μ_j ≤ M_{I-S}(τ)

if and only if there exists a function f: (× ρ_i) × (× μ_j) → τ such that

f[× B_{ρ_i}(s), × B_{μ_j}(x)] = B_τ[δ(s, x)]

for all s in S and x in I.
Proof. Suppose that f exists. Let f' be the restriction of f to those tuples (× B_{ρ_i}, × B_{μ_j}) such that ∩ B_{ρ_i} ≠ ∅ and ∩ B_{μ_j} ≠ ∅. Then we may write

f': ρ × μ → τ,

where

ρ = Π ρ_i = {B | B = ∩ B_{ρ_i} ≠ ∅}  and  μ = Π μ_j = {B | B = ∩ B_{μ_j} ≠ ∅}.

But

f'[B_ρ(s), B_μ(x)] = B_τ[δ(s, x)]

for all s and x, because f' is a restriction of f. Therefore, by Lemma 3.3, we know that

Π ρ_i = ρ ≤ M_{S-S}(τ)  and  Π μ_j = μ ≤ M_{I-S}(τ).

Conversely, if these inequalities hold, we know from Lemma 3.3 that we can define

f': ρ × μ → τ

such that

f'[B_ρ(s), B_μ(x)] = B_τ[δ(s, x)]

for all s and x. This may be interpreted as a function on

{(× B_{ρ_i}, × B_{μ_j}) in (× ρ_i) × (× μ_j) | ∩ B_{ρ_i} ≠ ∅ ≠ ∩ B_{μ_j}},

and we let f be any extension of f' to the full set (× ρ_i) × (× μ_j). Of course,

f[× B_{ρ_i}(s), × B_{μ_j}(x)] = f'[B_ρ(s), B_μ(x)] = B_τ[δ(s, x)]

for all s and x, and so we have a desired f. ∎

The partition inequalities in the lemma and its applications are often referred to as information flow inequalities.

EXERCISE. Restate this lemma for S-O and I-O pairs.

Now we can easily prove the central theorem of variable dependence.

Theorem 3.4. Suppose that, for a machine M, state variables {y_i} are assigned according to partitions {τ_i}, input variables {x_j} are assigned according to input partitions {μ_j}, output variables {z_k} are assigned according to output partitions {ω_k}, and that some P and Q are given such that

P ⊆ {y_i}  and  Q ⊆ {x_j};

then
(i) state variable Y_i can be expressed as a function of the variables in P ∪ Q if and only if

Π_{y_j ∈ P} τ_j ≤ M_{S-S}(τ_i)  and  Π_{x_j ∈ Q} μ_j ≤ M_{I-S}(τ_i);

(ii) output variable Z_k can be expressed as a function of the variables in P ∪ Q if and only if

Π_{y_j ∈ P} τ_j ≤ M_{S-O}(ω_k)  and  Π_{x_j ∈ Q} μ_j ≤ M_{I-O}(ω_k).

Proof. Variable Y_i can be expressed as a function of the variables in P ∪ Q if and only if there exists a function Y_i such that

Y_i[× B_{τ_j}(s), × B_{μ_j}(x)] = B_{τ_i}[δ(s, x)]

for all s in S and x in I, where the products range over y_j in P and x_j in Q. But Lemma 3.4 tells us that such a Y_i exists if and only if the information inequalities of part (i) hold. (To apply the lemma, let the {ρ_i} be the τ_j for y_j in P, let the {μ_j} be taken for x_j in Q, and let τ = τ_i.) Thus part (i) is proved. Part (ii) is proved similarly by using Lemma 3.4 restated for S-O and I-O pairs. ∎

EXAMPLE. Now we apply our knowledge about the information flow in machine C to obtain a good state assignment.
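Theorem 3.4's criterion can also be confirmed by brute force on any coded machine: a next-state bit Y_i is expressible in a chosen set of variables exactly when those variables determine it single-valuedly. The 4-state machine, its 2-bit assignment, and the helper below are our own illustration (not machine C):

```python
# Hypothetical 4-state machine; states coded by bits (y1, y2), input x in {0, 1}.
code = {1: (0, 0), 2: (0, 1), 3: (1, 0), 4: (1, 1)}
inputs = [0, 1]
delta = {(1, 0): 2, (2, 0): 1, (3, 0): 4, (4, 0): 3,
         (1, 1): 3, (2, 1): 4, (3, 1): 3, (4, 1): 4}

# The assignment is proper: distinct states get distinct codes
# (the product of the code partitions is the zero partition).
assert len(set(code.values())) == len(code)

def depends_only_on(i, state_bits, use_input):
    """True iff next-state bit Y_i is a single-valued function of the
    chosen present-state bits (and the input, if use_input is set)."""
    seen = {}
    for s in code:
        for x in inputs:
            key = (tuple(code[s][j] for j in state_bits),
                   x if use_input else None)
            y_i = code[delta[(s, x)]][i]
            if seen.setdefault(key, y_i) != y_i:
                return False
    return True

# Y1 (the first next-state bit) needs only y1 and the input...
assert depends_only_on(0, [0], True)
# ...but not y1 alone:
assert not depends_only_on(0, [0], False)
assert depends_only_on(0, [0, 1], True)
```

For this machine the successful check corresponds to the inequality of Theorem 3.4 (i) with P = {y₁}: the partition of the first bit is τ₁ = {1̄,̄2̄; 3̄,̄4̄}, and one can verify that τ₁ ≤ M_{S-S}(τ₁).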
For the sake of simplicity, we assume that the input variables have been preassigned as follows:

x₁x₂:  a → 00,  b → 01,  c → 11,  d → 10.

Our objective is to make a three-digit binary state assignment reducing the number of dependences. In terms of partitions, we are looking for three partitions τ₁, τ₂, τ₃ of two blocks each such that

τ₁·τ₂·τ₃ = 0

and such that M(τ₁), M(τ₂), and M(τ₃) are each greater than the product of only a small subset of the partitions τ₁, τ₂, τ₃. Generally speaking, the larger the M partitions, the better the chance of this happening. We observe, for instance, that if a variable assigned according to some partition τ is to depend only on a variable assigned according to some other partition τ', then τ' ≤ M(τ), and so M(τ) can have at most two blocks. In the case of machine C, only two M partitions are this large. With this in mind, we decide to let τ₁ be the corresponding two-block partition, and later we choose τ₂ to be a two-block partition that is an enlargement of m(τ₁); hence, (τ₁, τ₂) is a partition pair, and the state variable according to τ₂ depends only on the state variable according to τ₁ and the input, giving additional reduced dependence. Finally, we have to choose τ₃ so that τ₁·τ₂·τ₃ = 0, and this can be done if states "1" and "4" are separated. Letting τ₃ = {1̄,̄3̄,̄5̄; 2̄,̄4̄} leads to the assignment shown in Fig. 3.12, along with the corresponding partition pairs and information-flow inequalities.

Fig. 3.12. Partition pairs, information-flow inequalities, and state assignment for machine C (the assignment is 1 → 000, 2 → 110, 3 → 011, 4 → 010, 5 → 001).

It is interesting to observe that for either choice the number of dependencies is the same. In the first

Fig. 3.13. (a) Realization for machine C resulting from first choice of τ₃. (b) Realization for machine C resulting from assignment of Fig. 3.12.
case the realization has the form shown in Fig. 3.13a, and in the second case the form shown in Fig. 3.13b. Both choices utilize Mm pairs which are high on the Mm lattice. The actual logical equations for the second assignment consist of three next-state equations Y₁, Y₂, Y₃ and the output equation Z = y₃. It is seen that this assignment requires only thirty diodes. For the sake of comparison, there is a bad assignment for this machine requiring sixty-five diodes.

EXERCISE. Compute the Mm lattices for machine A of Fig. 3.1 and derive the information flow inequalities that characterize the assignment of Fig. 3.2.

3.5 ABSTRACT NETWORKS

The purpose of this section is to define a network as an abstract algebraic system. This is somewhat analogous to what was done previously for machines. A sequential machine is really a complex physical computing device, but we isolate its mathematical properties by the simple expedient of saying that a machine is a quintuple (S, I, O, δ, λ). This enables the development of an algebraic analysis without further reference to the "real world." Similarly, we wish to set aside the idea that a circuit is really a connection of wires, delays, and gates, and regard the network as an algebraic system. Such an exercise is hardly necessary for design applications, as the methods are easy to apply without knowledge of such a formalism. For this same reason, it would hardly be worthwhile pursuing a "theory of abstract networks." Nevertheless, there are two good reasons for presenting such a definition here. First, it underscores the fact that machine decomposition and the reduction of variable dependence are virtually identical concepts. Second, it establishes a standard notation for the information associated with a network which is useful in later discussions.

Definition 3.9. An abstract network N of machines consists of:
1. {M_i = (S_i, I_i, δ_i)}, 1 ≤ i ≤ n, a set of n state machines;
2. I, a set of network inputs;
3. {f_i}, 1 ≤ i ≤ n, the connecting functions

f_i: S₁ × ⋯ × S_n × I → I_i;

4. O, a set of network outputs, and

F: S₁ × ⋯ × S_n × I → O, the output function.
The network is said to be in logic delay form if and only if, for all i,

δ_i(s, x) = δ_i(t, x)

for all s, t in S_i and x in I_i. (That is, the next state of any component machine is not a function of its present state.) The network is said to be in standard form if and only if the functions f_i are expressed in the form

f_i(s₁, …, s_n, x) = h_i[f_{1i}(s₁), …, f_{ni}(s_n), x].

The use of state (or Moore) machines in this definition is essential. The use of Mealy machines here is ruled out because of "timing" considerations. Physically, the output of a Mealy machine is a combination of the present state and present input, and so its presence in a circuit presupposes that the present input is being applied. This output, therefore, cannot be readily combined with the next input signal. Mathematically, it is impossible to make a formulation of the definition in terms of Mealy machines so that the network can be said to "define a machine." We return to this point shortly.

Note that the state of component machine M_i can govern the behavior of M_k in two ways. The first way is internal, the way the state of a machine usually affects its behavior; the second way is external, affecting the inputs. In other words, s_k appears in two places in

δ_k[s_k, f_k(s₁, …, s_k, …, s_n, x)].

This allows some freedom in describing a circuit as an abstract network, depending on which wires one chooses to call internal and which external. When all the wires are external, we have the logic delay form.

EXAMPLE. Consider network N_A and the circuit shown in Fig. 3.14. (Fig. 3.14 gives the component flow tables, the connecting functions, and the circuit drawing.) The component machines F₁ and F₂ are enclosed in the circuit drawing by a dotted line. The wire from delay s₁ to logic δ₁ is inside the dotted line because the input to F₁ does not depend on s₁.
Thus, the dependence on s₁ is kept internal. In Fig. 3.15, there is a network N_B and circuit. (Fig. 3.15 gives the flow tables, connecting functions, and circuit drawing for N_B.) The circuits for N_A and N_B are identical except for the placement of the dotted lines. In this case, the wire from s₁ to δ₁ appears outside the dotted line because it is now considered part of the input. The next state of F₁ can be computed from the input of F₁ without any direct knowledge of the present state, so the dependence on s₁ is regarded as external. Network N_B is in logic delay form.

The network N_A is regarded as a connection of two machines. Because the output of the delay elements is sent to the other component machines without further processing, N_A is in standard form. Another view of standard form is that the components are supplying Moore-type outputs to each other. Network N_B is pictured as in state assignment applications, where the component "machines" are really just the delay elements and the preceding combinational logic; thus the terminology "logic delay form." For networks in this form, it is more customary and logical to specify only the combinational logic functions, bypassing the rather trivial component machine flow tables. The flow table representation is used in Fig. 3.15 only to emphasize the basic sameness of the two cases. Again, N_B is in standard form.

One final comment on Definition 3.9. It is not the most general definition that could be given, but it is quite sufficient for the purposes here. In fact, this book uses the "standard form" almost exclusively.

Of course, networks are for the realization of machines, and so we give the abstract counterpart of Definition 1.17.

Definition 3.10.
An abstract network N defines the machine M_N = (S, I, O, δ, λ) with

S = × S_i,  I = I,  and  O = O,
δ(× s_i, x) = × δ_i[s_i, f_i(s₁, …, s_n, x)],
λ(× s_i, x) = F(s₁, …, s_n, x).

Network N realizes machine M if and only if machine M_N realizes M.

EXAMPLE. Machine F defined by network N_A of Figs. 3.14 and 3.15 is shown in Fig. 3.16 (a flow table). Symbolically, F = M_{N_A} = M_{N_B}.

EXAMPLE. Consider the network N_C of Fig. 3.17. Each of the two component machines has Moore-type outputs, which are used as inputs to the other. These outputs determine the connecting functions f₁ and f₂ given in the figure. This network is in standard form if we take the f_{ij} to be the Moore output functions and the other f's to be identity maps. Network N_C defines machine G of Fig. 3.18.

Suppose one tries to replace the Moore machines G₁ and G₂ by similar Mealy machines G'₁ and G'₂. Machine G'₁ would require only two states, because rows s₁ and s₂ of the flow table are identical, and G'₂ would, of course, require three states. But any hook-up of G'₁ and G'₂ could not define machine G, since a two-state and a three-state machine can represent at most a six-state reduced machine, and reduced machine G has nine states!

Fig. 3.17. Network N_C (component flow tables and connecting functions).

The next step is to define some information partitions which are induced on M by a network N in standard form that defines M. These "associated" partitions on M may be thought of as a global characterization (on M) of the information used and computed in a component machine of N.
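Definition 3.10 can be carried out mechanically: given component machines and connecting functions, M_N runs on the product state set. The two-component serial hook-up below is our own toy example (all machines and names are assumptions, not figures from the text); the second component's input alphabet is the first component's state set.

```python
# Two hypothetical components in standard form; the connecting functions
# feed the network input to M1 and M1's present state to M2 (serial).
S1, S2 = ['p', 'q'], ['u', 'v']
I = [0, 1]
d1 = {('p', 0): 'p', ('p', 1): 'q', ('q', 0): 'q', ('q', 1): 'p'}
d2 = {('u', 'p'): 'u', ('u', 'q'): 'v', ('v', 'p'): 'v', ('v', 'q'): 'u'}

def network_delta(state, x):
    """delta of M_N: each component moves under its connected input."""
    s1, s2 = state
    return (d1[(s1, x)], d2[(s2, s1)])   # f1 = network input, f2 = state of M1

states = [(a, b) for a in S1 for b in S2]
table = {(s, x): network_delta(s, x) for s in states for x in I}

assert len(states) == 4                  # |S| = |S1| * |S2|
# The first coordinate of M_N is governed by M1 alone (no feedback wire):
assert all(table[((s1, s2), x)][0] == d1[(s1, x)]
           for (s1, s2) in states for x in I)
```

In this serial case the partition of product states that agree in the first coordinate has S.P. in M_N — the connection between network structure and S.P. partitions is exactly what the associated partitions below make precise.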
It is this natural correspondence between local and global properties that allows us to approach the structure of machines with partition algebra.

Fig. 3.18. Machine G (flow table of the nine-state machine defined by N_C).

First, recall that any function f defined on S induces a partition π on S by the relationship

s ≡ t (π) if and only if f(s) = f(t).

We now use such relationships to define partitions induced by N.

Definition 3.11. Suppose that the state behavior of a machine M is realized by abstract network N in standard form, and suppose that

s = α(s₁, s₂, …, s_n)  and  t = α(t₁, t₂, …, t_n)

are states of M and a and b are inputs to M; then let
(i) s ≡ t (τ_i) if and only if s_i = t_i;
(ii) s ≡ t (ρ_{ij}) if and only if f_{ij}(s_i) = f_{ij}(t_i);
(iii) a ≡ b (μ_i) if and only if f_i(s₁, …, s_n, a) = f_i(s₁, …, s_n, b) for all (s₁, …, s_n).

The partitions τ_i, ρ_{ij}, and μ_i for 1 ≤ i, j ≤ n are called the partitions associated with N.

Theorem 3.5. A machine M has a state behavior realization by an abstract network N in standard form with associated partitions {τ_i}, {ρ_{ij}}, and {μ_i} if and only if
(i) τ_i · Π_j ρ_{ji} ≤ M_{S-S}(τ_i) for all i;
(ii) μ_i ≤ M_{I-S}(τ_i) for all i;
(iii) ρ_{ij} ≥ τ_i for all i and j;
(iv) Π τ_i = 0.

Proof. Suppose that partitions are given satisfying (i) to (iv). To define a network in standard form, let S_i be the set of blocks of τ_i. Because of (iii), we may define

f_{ij}(B_{τ_i}) = B_{ρ_{ij}}(B_{τ_i}),

the block of ρ_{ij} containing B_{τ_i}. We also define the input connections by B_{μ_i}(x). Because of Lemma 3.4, we may define a δ_i such that

δ_i[B_{τ_i}(s), (× B_{ρ_{ji}}(s), B_{μ_i}(x))] = B_{τ_i}[δ(s, x)]

for all s in S and x in I. Finally, define

F(× B_{τ_i}, x) = λ(s, x)  if ∩ B_{τ_i} = {s},

and arbitrarily otherwise. This completes the network. Let α(s) = × B_{τ_i}(s). Function α is one-to-one because of (iv). Taking the input and output maps to be identity maps, the reader may easily verify that we have a state behavior realization of M.

Conversely, if M, N, and the realization maps are given, then the δ_i are essentially functions satisfying Lemma 3.4, and so conditions (i) and (ii) must hold. Condition (iii) is a direct consequence of the definition of associated partitions, and condition (iv) must hold for α to be one-to-one. ∎

If one is interested in keeping the wires of the component machine internal, then one always chooses ρ_{ii} = I; since τ_i already appears in the product in (i), the ρ_{ii}
term could be dropped from the product in (i). If, however, one wants only external feedback wires, then one must drop the τ_i from the product, as stated by the next corollary.

Corollary 3.5.1. If the network N of Theorem 3.5 is to be in standard logic-delay form, then condition (i) must be replaced by:

(i') Π_j ρ_{ji} ≤ M_{S-S}(τ_i)  for all i.

Proof. Same as the proof of Theorem 3.5, with the B_{τ_i} term dropped from δ_i. ∎

The last theorem and corollary show that a network can be built to certain specifications on what information is to be stored where and what carry information is to be used in computing states. This is a considerable aid in later chapters to understanding and attaching precise meanings to such concepts as "loop-free" and "feedback." One can, of course, define additional associated partitions to study carries to output logics, but there is no need for such concepts in this book. The reader should now be equipped to make these extensions if a need arises.

3.6 DON'T CARE CONDITIONS, FIRST APPROACH

We saw in the previous section how "d.c." conditions can appear even in problems involving completely specified machines. These conditions need not be incidental. It is therefore appropriate that we try to extend the structure theory to the d.c. case. Immediately, we are faced with the problem of defining a pair concept to account for d.c. conditions. We know of two ways of doing this using partitions, both of which seem natural. The approach we take in this section is to call (π, τ) a partition pair whenever the d.c. conditions could be filled in to preserve this pair. This is called a weak partition pair. An alternative partition approach is discussed in the next section. A third approach is given in Chapter 5.

Definition 3.12. If M = (S, I, O, δ, λ) is a machine with d.c. conditions and π and τ are partitions on S, we say that (π, τ) is a weak partition pair (w.p.p.)
if and only if

s ≡ t (π) implies δ(s, x) ≡ δ(t, x) (τ)

for all x in I such that δ(s, x) and δ(t, x) are specified. Of course this definition generalizes to I-S, S-O, and I-O pairs. We refer to these concepts without writing out their formal definitions.

To illustrate the w.p.p. concept, consider machine H of Fig. 3.19. (Fig. 3.19 gives the flow table of machine H, in which some entries are unspecified.) Two weak partition pairs (π₁, τ₁) and (π₂, τ₂) can be read off the table. On the other hand, (π₁ + π₂, τ₁ + τ₂) is not a w.p.p. This is one of the major disadvantages of weak partition pairs. The concept of an M partition for H is ruled out, because to have M(τ) ≥ π₁ and M(τ) ≥ π₂ ensures that (M(τ), τ) is not a w.p.p. In other words, {π | (π, τ) is a w.p.p.} contains several maximal elements. Now let us see what algebra can be salvaged.

Lemma 3.5. If Δ is the set of all weak partition pairs on M with d.c. conditions, then
(i) (π, I) and (0, τ) are in Δ;
(ii) (π, π') and (τ, τ') in Δ imply (π·τ, π'·τ') in Δ;
(iii) (π, π') in Δ implies (π, π' + τ) in Δ for any τ.

Proof. (i) is obvious, and (ii) and (iii) are proved similarly to the corresponding results about partition pairs. ∎

Thus the weak partition pairs satisfy all but the "sum" postulate of a pair algebra, which is now replaced by a weaker form. We generalize to cover weak pairs.

Definition 3.13. Let L₁ and L₂ be finite lattices. Then a subset Δ of L₁ × L₂ is a weak pair algebra on L₁ × L₂ if and only if:

P₁. (x, y) in Δ and y' in L₂ implies that (x, y + y') is in Δ;
P₂. (x₁, y₁) and (x₂, y₂) in Δ implies that (x₁·x₂, y₁·y₂) is in Δ;
P₃. for any x in L₁ and y in L₂, (x, I) and (0, y) are in Δ.

The ordering of Δ is defined component-wise, and again

m(x) = Π {y | (x, y) in Δ}.

Theorem 3.6. Let Δ be a weak pair algebra. Then
(i) [x, m(x)] is in Δ;
(ii) x₁ ≥ x₂ implies m(x₁) ≥ m(x₂);
(iii) m(x₁ + x₂) ≥ m(x₁) + m(x₂);
(iv) m(x₁·x₂) ≤ m(x₁)·m(x₂);
(v) y ≥ m(x) if and only if (x, y) is in Δ.

Proof.
We have merely restated relations which were previously derived without using the summation postulate. ∎

We have only five conditions instead of the sixteen conditions of Theorem 3.1, and 3.6 (iii) is weaker than 3.1 (iii). Thus the importance of the summation rule is evident. Even without the M operator, however, many nice properties remain. For example, Δ is easily shown to be a lattice under our ordering, the bounds being given by

g.l.b. {(π, π'), (τ, τ')} = (π·τ, π'·τ')

and

l.u.b. {(π, π'), (τ, τ')} = (π + τ, π' + τ' + m(π + τ)).

EXERCISE. Give a proof of the above statement.

Let us now consider the dependence results of the previous section. Of course, the information flow inequalities involve the M operator, but if we replace the conditions of the form "Π τ_i ≤ M(τ)" by the equivalent statement "(Π τ_i, τ) is a p.p.," then we have a formulation suitable for generalization. However, in proving that the conditions of Theorem 3.4 are sufficient for a realization with all the reduced dependences, we assumed that our machine was completely specified. (Specifically, this is vital to the proof of Lemma 3.3, where δ(s, b) is assumed to have a value.) Any hope of getting around this is shattered by machine J of Fig. 3.20a. For machine J,

({1̄,̄2̄}, {1̄; 2̄})

is a weak S-S pair, and (I, {1̄; 2̄}) is a weak I-S pair. If Theorem 3.4 were to generalize completely, we would be able to make δ independent of both input and state, contrary to the fact that δ cannot be constant, because the table already has transitions to nonequivalent states. We can fill in the conditions so that the machine is state independent or input independent (as shown in Figs. 3.20b and c), but not both.

When we talk about a realization or state behavior realization of a machine with d.c. conditions, we mean the same as Definitions 1.15 and 1.16, where it is understood that the conditions need only hold whenever δ(s, a) or λ(s, a) exist. Thus, machine K of Fig. 3.21 is a state behavior realization of machine J (Fig. 3.20).
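The contrast between the two treatments of unspecified entries can be reproduced on a three-state fragment (our own example, not machines H or J): with an unspecified entry, two weak pairs exist whose sum is not weak; labeling the entry c₁ and letting the second components partition S ∪ C, as in the next section, restores the sum law.

```python
# Hypothetical 3-state fragment with one input; None marks the d.c. entry.
S = [1, 2, 3]
delta = {1: 1, 2: None, 3: 3}

def block(p, s):
    return next(b for b in p if s in b)

def is_wpp(pi, tau):
    """Weak pair: the p.p. condition, required only where both entries exist."""
    for b in pi:
        for s in b:
            for t in b:
                u, v = delta[s], delta[t]
                if u is not None and v is not None \
                        and block(tau, u) != block(tau, v):
                    return False
    return True

zero = frozenset({frozenset({s}) for s in S})
pi1 = frozenset({frozenset({1, 2}), frozenset({3})})
pi2 = frozenset({frozenset({2, 3}), frozenset({1})})
one = frozenset({frozenset({1, 2, 3})})              # pi1 + pi2
assert is_wpp(pi1, zero) and is_wpp(pi2, zero)
assert not is_wpp(one, zero)                         # the sum law fails

# Second approach: label the condition and partition S ∪ {c1}.
delta_lab = {1: 1, 2: 'c1', 3: 3}

def is_epp(pi, tau):
    """Extended pair: pi partitions S, tau partitions S ∪ C."""
    return all(block(tau, delta_lab[s]) == block(tau, delta_lab[t])
               for b in pi for s in b for t in b)

tau1 = frozenset({frozenset({1, 'c1'}), frozenset({2}), frozenset({3})})
tau2 = frozenset({frozenset({3, 'c1'}), frozenset({1}), frozenset({2})})
tau12 = frozenset({frozenset({1, 3, 'c1'}), frozenset({2})})   # tau1 + tau2
assert is_epp(pi1, tau1) and is_epp(pi2, tau2)
assert is_epp(one, tau12)                            # the sum law holds again
```

The last assertion is an instance of Lemma 3.6 of the next section: once the don't care is named, the extended pairs are closed under sums and products, so the full pair algebra machinery returns.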
Fig. 3.20. Machine J with d.c. conditions, filled in two different ways.
Fig. 3.21. Machine K.

We now give our dependence theorem, corresponding to Theorem 3.4, for the d.c. case.

Theorem 3.7. Suppose that, for a machine M with d.c. conditions, state variables {y_i} are assigned according to state partitions {τ_i}, input variables {x_j} are assigned according to input partitions {ξ_j}, and output variables {Z_k} are assigned according to output partitions {ω_k}; and suppose further that

Π τ_i = 0,  Π ξ_j = 0,  P ⊆ {y_i},  and  Q ⊆ {x_j};

then, in writing equations to realize M:
(i) variable Y_i can be expressed as a function of P ∪ {x_j} if and only if (Π_P τ_j, τ_i) is a weak S-S pair;
(ii) variable Y_i can be expressed as a function of {y_i} ∪ Q if and only if (Π_Q ξ_j, τ_i) is a weak I-S pair;
(iii) variable Z_k can be expressed as a function of P ∪ {x_j} if and only if (Π_P τ_j, ω_k) is a weak S-O pair;
(iv) variable Z_k can be expressed as a function of {y_i} ∪ Q if and only if (Π_Q ξ_j, ω_k) is a weak I-O pair.

Proof. The proof is so similar to previous proofs that we leave it as an exercise. The conditions Π τ_i = 0 and Π ξ_j = 0 are used to ensure that {y_i} and {x_j} determine the exact state and input. ∎

Compare Theorem 3.7 with Theorem 3.4. Theorem 3.4 says that state and input dependence can be reduced simultaneously, whereas Theorem 3.7 says that either state or input dependence can be reduced, but not always simultaneously. This is one disadvantage of the first approach; but for applications (such as feedback) where only one kind of dependence is of interest, this approach is most direct.

3.7 DON'T CARE CONDITIONS, SECOND APPROACH

In the first approach, we ignored the occurrences of the d.c. conditions. In the second approach, we give each condition a separate name and then keep a careful record of it. A machine with labeled d.c.
conditions is given by a machine table where some values of δ may be from a set of labels C and some values of λ may be from a set of labels D. Given such a machine, the concept of a p.p. extends naturally.

Definition 3.14. Given a machine M = (S, I, O, δ, λ) with labeled d.c. conditions C and D, and given a partition π on S and a partition τ on S ∪ C, then (π, τ) is called an extended partition pair (e.p.p.) if and only if

s ≡ t (π) implies δ(s, x) ≡ δ(t, x) (τ)

for all x in I. Again this definition extends to the other pairs.

To illustrate our definitions, we show in Fig. 3.22 machine L, which is machine H of Fig. 3.19 with the conditions labeled. Two extended partition pairs (π₁, τ₁) and (π₂, τ₂) can be read off the table. Partition τ₁ accounts for the fact that any choice of c₁ must be in the same block as 2 and 3; partition τ₂ says that the choice of c₁ must be in the same block as 1. This fact was overlooked in the previous section when w.p.p.'s were used, and that is why the sum law failed. Here,

(π₁ + π₂, τ₁ + τ₂)

is also an e.p.p. More generally, we have the following:

Lemma 3.6. Given a machine M with labeled don't care conditions, the set Δ of all extended partition pairs is a pair algebra.

Proof. The definition of an e.p.p. is the same as that of a p.p., except that the set S ∪ C is used instead of S. The proof for p.p.'s therefore carries over word for word. ∎

We now have the m operator and the M operator with all the pair algebra results at our disposal. In addition to our S-(S ∪ C) pairs, we of course have I-(S ∪ C), S-(O ∪ D), and I-(O ∪ D) pairs and the corresponding operators. In our dependence theorem, when we refer to τ̃_i as the restriction of τ_i to S, we mean s ≡ t (τ̃_i) for s, t in S if and only if s ≡ t (τ_i).

Theorem 3.8.
Suppose that, for a machine M with labeled don't care conditions, state variables {y_i} are assigned according to partitions {τ_i} on S ∪ C, input variables {x_j} are assigned according to input partitions {ξ_j}, and output variables {z_k} are assigned according to partitions {ω_k} on O ∪ D; and suppose further that P ⊆ {y_i} and Q ⊆ {x_j}; then, in writing equations to realize M:
(i) variable Y_i can be expressed as a function of P ∪ Q if and only if

∏ {τ̄_p | y_p ∈ P} ≤ M_{S-S∪C}(τ_i)   and   ∏ {ξ_q | x_q ∈ Q} ≤ M_{I-S∪C}(τ_i),

where τ̄_p is the restriction of τ_p to S;
(ii) variable Z_k can be expressed as a function of P ∪ Q if and only if

∏ {τ̄_p | y_p ∈ P} ≤ M_{S-O∪D}(ω_k)   and   ∏ {ξ_q | x_q ∈ Q} ≤ M_{I-O∪D}(ω_k).

THEOREM 3.10. Given a local machine M(τ; ρ, μ) and a partition π ≥ τ on S, we define π̂ on the blocks of τ by: B ≡ B′ (π̂) if and only if s ≡ s′ (π) for s ∈ B and s′ ∈ B′. If we use "m" for the operators on M and "m̂" for the w.p.p. operators on M(τ; ρ, μ), then the following relationships hold:
(i) m̂_{S-S}(π̂) = [m_{S-S}(π·ρ) + τ]^ for π ≥ τ;
(ii) m̂_{I-S}[(σ, ν)] = [m_{I-S}(ν) + m_{S-S}(σ·ρ) + τ]^ for σ ≥ ρ, where (σ, ν) has its obvious interpretation as a partition on {B_ρ} × {B_μ}.

Proof. We prove (i) only. Note, however, that there are partitions on {B_ρ} × {B_μ} which do not have the form (σ, ν), and thus (ii) does not apply to the complete domain of m̂_{I-S}.

First we show that (π̂, [m(π·ρ) + τ]^) is a w.p.p. Because of Definition 3.12, we need to show that

B_τ(δ(s, x)) ≡ B_τ(δ(s′, x))   ([m(π·ρ) + τ]^)

for all s in B_π ∩ B_ρ, s′ in B_π ∩ B′_ρ, and x in I, whenever B_ρ ≡ B′_ρ (π̂). Now B_ρ ≡ B′_ρ (π̂) implies by definition that s ≡ s′ (π) for all s in B_ρ and s′ in B′_ρ. This obviously implies s ≡ s′ (π·ρ) for all s in B_π ∩ B_ρ and s′ in B_π ∩ B′_ρ. By definition of m, this implies δ(s, x) ≡ δ(s′, x) (m(π·ρ)) for all s in B_π ∩ B_ρ and s′ in B_π ∩ B′_ρ. This implies t ≡ t′ (m(π·ρ) + τ) for all t in B_τ(δ(s, x)) and t′ in B_τ(δ(s′, x)). This in turn implies that

B_τ(δ(s, x)) ≡ B_τ(δ(s′, x))   ([m(π·ρ) + τ]^),

which was to be shown.

Now suppose that (π̂, π̂′) is a w.p.p. on the local machine, where π′ ≥ τ.
(All state partitions for M(τ; ρ, μ) have the form π̂′ for π′ ≥ τ.) Suppose further that B_ρ ≡ B′_ρ (π̂). Because (π̂, π̂′) is a w.p.p. and because of Definition 3.12,

B_τ(δ(s, x)) ≡ B_τ(δ(s′, x))   (π̂′)

for all s in B_π ∩ B_ρ, s′ in B_π ∩ B′_ρ, and x in I. This implies that δ(s, x) ≡ δ(s′, x) (π′) whenever s ≡ s′ (π·ρ). Therefore π′ ≥ m(π·ρ), and since also π′ ≥ τ, we have π′ ≥ m(π·ρ) + τ. ∎

Of course, we know that there is no M̂ operator in general, because the local machines have "d.c." conditions. Even in those cases where there are no don't cares, however, the M̂ operator has no formula in terms of M, m, plus, and times. More specifically, this is the case where ρ and τ permute (i.e., B_ρ ∩ B_τ ≠ ∅ for all B_ρ and B_τ): we know that M̂(π̂) is the largest partition π̂′ with π′ greater than τ and such that m(ρ·π′) ≤ π, but we know of no way to obtain π′ without explicitly taking this maximum. One exception to this is the case where ρ = I and τ therefore has S.P. (m(I·τ) ≤ τ). Here we have the following:

COROLLARY 3.10.1. If M(τ; I, μ) is a local machine, then:
(i) m̂_{S-S}(π̂) = [m_{S-S}(π) + τ]^ for π ≥ τ;
(ii) m̂_{I-S}(ν̂) = [m_{I-S}(ν) + τ]^ for ν ≥ μ;
(iii) M̂_{S-S}(π̂) = [M_{S-S}(π)]^ for π ≥ τ;
(iv) M̂_{I-S}(π̂) = [M_{I-S}(π)]^ for π ≥ τ.

Proof. (i) Substitute ρ = I in part (i) of the theorem. (ii) Substitute σ = I and observe m_{S-S}(I·τ) ≤ τ. (iii) By (i), M̂(π̂) is well defined and easily seen to be [M(π)]^. (iv) Obvious. ∎

NOTES

Partition pairs were first introduced for the study of sequential machines by the authors in [28]. The more general concept of the pair algebra and its application to automata theory is discussed in [18]. For related mathematical concepts, see the discussion of Galois connections between partially ordered sets in [3].

4 LOOP-FREE STRUCTURE OF MACHINES

4.1 LOOP-FREE NETWORKS

In the previous chapter we saw how each network may be considered to be an interconnection of its component machines. In this chapter we study various aspects of the "loop-free" case, where none of the machines are connected in a circle.
Actually, this study was started in Chapter 2 when simple series and parallel decompositions were analyzed. In this chapter we extend this study to arbitrary loop-free decompositions and show that there exists a one-to-one correspondence between "nonredundant" sets of S.P. partitions and loop-free decompositions of the state behavior of a machine. In this correspondence the lattice ordering of the S.P. partitions which define the decomposition is directly reflected in the interconnection of the component machines, and a physical layout for these component machines can be read off directly from the S.P. lattice. Once this basic relationship between lattice structure and machine structure is established, many additional implications of the S.P. lattice are extracted. Thus it is seen that this simplest and most accessible of the information lattices says a great deal about machine structure.

We need to develop some preliminary definitions and lemmas before we define a loop-free network.

DEFINITION 4.1. Let M_1, M_2, ..., M_n be the component machines of some network N. Then we say that M_j is a predecessor of M_k if and only if the output of M_j is used directly as an input to M_k. In other words, M_j is a predecessor of M_k if and only if ρ_{k,j} ≠ I, where ρ_{k,j} is the associated partition of Definition 3.11.

This definition is pragmatic in the sense that we call M_j a predecessor if and only if there is a wire from M_j carrying information to M_k, even though the behavior of M_k may actually be functionally independent of M_j. The only reason for not using functional dependence is that this would later force us to exclude constructions that happen to turn out better (with less dependence) than anticipated.

DEFINITION 4.2. If {M_i} is the set of component machines of some network N, then we say that a subset C of {M_i} is closed in N if and only if C contains all the predecessors of machines in C.
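Definition 4.2 can be mechanized directly: a set of components is closed when it contains all predecessors of its members, and any set can be completed to its smallest closed superset by repeatedly adding predecessors. A minimal Python sketch (the network, machine names, and predecessor relation below are hypothetical, not taken from the text):

```python
def closure(preds, L):
    """Smallest closed set containing L: a set is closed when it
    contains all predecessors of its members (Definition 4.2)."""
    C = set(L)
    frontier = list(L)
    while frontier:
        m = frontier.pop()
        for p in preds.get(m, ()):   # direct predecessors of m
            if p not in C:
                C.add(p)
                frontier.append(p)
    return C

# Hypothetical network: M1 feeds M2 and M3; M2 also feeds M3.
preds = {"M2": ["M1"], "M3": ["M1", "M2"]}

print(sorted(closure(preds, ["M3"])))   # ['M1', 'M2', 'M3']
print(sorted(closure(preds, ["M2"])))   # ['M1', 'M2']
```

The closed sets are exactly the fixed points of this operator, and one can check by enumeration that unions and intersections of closed sets come out closed, as the next lemma asserts.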
LEMMA 4.1. The set {C_i} of all closed sets of machines in a network N is closed under the operations of set intersection and set union.

Proof. Let C_1 and C_2 be closed in N. If M_i is in C_1 ∩ C_2, then M_i is in C_1 and C_2. But then all predecessors of M_i are in C_1 and C_2 and are therefore in C_1 ∩ C_2. Thus, C_1 ∩ C_2 is closed. If M_i is in C_1 ∪ C_2, then M_i is in C_1 or C_2, and hence C_1 or C_2 contains all predecessors of M_i. But then C_1 ∪ C_2 contains these predecessors, and we conclude that C_1 ∪ C_2 is closed. ∎

This result shows that the set of closed sets forms a lattice under the set operations. Since the set operations are distributive, this lattice of closed sets is distributive.

DEFINITION 4.3. If L is a subset of the component machines of network N, then the closure C(L) of L in N is the smallest closed subset containing L. The operator C is referred to as the closure operator of N. If L contains exactly one machine M_i, we may write C(M_i) instead of C({M_i}).

LEMMA 4.2. The closure operator C of N satisfies the conditions:
(i) C(L) ⊇ L;
(ii) L_1 ⊆ L_2 implies C(L_1) ⊆ C(L_2);
(iii) C[C(L)] = C(L);
(iv) C(L) = ∩ {C_i | C_i a closed set in N and C_i ⊇ L};
(v) C(L_1 ∩ L_2) ⊆ C(L_1) ∩ C(L_2);
(vi) C(L_1 ∪ L_2) = C(L_1) ∪ C(L_2).

Proof. Routine. ∎

DEFINITION 4.4. An abstract network N with component machines {M_i} is said to be loop-free if and only if there does not exist a pair of machines M_j and M_k, j ≠ k, such that M_k is in C(M_j) and M_j is in C(M_k).

DEFINITION 4.5. If a loop-free network N of component machines realizes the state behavior of M, then we shall say that N is a loop-free realization or decomposition of M.

We now associate a partition with each closed set.

DEFINITION 4.6. Let N be a network that realizes the state behavior of machine M and that has component machines {M_i}. For each M_i, let τ_i be the associated partition of Definition 3.11 on the states of M. Then for any N′ ⊆ {M_i}, define

φ(N′) = ∏ {τ_i | M_i ∈ N′}.
If N′ has only one element M_i, we write φ(M_i) instead of φ({M_i}). Now we come to the main result of this section, which says that φ maps the closed sets of {M_i} into the S.P. lattice in a systematic way.

THEOREM 4.1. Suppose that network N with component machines {M_i} realizes the state behavior of machine M. Then:
(i) C closed in N implies that φ(C) has S.P.;
(ii) C_1 ⊆ C_2 implies φ(C_1) ≥ φ(C_2);
(iii) φ(C_1 ∪ C_2) = φ(C_1)·φ(C_2).

Proof. Let J be the index set such that φ(C) = ∏ {τ_j | j ∈ J}. Because the state of M_j is determined by its predecessors {M_k} for k in some index set J_j, we know from Theorem 3.5 and Corollary 3.5.1 that

∏ {τ_k | k ∈ J_j} ≤ M(τ_j).

Because C is closed, J_j ⊆ J for M_j in C, and hence

m(φ(C)) = m(∏ {τ_j | j ∈ J}) ≤ ∏ {τ_j | j ∈ J} = φ(C),

so φ(C) has S.P., which is statement (i). Statement (ii) is immediate, since φ(C_2) is a product over a larger set of partitions. Statement (iii) follows from the equation

φ(C_1 ∪ C_2) = ∏ {τ_i | M_i ∈ C_1 ∪ C_2} = (∏ {τ_i | M_i ∈ C_1})·(∏ {τ_i | M_i ∈ C_2}) = φ(C_1)·φ(C_2),

and the theorem is proved. ∎

4.2 OBTAINING LOOP-FREE REALIZATIONS

The main objective of this section is to derive a converse of Theorem 4.1. This converse shows how certain sets of S.P. partitions on the states of a machine M determine the loop-free realizations of M. As is the case with all our theory, each such set determines the realization in a constructive manner.

In order to obtain a strong converse, we need to pinpoint a certain type of redundancy. Let the network N of component machines {M_i} realize M. We know from the previous theorem that there exists a mapping φ of the closed sets of N into the set of S.P. partitions on M. We now show that if this mapping is not one-to-one for closed sets, then there exist redundant machines in the realization N of M.

Assume that there exist two distinct closed sets C_1 and C_2 in N such that φ(C_1) = φ(C_2) = π. Without loss of generality, we may assume that there exists a machine M_j in C_1 that is not in C_2. By Theorem 4.1 we know that

π = ∏ {π_i | π_i = φ(M_i), M_i in C_2}.

From φ(M_j) ≥ π, we have φ(M_j) ≥ ∏ {π_i | π_i ∈ T′} for a set T′ of these partitions not containing φ(M_j); sets of partitions admitting such a relation are the redundant sets of Definition 4.7.

THEOREM 4.2.
If the state behavior of a machine M is realized by a loop-free network N of component machines {M_i} such that φ is one-to-one over closed sets, then the set of partitions {φ[C(M_i)]} is nonredundant.

Proof. First of all, suppose that the set of partitions is redundant because φ[C(M_j)] = φ[C(M_k)] for j ≠ k. Because N is loop-free, C(M_j) ≠ C(M_k) by Definition 4.4, and therefore φ is not one-to-one.

Now suppose the set of partitions is redundant because a set T′ and a partition φ[C(M_j)] violate the condition. Then M_j is not in C(M_i) for any φ[C(M_i)] in T′, because φ(M_j) ≥ φ[C(M_j)]. Therefore, letting C_1 = ∪ C(M_i), we know that M_j is not in C_1, and thus the closed sets C_1 and C_2 = C_1 ∪ C(M_j) are distinct. To show that φ(C_1) = φ(C_2), we know by assumption that

φ[C(M_j)] ≥ ∏ {φ[C(M_i)] | φ[C(M_i)] ∈ T′} = φ(C_1);

and multiplying by φ(C_1), we get by Theorem 4.1 (iii)

φ(C_2) = φ(C_1)·φ[C(M_j)] ≥ φ(C_1).

But φ(C_1) ≥ φ(C_2) by Theorem 4.1 (ii), and therefore φ(C_1) = φ(C_2), and φ is not one-to-one over closed sets. ∎

THEOREM 4.3. Let T = {π_1, π_2, ..., π_n} be a nonredundant set of S.P. partitions on S of M such that ∏ π_i = 0. Then there exists a loop-free network N of n machines M_1, M_2, ..., M_n such that:
(i) N realizes the state behavior of M;
(ii) M_k is in C(M_j) if and only if π_k ≥ π_j;
(iii) φ[C(M_j)] = π_j;
(iv) φ is one-to-one over closed sets.

Proof. For any π_j in T, let π*_j = ∏ {π_k ∈ T | π_k > π_j}. Choose τ_j to be any partition such that τ_j·π*_j = π_j. For the sake of obtaining simple realizations, one generally chooses such a τ_j with the minimal number of blocks. Since π_j has S.P.,

m_{S-S}(π*_j·τ_j) = m_{S-S}(π_j) ≤ π_j ≤ τ_j,

and so we can let M_j be a machine obtained from M(τ_j; π*_j, 0) by filling in the "don't cares." Let f_{j,k} be the identity map if π_k > π_j and the constant map otherwise. Let g = λ(s, x), where s is the single element in the intersection of the indicated blocks (arbitrary otherwise). Note that the τ_j defined here are in fact the partitions associated with the M_j, and that for this network, ρ_{j,k} = τ_k for π_k > π_j and ρ_{j,k} = I otherwise.
Observing that

π_j = π*_j·τ_j = (∏ {τ_k | π_k > π_j})·τ_j = (∏ {ρ_{j,k} | ρ_{j,k} ≠ I})·τ_j,

it is easily verified that Theorem 3.5 holds and we have a realization. Furthermore, (ii) holds by construction and (iii) is verified by the previous equation. To show (iv), assume φ(C_1) = φ(C_2) for closed sets C_1 ≠ C_2. We may assume that there is a machine M_j in C_1 that is not in C_2. Choosing T′ = {π_i | M_i ∈ C_2}, we know from Definition 4.7 that T is then redundant, a contradiction. ∎

In this construction, the carries ρ_{j,k} may also be chosen in other ways, as long as ∏_k ρ_{j,k} ≤ M(τ_j).

As a result of this last theorem, we now know that loop-free state behavior realizations with one-to-one φ give nonredundant sets of S.P. partitions, and vice versa. Furthermore, this relationship is constructive. To illustrate the network construction in the proof, let us consider two examples.

First, look at machine A in Fig. 4.1. By an easy computation, we obtain the following list of S.P. partitions:

π_1 = {1,4; 2,3; 5,8; 6,7},
π_2 = {1,2; 3,4; 5,6; 7,8},
π_3 = {1,2,7,8; 3,4,5,6},
π_4 = {1,2,3,4; 5,6,7,8},
I = {1,2,3,4,5,6,7,8}, and 0.

The lattice of S.P. partitions for this machine is shown in Fig. 4.2.

Fig. 4.1. Machine A. (table not reproduced)
Fig. 4.2. The S.P. lattice L_A for machine A.

Now let us see how all the state behavior realizations involving one-to-one φ are schematically available from the lattice. By inspection, we easily obtain all the sets of nonredundant partitions; six of these sets, with their corresponding realizations, are shown schematically in Fig. 4.3. In Fig. 4.3, a symbol such as π_1/π_4 indicates that the corresponding machine refines the information in π_4 to π_1.

Fig. 4.3. Information flow in six realizations of machine A.

The minimum number of blocks required in a partition τ_i such that π*_i·τ_i = π_i is the minimum number of states that the component machine can have.

We now consider in detail a realization of machine A corresponding to the set of partitions {π_4, 0}. Since π_4 > 0, we have a cascade connection of two machines. The first machine, A_1, defined by π_4, has two states, a = {1,2,3,4} and b = {5,6,7,8}, corresponding to the blocks of π_4. Its state behavior can be read off from the state behavior of machine A (Fig. 4.1) and is shown in Fig. 4.4. The output of A_1 is its present state, and it will be an additional input of the second machine A_2. To construct A_2, we have to choose a partition τ such that π_4·τ = 0. Since the blocks of π_4 each contain four states, the partition τ must have four (or more) blocks. It is seen that there are twenty-four such four-block partitions. We choose

τ = {1,8; 2,7; 3,6; 4,5} = {A; B; C; D}.

The resulting four-state machine can be determined from the state behavior of machine A and is shown in Fig. 4.5.

Fig. 4.4. Machine A_1. (table not reproduced)
Fig. 4.5. Machine A_2. (table not reproduced)

There are ten possible inputs, corresponding to the pairs of inputs made up from the present external input and the present state variable of machine A_1, although many pairs have the same effect. Observe also that when the exterior input x is applied, the two machines compute their next states simultaneously, since the output of A_1, needed in A_2, is the old state variable of A_1. The construction of all other loop-free realizations of this machine is similar to the previous illustration. Later in this chapter we discuss how the choice of the partition τ affects the properties of the second machine in a serial realization.

Now let us consider machine B of Fig. 4.6.

Fig. 4.6. Machine B. (table not reproduced)

We see that it has a nonredundant set of S.P.
partitions {π_1, π_2, π_3}. We choose two-block partitions τ_i satisfying τ_i·π*_i = π_i.

We expect to get a realization as shown schematically in Fig. 4.7(a), with closed sets ∅, {M_1}, {M_1, M_2}, {M_1, M_3}, and {M_1, M_2, M_3}. Because each τ_i has only two blocks, we assign a variable y_i to be computed by M_i and derive the state assignment and next-state equations for Y_1, Y_2, and Y_3 directly; in these equations Y_2 turns out to depend on the input alone.

We have bypassed making a flow table for M_1, M_2, and M_3 by deriving their realizations (equations) directly. Considering these machines in terms of functional dependence, we see that they hook up schematically as shown

Fig. 4.7. Schematic realizations of machine B.

in Fig. 4.7(b). This looks different from the realization we anticipated (Fig. 4.7a). If we recall, however, that we intended M_2 to receive information from M_1 (we chose ρ_{2,1} = τ_1) and add a fictitious wire along the route (Fig. 4.7c), then we see that it also fits the form we intended.

Now let us take our realization at face value and consider it to be in the form of Fig. 4.7(b). The closed sets for this realization are ∅, {M_1}, {M_2}, {M_1, M_2}, {M_1, M_3}, {M_1, M_2, M_3}. This list includes the previous list but contains one additional set. This inclusion is to be expected, because removing a "wire" must leave the old closed sets intact. Now compute the π′_j = φ[C(M_j)]:

π′_1 = {1,2,3; 4,5};   π′_2 = {1,4,5; 2,3};   π′_3 = {1,3; 2; 4; 5}.

Comparing this with the old list, we see that π′_1 = π_1, π′_2 = π_2, but π′_3 ≠ π_3. This new set is also nonredundant, and one sees that our realization could have been obtained using this new set as the basic set. Thus, this realization is not a surprise bonus, but one that could be found directly starting from the proper nonredundant set.
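Checking whether a given partition has the substitution property is a purely mechanical row-by-row test, and it is worth seeing how short that test is. A sketch, using a small hypothetical one-input machine (not any of the machines in the figures, whose tables are not reproduced here):

```python
def has_sp(delta, inputs, partition):
    """Substitution property: every block of the partition is carried
    into a single block by each input symbol."""
    block_of = {s: i for i, blk in enumerate(partition) for s in blk}
    return all(
        len({block_of[delta[(s, x)]] for s in blk}) == 1
        for blk in partition
        for x in inputs
    )

# Hypothetical 4-state cyclic machine: 0 -> 1 -> 2 -> 3 -> 0 on input 'a'.
delta = {(s, "a"): (s + 1) % 4 for s in range(4)}

print(has_sp(delta, ["a"], [{0, 2}, {1, 3}]))  # True
print(has_sp(delta, ["a"], [{0, 1}, {2, 3}]))  # False
```

Enumerating all partitions of a small state set and filtering them through this test is exactly the "easy computation" by which lists such as the one for machine A are obtained.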
This is always the case when minimal-block τ_i are chosen, because this ensures that the information computed by a component is not computed elsewhere, which means that φ must be one-to-one, which means by Theorem 4.2 that the decomposition is associated with a nonredundant set of S.P. partitions.

4.3 IMPLICATIONS OF THE S.P. LATTICE

In the previous section we took steps to eliminate realizations that contained superfluous components. This was done to eliminate an infinite class of uneconomical redundant realizations. It was shown that, to avoid these realizations, one has to start with a nonredundant set of partitions. The big question remaining for the application of this theory is which nonredundant set of partitions to choose. In order to help answer this question, we point out a couple of more subtle forms of "redundancy" that can be spotted from the S.P. lattice. Even though these new forms of redundancy are not as undesirable as the previous form, it is advantageous for a designer to understand them. However, because of the highly specialized nature of these results, and because they are not needed elsewhere in the book, this section may be omitted on first reading.

First, we consider the case where the computations of two machines partially overlap. For illustration, we compare the two realizations of machine A (Fig. 4.1) defined by the two sets of partitions

{π_1, π_2}   and   {π_4, π_1, π_2}.

From the set {π_1, π_2} we get a realization of machine A from two parallel machines, each having four states, since π_1 and π_2 are four-block partitions. This shows that we use a sixteen-state machine to realize an eight-state machine.
It does not necessarily follow that utilization of these two partitions for a realization results in an uneconomical one, because the increase in the number of storage elements may be compensated by a decrease in the "logical complexity" of the machines that are used; but there is redundancy here. Since π_4 (= π_1 + π_2) is greater than π_1 and π_2, one can look at the state of either component machine and determine which block of π_4 contains the present state of the machine. Thus, this information is computed independently by both components.

Now consider a realization determined from the set {π_4, π_1, π_2}. Here, the information about π_4 is computed once in a component A_1 and is fed to components A_2 and A_3 that refine this information down to π_1 and π_2, respectively (see Fig. 4.3). In this realization, only two states are required in each of the three components, so A is decomposed without using additional memory. The difference between these realizations is that in the second case, the common calculation has been "factored out" into a single unit that supplies this information to the other machines.

Whenever two component machines in a realization compute S.P. partitions π_i and π_j, the sum π_i + π_j represents a redundant computation and should possibly be factored out; but there are cases where factoring out will cost additional memory. In general, one should pick a subset of S.P. partitions that includes all the partitions possible without enlarging the memory requirements.

Another type of redundancy is using a component for which a smaller reduced version could be used. To illustrate this, consider machine C of Fig. 4.8. It is easily computed that this machine has three nontrivial partitions with S.P., namely:

Fig. 4.8. Machine C. (table not reproduced)

π_1 = {0,1; 2,3; 4,5; 6,7},
π_2 = {0,1,2,3; 4,5,6,7},
π_3 = {0,7; 1,5; 2,6; 3,4}.
Since {π_1, π_3} is nonredundant and π_1·π_3 = 0, we know that it has a corresponding parallel decomposition into component machines C_{π_1} and C_{π_3} (using appropriate output logic). Furthermore, π_1 + π_3 = I, so nothing can be factored out. The same can be said about {π_2, π_3}, and so there is also a parallel decomposition using C_{π_2} and C_{π_3}. Therefore, we see that either C_{π_1} or C_{π_2} can be used in parallel with C_{π_3}. Since π_2 is greater than π_1, we see that C_{π_1}, when used with C_{π_3}, computes more information than is really necessary. In Fig. 4.9, we show a realization in terms of C_1 and C_2 (based on π_1 and π_3) and output logic.

Fig. 4.9. Decomposition of machine C. (tables not reproduced; output logic: z = z_1·z_2)

With this output, C_1 appears irreducible, and one might not suspect that it could be replaced by machine C_3 (based on π_2) of Fig. 4.10. This replacement is possible because we could have chosen λ(II) = 1 for machine C_1, in which case the replacement would be state reduction. If we look at the S.P. lattice for machine C (Fig. 4.11), we get a graphic picture of what has happened. Both π_1 and π_2 have identical interactions

Fig. 4.10. Machine C_3. (table not reproduced)
Fig. 4.11. S.P. lattice for machine C.

with π_3. The moral is that when confronted with such a choice, the larger partition should generally be chosen.

In lattice theory, those lattices that do not contain any sublattices isomorphic to the lattice in Fig. 4.11 are called modular lattices. Thus we see a relationship between a pure lattice theory concept and a pure machine theory concept. More striking correlations will be pointed out later.
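The checks used above, such as π_1·π_3 = 0 and π_1 + π_3 = I, are themselves small computations: the product intersects blocks, and the sum chains together blocks that share a state. A sketch, using the partitions π_1 and π_3 of machine C as given in the text:

```python
def product(p1, p2):
    """pi1 . pi2 (greatest lower bound): nonempty block intersections."""
    return [b1 & b2 for b1 in p1 for b2 in p2 if b1 & b2]

def p_sum(p1, p2):
    """pi1 + pi2 (least upper bound): merge blocks sharing a state
    until no two blocks overlap."""
    blocks = [set(b) for b in p1] + [set(b) for b in p2]
    merged = True
    while merged:
        merged = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if blocks[i] & blocks[j]:
                    blocks[i] |= blocks.pop(j)
                    merged = True
                    break
            if merged:
                break
    return blocks

pi1 = [{0, 1}, {2, 3}, {4, 5}, {6, 7}]   # pi_1 of machine C
pi3 = [{0, 7}, {1, 5}, {2, 6}, {3, 4}]   # pi_3 of machine C

print(len(product(pi1, pi3)))            # 8 singletons, i.e. pi1 . pi3 = 0
print(p_sum(pi1, pi3))                   # a single block, i.e. pi1 + pi3 = I
```

That π_1 + π_3 comes out as the one-block partition I is exactly the "nothing can be factored out" condition of the parallel decomposition above.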
Just because a component machine cannot be replaced by a reduced version does not mean that it cannot be replaced by some other machine with fewer states. An example of this is supplied by machine D of Fig. 4.12, whose nontrivial S.P. partitions are π_1, π_2, and π_3. Since π_1 + π_2 = I and π_1·π_2 = 0, we can obtain a parallel decomposition from {π_1, π_2} that cannot be factored. Such a realization is shown in Fig. 4.13. Neither of these components can

Fig. 4.12. Machine D. (table not reproduced)
Fig. 4.13. Machines D_1 and D_2. (tables not reproduced; output logic: z = z_1·z_2)

be replaced by a reduced version (ignoring outputs), because they have no nontrivial S.P. partitions. Yet either D_1 or D_2 can be replaced by D_3 (corresponding to π_3), as shown in Fig. 4.14. The partitions π_1, π_2, π_3 generate the lattice shown in Fig. 4.15. From

Fig. 4.14. Machine D_3. (table not reproduced)
Fig. 4.15. Sublattice of the S.P. lattice L_D.

this lattice, it is obvious that any two of the three partitions give a decomposition, as we have shown.

It is known from lattice theory that lattices which do not contain sublattices isomorphic to the lattices shown in Figs. 4.11 or 4.15 are distributive lattices. Recall the definition from Chapter 0: L is distributive if and only if for all x, y, z in L,

x·(y + z) = x·y + x·z   and   x + (y·z) = (x + y)·(x + z).

If the S.P. lattice is distributive, the last two kinds of component machine substitution cannot occur, and there ought to be a theorem to this effect. Indeed there is, and we state and prove it next. Its narrow hypothesis limits its usefulness to our general theory, but we regard the connection between lattice theory and machine theory as interesting in itself.

THEOREM 4.4. A parallel realization for a machine M based on S.P.
partitions π_1 and π_2 is called prime if and only if π_1 + π_2 = I, i.e., if no machine can be "factored out." If the S.P. lattice for M is distributive and π_1 is an S.P. partition, then there is at most one partition π_2 such that {π_1, π_2} gives a prime parallel decomposition.

Proof. Suppose π_2 and π_3 are S.P. partitions such that π_1·π_i = 0 and π_1 + π_i = I for i = 2, 3. Then

π_2 = π_2·I = π_2·(π_1 + π_3),

and applying the distributive law,

π_2·(π_1 + π_3) = π_2·π_1 + π_2·π_3 = 0 + π_2·π_3 = π_2·π_3.

Therefore π_2 = π_2·π_3 and π_2 ≤ π_3. Similarly, π_3 ≤ π_2, and hence π_2 = π_3. ∎

4.4 PROPERTIES OF A TAIL MACHINE

So far, little has been said about the choice of the state partitions τ_i and those component machines that refine one partition into another. In this section we investigate some aspects of these machines, including the effect of restricting the input or carry information. To keep things simple, we speak in terms of simple serial decompositions of Moore machines, although the principles may be applied more generally.

Throughout this section we assume that, for the machine M under discussion, π is a partition with S.P. and τ is a partition such that π·τ = 0. The partitions π and τ are used to design a serial decomposition of M with a "tail machine" that takes carry information from the "front machine" and uses it to compute the block of τ. The carry information from the front machine is designated by ρ ≥ π. Of course, ρ·τ must satisfy m(ρ·τ) ≤ τ, since the local tail machine is M(τ; ρ, μ), where μ is a suitable input partition.

THEOREM 4.5. Let γ ≥ τ. Then γ̂ has S.P. for M(τ; π, I) if and only if γ·π has S.P. on M.

Proof. If γ̂ has S.P., then by Theorem 3.10,

γ̂ ≥ m̂(γ̂) = [m(γ·π) + τ]^,   or   γ ≥ m(γ·π) + τ ≥ m(γ·π).

Since π has S.P., π ≥ m(π) ≥ m(γ·π). Multiplying the inequalities,

γ·π ≥ m(γ·π),

and so γ·π has S.P. Conversely, if γ·π has S.P., then m(γ·π) ≤ γ·π ≤ γ, from which we obtain

m(γ·π) + τ ≤ γ + τ = γ.

And so γ̂ ≥ [m(γ·π) + τ]^ = m̂(γ̂), and γ̂ has S.P. ∎

If ρ > π, then this result does not necessarily hold.
To illustrate this, and to show some interesting properties of "don't care" conditions in serial realizations, we consider machine C shown in Fig. 4.8. It can be computed that this machine has three nontrivial partitions with S.P.:

π_1 = {0,1; 2,3; 4,5; 6,7},
π_2 = {0,1,2,3; 4,5,6,7},
π_3 = {0,7; 1,5; 2,6; 3,4}.

To obtain a serial realization of this machine, we choose the determining partitions to be π = π_2 and τ = π_3. Since in this case τ has S.P., we know that the local tail machine can be operated independently of the front machine; in other words, we may choose ρ = I. When we do this, we get the old parallel realization of Fig. 4.9 with (local) tail machine C_2. But the partition γ̂ = {A,D; B,C} does not have S.P. for C_2, even though the corresponding γ·π has S.P. for C. Thus, Theorem 4.5 does not extend to ρ > π.

Now let us choose ρ = π = π_2, even though we do not need all that information. Local machine C′_2 = M(τ; π_2, 0) receives the state of C_1 as part of its input and is shown in Fig. 4.16. Here, γ̂ has S.P., as predicted by the

Fig. 4.16. Local machine C′_2. (table not reproduced)

theorem. The difference between C_2 and C′_2 is that C_2 is C′_2 with "d.c." conditions filled in and equivalent inputs identified. In filling these conditions, the S.P. partition is destroyed.

In regard to reducing input information, we know that M_{I-S}(π) is the least amount of information needed to run the front machine and M_{I-S}(τ) is the minimal amount needed to run the tail machine.

Let us put all our knowledge together and derive a good serial decomposition of machine E of Fig. 4.17. Computing the S.P. partitions for E, we find that there is only one that is nontrivial, namely

π = {0,3; 1,2; 4,7; 5,6}.

Thus, we can realize machine E from two machines E_1 and E_2 connected in series. Machine E_1 will require four states and E_2 will require two states.
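The front machine in such a serial realization is just the image machine M_π: its states are the blocks of the S.P. partition π, and S.P. is exactly what makes the next block well defined. A sketch of the construction, using a hypothetical one-input machine (the tables for machines C and E are not reproduced here):

```python
def image_machine(delta, inputs, partition):
    """Build the image machine M_pi of an S.P. partition pi:
    states are blocks; delta'(B, x) is the block containing
    delta(s, x) for any (hence every) s in B."""
    blk = {s: frozenset(b) for b in partition for s in b}
    dprime = {}
    for b in partition:
        B = frozenset(b)
        for x in inputs:
            images = {blk[delta[(s, x)]] for s in b}
            assert len(images) == 1, "partition lacks S.P."
            dprime[(B, x)] = images.pop()
    return dprime

# Hypothetical machine: successor mod 8 on input 'a'.
# pi = {evens; odds} has S.P. for it.
delta = {(s, "a"): (s + 1) % 8 for s in range(8)}
pi = [{0, 2, 4, 6}, {1, 3, 5, 7}]
dprime = image_machine(delta, ["a"], pi)
print(dprime[(frozenset({0, 2, 4, 6}), "a")])   # the odd block
```

The assertion inside the loop is the S.P. condition itself: if some block were mapped into two different blocks by one input, no front machine on the blocks of π could exist.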
We therefore plan to assign two binary variables y_1 and y_2 to distinguish between the states of E_1 and a variable y_3 to distinguish between the two states of E_2. In the worst case, y_3 would depend on all three state variables and both inputs. However, we still have some freedom in assigning y_3 (we can use any two-block partition that refines π to 0) and the input variables, so we continue to look for ways to reduce the dependence.

In order to reduce the input information to E_2, we start looking at I-S pairs. Soon we discover that

m_{I-S}(I) = {0,1,4,5; 2,3,6,7} = τ_1   and   π·τ_1 = 0.

Therefore, we choose τ = τ_1, which means assigning y_3 according to τ_1, and the state of the tail machine will now be independent of the external inputs.

Fig. 4.17. Machine E. (table not reproduced)
Fig. 4.18. State assignment and information flow for a realization of machine E. (table not reproduced)

In order to reduce the number of carry variables from two to one, we must find a partition ρ which is suitable for defining y_1 and which contains enough information to compute τ. This means that we must look for a ρ ≥ π that has two four-state blocks such that (ρ·τ, τ) is a partition pair. Potential ρ are created by identifying two pairs of blocks of π. It is easily seen that the partitions

{0,1,2,3; 4,5,6,7}   and   {0,3,4,7; 1,2,5,6}

obtained this way do not yield the required partition pair. On the other hand,

ρ = {0,3,5,6; 1,2,4,7}

is such that (ρ·τ, τ) is a p.p. If we assign y_1 according to ρ, then the tail machine needs only y_1 as a carry. Finally, y_2 is assigned according to a partition τ_2 such that ρ·τ_2 = π. The resulting assignment and schematic realization are shown in Fig. 4.18. The reader is invited to work out the equations or machines if further verification is needed.
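The search for a suitable ρ above rests on one repeated test: whether (ρ·τ, τ) is a partition pair, i.e., whether knowing the block of ρ·τ determines the next block of τ. That test is mechanical. A sketch, with a hypothetical one-input machine standing in for machine E (whose table is not reproduced here):

```python
def is_pp(delta, inputs, pi, tau):
    """(pi, tau) is a partition pair iff s ~ t (pi) implies
    delta(s, x) ~ delta(t, x) (tau) for every input x."""
    tau_block = {s: i for i, b in enumerate(tau) for s in b}
    return all(
        len({tau_block[delta[(s, x)]] for s in b}) == 1
        for b in pi
        for x in inputs
    )

# Hypothetical machine: successor mod 8 on one input.
delta = {(s, "a"): (s + 1) % 8 for s in range(8)}
pi = [{0, 1, 2, 3}, {4, 5, 6, 7}]
tau = [{1, 2, 3, 4}, {5, 6, 7, 0}]

print(is_pp(delta, ["a"], pi, tau))  # True: each pi block maps into one tau block
print(is_pp(delta, ["a"], pi, pi))   # False: pi alone is not S.P. here
```

Note that a p.p. does not require either partition to have S.P. by itself; here π is not S.P. for the sample machine, yet (π, τ) is still a pair, which is precisely the extra freedom exploited when hunting for the carry partition ρ.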
4.5 CLOCKS IN SEQUENTIAL MACHINES

We now apply the previously developed results to the study of clocks in sequential machines.

DEFINITION 4.8. A sequential machine M with only one input symbol is called an autonomous sequential machine or a clock.

The next result justifies the name "clock."

LEMMA 4.3. Let M be an autonomous sequential machine. Then, for any starting state s_1, its output sequence is ultimately periodic.

Proof. Let M have n states, and denote the sequence of states M goes through by s_1, s_2, .... Since there are only n distinct states, there is a state among the first (n + 1) states in this sequence that occurs twice. Let s_i be the first state that occurs twice. Then, since there is only one input symbol, the state sequence must repeat after it enters state s_i. Therefore, its output sequence is an ultimately periodic sequence. ∎

We refer to the period of this sequence as the period of M.

LEMMA 4.4. A strongly connected autonomous machine M has a periodic output sequence.

Proof. Since M is a clock, the state sequence it goes through is ultimately periodic. Since M is strongly connected, the periodic part must include all states. Thus, the output sequence is periodic. ∎

The strongly connected clocks are characterized by their periods, and their S.P. lattices are easily described by our next result.

LEMMA 4.5. Let M be a strongly connected clock with period k, and let K be the set of all integers that divide k. Then:
(i) For each j in K, there is a unique j-block partition π_j with S.P. on the states of M. Furthermore, there are no other S.P. partitions for M;
(ii) π_i ≥ π_j for i and j in K if and only if i divides j;
(iii) The machine M_{π_j} is a strongly connected clock with period j;
(iv) The S.P. lattice L_M is distributive.

Proof. (i) Let s_0, s_1, ..., s_{k-1} be the set of states in the order that the machine goes through them, starting with some given s_0. We define, for j in K,

s_{i1} ≡ s_{i2} (π_j)   if and only if   i1 ≡ i2 (mod j).
Since delta(s_i, x) = s_{i+1 (mod k)} and since k = 0 (mod j), pi_j has S.P. on M. The partition pi_j obviously has j blocks.

Now let pi be any S.P. partition for M and let B1 be any block of pi. Let j be the smallest number of inputs that carries some state of B1 into another. It is easily seen that j inputs must carry each state of B1 into another state of B1 and that, furthermore, each block of pi must have this same property, and so

s_{i1} = s_{i2} (pi)   if and only if   i1 = i2 (mod j),

and since pi has S.P., k = 0 (mod j), which proves that j is in K and pi = pi_j.

(ii) i divides j implies that

i1 = i2 (mod j)   implies   i1 = i2 (mod i),

which implies from the definition that pi_i >= pi_j. If pi_i >= pi_j, the uniformity of pi_i and pi_j implies that each block of pi_i contains an equal number of blocks of pi_j, which implies i divides j.

(iii) M_{pi_j} is strongly connected because M is strongly connected; it has only one input by construction, and it has j states because pi_j has j blocks. Hence it is a strongly connected clock with period j.

(iv) L_M is isomorphic to K under the ordering relation "i divides j." This lattice is a well-known distributive lattice. ∎

The clock is such a simple device that one is naturally led to the problem of obtaining decompositions which have a clock as one component. In the state behavior case, this is a very easy problem.

THEOREM 4.6. The state behavior of a machine M can be realized by a serial decomposition of the form M1 ⊕ M2 where M1 is a clock if and only if M1 is isomorphic to a machine M(pi; I, 1) where pi is an S.P. partition satisfying pi >= m_{M.S}(I).

Proof. Since, by definition, the clock M1 receives no inputs or carries, it must derive from a local machine of the form M(pi; I, 1) for some pi. Since M(pi; I, 1) has no d.c. conditions, M1 must be isomorphic to M(pi; I, 1). Since M(pi; I, 1) is a local machine, pi must satisfy the equations

pi >= m(I . pi) = m(pi)   and   pi >= m_{M.S}(I).

The first equation implies that pi has S.P., and the theorem is proved.

NOTATION.
The partition m_{M.S}(I) of the theorem appears often in various sections of the book, so we abbreviate it with the symbol pi_I and write

pi_I = m_{M.S}(I).

Intuitively, pi_I is the partition on S of M that shows how much can be computed about the next state of M if we do not know which input symbol was used. Partition pi_I is easily computed by making a block of all the states of M that are contained in the same row of the flow table, and then combining those blocks with common elements.

The clocks for state behavior realizations are thus determined by those S.P. partitions greater than pi_I. These obviously form a sublattice of the S.P. lattice in the same manner that the output consistent partitions did in Section 2.5. One obvious choice of pi for a clock is the smallest S.P. partition greater than pi_I, as this gives the clock with the most states. This smallest pi can be characterized algebraically:

COROLLARY 4.6.1. Given an n-state machine M, there is a largest clock M_c that can be used in a state behavior decomposition of M. This is the clock M(pi_c; I, 1) where

pi_c = Sum_i m^i(pi_I).

This clock is the largest in the sense that any other such clock must be a homomorphic image of M_c.

Proof. The formula is a direct consequence of Corollary 3.1.5 and the discussion which follows it. The last statement holds because other clocks are determined by partitions larger than pi_c. ∎

To illustrate these results, consider machine F of Fig. 4.19. A short computation shows that for this machine

pi_I = {0,1; 2,3; 4,5; 6,7}.

Fig. 4.19. Machine F. [flow table omitted]

Furthermore, m(pi_I) = pi_I, and thus we conclude that pi_I has S.P. and the maximal clock of F is defined by pi_I. It is seen that the machine F_{pi_I} is strongly connected and thus is a clock of period four. We choose tau = {0,2,4,6; 1,3,5,7} to determine the second machine in the realization of F. Figure 4.20 shows an assignment determined by pi_I and tau.
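The rule just described, one block per row of the flow table followed by merging blocks that share a state, is easy to mechanize. The Python sketch below is an added illustration, not from the original text; the eight-state machine is hypothetical (machine F's actual table is the one in Fig. 4.19), chosen so that its pi_I comes out as the same partition quoted for F.

```python
def pi_I(states, inputs, delta):
    # One block per row of the flow table: the set of next states of s
    # under all inputs; then merge blocks with common elements.
    blocks = [{delta[(s, x)] for x in inputs} for s in states]
    merged = True
    while merged:
        merged = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if blocks[i] & blocks[j]:
                    blocks[i] |= blocks.pop(j)
                    merged = True
                    break
            if merged:
                break
    # states that are never a next state form singleton blocks
    covered = set().union(*blocks)
    blocks.extend({s} for s in states if s not in covered)
    return blocks

# Hypothetical 8-state machine (NOT machine F of Fig. 4.19):
delta = {(0, 'a'): 2, (0, 'b'): 3, (1, 'a'): 3, (1, 'b'): 2,
         (2, 'a'): 4, (2, 'b'): 5, (3, 'a'): 5, (3, 'b'): 4,
         (4, 'a'): 6, (4, 'b'): 7, (5, 'a'): 7, (5, 'b'): 6,
         (6, 'a'): 0, (6, 'b'): 1, (7, 'a'): 1, (7, 'b'): 0}
print(sorted(sorted(B) for B in pi_I(range(8), 'ab', delta)))
# → [[0, 1], [2, 3], [4, 5], [6, 7]]
```

For this machine each row falls into one of four two-state sets and no merging across them occurs, so pi_I has four blocks and the quotient machine is a clock of period four, mirroring the situation described for F.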
This assignment leads to the logical equations for Y1, Y2, and Y3, and the corresponding realization is shown in Fig. 4.21.

Fig. 4.20. State assignment for machine F. [assignment table omitted]
Fig. 4.21. Realization of machine F. [schematic: maximal autonomous clock, period 4, omitted]

Since F_{pi_I} has period four, we know from Lemma 4.5 that there exists an S.P. partition on F_{pi_I}, and therefore an S.P. partition on F, that defines a clock of period two. This partition is given by

pi_2 = {0,1,6,7; 2,3,4,5}.

An assignment using pi_2, pi_I, and tau is shown in Fig. 4.22. This leads to a second set of logical equations, and the realization is shown in Fig. 4.23.

Fig. 4.22. Second assignment for machine F. [assignment table omitted]
Fig. 4.23. Second realization of machine F. [schematic omitted]

EXERCISE. Show that machine G of Fig. 4.24 can be realized by a nontrivial parallel connection of a clock and an input dependent machine.

Fig. 4.24. Machine G. [flow table omitted]

As a final example, we consider machine H of Fig. 4.25, which is not strongly connected. We compute that

pi_I = {1,3,14; ...; 5,7,9,11; 6; 8,10,12} = {B1; B2; B3; B4; B5}.

Fig. 4.25. Machine H. [16-state flow table omitted]

The state diagram of machine H_{pi_I} is shown in Fig. 4.26, and it is seen not to be strongly connected. There is, though, a partition on H_{pi_I} that yields a strongly connected clock (see Fig. 4.27).

Fig. 4.26. Machine H_{pi_I}. [state diagram omitted]
Fig. 4.27. Machine H_{tau}. [state diagram omitted]

Clocks which are not strongly connected can only occur in state behavior realizations when the original machine is not strongly connected. However, as the next result states, the existence of a non-strongly connected clock for a machine guarantees a suitable strongly connected clock with the same period.

COROLLARY 4.6.2. If an S.P. partition pi on the states S of a machine M determines a clock M(pi; I, 1) for M with period k, then there is an S.P. partition pi' on S such that M(pi'; I, 1) is a strongly connected clock with period k.

Proof. Exercise.

NOTES

The study of multiple decomposition of machines and the relation of these decompositions to the properties of the S.P. lattice of the machine were first investigated by one of the authors in [14], and some further results were obtained in [16]. Clocks in sequential machines were studied in [13].

5 STATE SPLITTING

5.1 STRUCTURE AND STATE REDUCTION

In the previous chapters we were concerned almost exclusively with state behavior realizations. This was done because this case is conceptually and computationally easier, and because the use of many codes in a realization to represent one state increases the memory requirement if done very extensively. Nevertheless, these state behavior realizations do have limitations, and there are machines shown in this chapter for which the most economical realizations are not state behavior realizations. In this chapter we abandon the state behavior restriction and investigate techniques which allow one to enlarge machines for multiple coding. We call these "state-splitting" techniques. First, we investigate the effects state reduction has on machine structure to underscore those aspects of reduced machines we want to overcome. Then we generalize the partition concept to overlapping partitions or set systems and show that these methods can recover many of the realizations destroyed by state reduction. The set systems and their algebraic properties developed in this chapter are also used in the subsequent chapters to study feedback in machines and to relate the structure of a machine to its semigroup of state mappings.
In Sections 1.2 and 2.6, we considered state reduction as the process of obtaining the reduced machine. Here we use the term to mean the process of obtaining realizations by any sort of state identification. First, we establish a special notation for the reduced realizations that can be obtained by machine homomorphism.

DEFINITION 5.1. Let pi be an output consistent S.P. partition on the set of states of the machine M = (S, I, O, delta, lambda). Then we write

M^o_pi = ({B_pi}, I, O, delta_pi, lambda_pi)

where

delta_pi(B_pi, x) = B'_pi   if and only if   delta(B_pi, x) is contained in B'_pi

and

lambda_pi(B_pi, x) = lambda(s, x)   for any (all) s in B_pi.

Machine M^o_pi is just the image machine M_pi of Definition 2.2 with an output added. This natural extension of M_pi is only possible when pi is output consistent. In the case where pi is the largest output consistent S.P. partition, M^o_pi is the reduced machine equivalent to M cited in Theorem 2.6. These machines M^o_pi are now used to generalize the concept of a state behavior realization.

DEFINITION 5.2. Machine M' is a reduced realization of machine M if and only if, for some output consistent S.P. partition pi on the set of states S of M, M' realizes the state behavior of M^o_pi.

Note that any state behavior realization of M is trivially a reduced realization by letting pi = 0. On the other hand, for all other partitions, this definition admits realizations by (sub)machines which have fewer states than M. Thus, the reduced realization is indeed a generalization of the state behavior realization. In terms of assignment mappings, the reduced realizations are characterized by the following elementary result.

THEOREM 5.1. Machine M' is a reduced realization of machine M if and only if M' realizes M with an assignment whose state mapping alpha maps each state of M onto a single state of M'.

Proof. Exercise.

Comparing Theorem 5.1 with Definition 1.16, one sees that the only difference between a state behavior realization and a reduced realization is that the alpha mapping for a reduced realization need not be one-to-one.
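Definition 5.1 can be turned into a small program. The Python sketch below is an added illustration, not from the original text; the machine and all names are invented. It constructs M^o_pi from a machine and a partition pi, checking as it goes that pi has S.P. (so delta_pi is well defined) and is output consistent (so lambda_pi is well defined).

```python
def image_machine(states, inputs, delta, lam, pi):
    # Build M^o_pi of Definition 5.1. States of the image are the blocks
    # of pi; delta_pi is well defined iff pi has S.P., and lambda_pi iff
    # pi is output consistent -- both are checked rather than assumed.
    assert set().union(*pi) == set(states), "pi must cover the state set"
    block = {s: frozenset(B) for B in pi for s in B}
    delta_pi, lam_pi = {}, {}
    for B in {frozenset(B) for B in pi}:
        for x in inputs:
            next_blocks = {block[delta[(s, x)]] for s in B}
            outputs = {lam[(s, x)] for s in B}
            assert len(next_blocks) == 1, "pi lacks the substitution property"
            assert len(outputs) == 1, "pi is not output consistent"
            delta_pi[(B, x)] = next_blocks.pop()
            lam_pi[(B, x)] = outputs.pop()
    return delta_pi, lam_pi

# A small invented machine with pi = {0,1; 2,3} output consistent and S.P.:
states, inputs = [0, 1, 2, 3], ['a']
delta = {(0, 'a'): 2, (1, 'a'): 3, (2, 'a'): 0, (3, 'a'): 1}
lam   = {(0, 'a'): 0, (1, 'a'): 0, (2, 'a'): 1, (3, 'a'): 1}
pi = [{0, 1}, {2, 3}]
d_pi, l_pi = image_machine(states, inputs, delta, lam, pi)
print(d_pi[(frozenset({0, 1}), 'a')], l_pi[(frozenset({0, 1}), 'a')])
```

The image machine here has two states, the blocks {0,1} and {2,3}, which it exchanges on each input while emitting the common output of the current block.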
The primary objective of the remainder of this section is to demonstrate that the structure of all reduced realizations of a machine M can be obtained directly from the structure of M itself. This leads to the conclusion that, in reducing a machine before analyzing its structure, one can only obscure some potential reduced dependence realization. Conversely, we see that it might even be advantageous to "unreduce" or "split" a machine in order to obtain a best realization.

This loss of structure can be dramatically illustrated for the S.P. lattices, so we consider this case first. To do this, we recall some notation. If pi is a partition on a set S and tau is a partition greater than or equal to pi, then tau can be considered as a partition tau-bar on the set of blocks of pi, and tau -> tau-bar is a one-to-one mapping of {tau | tau >= pi} onto {tau-bar | tau-bar a partition on {B_pi}}. Thus, tau -> tau-bar is the mapping of tau >= pi onto its quotient with respect to pi.

THEOREM 5.2. Let pi be an output consistent S.P. partition on the set of states of machine M. Then the S.P. lattice of M^o_pi is given by

L_{M^o_pi} = {tau-bar | tau has S.P. on M and tau >= pi}.

Proof. (A more complete proof is given in the next theorem.) If tau has S.P. on M and tau >= pi, then blocks of tau-bar obviously behave as the corresponding blocks of tau, and tau-bar has S.P. on M_pi. Conversely, if tau-bar has S.P. on M_pi, then for s, t in S,

s = t (tau)   if and only if   B_pi(s) = B_pi(t) (tau-bar),

and therefore

B_pi[delta(s, x)] = B_pi[delta(t, x)] (tau-bar).

But then delta(s, x) = delta(t, x) (tau) and tau has S.P. on M.

Note that this argument applies equally well to any L_{M_pi}, even when pi is not output consistent.

EXAMPLE. Consider machine A together with its S.P. lattice as shown in Fig. 5.1. We see that pi_4 is the only nontrivial output consistent S.P. partition. A glance at the lattice tells us that the partitions for A^o_{pi_4} are the four partitions shown in Fig. 5.2.

Fig. 5.1. Machine A and S.P. lattice L_A. [flow table and lattice diagram omitted]
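Theorem 5.2 can be checked by brute force on small machines. The sketch below is an added illustration, not from the original text; the four-state machine is invented (it is not machine A of Fig. 5.1). It enumerates all partitions of the state set, keeps those with S.P. that are >= pi, and compares their number with the number of S.P. partitions of the image machine: the theorem says the two collections correspond one-to-one under tau -> tau-bar.

```python
def all_partitions(xs):
    # Enumerate every set partition of the list xs.
    if not xs:
        yield []
        return
    head, rest = xs[0], xs[1:]
    for part in all_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[head] + part[i]] + part[i + 1:]
        yield [[head]] + part

def has_sp(delta, inputs, blocks):
    # blocks has S.P. iff every input carries each block into a single block.
    idx = {s: i for i, B in enumerate(blocks) for s in B}
    return all(len({idx[delta[(s, x)]] for s in B}) == 1
               for B in blocks for x in inputs)

def geq(tau, pi):
    # tau >= pi: every block of pi lies inside a single block of tau.
    idx = {s: i for i, B in enumerate(tau) for s in B}
    return all(len({idx[s] for s in B}) == 1 for B in pi)

states, inputs = [0, 1, 2, 3], ['a']
delta = {(0, 'a'): 2, (1, 'a'): 3, (2, 'a'): 0, (3, 'a'): 1}
pi = [[0, 1], [2, 3]]        # an S.P. partition, taken to be output consistent

# partitions tau >= pi with S.P. on M ...
upper = [t for t in all_partitions(states)
         if geq(t, pi) and has_sp(delta, inputs, t)]

# ... versus S.P. partitions of the image machine M_pi (blocks renamed 0, 1):
q_delta = {(0, 'a'): 1, (1, 'a'): 0}
q_sp = [t for t in all_partitions([0, 1]) if has_sp(q_delta, ['a'], t)]

print(len(upper), len(q_sp))   # → 2 2  (pi itself and the identity partition I)
```

Both counts are two, as Theorem 5.2 predicts: above pi the only S.P. partitions are pi and I, and their quotients are exactly the two partitions of the two-block image machine.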
Fig. 5.2. Machine A^o_{pi_4} and its S.P. lattice. [flow table and lattice diagram omitted]

It is interesting to compare the usefulness of the two structures. Since pi_4 > pi_5 > 0, we know that there is a three-machine serial realization of A based on pi_4, pi_5, and 0. The uniformity of these blocks makes the realization especially suitable for binary assignments, since each component machine needs but two states. Such a realization is shown in Fig. 5.3, where the assignment, Boolean equations, and schematic diagrams are given.

Fig. 5.3. State assignment, equations, and realization of machine A. [assignment, equations, and schematic omitted]

The state behavior equations require nineteen diodes. The partitions on A^o_{pi_4}, however, do not lend themselves to realizations because their irregular makeup requires four binary variables to exploit. The best known binary assignment for machine A^o_{pi_4} was found by an anonymous referee and requires twenty-two diodes using shared logic. Thus, we see that even the most useful structure may be lost under state reduction.

The next two theorems give a complete description of the effect of state reduction on the various information lattices. This is followed by an exact statement of our claim that the reduced dependence decompositions using reduced realizations can be detected from the unreduced structure. Finally, we give some special conditions under which state reduction is safe. The main point, however, has already been illustrated once by the previous theorem and example. Consequently, the remainder of this section can be omitted on first reading.

THEOREM 5.3. Let pi be an output consistent S.P. partition on the states of machine M and let

L-bar_M = {tau in L_M | tau >= pi},
P-bar_M = {(tau, tau') in P_M | tau >= pi and tau' >= pi},
O-bar_M = {(tau, tau') in O_M | tau' = m(tau + pi)}.

Then the mappings

h1 = tau -> tau-bar : L-bar_M -> L_{M^o_pi},
h2 = (tau, tau') -> (tau-bar, tau'-bar) : P-bar_M -> P_{M^o_pi},
h3 = (tau, tau') -> (tau-bar, tau'-bar) : O-bar_M -> O_{M^o_pi}

are lattice isomorphisms.

Proof. Because tau -> tau-bar is a one-to-one mapping which preserves the lattice operations, we need only show that the h_i map onto the sets that we claim they do. Let m be the operator for M and let m-bar be the operator for M_pi. Because M^o_pi is the same as M(pi; 0, I) except for outputs, we know from Corollary 3.10 that

m-bar(tau-bar) = (m(tau) + pi)-bar   for tau >= pi.

Now tau in L-bar_M implies tau >= m(tau), which implies tau + pi >= m(tau) + pi, which implies, because tau >= pi, that tau >= m(tau) + pi, which implies

tau-bar >= (m(tau) + pi)-bar = m-bar(tau-bar),

and so h1 is indeed a mapping into L_{M^o_pi}. Conversely, tau-bar in L_{M^o_pi} implies
