Laboratoire de l'Informatique du Parallélisme

École Normale Supérieure de Lyon, Unité de recherche associée au CNRS n° 1398


Cellular Automata as Languages Recognizers
Marianne Delorme, Jacques Mazoyer
July 1998

Research Report No 98-36

École Normale Supérieure de Lyon
46 Allée d'Italie, 69364 Lyon Cedex 07, France
Téléphone : +33(0) Télécopieur : +33(0) Adresse électronique :

Cellular Automata as Languages Recognizers
Marianne Delorme, Jacques Mazoyer
July 1998

Language recognition is a powerful tool to evaluate the computational power of devices. In the case of cellular automata, a specific problematics appears already in the one-dimensional case, where the space is bounded. A state of the art is presented. In the two-dimensional case, few results are known; the main difficulty is to map the natural order of words onto a plane of cells.


Keywords: cellular automata, systolic arrays, language recognition, complexity.

Recognizing languages is a powerful tool for evaluating the computational power of a system. In the case of cellular automata, a specific problematics appears as early as dimension one. A state of the art is presented. In dimension two, few things are known; the major difficulty is to map the natural order of words onto a plane of cells.


Keywords: cellular automata, systolic arrays, language recognition, complexity.

Cellular Automata as Languages Recognizers
Marianne Delorme, Jacques Mazoyer
22nd July 1998

1 Introduction
The computational power of a computation model may be roughly defined by "what it is able to compute". At this level, cellular automata have the same computational power as Turing machines, PRAMs or boolean circuits, for example. In order to get more subtle understanding and results, one has to compare their performances on computational functions or problems, according to criteria which depend, more or less, on their own features or on features of some variants. Time and space are the natural and basic resources of all models, and they give rise to complexity classes, which may be refined according to some characteristics of the model. In the cellular automata case, the input-output location issue is worth taking into account. Actually, cellular automata are infinite objects, while inputs are encoded as finite words, that is, finite sequences of letters. So a first question is: where does one start to feed the input word into the system? An origin cell has to be chosen. But another question arises: how to go on with this necessarily sequential process of entering the following input letters? It can be done time step after time step on the same cell (sequential input mode, which corresponds to "time sequentiality") or, at the same initial time, with each input letter ai assigned to a distinct cell at site si, in such a way that i ≠ j implies si ≠ sj and that the order of the letters is expressed by links between the corresponding sites (parallel input mode, which corresponds to some "space sequentiality"). Let us note that this distinction, which leads to major differences for cellular automata, does not make sense for Turing machines. This comes from the fact that such machines have only a finite
This report will appear as a contribution in the book Cellular Automata: a Parallel Model, M. Delorme and J. Mazoyer (Eds.), Mathematics and Its Applications, Kluwer.


number of heads (so a number necessarily independent of the input length), while a cellular automaton has as many cells as the input length. It is not difficult to verify that any one-dimensional cellular automaton A can be simulated by a one-tape Turing machine MA, in such a way that if sA (resp. sMA) and tA (resp. tMA) denote the space and the time used by A for some computation on a finite configuration which terminates (resp. the space and time for MA to simulate it), then sMA = sA and tA ≤ tMA ≤ k0 · tA · sA, where k0 is some constant integer only depending on the neighborhood size [26]. These results have some interesting and deciding consequences. For cellular automata, the "space" resource will not produce proper and new space complexity classes, and cellular automata have no chance of accelerating computations, whatever the length. It immediately follows from the above relations that computations needing exponential time on cellular automata are also devoid of interest. So only the time resource remains in the lists. Thus, to hope for cellular automata specific classes requires to bound the computation space. Considering space unbounded computations will give rise to time complexity classes high enough in the Turing hierarchy. Nevertheless, that does not make the unbounded case futile: it allows to specify the links between the Turing and cellular automata models in the computability frame, where it is the computation potential that is studied; this point of view is developed in the following O. Ibarra's contribution.

Another classic problem is the output one: where or how to locate the result of a computation. To limit the difficulty, it is usual to restrict the complexity measure to decision problems, which only require canonical "yes/no" outputs. Finally, to evaluate the performances of a computation model on a decision problem comes down to evaluating the performances of the model as a language recognizer. A lot of papers have been dedicated to language recognition. We will sum up the "state of the art", stressing connections with complexity, and translate some classical issues of the Turing machines field into the cellular automata one.

Suppose that the cellular automaton space is constant, equal to the input length n, and that tA = n^k, k ∈ N. Then n^k ≤ tMA ≤ n^(k+1). This shows that, to get a significant speed up with cellular automata, considering linear time is enough. We will see that taking constant space, equal to the input length, suffices to get the interesting classes and open questions, the most exciting of which being: is linear time distinct from real time? Is there any relevant time distinct from real time? These still remain tickling open problems. We know that the first-neighbors neighborhood allows to simulate any other neighborhood, which moreover gives the maximal acceleration. But this neighborhood is not minimal, and paying attention to minimal ones leads to defining and studying one-way cellular automata and to comparing them to the classical ones, which has been well studied, although interesting issues remain open. We will now give definitions, then recall some proof methods already used in other chapters of this volume, and develop another one, which does not appear elsewhere in this book. We will put to light the links between the obtained classes and those of the Chomsky hierarchy. All this will be done in the one-dimensional case. Finally, we will make an incursion into the two-dimensional case, which starts again to be explored (see [29], the next chapter by O. Ibarra, and [33]).

2 Language recognition on CA and OCA

Let us recall that a one-dimensional cellular automaton A is defined by a couple (S, δ), where S is the set of states and δ the local transition function from S^i into S, with i = 3 in the case of cellular automata (CA for short) and i = 2 for one-way cellular automata (OCA for short). Communications in both cases are represented on Figure 1 below.

Figure 1: Communications in the CA and OCA cases: at time t + 1, a CA cell enters state δ(q-1, q0, q1), while an OCA cell enters state δ(q-1, q0).
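To fix intuition about these two local rules, they can be written as a toy simulator. This is only an illustrative sketch (the state set, the quiescent marker, the failure state "X" and the example rule are invented for the example, not taken from the report): a CA cell reads (left, self, right), while an OCA cell reads only (left, self), so OCA information can flow only rightwards.

```python
QUIESCENT = "qe"

def step_ca(config, delta):
    """One synchronous step of a bounded 1D CA: cell i reads (q_{i-1}, q_i, q_{i+1})."""
    padded = [QUIESCENT] + config + [QUIESCENT]
    return [delta(padded[i - 1], padded[i], padded[i + 1])
            for i in range(1, len(config) + 1)]

def step_oca(config, delta):
    """One step of a one-way CA: cell i reads only (q_{i-1}, q_i)."""
    padded = [QUIESCENT] + config
    return [delta(padded[i - 1], padded[i]) for i in range(1, len(config) + 1)]

def all_equal_rule(left, self_state):
    """Toy OCA rule: keep the letter while it agrees with the left input, else fail."""
    if self_state != "X" and left in (QUIESCENT, self_state):
        return self_state
    return "X"

config = list("aab")
for _ in range(len(config) - 1):
    config = step_oca(config, all_equal_rule)
print(config)  # ['a', 'a', 'X']: the letters of "aab" are not all equal
```

After t steps of the one-way rule, cell i depends exactly on cells i − t, ..., i, which is the dependency cone discussed in Section 2.3 below.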

In order to specify languages by CA or OCA, initial and accepting or rejecting configurations have to be defined: it is necessary to make clear how such a machine behaves on a finite word, in such a way that it can decide whether the checked word belongs to the considered language or not. That sets up two problems. The first one is: are space unbounded computations allowed? The second one: how are the words given to the automaton?

2.1 Bounded and unbounded computations

One has already remarked that, in case of language recognition, in order to get significant time complexity results by means of cellular automata, one has to limit computations to bounded ones, as for Turing machines. What does that precisely mean? Intuitively, one wants to recursively master the number of "active" cells during a computation on a word, according to the word length. Actually, we will speak of bounded computation when the number of active cells is the input size, which will be done in the sequel. It will be sufficient to capture the significant classes, that is the real and linear time classes. Finally, let us also note that a CA whose computations (on finite initial configurations) are not bounded can be simulated, by folding the space, by a CA on which the active cells are always on the right of the first initial active cell. Let us notice that in the OCA case the active cells will be found on the right of the initial cell, as well as in the CA case for bounded computations.

2.2 Parallel and sequential inputs

Let Σ be a finite alphabet, L a language on Σ and A = (S, δ) a cellular automaton (CA or OCA). First of all, as we have to cope with finite data, we suppose that A has a quiescent state qe, and it is sufficient to restrict configurations to applications from N into S. The A-configuration at time t, t ≥ 0, will be denoted ct. Moreover, the letters have to be encoded as states of A. It is done by means of some given encoding function ν, together with two other special encoding functions νbegin and νend which allow to distinguish the beginning and the end of the input word.

Definition 1 (Parallel and sequential modes).
In parallel mode, the initial configuration of the automaton corresponding to a word a1 a2 ... an is defined by: c0(1) = νbegin(a1), c0(i) = ν(ai) for 1 < i < n, c0(n) = νend(an), and c0(i) = qe for i > n.
In sequential mode, we have to specify the state of the first cell, which is the entry cell, for the n first configurations, taking into account the result of the t to t + 1 computation step on the first cell:
- In the CA case, c0(1) = νbegin(a1), ci(1) = ν(ai+1) for 0 < i < n − 1, and cn−1(1) = νend(an).
- In the OCA case, c0(1) = νbegin(a1), ci(1) = ν'(ai+1) for 0 < i < n − 1, and cn−1(1) = ν'end(an), where ν'(ai) and ν'end(an) depend on ν(ai) and on the computation of the automaton on the letters already entered.

From now on we will refer to CA working in bounded parallel (sequential) mode by PCA (SCA), to OCA working in bounded parallel (sequential) mode by POCA (SOCA), and to the corresponding devices working in unbounded mode as uPCA, uSCA, uPOCA and uSOCA.

2.3 Dependencies in CA and OCA

Intuitively, a word will be said recognized or accepted by some automaton when the automaton, starting at time 0 from a convenient initial configuration, enters, after a finite time, a configuration carrying some special information. Where can this information be found? When is it reasonable to expect it? In order to better understand the conventions adopted in the definitions to come, it can be worthy to well conceive how information flows according to the model. Look at Figure 2: the patterns on cell x do not represent the state of x, but only the numbers (coded as patterns for a better visibility) of the cells the information in x is depending on. It can help to note that:
- On PCA, the whole initial information, contained in the n active cells of the initial configuration, induces information on every active cell as soon as the (n − 1)-th computation step.
- On SCA, the whole information induced by the input is on the initial cell as soon as the last letter is entered.
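Definition 1's parallel mode can be phrased as a small helper that builds the initial configuration. This is a sketch under invented conventions: the encoding functions are modelled as plain Python callables, and the tag tuples ('begin', ...), ('mid', ...), ('end', ...) are a stand-in for states of A, not the report's notation.

```python
QUIESCENT = "qe"

def nu(a):        # encoding of an ordinary letter
    return ("mid", a)

def nu_begin(a):  # encoding marking the beginning of the word
    return ("begin", a)

def nu_end(a):    # encoding marking the end of the word
    return ("end", a)

def parallel_initial_config(word, extra_cells=0):
    """Initial configuration c0 for parallel input mode:
    c0(1) = nu_begin(a1), c0(i) = nu(ai) for 1 < i < n, c0(n) = nu_end(an),
    and quiescent cells beyond the input."""
    n = len(word)
    cells = [nu_begin(word[0])]
    cells += [nu(a) for a in word[1:-1]]
    if n > 1:
        cells.append(nu_end(word[-1]))
    return cells + [QUIESCENT] * extra_cells

print(parallel_initial_config("abc"))
# [('begin', 'a'), ('mid', 'b'), ('end', 'c')]
```

The sequential mode would instead feed these encoded letters to cell 1 one time step at a time, combining each with the ongoing computation, as the definition above describes.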

Figure 2: Dependencies on CA and OCA (panels: PCA, SCA, POCA, SOCA; axes: cells horizontally, time vertically).

- On POCA, cell i only gets information from the cells situated at its left, while on SOCA each cell i holds the whole entered information. These facts make some simulations intricate enough.

2.4 Acceptance or recognition

We are now able to formalize what it will mean for a word to be accepted or recognized by some of the above types of cellular automata. Cellular automata involved in recognition procedures have some distinguished states: a quiescent state and some other states called the accepting states. Incidentally, note that we implicitly only consider non-empty words. Let us make the notions precise.

Definition 2 (Word acceptance).
- A word u is accepted or recognized by a CA A if, starting at time 0 from the initial configuration defined by u, there exists a time t such that the initial cell enters an accepting state at t.
- A word u is accepted or recognized by an OCA A if, starting at time 0 from the initial configuration defined by u, there exist a time t and a positive integer m such that the m-th cell enters an accepting state at t; in case of bounded computation m = |u|, while in the unbounded case m ≥ |u|.

This definition is valid in both bounded and unbounded cases; it does not depend on the input mode, but on the CA and OCA cases as well as on the computation type. What happens when a word is not recognized? Only accepting states are significant and, if no rejecting state is specified, only accepted words have a status. However, in the bounded case, as the number of states is finite, there is a time tA such that cell 1 (in case of CA, or cell m in case of OCA) will never enter any accepting state if that has not happened before tA computation steps; and then, if a word is not recognized by that time, it is known to be outside the checked language.

What about recognition times? We have already said that two classes are of major interest: the classes of languages recognized in linear or real time, that is, in time proportional to the input length or in "minimal" time. Figure 3 illustrates the following definitions.

Figure 3: Useful (darker colored) and active areas of space-time diagrams, and location of acceptance (indicated by a black point), in case of CA and OCA, for bounded and unbounded devices, according to the input mode (parallel or sequential). The black straight lines represent the input lengths.

3 Linear and real time

These classes are distinct for most of the classical computation models. Up to now, it has not been proved that they are distinct in the cellular automata case in parallel mode, which is one of the most striking questions remaining open in this field. Computability theory asserts that there exist languages not recognizable in real time on CA, but no such language has ever been exhibited. It is all the more interesting that each candidate has, up to date, ultimately been proved to be a real time language. That gives a new reason to pay attention to the sequential mode and to OCAs, for which things are a little clearer. We start with definitions, then give some examples, and sum up the established connections between classes and the still open questions.

3.1 Linear time

A language L is said to be a linear time language if there exists an integer k, k ≥ 1, such that each word u of L is accepted in time at most k|u|. More precisely:

Definition 3 (Linear time languages).
- L belongs to the class LPCA (LSCA) of languages recognized in bounded parallel (sequential) linear time if there exists an integer k such that, for each u of L, there exists t, t ≤ k|u|, such that ct(1) is an accepting state.
- L belongs to the class LPOCA (LSOCA) of languages recognized in bounded parallel (sequential) linear time if there exists an integer k such that, for each u of L, there exists t, t ≤ k|u|, such that ct(|u|) is an accepting state.
- L belongs to the class uLPOCA (uLSOCA) of languages recognized in unbounded parallel (sequential) linear time if there exists an integer k such that, for each u of L, there exists t, t ≤ k|u|, such that ct+|u|(t) is an accepting state.

Definitions in the unbounded case for CA are analogous and give rise to the classes uLPCA and uLSCA.

3.2 Real time

A language L is said to be a real time language when each of its words is accepted "as soon as possible", or in "minimum" time. Thus, in this case, unbounded computations are no more relevant.

Definition 4 (Real time languages).
- L belongs to the class RPCA (RSCA) of languages recognized in parallel (sequential) real time if, for each u of L, c|u|−1(1) is an accepting state.
- L belongs to the class RPOCA of languages recognized in parallel real time if, for each u of L, c|u|−1(|u|) is an accepting state.
- L belongs to the class RSOCA of languages recognized in sequential real time if, for each u of L, c2|u|−1(|u|) is an accepting state.

These definitions are illustrated on Figure 4. Some remarks could complete them. First, in the one-way case, information going only to the right, it is natural to take a decision when the whole initial information reaches the last active cell. But in the two-way case we could have chosen any active cell, or rather, in view of uniformity, a "central" one. That would have set up the problem of knowing the central cell(s), which is not trivial for OCA and changes the problematics. On the other hand, if the input is not encoded by means of νend, we have to add a unit time to give the automaton the time to know that the input is completed; then, in the above definitions, we have to consider successively c|u|(1), c|u|(|u|) and c2|u|(|u|).

The notions we are mainly interested in are now summed up in Figure 5, with their abbreviations. The notations we have chosen are neither the historical ones nor the ones used in the other chapters of this volume. We are nevertheless keeping them in the sequel because they are very expressive and simple, and we give in the following array the correspondence with those used elsewhere in the volume. In Figure 5, CA and OCA obviously mean 1-CA and 1-OCA: we have chosen not to mention the dimension when it is 1.

Figure 4: Useful and active areas of space-time diagrams and location of acceptance, in real-time recognition, in case of CA and OCA according to the input mode (parallel or sequential).

Figure 5: Usual abbreviations.
  Automaton type: CA, OCA, 2-CA, 2-OCA
  Input mode: P (parallel), S (sequential)
  Time: R (real), L (linear), (any)
  Space: (bounded), u (unbounded)
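As a modest concrete instance of the real-time definitions, here is a sketch of a POCA recognizing the regular language a+ ∪ b+ (all letters equal): with the rule below, cell |u| holds a letter state at time |u| − 1 exactly when the word is accepted, matching the condition on c|u|−1(|u|). Everything here (states, rule) is invented for the illustration; real-time automata for the harder languages of Section 4 need much richer constructions.

```python
QE, BAD = "qe", "X"

def delta(left, self_state):
    """OCA rule: keep the letter while it is compatible with the left input."""
    if BAD in (left, self_state):
        return BAD
    if left in (QE, self_state):
        return self_state
    return BAD

def accepts_real_time(word):
    """Check the real-time POCA condition: is c_{|u|-1}(|u|) an accepting state?"""
    config = list(word)
    for _ in range(len(word) - 1):            # exactly |u| - 1 steps
        padded = [QE] + config
        config = [delta(padded[i], padded[i + 1]) for i in range(len(word))]
    return config[-1] != BAD                  # cell |u| at time |u| - 1

print(accepts_real_time("aaaa"), accepts_real_time("aab"))  # True False
```

The point of the sketch is the timing: at time |u| − 1 the last cell has seen exactly the whole input, so no earlier decision is possible in general.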

Input mode | Automaton type | Present notations | Other notations in this book
Parallel   | CA (1D)    | PCA    | LCA
Parallel   | OCA (1D)   | POCA   | OLCA
Sequential | CA (1D)    | SCA    | LIA
Sequential | OCA (1D)   | SOCA   | OLIA
Parallel   | CA (2D)    | 2-PCA  | MCA
Parallel   | OCA (2D)   | 2-POCA | OMCA
Sequential | CA (2D)    | 2-SCA  | MIA
Sequential | OCA (2D)   | 2-SOCA | OMIA

4 Results

Let us start with some examples, in order to situate well-known languages in the different introduced classes, or to put to light some other ones used to separate some of these classes.

4.1 Examples

They are gathered together in the two following arrays. The reader is referred to the source papers for proofs, or to [6], where the main results are proved and the usual proof methods are explained; there, the proofs, though largely inspired by the original ideas, are generally not the initial ones. The letters which can be found inside the arrays are to be understood the following way: [a] signifies that the proof is built by means of signals, [b] that the result is inferred from [4], and [c] that it is a consequence of [5].

The first array concerns the classes RPOCA, RPCA and RSCA; a "+" means that the language belongs to the class, a "−" that it does not, and a "?" that the question is open. The languages considered are, among others:

  {a^n b^n c^n | n > 0}  [2],
  {a^n b^(n+m) a^m | m, n > 0},
  {a^n | n prime}  [4], [9], [5], [10],
  {u | u = 1^n 0 y 1 0^n, y in {0,1}+}  [30],
  {uu | u in {0,1}+}  [3],
  {uu^R | u in Σ+}  [3], [30],
  {u^|u| | u in Σ+}  [30].

If |u|1 denotes the number of occurrences of 1 in the word u, let K be the language {u | |u|1 = |Bin(u)|1}, and let L denote the set {u | u = 1^n 0^n, n > 0}. Then we have:

  Language | RPOCA  | RPCA              | RSCA
  K        | ?      | + [c], [11], [31] | ?
  L        | + [a]  | + [a]             | + [a]
  LL       | + [a]  | + [a]             | ? [32]

Figure 6: Results graph, for one-dimensional cellular automata, over the classes uPOCA, uPCA, POCA, PCA, LPOCA, LPCA, RPCA, RPOCA, uLPCA, uLPOCA, uLSOCA, LSOCA, RSOCA, RSCA, LSCA, uLSCA, SOCA, SCA, uSCA and uSOCA. Dark arrows mean inclusion, grey ones no-inclusion, and grey ones with a question mark, open questions. The numbers labeling some of the arrows refer to papers or give easy justifications, as follows.

(1) results from [2] and [15].
(2) Proofs can be found in [15] or [7].
(3) {a^n | n prime} does not belong to RPOCA [5] but belongs to RSCA [10], which is included in RPCA.
(4) {uu^R | u in Σ+} (where |uu^R| ≥ 3 and |Σ| ≥ 2) does not belong to RSCA but belongs to RPOCA [3].
(5) Proofs can be found in [2], [15] or [7].
(6) is founded on the following result [21]: every language recognized on a CA (uCA), P as S, in time t(n) is recognized on a CA (uCA), S as P, in time t(n) + n.
(7) is founded on the following propositions, which use both folding of the working area and cells grouping [2], [21]:
  - Every language recognized on a SOCA (POCA) in time t(n) is recognized on a SCA (PCA) in time t(n).
  - Every language recognized on a SCA (PCA) in time t(n) is recognized on a uSOCA (uPOCA) in time 2t(n).
(8) is inferred from classical theorems of computability.

We have here to put to light an important remark about the working area and the fact that we go from bounded to unbounded computations through some simulations: in the last proposition's proof, the simulations need the working area to be enlarged, which may lead to ambiguities or errors.

The graph of Figure 6 shows where the crux of difficulty remains: the node "RPCA = LPOCA = RSOCA". Actually, up to now, each candidate for differentiating RPCA and LPCA has been proved to belong to RPCA! Note also that the unary languages in RPOCA are the unary rational languages [27].

4.2 Comparisons with the Chomsky hierarchy classes

Figure 7 is another way to present the former results, better focusing on the remaining open problems, and including the comparisons with the Chomsky classes, where Rat denotes the class of rational (or regular) languages, Alg (AlgL) the class of algebraic, or context-free (linear algebraic, or linear context-free) languages, and CS the class of context-sensitive languages (see [12]). The known results are summarized in the following table, where [d] means that the result is a consequence of two facts: PCA = DSPACE(n) (which can be deduced from simulations in [26]) and CS = NSPACE(n) [34], for example.

  Rat ⊆ RPOCA          Rat ⊆ RSCA
  AlgL ⊆ RPOCA [27]    Alg ⊆ SOCA [15]
  Alg ⊄ RSCA [3]       RSCA ⊄ Alg [3]
  Alg ⊄ RPOCA [32]     RPOCA ⊄ Alg [3]
  PCA = SCA ⊆ CS [d]

So, according to Figure 7, connections between the Chomsky hierarchy and the interesting classes via cellular automata are not quite easy to interpret. Algebraic languages could be low in complexity, but how really deep? It is an exciting question, and there could be a gap between Alg and CS.

Figure 7: Main complexity classes and their relations, for one-dimensional cellular automata, from Rat, AlgL, RPOCA and RSCA up to CS and r.e. (plain arrows: inclusion; barred arrows: no inclusion; "?": unknown).

4.3 Closure properties

In formal language theory, closure properties are naturally studied; so they are for the complexity classes we privileged. The following table shows some known results, for the classes PCA, POCA, LSOCA, RPCA, RPOCA and RSCA, under reversal, concatenation and the Boolean operations (a "+" means closure, a "?" an open question; see [15], [2], [3], [32] and [27]). Among the results proved in [16], we have to stress the following one, which is exemplary: RPCA is closed under reversal if and only if LPCA = RPCA. Actually more is known: an immediate consequence of the RPCA closure under reversal would be that RPCA is closed under concatenation. Both questions are open since 1971 [27].

5 About proofs

Most proofs are founded on the following main methods: "folding" the working area, grouping cells, and using signals and synchronization. The only algebraic method is due to S. Cole [27]. It consists in estimating the number of equivalence classes of the relations Ek, defined as follows: let Σ be an alphabet, L a language on Σ and k an integer; two words u and v are equivalent modulo Ek if and only if, for every word w on Σ with |w| ≤ k, uw ∈ L if and only if vw ∈ L. Such counting arguments can lead to results which resist other investigation methods. We refer to the chapter in this volume by V. Terrier, who develops and uses analogous processes, and the reader is referred to the next chapter by O. Ibarra. As for us, we will illustrate the first cited method in proving that uLPCA = LPCA.
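Cole's counting argument can be explored by brute force: two words are E_k-equivalent exactly when they have the same acceptance vector over all extensions w with |w| ≤ k. The sketch below (the sample language and all names are invented for the illustration) counts the distinct signatures among words of bounded length.

```python
from itertools import product

def words_up_to(alphabet, k):
    """All words over `alphabet` of length at most k (including the empty word)."""
    for length in range(k + 1):
        for w in product(alphabet, repeat=length):
            yield "".join(w)

def ek_class_count(in_language, alphabet, k, max_len):
    """Number of E_k-classes among the words of length at most max_len:
    u ~ v  iff  for every w with |w| <= k, uw in L <-> vw in L."""
    suffixes = list(words_up_to(alphabet, k))
    signatures = {
        tuple(in_language(u + w) for w in suffixes)
        for u in words_up_to(alphabet, max_len)
    }
    return len(signatures)

# Sample language: words with as many a's as b's.
balanced = lambda u: u.count("a") == u.count("b")
print(ek_class_count(balanced, "ab", k=2, max_len=4))
# 6: one class per count difference in {-2, ..., 2}, plus one "dead" class
```

Lower bounds follow when the class count grows faster than the number of states a recognizer could distinguish; the report uses this idea in dimension 2 as well (Cole's result in Section 6).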

5.1 Folding the working area

Actually, to prove uLPCA = LPCA comes down to proving uLPCA ⊆ LPCA. So suppose that L is in uLPCA: there exist a cellular automaton A and a positive integer kL such that each word u of L, |u| = n, is recognized in tu steps with tu ≤ kL n. Moreover, as soon as it is remarked that if L is recognized in time kL n then it is recognized in time cn for any c ≥ kL, c ∈ N, assuming kL odd, kL = 2k0 + 1, is not restrictive. The useful computation areas can be seen on Figure 8, in the cases kL = 2 and kL = 3; in general, it is not difficult to verify that the useful computation area is inscribed inside k0 + 1 stripes of width n, up to the right edge of the (k0 + 1)-th stripe.

Figure 8: Useful computation areas during the recognition of a word in parallel unbounded mode and linear time kn, in the cases k = 2 and k = 3.

We are looking for an automaton, built from A, whose useful computation area is the first stripe, of length equal to the input length. The case kL = 3 in Figure 8 invites to simply fold back the second n-width stripe onto the first one, and to build an automaton whose states are 2-uples such that, if x is any cell of the first stripe of width n, the 2nd component of ⟨x, t⟩ is the state ⟨2n − x + 1, t⟩ of A. The idea to define the new automaton is to generalize the above remark, folding over the stripes of width n as shown on Figure 9, and to consider as states (k0 + 1)-uples such that the j-th component of ⟨x, t⟩ is the state of the cell of the j-th stripe which is superimposed on x in the folding (see Figure 10). More generally, the following lemma holds.

Figure 9: Folding over the computation area: the area of computation in time 7n, the area folded once, the area folded twice, and the whole area folded.

Figure 10: Examples of cells that will be superimposed in the folding.
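The stripe-folding correspondence can be checked with a little index arithmetic. The helper below is an illustrative reconstruction (names invented): absolute cell x lies in stripe ⌈x/n⌉; odd stripes keep their orientation and even stripes are reversed, so x is superimposed on base cell x − (j − 1)n or jn − x + 1 respectively.

```python
def folded_position(x, n):
    """Map absolute cell x (1-based) onto (stripe, base_cell) for stripes of width n:
    odd stripes keep their orientation, even stripes are reversed."""
    stripe = (x - 1) // n + 1            # stripe number, starting at 1
    offset = x - (stripe - 1) * n        # position of x inside its stripe, 1..n
    base = offset if stripe % 2 == 1 else n - offset + 1
    return stripe, base

n = 4
print([folded_position(x, n) for x in range(1, 2 * n + 1)])
# [(1, 1), (1, 2), (1, 3), (1, 4), (2, 4), (2, 3), (2, 2), (2, 1)]
```

Note how cell n + 1 folds back onto base cell n: the two stripes share their edge, which is exactly what the edge cases of the formal definition below have to handle.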

Let k be any positive integer and A = (QA, δA) a cellular automaton working in parallel mode, with a quiescent state qe such that δA(q1, q2, q3) = qe implies q1 = q2 = q3 = qe. Then there exists a cellular automaton A(k) = (QA^(k+1), δA(k)) such that, for each integer n ∈ N and each sequence (x1, x2, ..., xn) of states of QA, if the configurations cA(x) and cA(k)(x) are defined, for each z ∈ Z, by

  cA(x)(z) = qe and cA(k)(x)(z) = (qe, ..., qe) for z ≤ 0 and z > n,
  cA(x)(z) = xz and cA(k)(x)(z) = (xz, qe, ..., qe) for z ∈ {1, ..., n},

then it holds:

Fact 1. (⟨x, t⟩A(k))i = ⟨x + (i − 1)n, t⟩A when i is odd, and (⟨x, t⟩A(k))i = ⟨in − x + 1, t⟩A when i is even; in particular, the first component of ⟨1, (2k + 1)n − 1⟩A(k) is ⟨1, (2k + 1)n − 1⟩A.

Let us now precisely define δA(k); Figure 11 illustrates the idea underlying the formal definition. Consider a cell whose left, center and right states are (q1^l, ..., qk+1^l), (q1^c, ..., qk+1^c) and (q1^r, ..., qk+1^r); the tuple (qe, ..., qe) marks the limits of the computation area, and the hypothesis made on qe ensures that it can only arise from quiescent cells. The new state (q1', ..., qk+1') is defined componentwise; we distinguish two cases according to whether i, 1 ≤ i ≤ k + 1, is odd or even (which respectively corresponds to the fact that the i-th stripe of the useful computation area has been folded an even or an odd number of times).

1. i odd. In this case the computation is done in the initial (A) order.
 i. If (q1^l, ..., qk+1^l) ≠ (qe, ..., qe) and (q1^r, ..., qk+1^r) ≠ (qe, ..., qe), the considered cell is not an edge one, and qi' = δA(qi^l, qi^c, qi^r).
 ii. If (q1^l, ..., qk+1^l) = (qe, ..., qe) and (q1^r, ..., qk+1^r) ≠ (qe, ..., qe), the considered cell is on the left edge of the active area, and qi' = δA(qi−1^c, qi^c, qi^r), with the convention q0^c = qe.

 iii. If (q1^l, ..., qk+1^l) ≠ (qe, ..., qe) and (q1^r, ..., qk+1^r) = (qe, ..., qe), the considered cell is on the right edge of the computation area, and qi' = δA(qi^l, qi^c, qi+1^c), with the convention qk+2^c = qe.
 iv. If (q1^l, ..., qk+1^l) = (qe, ..., qe) and (q1^r, ..., qk+1^r) = (qe, ..., qe), which corresponds to the case n = 1, then qi' = δA(qi−1^c, qi^c, qi+1^c).

2. i even. In this case the useful computation area has been folded an odd number of times, and the computation is done in the order opposite to the initial one. The different considered possibilities are those of the previous case.
 i. If (q1^l, ..., qk+1^l) ≠ (qe, ..., qe) and (q1^r, ..., qk+1^r) ≠ (qe, ..., qe), then qi' = δA(qi^r, qi^c, qi^l).
 ii. If (q1^l, ..., qk+1^l) = (qe, ..., qe) and (q1^r, ..., qk+1^r) ≠ (qe, ..., qe), then qi' = δA(qi^r, qi^c, qi+1^c).

Figure 11: Basic features in order to conceive A(k).

 iii. If (q1^l, ..., qk+1^l) ≠ (qe, ..., qe) and (q1^r, ..., qk+1^r) = (qe, ..., qe), then qi' = δA(qi−1^c, qi^c, qi^l).
 iv. If (q1^l, ..., qk+1^l) = (qe, ..., qe) and (q1^r, ..., qk+1^r) = (qe, ..., qe), the case n = 1, then qi' = δA(qi−1^c, qi^c, qi+1^c).

5.2 Using signals and synchronization

This method is used in proving LPCA ⊆ LSCA, for example. We will only give a hint of the proof, by means of Figure 12. Suppose that u = a1 ... an is a word recognized on some PCA A in linear time. If we want it to be accepted on some SCA B, we have to set up some process which converts the sequential input corresponding to u into some configuration realizing the parallel input corresponding to u on B. It is shown on Figure 12(a). When the first input letter arrives on cell 1, which is the entry cell, at time 0, cell 1 sends a signal S1 at speed 1/2. When the last input letter an enters cell 1 at time n − 1, cell 1 sends a signal S2 at maximal speed, which meets signal S1 on cell n at time 2(n − 1). Then a synchronization is launched, via a Firing Squad optimal time solution, which ends up, at time 3(n − 1), with the wanted parallel input on the first n cells; Figure 12(b) shows how the input letters successively appear on the diagonal stemming from site (1, 0). The recognition process of A

Figure 12: Data moves in order to convert a sequential input into a parallel one: (a) the signals S1 and S2 and the sites (1, 0), (n, n − 1), (1, 2(n − 1)) and (1, 3(n − 1)); (b) the letters appearing on the diagonal.
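The meeting of the two signals can be checked numerically. The sketch below only replays the arithmetic of the figure (S1 at speed 1/2 from cell 1 at time 0, S2 at speed 1 from cell 1 at time n − 1); it does not implement the firing-squad synchronization itself, and the function name is invented.

```python
from fractions import Fraction

def meeting_point(n):
    """Where and when signal S2 (speed 1, start t = n - 1) catches S1
    (speed 1/2, start t = 0), both leaving cell 1."""
    t = n - 1                          # S2 starts when the last letter is read
    while True:
        s1 = 1 + Fraction(t, 2)        # S1 has advanced t/2 cells
        s2 = 1 + (t - (n - 1))         # S2 advances one cell per step
        if s2 >= s1:
            return s2, t
        t += 1

print(meeting_point(6))  # (6, 10): cell n, at time 2(n - 1)
```

Using exact rationals avoids declaring a spurious meeting at an odd time step, where the slow signal sits between two cells.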

can start, and the word u will obviously be recognized on B in linear time.

5.3 Grouping cells

This method is used to prove uLSOCA = LSOCA, for example. It was used extensively in [8] and [22], so we will only recall its principle on Figure 13.

Figure 13: A hint for proving uLSOCA = LSOCA: a u-LOCA diagram with time = 2n and space = 3n, and the corresponding LOCA diagram with time = 2n and space = n.

6 Languages on 2-dimensional cellular automata

Languages on 2-dimensional cellular automata can be understood either as sets of 2-dimensional patterns (also often called images), or as standard languages of finite words. Although some work has been done on images ([29], [18] and [33]), we will here restrict the matter to the latter ones. In the frame of 2-dimensional cellular automata, input and output problems naturally become a little more involved; moreover, a new one may arise, because the standard first-neighbors neighborhood of dimension one splits into von Neumann's and Moore's neighborhoods.

6.1 Input and output modes

First, all the authors choose some distinguished cell, which is used to communicate with the outside: it serves either to enter the letters one after the other (sequential mode) or, possibly, to enter the first input letter (parallel mode). The innovation in parallel mode is that there are many ways to enter the input, depending on the way words are encoded on the plane. Some of them are represented on Figure 14: in [13], Ibarra uses the linelike (llo) or brokenwordlike (bwlo) order, while in [33] it is the snakelike (snlo) one which is used; the last one is the spirallike order (splo) [6]. Let us remark that breaking the input word brings in a new parameter, telling how the input is entered, which can be seen as distinguishing two cells, for instance via the vector ((xc - 1, yc), (xc, yc - 1)) if (xc, yc) is the considered cell; but one can imagine other sorts of "wires".

Secondly, a second cell has to be chosen, in fact the accepting cell, where the acceptation will be decided. The choice of the accepting cell depends on the way the word is displayed on the plane and also on the communication way. Examples are given on Figure 15.

The most important result for sequential input mode has been obtained by S. Cole, who proved that 2-SCA are strictly more powerful than SCA [3], and more generally that (k+1)-SCA are strictly more powerful than k-SCA. The proof (founded on the previously mentioned algebraic method) consists in producing a language which is recognized by some (k+1)-SCA and not recognized by any k-SCA (with von Neumann's neighborhood). In his following contribution, the case of one-way communications is especially studied, and computations are bounded ones. Let us only emphasize here that the chosen neighborhood is still the von Neumann's one.

6.2 Real time

What does real time mean in this frame? As usual, it means the minimal time necessary for the accepting cell to know the whole input. We will not give formal definitions, but only some examples illustrating how real time depends upon different computation features. Figure 16 shows real time, in case of Moore's and von Neumann's neighborhoods, for 2-dimensional cellular automata checking words of length n in parallel "linelike order" input mode, the accepting cell being the n-th one.

Figure 14: Different standard inputs of finite words: snakelike order, broken wordlike order, linelike order and spirallike order.
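Two of these orders are easy to make precise with index computations (a sketch; the function names are ours, and the snakelike layout is assumed to fill rows of width w from the top, while the spirallike one winds clockwise inward):

```python
# Coordinate sketches for two of the input orders of Figure 14.

def snakelike(i, w):
    """(row, col) of the i-th letter (i from 0) in snakelike order, width w:
    even rows run left to right, odd rows run right to left."""
    r, c = divmod(i, w)
    return (r, c) if r % 2 == 0 else (r, w - 1 - c)

def spirallike(n, w, h):
    """First n cells (row, col) visited by an inward clockwise spiral
    on a w x h grid."""
    top, bottom, left, right = 0, h - 1, 0, w - 1
    cells = []
    while top <= bottom and left <= right:
        cells += [(top, c) for c in range(left, right + 1)]
        cells += [(r, right) for r in range(top + 1, bottom + 1)]
        if top < bottom:
            cells += [(bottom, c) for c in range(right - 1, left - 1, -1)]
        if left < right:
            cells += [(r, left) for r in range(bottom - 1, top, -1)]
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1
    return cells[:n]

assert [snakelike(i, 3) for i in range(6)] == \
    [(0, 0), (0, 1), (0, 2), (1, 2), (1, 1), (1, 0)]
assert spirallike(9, 3, 3)[-1] == (1, 1)  # the spiral ends in the centre
```

The assertions check the two layouts on a 3-column example; the linelike and brokenwordlike orders are obtained similarly by plain row-major indexing.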

Figure 15: Localization of the accepting cell, in the one-way and two-way communication cases; the mark distinguishes the accepting cell.

Figure 16: Real time computation examples: real time n - 1 in case of Moore's neighborhood and 2n - 1 in case of von Neumann's neighborhood.
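The dependence of real time on the neighborhood can be recomputed as an eccentricity (a sketch under an assumption of ours: the input fills an s x s square and the accepting cell sits at the corner (0, 0)): information travels one Chebyshev step per time unit with Moore's neighborhood, and one Manhattan step with von Neumann's.

```python
# Real time as the eccentricity of the accepting cell for an s x s square
# of input cells, accepting cell at corner (0, 0).

def real_time(s, neighborhood):
    cells = [(x, y) for x in range(s) for y in range(s)]
    if neighborhood == "moore":
        dist = lambda x, y: max(x, y)   # Chebyshev distance to (0, 0)
    else:                               # "von_neumann"
        dist = lambda x, y: x + y       # Manhattan distance to (0, 0)
    return max(dist(x, y) for x, y in cells)

for s in range(1, 20):
    assert real_time(s, "moore") == s - 1
    assert real_time(s, "von_neumann") == 2 * (s - 1)
```

With s = ceil(sqrt(n)) this reproduces the values quoted below for snakelike inputs: ceil(sqrt(n)) - 1 for Moore's neighborhood and 2(ceil(sqrt(n)) - 1) for von Neumann's.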

In case of length n words displayed in snakelike order inside a square of size ⌈√n⌉ × ⌈√n⌉ as on Figure 15, the real time, on standard inputs, is ⌈√n⌉ - 1 in case of Moore's neighborhood and 2(⌈√n⌉ - 1) in case of von Neumann's one. When words of length n are displayed in spirallike order, real time is O(⌈√n⌉), with the first cell as accepting one, and it has been shown that rational languages are recognized in real time [19].

6.3 Some results and open questions

Obviously, the famous problem RPCA =? PCA can be transposed in dimension 2 to the different relevant explicit cases, according to input forms and neighborhoods. It is not known whether 2-POCA are more powerful than POCA, nor whether there exists some relation between 2POCA and PCA, apart from the fact that 2PCA_{blo,VN} = 2PCA_{snlo,VN}, which can be deduced from the fact that DSPACE(n) = 2PCA_{blo,VN} and DSPACE(n) = 2PCA_{snlo,VN}.

One observes here another efficient proof method, which consists in separately comparing classes determined via cellular automata and classes determined via another computation model (Turing machines par excellence). Let us sum up some interesting comparison results on standard computational complexity classes; some of them show the gap which exists between dimensions 1 and 2.

1. According to [13], 2SCA_{VN} = DSPACE(n^2) and 2SOCA_{VN} = DSPACE(n^{3/2}).

2. The 2-PCA with von Neumann's neighborhood are strictly more powerful than the analogous 2-POCA. That comes from the fact that 2POCA_{VN} = DSPACE(n^{3/2}) while 2PCA_{VN} = DSPACE(n^2).

To finish, and though we decided to focus on languages of finite words, it is worthy to stress the following result in [33]: there exists a two-dimensional language recognized in real time in case of von Neumann's neighborhood which is not real-time in case of Moore's neighborhood.

6.4 Image languages

References

[1] Albert J. and Culik II K. A simple universal cellular automaton and its one-way totalistic version. Complex Systems, Vol. 1: 1-16, 1987.

[2] Choffrut C. and Culik II K. On real time cellular automata and trellis automata. Acta Informatica, Vol. 21: 393-407, 1984.

[3] Cole S.N. Real time computation by n-dimensional iterative arrays. IEEE Transactions on Computers, Vol. 18, no. 4: 349-365, 1969.

[4] Culik II K., Gruska J. and Salomaa A. Systolic Trellis Automata (for VLSI). Res. Rep. CS-81-34, Dept of Computer Science, Univ. of Waterloo, 1981.

[5] Culik II K. Variation of the firing squad synchronization problem. Information Processing Letters, Vol. 30: 152-157, 1989.

[6] Delorme M. and Mazoyer J. Reconnaissance de Langages sur Automates Cellulaires. Research report LIP-IMAG 94-46, 1994.

[7] Delorme M. An introduction to cellular automata. To appear in Cellular Automata: a parallel device, Mathematics and Its Application, Kluwer: 5-49, 1998.

[8] Delorme M. and Mazoyer J. An overview on language recognition on one dimensional cellular automata. In Semigroups, Automata and Languages (J. Almeida Ed.), World Scientific: 85-100, 1996.

[9] Dyer C. One-way bounded cellular automata. Information and Control, Vol. 44: 54-69, 1980.

[10] Fischer P.C. Generation of primes by a one dimensional real time iterative array. Journal of the ACM, Vol. 12: 388-394, 1965.

[11] Hemmerling A. Real time recognition of some languages by trellis and cellular automata and full scan Turing machines. Bulletin of the EATCS, 29: 35-39, 1986.

[12] Hopcroft J.E. and Ullman J.D. Introduction to automata theory, languages and computation. Addison Wesley, 1979.

[13] Ibarra O.H. Computational complexity of cellular automata: an overview. To appear in Cellular Automata: a parallel device, Mathematics and Its Application, Kluwer, 1998.

[14] Ibarra O.H., Kim S.M. and Moran S. Sequential machine characterization of trellis and cellular automata and applications. SIAM Journal on Computing, Vol. 14: 426-447, 1985.

[15] Ibarra O.H. and Jiang T. On one-way cellular arrays. SIAM Journal on Computing, Vol. 16, no. 6: 1135-1154, 1987.

[16] Ibarra O.H. and Jiang T. Relating the power of cellular arrays to their closure properties. Theoretical Computer Science, Vol. 57: 225-238, 1988.

[17] Ibarra O.H., Jiang T. and Vergis A. On the power of one-way communication. Journal of the ACM, Vol. 35, no. 3: 697-726, 1988.

[18] Inoue K. and Nakamura A. Some properties of two-dimensional on-line tessellation acceptors. Information Sciences, Vol. 13: 95-121, 1977.

[19] Mazoyer J. Parallel language recognition on a plane of automata. Unpublished paper, 1998.

[20] Mazoyer J. Cellular automata, a computational device. In CIMPA School on Parallel Computation, Temuco (Chile), 1994.

[21] Mazoyer J. Entrées et sorties sur lignes d'automates. In Algorithmique parallèle (Cosnard M., Nivat M. and Robert Y. Eds), Masson: 47-65, 1992.

[22] Mazoyer J. Computations on cellular automata. To appear in Cellular Automata: a parallel device, Mathematics and Its Application, Kluwer: 77-118, 1998.

[23] Mazoyer J. and Reimen N. A linear speed-up theorem for cellular automata. Theoretical Computer Science, Vol. 101: 58-98, 1992.

[24] Mazoyer J. and Terrier V. Signals on one dimensional cellular automata. To appear in Theoretical Computer Science, 1998.

[25] Roka Zs. Simulations between Cellular Automata on Cayley Graphs. To appear in Theoretical Computer Science, 1998.

[26] Smith III A.R. Cellular automata theory. Technical Report 2, Stanford University, 1969.

[27] Smith III A.R. Simple computation-universal cellular spaces. Journal of the ACM, Vol. 18: 339-353, 1971.

[28] Smith III A.R. Two-dimensional formal languages and pattern recognition by cellular automata. In Proceedings of the 12th Annual IEEE Symposium on Switching and Automata Theory, East Lansing, Michigan: 144-152, 1971.

[29] Smith III A.R. Real time language recognition by one-dimensional cellular automata. Journal of Computer and System Sciences, Vol. 6: 233-253, 1972.

[30] Terrier V. Language recognizable in real time by cellular automata. Computational Complexity, Vol. 4: 299-318, 1994.

[31] Terrier V. On real time one-way cellular automata. Theoretical Computer Science, Vol. 141: 331-335, 1995.

[32] Terrier V. Language not recognizable in real time by one-way cellular automata. Theoretical Computer Science, Vol. 156: 281-287, 1996.

[33] Terrier V. Two-dimensional cellular automata recognizers. To appear in Theoretical Computer Science, 1998.

[34] Wagner K. and Wechsung G. Computational Complexity. Reidel, 1986.
