A FIRST COURSE IN STOCHASTIC PROCESSES
SECOND EDITION

SAMUEL KARLIN
HOWARD M. TAYLOR

ACADEMIC PRESS   New York   San Francisco   London
A Subsidiary of Harcourt Brace Jovanovich, Publishers

COPYRIGHT © 1975, BY ACADEMIC PRESS, INC.
ALL RIGHTS RESERVED. NO PART OF THIS PUBLICATION MAY BE REPRODUCED OR TRANSMITTED IN ANY FORM OR BY ANY MEANS, ELECTRONIC OR MECHANICAL, INCLUDING PHOTOCOPY, RECORDING, OR ANY INFORMATION STORAGE AND RETRIEVAL SYSTEM, WITHOUT PERMISSION IN WRITING FROM THE PUBLISHER.

ACADEMIC PRESS, INC.
111 Fifth Avenue, New York, New York 10003

United Kingdom Edition published by
ACADEMIC PRESS, INC. (LONDON) LTD.
24/28 Oval Road, London NW1

Library of Congress Cataloging in Publication Data

Karlin, Samuel.
A first course in stochastic processes. Second edition.
Includes bibliographical references.
1. Stochastic processes. I. Taylor, Howard M., joint author. II. Title.
QA274.K37   1974   519.2

ISBN 0-12-398552-8

PRINTED IN THE UNITED STATES OF AMERICA

CONTENTS

Preface
Preface to First Edition

Chapter 1   ELEMENTS OF STOCHASTIC PROCESSES
1. Review of Basic Terminology and Properties of Random Variables and Distribution Functions
2. Two Simple Examples of Stochastic Processes
3. Classification of General Stochastic Processes
Problems
Notes
References

Chapter 2   MARKOV CHAINS
1. Definitions
2. Examples of Markov Chains
3. Transition Probability Matrices of a Markov Chain
4. Classification of States of a Markov Chain
5. Recurrence
6. Examples of Recurrent Markov Chains
7. More on Recurrence
Elementary Problems
Problems
Notes
References

Chapter 3   THE BASIC LIMIT THEOREM OF MARKOV CHAINS AND APPLICATIONS
1. Discrete Renewal Equation
2. Proof of Theorem 1.1
3. Absorption Probabilities
4. Criteria for Recurrence
5. A Queueing Example
6. Another Queueing Model
7. Random Walk
Elementary Problems
Problems
Notes
References

Chapter 4   CLASSICAL EXAMPLES OF CONTINUOUS TIME MARKOV CHAINS
1. General Pure Birth Processes and Poisson Processes
2. More about Poisson Processes
3. A Counter Model
4. Birth and Death Processes
5. Differential Equations of Birth and Death Processes
6. Examples of Birth and Death Processes
7. Birth and Death Processes with Absorbing States
8. Finite State Continuous Time Markov Chains
Elementary Problems
Problems
Notes
References
Chapter 5   RENEWAL PROCESSES
1. Definition of a Renewal Process and Related Concepts
2. Some Examples of Renewal Processes
3. More on Some Special Renewal Processes
4. Renewal Equations and the Elementary Renewal Theorem
5. The Renewal Theorem
6. Applications of the Renewal Theorem
7. Generalizations and Variations on Renewal Processes
8. More Elaborate Applications of Renewal Theory
9. Superposition of Renewal Processes
Elementary Problems
Problems
References

Chapter 6   MARTINGALES
1. Preliminary Definitions and Examples
2. Supermartingales and Submartingales
3. The Optional Sampling Theorem
4. Some Applications of the Optional Sampling Theorem
5. Martingale Convergence Theorems
6. Applications and Extensions of the Martingale Convergence Theorems
7. Martingales with Respect to Sigma Fields
8. Other Martingales
Elementary Problems
Problems
References

Chapter 7   BROWNIAN MOTION
1. Background Material
2. Joint Probabilities for Brownian Motion
3. Continuity of Paths and the Maximum Variables
4. Variations and Extensions
5. Computing Some Functionals of Brownian Motion by Martingale Methods
6. Multidimensional Brownian Motion
7. Brownian Paths
Problems
Notes
References

Chapter 8   BRANCHING PROCESSES
1. Discrete Time Branching Processes
2. Generating Function Relations for Branching Processes
3. Extinction Probabilities
4. Examples
5. Two-Type Branching Processes
6. Multi-Type Branching Processes
7. Continuous Time Branching Processes
8. Extinction Probabilities for Continuous Time Branching Processes
9. Limit Theorems for Continuous Time Branching Processes
10. Two-Type Continuous Time Branching Processes
11. Branching Processes with General Variable Lifetime
Elementary Problems
Problems
Notes
References

Chapter 9   STATIONARY PROCESSES
1. Definitions and Examples
2. Mean Square Distance
3. Mean Square Error Prediction
4. Prediction of Covariance Stationary Processes
5. Ergodic Theory and Stationary Processes
6. Applications of Ergodic Theory
7. Spectral Analysis of Covariance Stationary Processes
8. Gaussian Systems
9. Stationary Point Processes
10. The Level-Crossing Problem
Elementary Problems
Problems
Notes
References

Appendix   REVIEW OF MATRIX ANALYSIS
1. The Spectral Theorem
2. The Frobenius Theory of Positive Matrices

Index

PREFACE

The purposes, level, and style of this new edition conform to the tenets set forth in the original preface. We continue with our tack of developing simultaneously theory and applications, intertwined so that they refurbish and elucidate each other.

We have made three main kinds of changes. First, we have enlarged on the topics treated in the first edition. Second, we have added many exercises and problems at the end of each chapter. Third, and most important, we have supplied, in new chapters, broad introductory discussions of several classes of stochastic processes not dealt with in the first edition, notably martingales, renewal and fluctuation phenomena associated with random sums, stationary stochastic processes, and diffusion theory.

Martingale concepts and methodology have provided a far-reaching apparatus vital to the analysis of all kinds of functionals of stochastic processes. In particular, martingale constructions serve decisively in the investigation of stochastic models of diffusion type. Renewal phenomena are almost equally important in the engineering and managerial sciences, especially with reference to examples in reliability, queueing, and inventory systems. We discuss renewal theory systematically in an extended chapter. Another new chapter explores the theory of stationary processes and its applications to certain classes of engineering and econometric problems. Still other new chapters develop the structure and use of diffusion processes for describing certain biological and physical systems and fluctuation properties of sums of independent random variables useful in the analyses of queueing systems and other facets of operations research.

The logical dependence of chapters is shown by the diagram below. Section 1 of Chapter 1 can be reviewed without worrying about details. Only Sections 5 and 7 of Chapter 7 depend on Chapter 6. Only Section 9 of Chapter 9 depends on Chapter 5.

An easy one-semester course adapted to the junior-senior level could consist of Chapter 1, Sections 2 and 3 preceded by a cursory review of Section 1, Chapter 2 in its entirety, Chapter 3 excluding Sections 3 and/or 6, and Chapter 4 excluding Sections 3, 7, and 8. The content of the last part of the course is left to the discretion of the lecturer. An option of material from the early sections of any or all of Chapters 5-9 would be suitable.

The problems at the end of each chapter are divided into two groups: the first, more or less elementary; the second, more difficult and subtle.

The scope of the book is quite extensive, and on this account it has been divided into two volumes. We view the first volume as embracing the main categories of stochastic processes underlying the theory and most relevant for applications. In A Second Course we introduce additional topics and applications and delve more deeply into some of the issues of A First Course. We have organized the edition to attract a wide spectrum of readers including theorists and practitioners of stochastic analysis pertaining to the mathematical, engineering, physical, biological, social, and managerial sciences.

The second volume of this work,
A Second Course in Stochastic Processes, will include the following chapters: (10) Algebraic Methods in Markov Chains; (11) Ratio Theorems of Transition Probabilities and Applications; (12) Sums of Independent Random Variables as a Markov Chain; (13) Order Statistics, Poisson Processes, and Applications; (14) Continuous Time Markov Chains; (15) Diffusion Processes; (16) Compounding Stochastic Processes; (17) Fluctuation Theory of Partial Sums of Independent Identically Distributed Random Variables; (18) Queueing Processes.

As noted in the first preface, we have drawn freely on the thriving literature of applied and theoretical stochastic processes. A few representative references are included at the end of each chapter; these may be profitably consulted for more advanced material.

We express our gratitude to the Weizmann Institute of Science, Stanford University, and Cornell University for providing a rich intellectual environment and facilities indispensable for the writing of this text. The first author is grateful for the continuing grant support provided by the Office of Naval Research that permitted an unencumbered concentration on a number of the concepts and drafts of this book. We are also happy to acknowledge our indebtedness to many colleagues who have offered a variety of constructive criticisms. Among others, these include Professors P. Brockwell of La Trobe, J. Kingman of Oxford, D. Iglehart and S. Ghurye of Stanford, and K. Ito and S. Stidham, Jr. of Cornell. We also thank our students M. Nedzela and C. Macken for their assistance in checking the problems and help in reading proofs.

SAMUEL KARLIN
HOWARD M. TAYLOR

PREFACE TO FIRST EDITION

Stochastic processes concern sequences of events governed by probabilistic laws. Many applications of stochastic processes occur in physics, engineering, biology, medicine, psychology, and other disciplines, as well as in other branches of mathematical analysis.
The purpose of this book is to provide an introduction to the many specialized treatises on stochastic processes. Specifically, I have endeavored to achieve three objectives: (1) to present a systematic introductory account of several principal areas in stochastic processes, (2) to attract and interest students of pure mathematics in the rich diversity of applications of stochastic processes, and (3) to make the student who is more concerned with applications aware of the relevance and importance of the mathematical subtleties underlying stochastic processes.

The examples in this book are drawn mainly from biology and engineering, but there is an emphasis on stochastic structures that are of mathematical interest or of importance in more than one discipline. A number of concepts and problems that are currently prominent in probability research are discussed and illustrated. Since it is not possible to discuss all aspects of this field in an elementary text, some important topics have been omitted, notably stationary stochastic processes and martingales. Nor is the book intended in any sense as an authoritative work in the areas it does cover. On the contrary, its primary aim is simply to bridge the gap between an elementary probability course and the many excellent advanced works on stochastic processes.

Readers of this book are assumed to be familiar with the elementary theory of probability as presented in the first half of Feller's classic Introduction to Probability Theory and Its Applications. In Section 1, Chapter 1 of my book the necessary background material is presented and the terminology and notation of the book established. Discussions in small print can be skipped on first reading. Exercises are provided at the close of each chapter to help illuminate and expand on the theory.

This book can serve for either a one-semester or two-semester course, depending on the extent of coverage desired.
In writing this book, I have drawn on the vast literature on stochastic processes. Each chapter ends with citations of books that may profitably be consulted for further information, including in many cases bibliographical listings.

I am grateful to Stanford University and to the U.S. Office of Naval Research for providing facilities, intellectual stimulation, and financial support for the writing of this text. Among my academic colleagues I am grateful to Professor K. L. Chung and Professor J. McGregor of Stanford for their constant encouragement and helpful comments; to Professor J. Lamperti of Dartmouth, Professor J. Kiefer of Cornell, and Professor P. Ney of Wisconsin for offering a variety of constructive criticisms; to Dr. A. Feinstein for his detailed checking of substantial sections of the manuscript; and to my students P. Milch, B. Singer, M. Feldman, and B. Krishnamoorthi for their helpful suggestions and their assistance in organizing the exercises. Finally, I am indebted to Gail Lemmond and Rosemarie Stampfel for their superb technical typing and all-around administrative care.

SAMUEL KARLIN

A FIRST COURSE IN STOCHASTIC PROCESSES

Chapter 1

ELEMENTS OF STOCHASTIC PROCESSES

The first part of this chapter summarizes the necessary background material and establishes the terminology and notation of the book. It is suggested that the reader not dwell here assiduously, but rather pass through quickly; the material can be reviewed further if the need should arise later. Section 2 introduces the celebrated Brownian motion and Poisson processes, and Section 3 surveys some of the broad types of stochastic processes that are the main concern of the remainder of the book. The last section, included for completeness, discusses some technical considerations in the general theory. The section should be skipped on a first reading.
1. Review of Basic Terminology and Properties of Random Variables and Distribution Functions

The present section contains a brief review of the basic elementary notions and terminology of probability theory. The contents of this section will be used freely throughout the book without further reference. We urge the student to tackle the problems at the close of the chapter; they provide practice and help to illuminate the concepts. For more detailed treatments of these topics, the student may consult any good standard text for a first course in probability theory (see references at the close of this chapter).

The following concepts will be assumed familiar to the reader:

(1) A real random variable X.
(2) The distribution function F of X [defined by F(λ) = Pr{X ≤ λ}] and its elementary properties.
(3) An event pertaining to the random variable X, and the probability thereof.
(4) E[X], the expectation of X, and the higher moments E[X^n].
(5) The law of total probabilities and Bayes rule for computing probabilities of events.

The abbreviation r.v. will be used for "real random variable." An r.v. X is called discrete if there is a finite or denumerable set of distinct values λ_1, λ_2, ... such that a_i = Pr{X = λ_i} > 0, i = 1, 2, 3, ..., and Σ_i a_i = 1. If Pr{X = λ} = 0 for every value of λ, the r.v. X is called continuous. If there is a nonnegative function p(t), defined for −∞ < t < ∞, such that Pr{a ≤ X ≤ b} = ∫_a^b p(t) dt for all a < b, then p is called the probability density function of X.

If X is a random variable and g is a function, then Y = g(X) is also a random variable. If X is a discrete random variable with possible values x_1, x_2, ..., then the expectation of g(X) is given by

    E[g(X)] = Σ_i g(x_i) Pr{X = x_i},    (1.1)

provided the sum converges absolutely. If X is continuous and has the probability density function p_X, then the expectation of g(X) is computed from

    E[g(X)] = ∫ g(x) p_X(x) dx.    (1.2)

The general formula, covering both the discrete and continuous cases, is

    E[g(X)] = ∫ g(x) dF_X(x),    (1.3)

where F_X is the distribution function of the random variable X. Technically speaking, the integral in (1.3) is called a Lebesgue-Stieltjes integral. We do not require knowledge of such integrals in this text but interpret (1.3) to signify (1.1) when X is a discrete random variable and to represent (1.2) when X possesses a probability density function p_X.

Let F_Y(y) = Pr{Y ≤ y} be the distribution function of a random variable Y jointly distributed with X. When X and Y are discrete, the conditional distribution function of X given Y = y is

    F_{X|Y}(x|y) = Pr{X ≤ x, Y = y} / Pr{Y = y}   if Pr{Y = y} > 0,

and when X and Y are jointly continuous with joint density p_{XY},

    F_{X|Y}(x|y) = ∫_{−∞}^x p_{XY}(ξ, y) dξ / p_Y(y)   wherever p_Y(y) > 0.

Note that F_{X|Y} satisfies:

(C.P.1) F_{X|Y}(x|y) is a probability distribution function in x for each fixed y;
(C.P.2) F_{X|Y}(x|y) is a function of y for each fixed x; and
(C.P.3) For any values x, y,

    Pr{X ≤ x, Y ≤ y} = ∫_{−∞}^y F_{X|Y}(x|η) dF_Y(η),

where F_Y(η) = Pr{Y ≤ η} is the marginal distribution of Y.

In advanced work, (C.P.1-3) is taken as the basis for the definition of conditional distributions. It can be established that such conditional distributions exist for arbitrary real random variables X and Y, and even for real random vectors X = (X_1, ..., X_m) and Y = (Y_1, ..., Y_n). The application of (C.P.3) in the case y = +∞ produces the law of total probability

    Pr{X ≤ x} = ∫ F_{X|Y}(x|η) dF_Y(η).

In the jointly continuous case the conditional density is p_{X|Y}(x|y) = p_{XY}(x, y)/p_Y(y) when p_Y(y) > 0, and is taken as a fixed arbitrary probability density function when p_Y(y) = 0.

Let g be a function for which the expectation of g(X) is finite. The conditional expectation of g(X) given Y = y can be expressed in the form

    E[g(X)|Y = y] = ∫ g(x) dF_{X|Y}(x|y).

When X and Y are jointly continuous random variables, E[g(X)|Y = y] may be computed from

    E[g(X)|Y = y] = ∫ g(x) p_{X|Y}(x|y) dx = ∫ g(x) p_{XY}(x, y) dx / p_Y(y),    (1.5)

and if X and Y are jointly distributed discrete random variables, taking the possible values x_1, x_2, ..., then the detailed formula reduces to

    E[g(X)|Y = y] = Σ_i g(x_i) Pr{X = x_i | Y = y} = Σ_i g(x_i) Pr{X = x_i, Y = y} / Pr{Y = y}.

In parallel with (C.P.1-3) we see that the conditional expectation of g(X) given Y = y satisfies:

(C.E.1) E[g(X)|Y = y] is a function of y for each function g for which E[|g(X)|] < ∞; and
(C.E.2) For any bounded function h we have

    E[g(X)h(Y)] = ∫ E[g(X)|Y = y] h(y) dF_Y(y),

where F_Y is the marginal distribution function for Y.

Let us validate the latter formula in the continuous case. We will stipulate that the set of values y for which p_Y(y) > 0 is an interval (a, b), where −∞ ≤ a < b ≤ ∞. We write dF_Y(y) = p_Y(y) dy and then substitute (1.5). This gives

    ∫ E[g(X)|Y = y] h(y) dF_Y(y) = ∫_a^b [∫ g(x) p_{XY}(x, y) dx / p_Y(y)] h(y) p_Y(y) dy
                                 = ∫_a^b ∫ g(x) h(y) p_{XY}(x, y) dx dy = E[g(X)h(Y)].

In the last step we have used that p_{XY}(x, y) > 0 only when a < y < b.

Consider now a family {X_i} of random variables. To describe its probabilistic structure we must specify, for every n ≥ 1 and every set i_1, ..., i_n of n distinct positive integers, the joint distribution function F_{X_{i_1}, ..., X_{i_n}} of the random variables X_{i_1}, ..., X_{i_n}. Of course, some consistency requirements must be imposed upon the infinite family {F_{X_{i_1}, ..., X_{i_n}}}, namely, that

    F_{X_{i_1}, ..., X_{i_{n−1}}}(λ_1, ..., λ_{n−1}) = lim_{λ_n → ∞} F_{X_{i_1}, ..., X_{i_n}}(λ_1, ..., λ_{n−1}, λ_n),

and that the distribution function obtained from F_{X_{i_1}, ..., X_{i_n}}(λ_1, ..., λ_n) by interchanging two of the indices i_j and i_k and the corresponding variables λ_j and λ_k should be invariant. This simply means that the manner of labeling the random variables X_1, X_2, ... is not relevant. The joint distributions {F_{X_{i_1}, ..., X_{i_n}}} are called the finite-dimensional distributions associated with {X_i}. In principle, all important probabilistic quantities of the variables {X_i} can be computed in terms of the finite-dimensional distributions.

D. CHARACTERISTIC FUNCTIONS

An important function associated with the distribution function F of an r.v. X is its characteristic function φ(t) (abbreviated c.f.), where t is a real variable, −∞ < t < ∞, defined by

    φ(t) = E[e^{itX}].

TABLE: Frequently Encountered Discrete Probability Distributions

(a) The Normal Distribution

Let ||a_{ij}|| be an n × n symmetric positive definite matrix, let ||b_{ij}|| be the inverse matrix of ||a_{ij}||, and let B = det||b_{ij}|| be the determinant of ||b_{ij}||. Let m_i, i = 1, ..., n, be any real constants. The random variables X_1, ..., X_n are said to have a joint normal distribution if they possess a probability density function of the form

    p(λ_1, ..., λ_n) = (2π)^{−n/2} √B exp[−(1/2) Σ_{i,j=1}^n b_{ij}(λ_i − m_i)(λ_j − m_j)].

The joint characteristic function is

    φ(t_1, ..., t_n) = E[exp(i Σ_j t_j X_j)] = exp[i Σ_j m_j t_j − (1/2) Σ_{i,j} a_{ij} t_i t_j].

From this one can compute E[X_j] = m_j and E[(X_i − m_i)(X_j − m_j)] = a_{ij}, which justifies the name covariance matrix for the matrix ||a_{ij}||. From the nature of the characteristic function it is easily checked that X_1, ..., X_n have a joint normal distribution if and only if Y = c_1 X_1 + ... + c_n X_n has a normal distribution for every choice of real numbers c_1, ..., c_n.

(b) The Multinomial Distribution

This is a discrete joint distribution of r variables in which only nonnegative integer values are possible. It is defined by

    Pr{X_1 = k_1, ..., X_r = k_r} = [n!/(k_1! ... k_r!)] p_1^{k_1} ... p_r^{k_r},   k_i ≥ 0,  k_1 + ... + k_r = n,

where p_i > 0, i = 1, ..., r, and p_1 + ... + p_r = 1. The joint generating function is given by

    f(s_1, ..., s_r) = E[s_1^{X_1} ... s_r^{X_r}] = (p_1 s_1 + ... + p_r s_r)^n.

A sequence {a_n} of real numbers is said to converge to a real number a, written lim_{n→∞} a_n = a, if for every positive ε there exists a number N(ε) such that |a_n − a| < ε for all n > N(ε). There are several ways to generalize this concept to random variables. Let Z, Z_1, Z_2, ... be jointly distributed random variables.

(a) Convergence with probability one. We say Z_n converges to Z with probability one if

    Pr{lim_{n→∞} Z_n = Z} = 1.

In words, lim_{n→∞} Z_n = Z for a set of outcomes having total probability one.

(b) Convergence in probability. We say Z_n converges to Z in probability if for every positive ε

    lim_{n→∞} Pr{|Z_n − Z| > ε} = 0,

or equivalently, if for every positive ε

    lim_{n→∞} Pr{|Z_n − Z| ≤ ε} = 1.
In words, by taking n sufficiently large, one can achieve arbitrarily high probability that Z_n is arbitrarily close to Z.

(c) Convergence in quadratic mean. We say Z_n converges to Z in quadratic mean if lim_{n→∞} E[|Z_n − Z|^2] = 0. In words, by making n sufficiently large, one can ensure that Z_n is arbitrarily close to Z in the sense of mean square difference.

(d) Convergence in distribution (= convergence in law). Let F(t) = Pr{Z ≤ t} and F_k(t) = Pr{Z_k ≤ t}, k = 1, 2, .... We say Z_k converges in distribution to Z (or F_k converges in distribution to F) if

    lim_{k→∞} F_k(t) = F(t)

for all t at which F is continuous.

It can be proved that if Z_n converges to Z with probability one, then Z_n converges to Z in probability, and that this in turn implies that Z_n converges to Z in distribution. Thus convergence in distribution is the weakest form of convergence. In fact it can be shown that every family {F_α} of distribution functions contains a sequence {F_{α_k}} that converges to a function F at all t at which F is continuous (the Helly-Bray lemma), but F may not be a proper distribution in that F(+∞) may be less than one.

Many of the basic results of probability theory are in the form of limit theorems, and we will mention a few here. (We do not state these results under the weakest possible hypotheses.)

Let X_1, X_2, ... be independent identically distributed random variables with finite mean m. Let S_n = X_1 + ... + X_n and let X̄_n = S_n/n be the sample mean.

Law of Large Numbers (Weak). X̄_n converges in probability to m. That is, for any positive ε,

    lim_{n→∞} Pr{|S_n/n − m| > ε} = 0.

Law of Large Numbers (Strong). X̄_n converges to m with probability one. That is,

    Pr{lim_{n→∞} X̄_n = m} = 1.

Central Limit Theorem. Suppose each X_k has the finite variance σ^2. Let

    Z_n = (S_n − nm)/(σ√n),

and let Z be a normally distributed random variable having mean zero and variance one. Then Z_n converges in distribution to Z. That is, for all real t,

    lim_{n→∞} Pr{Z_n ≤ t} = (1/√(2π)) ∫_{−∞}^t e^{−x^2/2} dx.

Markov's Inequality. Let Z be a nonnegative random variable and λ > 0. Then

    Pr{Z ≥ λ} ≤ E[Z]/λ,    (1.16)

since E[Z] ≥ λ Pr{Z ≥ λ}, which gives the inequality. If X is a random variable with mean μ and variance σ^2, and we apply (1.16) with Z = (X − μ)^2 and λ = ε^2, we obtain Chebyshev's inequality

    Pr{|X − μ| ≥ ε} = Pr{(X − μ)^2 ≥ ε^2} ≤ σ^2/ε^2.

The Schwarz Inequality. Let X and Y be jointly distributed random variables having finite second moments. Then

    (E[XY])^2 ≤ E[X^2] E[Y^2].

Proof. For all real λ,

    0 ≤ E[(X + λY)^2] = E[X^2] + 2λE[XY] + λ^2 E[Y^2].

Considered as a quadratic function of λ, there is, then, at most one real root. Equivalently, the discriminant of the quadratic expression is nonpositive. That is,

    4(E[XY])^2 ≤ 4E[X^2]E[Y^2],

which completes the proof.

2. Two Simple Examples of Stochastic Processes

The developments in this book are intended to serve as an introduction to various aspects of stochastic processes. The theory of stochastic processes is concerned with the investigation of the structure of families of random variables X_t, where t is a parameter running over a suitable index set T. Sometimes, when no ambiguity can arise, we write X(t) instead of X_t. A realization or sample function of a stochastic process {X_t, t ∈ T} is an assignment, to each t ∈ T, of a possible value of X_t. The index set T may correspond to discrete units of time T = {0, 1, 2, 3, ...}, and {X_n} could then represent the outcomes at successive trials like the result of tossing a coin, the successive reactions of a subject to a learning experiment, or successive observations of some characteristic of a population, etc. The values of the X_t may be one-dimensional, two-dimensional, or n-dimensional, or even more general. In the case where X_n is the outcome of the nth toss of a die, its possible values are contained in the set {1, 2, 3, 4, 5, 6}, and a typical realization of the process would be 5, 1, 3, 2, 2, 4, 1, 6, 3, 6, .... This is shown schematically in Fig. 1, where the ordinate for t = n is the value of X_n. In this example, the random variables X_n are mutually independent, but generally the random variables X_t are dependent. Stochastic processes for which T = [0, ∞) are particularly important in applications. Here t can usually be interpreted as time.
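A realization like the die-tossing sequence above is easy to generate numerically. The following sketch (plain Python; the function name and seeding convention are ours, not the book's) draws one sample path of the discrete-time process X_1, X_2, ... of independent die tosses.

```python
import random

def die_process_realization(n, seed=None):
    """Return one realization (sample path) of n independent die tosses.

    Each X_k takes a value in {1, ..., 6}; successive tosses are
    mutually independent, as in the example in the text.
    """
    rng = random.Random(seed)
    return [rng.randint(1, 6) for _ in range(n)]

path = die_process_realization(10, seed=1)
print(path)  # one realization: ten values, each in {1, ..., 6}
```

Fixing the seed makes the realization reproducible; omitting it gives a fresh sample path on each call, which is the more faithful picture of a stochastic process.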
We will content ourselves, for the moment, with a very brief discussion of some of the concepts of stochastic processes and two examples thereof; a summary of various types of stochastic processes is presented at the end of the chapter, while the examples themselves will be treated in greater detail in succeeding chapters.

Example 1. A very important example is the celebrated Brownian motion process. This process has the following characteristics:

(a) Suppose t_0 < t_1 < ... < t_n. Then the increments X_{t_1} − X_{t_0}, X_{t_2} − X_{t_1}, ..., X_{t_n} − X_{t_{n−1}} are independent random variables.
(b) The distribution of X_t − X_s (s < t) depends only on t − s (and not, for example, on t).
(c) Pr{X_t − X_s ≤ x} = [2πB(t − s)]^{−1/2} ∫_{−∞}^x exp[−u^2/2B(t − s)] du, t > s (B is a positive constant).

Assume for each path that X_0 = 0. Then E[X_t] = 0 and σ^2(X_t) = Bt, where B is a fixed positive constant. It can be proved that, if 0 < t_1 < t_2 < ... < t_n, the random variables X_{t_1}, ..., X_{t_n} have a joint normal distribution.

The Brownian motion process serves as a model of the erratic motion of a particle suspended in a liquid. The displacement over a time interval is the cumulative effect of a very large number of molecular impacts; displacements over disjoint intervals arise from different impacts and so may be taken as independent, and the mean displacement is zero, E[X_t − X_s] = 0, if we suppose the medium to be in equilibrium. Finally, it is intuitively clear that the displacement X_t − X_s should depend only on the length t − s and not on the time we begin observation.

The Brownian motion process (also called the Wiener process) has proved to be fundamental in the study of numerous other types of stochastic processes. In Chapter 7 we will discuss more fully an example of the one-dimensional Brownian motion process.

Example 2. Another basic example of a continuous time (T = [0, ∞)) stochastic process is the Poisson process. The sample function X_t counts the number of times a specified event occurs during the time period from 0 to t. Thus, each possible X_t is represented as a nondecreasing step function.
Concrete examples of such processes are the mumber of x-rays emitted by a substance undergoing radioaetive decay; the mamber of telephone ealls originating in a given locality; the occurrence of accidents at a certain intersection; the occurrence of errors in a page of typing: breakdowns of a machine; und the arrival of customers for service. The justilication for viewing these examples as Poisson processes is hased on ‘the concept of the law of rare events, We have u situation of many Bernoulli trials with small probability of success where the expected mumber of sucersses is constant. Under these conditions it is a familiar theorem that the actual number of events occurring follows a Poisson law. In tho caso of rudiouctive decay the Poisson approximation is excellent if the peroid of observation is very short with reapect to the half-life of the radioactive substance. We postulate that the mumbers of events happening in two disjoint intervals of time are independent [see (a) above]. Analogously to (b), we obo assume that the random variable X.,— Xj, depends only on t and ot on fp or on the value of X,,. We set down the following further postulates, whieh are consistent ‘with the intuitive descriptions. given above: L. ‘The probability of at least one event happening in a time period of duration his Pk) = ah +o(h), h-+0, a> ” 1. ELEMENTS OF STOCHASTIC PROCESSES [e(t) =o(0), #+0 is the usual symbolic way of writing the rolation Kim,-.0 g(t = 0). IL. The probability of two or more events happening in time h aff). Postulate II is tantamount to oxcluding the possibility of the sinul- ‘taneous occurrence of two or more events, In the concrete ilkustrations ited above, this requirement is usually satisfied. Let P,(@) denote the probability that exactly m events occur in time PA) = P(X, =m), ‘The requirement TT can be stated in the form Y Pai) = off), and clearly BA) = Py) Pall) Foe Beeause of the ascumption of independence, Plt +8) = Pal Polt) = Pe((l — p(t). 
and therefore. Patt +) — Pal) i) Patti Po _ py hl, But on the basis of Postulate I we know that p(h}/h—>a. Therefore, the probability Po(!) that the event hat not happened during (0, 1) satisfies ‘the differential equation Pol = —aP ols whose well-known solution is Po(?) = ee". The constant ¢ is determined by the initial condition P,(0) = whieh implies e= 1. Thus, Po(t) = We will now caloulate P,(0) for every m. It is easy to see that Pelt $A) = PAPA) + Pe (PA) +S Py AVP) (21) By definition Pa(h) ~ 1 — p(t). The requirement 11 imple that P.(diyme pl) + oft) und “ ™ 2.2) by PAP SD PAA) = oth), sinee obviously P(t) <1. (21) into the form PAC M — PaO = Pol MPA) — M+ PPA +S Pf PAND = PaO) + Parl Pi) + YP APO) cfore, with the aid of (2.2) we rearrange = —aPyldh HaPe (Gh Holt) ‘Thorefore Pal +h) — Pals) ar) tar) a a0 and, formally, we get Pat) = —aPpt) +P y(t, m= 1, 2.0, (2.3) subject to the initial conditions PaO) 0 m=1,2.., introduce the functions In order to solve (2.3), w On() = Pa(e™ m=O, Le Breve Substituting the above in (2.3) gives Pal) = AQ, e MAN Devore (2a) where Qo() =1 and the initial conditions are Q,(0) =0, m= 1, Solving (2.4) reeursively we obtain Qe or Asatte so O()=at se 0 =5F 0.0)= oF ‘Therefore we Pal) OF ova Tn other words. for each , X, follows a Poisson distribution with param tor at, In particular, the meun number of occurrences in time t is at Py 1. ELEMENTS OF STOCHASTIC PROCESSES Often the Poisson process arises in a form where the time parameter is replaced by a suitable spatial parameter. The following formal example illustrates this vein of ideas. Consider an array of points distributed ia a space E (E is a Kuclidean space of dimension d> 1). Let NV, denote the number of points (finite or infinite} contained in the region R of E. We postulate that Vy is a random variable. 
The collection {N_R} of random variables, where R varies over all possible subsets of E, is said to be a homogeneous Poisson process if the following assumptions are fulfilled:

(i) The numbers of points in nonoverlapping regions are independent random variables.
(ii) For any region R of finite volume, N_R is Poisson distributed with mean λV(R), where V(R) is the volume of R. The parameter λ is fixed and measures in a sense the intensity component of the distribution, which is independent of the size or shape of the region.

Spatial Poisson processes arise in considering the distribution of stars or galaxies in space, the spatial distribution of plants and animals, and of bacteria on a slide, etc. These ideas and concepts will be further studied in Chapter 16.

3. Classification of General Stochastic Processes

The main elements distinguishing stochastic processes are the nature of the state space, the index parameter T, and the dependence relations among the random variables X_t.

STATE SPACE S

This is the space in which the possible values of each X_t lie. In the case that S = {0, 1, 2, ...}, we refer to the process at hand as integer valued, or alternatively as a discrete state process. If S is the real line (−∞, ∞), then we call X_t a real-valued stochastic process. If S is Euclidean k-space, then X_t is said to be a k-vector process. As in the case of a single random variable, the choice of state space is not uniquely specified by the physical situation being described, although usually one particular choice stands out as most appropriate.

INDEX PARAMETER T

If T = (0, 1, ...), then we shall always say that X_t is a discrete time stochastic process. Often when T is discrete we shall write X_n instead of X_t. If T = [0, ∞), then X_t is called a continuous time process.

We have already cited examples where the index set T is not one-dimensional (spatial Poisson processes). Another example is that of waves in oceans.
We may regard the lutitnde and longitude coordin- ates ae the value of 4, and X, is then the height of the wave at the Tocation t Wo now deserihe some of the classical types of stochastie processes characterized by different dependence relationships among X,. In the examples, we take T—[0, 26) unless we state the contrary exphivitly. For simplicity of exposition, we assume that the random variables X, are veal valued. (a) Process scith Stationary Independent Increments Af the random variables Xp Ky Xp Kye MON, are independent for all choices of 1,,---, 4, satisfying RRS hy then we suy set contain hut X, ix a process with independent increments, If the index a smallest index fp, it i also assumed that Xx, Xu — Kegs Xa Kegs oe Be Xi, are independent. If the index set is discrete, that is, T'= (0, 1, ...), then 4 process with independont increments reduces to a sequence of inde- pendent randora variables Zo po Xray EA. 2 3, eae in the sense that knowing the tions of Zoy Za enables one to determine (a8 should be fairly clear to the reader} the joint, dlisteibution of any finite set of the X;. In fact. Xap Btn thy ia OL an It the distribution of the increments X(t, +A) — X(4) depends only fon Uke length é of the interval and mot on the tbe process is sti to have stationary increments, For a process with stationary inerements the distribution of X(t -[ 5) —X(t,) is the same as the distribution of Xlty +4) — Xit,). no matter what the values of t,t and h. Tia process {X,,1€T}, where T=[0, co) or T= (0 1, 2...) has Mtationaey indepondont merements and has a finite mean, then it is slementary to show that E[X,]—me-+mt where mp—E[Xo] and m= F[X,]— my. A similar assertion holds for the variance: oh =o +o where 3 ~ E[(Xo—mo)*] and a = BUX, —m)"]— a9. Wo will indicate the proof in the ease of the mean, Let f(i)= F[X,] ~F[X,]. 
Then for any t and s,

f(t + s) = E[X_{t+s} − X_0]
         = E[X_{t+s} − X_s + X_s − X_0]
         = E[X_{t+s} − X_s] + E[X_s − X_0]
         = E[X_t − X_0] + E[X_s − X_0]   (using the property of stationary increments)
         = f(t) + f(s).

The only solution, subject to mild regularity conditions, of the functional equation f(t + s) = f(t) + f(s) is f(t) = f(1)t. We indicate the proof of this statement assuming f(t) differentiable, although much less would suffice. Differentiating with respect to t and independently with respect to s verifies that f′(t + s) = f′(t) = f′(s), so that f′ is a constant, say c. Integration gives f(t) = ct + a. But f(0) = f(0 + 0) = 2f(0) implies f(0) = 0, so that a = 0 and c = f(1). The expression f(t) = f(1)t for the case at hand is E[X_t] − m_0 = (E[X_1] − m_0)t, or E[X_t] = m_0 + m_1 t, as desired. Both the Brownian motion process and the Poisson process have stationary independent increments.

(b) Martingales

Let {X_t} be a real-valued stochastic process with discrete or continuous parameter set. We say that {X_t} is a martingale if E[|X_t|] < ∞ for all t and if, for any t_1 < t_2 < ... < t_{n+1},

E[X_{t_{n+1}} | X_{t_1} = a_1, ..., X_{t_n} = a_n] = a_n

for all values of a_1, ..., a_n. Martingales may be considered as appropriate models for fair games, in the sense that X_t signifies the amount of money that a player has at time t. The martingale property states, then, that the average amount a player will have at time t_{n+1}, given that he has amount a_n at time t_n, is equal to a_n, regardless of what his past fortune has been. The reader can readily verify that the process X_n = Z_1 + ... + Z_n, n = 1, 2, ..., is a discrete time martingale if the Z_i are independent and have means zero.

(c) Markov Processes

A Markov process is a process with the property that, given the value of X_s, the values of X_t for t > s do not depend on the values of X_u, u < s; that is, the probability of any particular future behavior of the process, when its present state is known exactly, is not altered by additional knowledge concerning its past behavior. In formal terms, a process is said to be Markovian if

Pr{a < X_t ≤ b | X_{t_1} = x_1, ..., X_{t_n} = x_n, X_s = x} = Pr{a < X_t ≤ b | X_s = x}    (3.1)

whenever t_1 < t_2 < ... < t_n < s < t. The function

P(x, s; t, A) = Pr{X_t ∈ A | X_s = x}    (3.2)

is called the transition probability function and is basic to the study of the structure of Markov processes.

(d) Stationary Processes

A stochastic process {X_t} is said to be strictly stationary if the joint distribution function of the family (X_{t_1+h}, ..., X_{t_n+h}) is the same as that of (X_{t_1}, ..., X_{t_n}) for every h > 0 and arbitrary selections t_1, t_2, ..., t_n from T. This condition asserts that in essence the process is in probabilistic equilibrium and that the particular times at which we examine the process are of no relevance.
In particular, the distribution of X_t is the same for each t.

A stochastic process {X_t, t ∈ T} is said to be wide sense stationary or covariance stationary if it possesses finite second moments and if

Cov(X_t, X_{t+h}) = E[X_t X_{t+h}] − E[X_t]E[X_{t+h}]

depends only on h for all t ∈ T. A strictly stationary process that has finite second moments is covariance stationary. There are covariance stationary processes that are not strictly stationary. Stationary processes are appropriate for describing many phenomena that occur in communication theory, astronomy, biology, and sometimes economics, and are discussed in more detail in Chapter 9.

A Markov process is said to have stationary transition probabilities if P(x, s; t, A) defined in (3.2) is a function only of t − s. Remember that P(x, s; t, A) is a conditional probability, given the present state. Therefore there is no reason to expect that a Markov process with stationary transition probabilities is a stationary process, and this is indeed the case. Neither the Poisson process nor the Brownian motion process is stationary. In fact, no nonconstant process with stationary independent increments is stationary. However, if {X_t, t ∈ [0, ∞)} is Brownian motion or a Poisson process, then Z_t = X_{t+h} − X_t is a stationary process for any fixed h > 0.

(e) Renewal Processes

A renewal process is a sequence T_1, T_2, ... of independent and identically distributed positive random variables, representing the lifetimes of some "units." The first unit is placed in operation at time zero; it fails at time T_1 and is immediately replaced by a new unit, which then fails at time T_1 + T_2, and so on, thus motivating the name "renewal process." The time of the nth renewal is S_n = T_1 + ... + T_n. A renewal counting process N_t counts the number of renewals in the interval (0, t]. Formally,

N_t = n  for  S_n ≤ t < S_{n+1},  n = 0, 1, 2, ....

[Fig. 8. A sample path of the renewal counting process N_t, a step function that increases by one at each renewal epoch S_n.]

Often the distinction is not made between the renewal process and the associated renewal counting process, and no real confusion results.
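The definition N_t = n for S_n ≤ t < S_{n+1} translates directly into a simulation. The sketch below is our own illustration, not part of the text: it accumulates i.i.d. positive lifetimes until they exceed t. With exponential lifetimes of rate λ, the count N_t is Poisson with mean λt (a fact developed in Chapter 5), which the empirical mean reflects.

```python
import random

def renewal_count(t, sample_lifetime, rng):
    """N_t = n for S_n <= t < S_(n+1): count the renewals in (0, t]
    by accumulating i.i.d. positive lifetimes T_1, T_2, ...."""
    n, s = 0, sample_lifetime(rng)
    while s <= t:
        n += 1
        s += sample_lifetime(rng)
    return n

# Exponential lifetimes with rate lam make N_t Poisson with mean lam * t.
rng = random.Random(42)
lam, t = 2.0, 5.0
samples = [renewal_count(t, lambda r: r.expovariate(lam), rng)
           for _ in range(20000)]
mean_n = sum(samples) / len(samples)   # close to lam * t = 10
```

Any positive lifetime sampler may be passed in, e.g. `lambda r: r.uniform(0.5, 1.5)`, which yields a renewal counting process that is no longer Poisson.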
Renewal processes occur directly in many applied areas such as management science, economics, and biology. Of equal importance, renewal processes may often be discovered embedded in other stochastic processes that, at first glance, seem quite unrelated. Chapter 5 is devoted to renewal processes. The Poisson process with parameter λ is a renewal counting process for which the unit lifetimes have exponential distributions with common parameter λ.

(f) Point Processes

Let S be a set in n-dimensional space and let 𝒜 be a family of subsets of S. A point process is a stochastic process indexed by the sets A ∈ 𝒜 and having the set {0, 1, 2, ...} of nonnegative integers as its state space. We think of "points" being scattered over S in some random manner, and of N(A) as counting the number of points in the set A. Since N(A) is a counting function, there are additional requirements on each realization. For example, if A_1 and A_2 are mutually disjoint sets in 𝒜 whose union A_1 ∪ A_2 is also in 𝒜, then we require N(A_1 ∪ A_2) = N(A_1) + N(A_2), and if the empty set ∅ is in 𝒜, then N(∅) = 0.

Suppose S is a set in the real line (plane, 3-dimensional space) and, for a subset A ⊂ S, let V(A) be the length (respectively, area, volume) of A. Then {N(A), A ⊂ S} is a homogeneous Poisson point process of intensity λ > 0 if:

(i) For each A ⊂ S, N(A) has a Poisson distribution with parameter λV(A), and
(ii) For every finite collection {A_1, ..., A_n} of disjoint subsets of S, the random variables N(A_1), ..., N(A_n) are independent.

Poisson point processes arise in considering the distribution of stars or galaxies in space, the planar distribution of plants and animals, of bacteria on a slide, etc. These ideas and concepts will be further studied in Chapter 16, Volume II. Every Poisson process {X_t : t ∈ [0, ∞)} determines a point process, N(A) being the number of events of the process occurring in the time set A.

Consider now the event that X_t ≥ 0 for all t, 0 ≤ t ≤ 1. We should like to assign to this event the value lim_{n→∞} Pr{A_n}, where the A_n are events defined in terms of finitely many time points; but it is by no means clear that two different approximating sequences of such events lead to the same limit, and in fact the two limits need not be equal if no "smoothness" assumptions are made concerning the sample functions of the process.
There are various sufficient conditions for the equality of the two limits; one of them is that lim_{s→t} Pr{|X_s − X_t| > ε} = 0 for every ε > 0 and every t. With this condition the problem can be formulated so that no inconsistency arises if we define Pr{X_t ≥ 0, 0 ≤ t ≤ 1} as the common value of the two limits, and in fact, if t_1, t_2, ... is any dense set of points in the interval [0, 1], then

Pr{X_t ≥ 0, 0 ≤ t ≤ 1} = lim_{n→∞} Pr{X_{t_i} ≥ 0, i = 1, 2, ..., n}.

The nub of the matter is that, while the axiom of total probability enables us to evaluate probabilities of events concerning a sequence of random variables in terms of the probabilities of events involving finite and hence denumerable subsets of the sequence, the event {X_t ≥ 0, 0 ≤ t ≤ 1} involves a nondenumerable family of random variables.†

† J. L. Doob, "Stochastic Processes." Wiley, New York, 1953.

Elementary Problems

1. Let X be a nonnegative integer-valued random variable. Show that

E[X] = Σ_{k=1}^∞ Pr{X ≥ k}.

Hint: Begin with E[X] = Σ_{n=1}^∞ n Pr{X = n}.

2. Suppose a jar has n chips numbered 1, 2, ..., n. A person draws a chip, returns it, draws another, returns it, and so on, until he gets a chip which has been drawn before, and then stops. Let X be the number of drawings required to accomplish this objective. Find the probability distribution of X.

Hint: It is easiest to first compute Pr{X > k}.

Solution: Pr{X = k} = ((k − 1)/n) Π_{j=1}^{k−2} (1 − j/n)  for 2 ≤ k ≤ n + 1.

3. Show that the expectation of the random variable X of Problem 2 is

Σ_{k=0}^{n} n!/[(n − k)! n^k].

Hint: Use Elementary Problem 1.

4. The number of accidents occurring in a factory in a week is a random variable with mean μ and variance σ². The numbers of individuals injured in different accidents are independently distributed, each with mean ν and variance τ². Determine the mean and variance of the number of individuals injured in a week.

Solution: E[injuries] = μν; Var[injuries] = μτ² + ν²σ².

5. The following experiment is performed. An observation is made of a Poisson random variable X with parameter λ. Then a binomial event with probability p of success is repeated X times, and Y successes are observed. What is the distribution of Y?
Hint: Use the generating function for a random sum of random variables.

Solution: Poisson, with parameter λp.

6. Show that the sums S_n = X_1 + ... + X_n of independent random variables X_k with zero means form a martingale, provided E[|X_k|] < ∞ for k = 1, 2, ....

7. Prove that every stochastic process {X(t); t = 0, 1, ...} with independent increments is a Markov process. (Remark: the corresponding statement also holds for stochastic processes {X(t); 0 ≤ t < ∞}.)

8. Consider a population of m couples, where a boy is born to the ith couple with probability p_i, and n_i is the expected number of children born to the ith couple. Assume p_i is constant in time for each couple and that the sexes of successive children born to a particular couple are independent random variables. Compute the sex ratio S, the ratio of the expected number of boys born to the expected total number of children born.

Solution: S = (Σ_{i=1}^m n_i p_i)/(Σ_{i=1}^m n_i).

9. If the parents of all couples decide to have children until a boy is born and then have no further children, compute the sex ratio S corresponding to this birth control behavior.

Solution: S = m/g, where g = Σ_{i=1}^m (1/p_i).

10. Suppose the parents of all couples decide that if their first child is a girl they will continue to have children until a boy is born and then have no further children, while if their first child is a boy they will have no further children. Compute S corresponding to this behavior.

11. Suppose the parents of all couples decide that if their first child is a boy they will continue to have children until a girl is born and then have no further children, while if their first child is a girl they will have no further children. Compute S corresponding to this behavior.

Solution: S = (Σ_{i=1}^m p_i/q_i)/(Σ_{i=1}^m 1/q_i), where q_i = 1 − p_i.

12. Show that, depending on the values of {p_1, ..., p_m}, one can have either S_1 ≥ S_2 or S_1 ≤ S_2, where S_1 is the sex ratio of Elementary Problem 8 and S_2 is the sex ratio of Elementary Problem 10.

14. Suppose that a child born to the ith set of parents, in a population of m sets of parents, has probability p_i of a birth disorder, i = 1, 2, ..., m. Assume that the birth of one affected child deters parents from further reproduction. Let n_i = the number of offspring in a single family when no affected children are born. Assume that, with respect to any given birth, p_i does not depend on
preceding births. Show that

(Expected total number of affected children)/(Expected total number of children born) = Σ_{i=1}^m [1 − q_i^{n_i}] / Σ_{i=1}^m p_i^{−1}[1 − q_i^{n_i}],

where q_i = 1 − p_i.

(b) Assume that the birth of two affected children (but not one) will deter parents from further reproduction. Show that under this kind of selective limitation the expected total number of affected children is

Σ_{i=1}^m {2[1 − q_i^{n_i}] − n_i p_i q_i^{n_i − 1}}.

Problems

The following integrals may be useful in some of the problems, and are recorded here for future reference. The gamma function Γ(α) is defined by

Γ(α) = ∫_0^∞ x^{α−1} e^{−x} dx,  α > 0.

For α large, Γ(α) ~ √(2π) e^{−α} α^{α−1/2} (Stirling's formula). When α = n is a positive integer, Γ(n) = (n − 1)!. The beta integral is given by

∫_0^1 x^{p−1}(1 − x)^{q−1} dx = Γ(p)Γ(q)/Γ(p + q),  p > 0, q > 0.

1. Let a, b, c be independent random variables, each uniformly distributed on (0, 1). What is the probability that ax² + bx + c has real roots?

Answer: (5 + 3 log 4)/36.

2. For each fixed λ > 0 let X have a Poisson distribution with parameter λ. Suppose λ itself is a random variable following a gamma distribution, i.e., with density

f(λ) = (1/Γ(n)) λ^{n−1} e^{−λ},  λ > 0,

where n is a fixed positive constant. Show that

Pr{X = k} = [Γ(k + n)/(k! Γ(n))] (1/2)^{k+n},  k = 0, 1, ....

When n is an integer this is the negative binomial distribution with p = 1/2.

3. For each given p let X have a binomial distribution with parameters p and N. Suppose N is itself binomially distributed with parameters q and M, M ≥ N. (a) Show analytically that X has a binomial distribution with parameters pq and M. (b) Give a probabilistic argument for this result.

4. For each given p, let X have a binomial distribution with parameters p and N. Suppose p is distributed according to a beta distribution with parameters r and s. Find the resulting distribution of X. When is this distribution uniform on x = 0, 1, ..., N?

Answer: Pr{X = x} = C(N, x) [Γ(r + s)/(Γ(r)Γ(s))] [Γ(r + x)Γ(s + N − x)/Γ(r + s + N)], x = 0, 1, ..., N; the distribution is uniform when r = s = 1.

5. (a) Suppose X is distributed according to a Poisson distribution with parameter λ. The parameter λ is itself a random variable whose distribution law is exponential with mean 1/c. Find the distribution of X. (b) What if λ follows a gamma distribution of order 2 with scale parameter c, i.e., the density of λ is

f(λ) = c² λ e^{−cλ}  for λ ≥ 0;  0 for λ < 0.

Answer: (a) Pr{X = k} = c/(1 + c)^{k+1}; (b) Pr{X = k} = c²(k + 1)/(1 + c)^{k+2}.

6.
Suppose we have N chips marked 1, 2, ..., N, respectively. We take a random sample of size 2n − 1 without replacement. Let Y be the median of the random sample. Show that the probability function of Y is

Pr{Y = k} = C(k − 1, n − 1) C(N − k, n − 1)/C(N, 2n − 1)  for k = n, n + 1, ..., N − n + 1.

Verify that Σ_k Pr{Y = k} = 1 and E[Y] = (N + 1)/2.

7. Suppose we have N chips, numbered 1, 2, ..., N. We take a random sample of size n without replacement. Let X be the largest number in the random sample. Show that the probability function of X is

Pr{X = k} = C(k − 1, n − 1)/C(N, n)  for k = n, n + 1, ..., N,

and that E[X] = n(N + 1)/(n + 1).

8. Let X_1 and X_2 be independent random variables with uniform distribution over the interval [θ − ½, θ + ½]. Show that X_1 − X_2 has a distribution independent of θ and find its density function.

Answer: f(x) = 1 − |x| for |x| ≤ 1; 0 for |x| > 1.

9. Let X be a nonnegative random variable with cumulative distribution function F(x) = Pr{X ≤ x}. Show that

E[X] = ∫_0^∞ [1 − F(x)] dx.

Hint: Write E[X] = ∫_0^∞ x dF(x) = ∫_0^∞ (∫_0^x dy) dF(x) and interchange the order of integration.

10. Let X be a nonnegative random variable and let

X_c = min{X, c} = X if X ≤ c; c if X > c,

where c is a given constant. Express the expectation E[X_c] in terms of the cumulative distribution function F(x) = Pr{X ≤ x}.

Answer: E[X_c] = ∫_0^c [1 − F(x)] dx.

11. Let X and Y be jointly distributed discrete random variables having possible values 0, 1, 2, .... For |s| ≤ 1, |t| ≤ 1, define the joint generating function

16. If, in Problem 15, the parameters tend to infinity in such a way that m/(m + n) → p_0 and m_i/(m + n) → p_i, i = 1, 2, ..., r, show that the distribution of Problem 15 approaches the negative multinomial.

17. The random variable X_n takes the values k/n, k = 1, 2, ..., n, each with probability 1/n. Find its characteristic function and the limit as n → ∞. Identify the random variable of the limit characteristic function.

Answer: (a) φ_n(ω) = (1/n) Σ_{k=1}^n exp(iωk/n), with limit (exp(iω) − 1)/(iω); (b) the uniform (0, 1) random variable.

18. Using the central limit theorem for suitable Poisson random variables, prove that

lim_{n→∞} e^{−n} Σ_{k=0}^{n} n^k/k! = 1/2.

19. The random variables X and Y have the following properties: X is positive, i.e., Pr{X > 0} = 1, with continuous density function f(x), and Y | X has a uniform distribution on (0, X).
Prove: If Y and X − Y are independently distributed, then

f(x) = a² x e^{−ax},  x > 0,  for some a > 0.

20. Let U be gamma distributed with order p and let V have the beta distribution with parameters q and p − q (0 < q < p). Assume U and V are independent. Show that UV is then gamma distributed with order q.

21. Let X and Y be independent, identically distributed nonnegative random variables with continuous density function f(x) for x ≥ 0. Assume f(0) > 0. If U = X − Y and V = min(X, Y) are independently distributed, prove that f(x) = λe^{−λx} for x ≥ 0, for some λ > 0.

Hint: Show first that the joint density function of U and V is

f_{U,V}(u, v) = f(v) f(v + |u|).

Next, equate this with the product of the marginal densities for U and V.

22. Let X and Y be independent nonnegative integer-valued random variables such that

Pr{X = x | X + Y = x + y} = C(m, x) C(n, y)/C(m + n, x + y)

for all nonnegative integers x and y, where m and n are given positive integers. Assume that Pr{X = 0} and Pr{Y = 0} are strictly positive. Show that both X and Y have binomial distributions with the same parameter p, the other parameters being m and n, respectively.

23. (a) Let X and Y be independent random variables such that

Pr{X = i} = f(i),  Pr{Y = i} = g(i),  f(i) > 0, g(i) > 0,  i = 0, 1, 2, ...,

and suppose

Pr{X = k | X + Y = l} = C(l, k) p^k (1 − p)^{l−k},  0 ≤ k ≤ l.

Prove that

f(i) = e^{−aθ}(aθ)^i/i!,  g(i) = e^{−θ}θ^i/i!,  i = 0, 1, 2, ...,

where a = p/(1 − p) and θ > 0 is arbitrary. (b) Show that p is determined by the distributions f and g.

Hint: Let F(s) = Σ f(i)s^i, G(s) = Σ g(i)s^i. Establish first the relation

F(u)G(v) = F(up + (1 − p)v) G(up + (1 − p)v).

24. Let X be a nonnegative integer-valued random variable with probability generating function f(s) = Σ_{n=0}^∞ a_n s^n. After observing X, conduct X binomial trials with probability p of success. Let Y denote the resulting number of successes.

(a) Determine the probability generating function of Y.
(b) Determine the probability generating function of X given that Y = X.

Solution: (a) f(1 − p + ps); (b) f(ps)/f(p).

25. (Continuation of Problem 24.) Suppose that for every p (0 < p < 1)

26. There are at least four schools of thought on the statistical distribution of stock price differences, or, more generally, stochastic models for sequences of stock prices. In terms of number of followers, by far the most popular approach is that of the so-called "technical analysts," phrased in terms of short term trends, support and resistance levels, technical rebounds, and so on. Rejecting this technical viewpoint, two other schools agree that sequences of prices describe a random walk, where price changes are statistically independent of previous price history, but these schools disagree in their choice of the appropriate probability distributions. Some authors find price changes to have a normal distribution while the other group finds a distribution with "fatter" tail probabilities, and perhaps even an infinite variance. Finally, a fourth group (overlapping with the preceding two) admits the random walk as a first-order approximation but notes recognizable second-order effects.

This exercise is to show a compatibility between the middle two groups. It has been noted that those that find price changes to be normal typically measure the changes over a fixed number of transactions, while those that find the larger tail probabilities typically measure price changes over a fixed time period that may contain a random number of transactions. Let Z be the price change. Use as the measure of "fatness" (and there could be dispute about this) the coefficient of excess

γ_2 = [m_4/(m_2)²] − 3,

where m_k is the kth moment of Z about its mean. Suppose on each transaction that the price advances by one unit, or lowers by one unit, each with equal probability. Let N be the number of transactions and write Z = X_1 + ... + X_N, where the X's are independent and identically distributed random variables, each equally likely to be +1 or −1. Compute γ_2 for Z: (a) when N is a fixed number n, and (b) when N has a Poisson distribution with mean n.

27.
Consider an infinite number of urns into which we toss balls independently, in such a way that a ball falls into the kth urn with probability 1/2^k, k = 1, 2, 3, .... For each positive integer N, let Z_N be the number of urns which contain at least one ball after a total of N balls have been tossed. Show that

E[Z_N] = Σ_{k=1}^∞ [1 − (1 − 2^{−k})^N],

and that there exist constants C_1 > 0 and C_2 > 0 such that

C_1 log N ≤ E[Z_N] ≤ C_2 log N  for all N.

The transition probabilities satisfy p_i > 0, q_i > 0, r_i ≥ 0, and q_i + r_i + p_i = 1, i = 1, 2, ..., with p_0 ≥ 0, r_0 ≥ 0, r_0 + p_0 = 1. Specifically, if X_n = i then, for i ≥ 1,

Pr{X_{n+1} = i + 1 | X_n = i} = p_i,  Pr{X_{n+1} = i − 1 | X_n = i} = q_i,  Pr{X_{n+1} = i | X_n = i} = r_i,

with the obvious modifications holding for i = 0.

The designation "random walk" seems apt, since a realization of the process describes the path of a person (suitably intoxicated) moving randomly one step forward or backward.

The fortune of a player engaged in a series of contests is often depicted by a random walk process. Specifically, suppose an individual (player A) with fortune k plays a game against an infinitely rich adversary and has probability p_k of winning one unit and probability q_k = 1 − p_k (k ≥ 1) of losing one unit in each contest (the choice of the contest at each stage may depend on his fortune), and r_0 = 1. The process {X_n}, where X_n represents his fortune after n contests, is clearly a random walk. Note that once the state 0 is reached (i.e., player A is wiped out), the process remains in that state. This process is also commonly known as the "gambler's ruin."

The random walk corresponding to p_k = p, q_k = 1 − p = q for all k ≥ 1 and r_0 = 1, with p > q, describes the situation of identical contests with a definite advantage to player A in each individual trial. We shall prove in Chapter 3 that with probability (q/p)^{x_0}, where x_0 represents his fortune at time 0, player A is ultimately ruined (his entire fortune is lost), while with probability 1 − (q/p)^{x_0} his fortune increases, in the long run, without limit. If p ≤ q, ruin occurs with probability one. A walk with p_i = p, q_i = q = p, r_i = r ≥ 0, and 2p + r = 1 is also of interest; conventionally, however, "symmetric random walk" refers only to the case r = 0, p = ½.
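The ruin probability (q/p)^{x_0} quoted above is easy to check by simulation. The sketch below is our own illustration, not part of the text: since the adversary is infinitely rich, each path is truncated and counted as escaping once the fortune reaches a high ceiling (with p > q, the chance of ruin from such a height is negligible).

```python
import random

def estimate_ruin(p, x0, trials, rng, ceiling=100):
    """Fraction of walks started at x0 (step +1 w.p. p, -1 w.p. 1 - p)
    that hit 0 before reaching `ceiling`."""
    ruined = 0
    for _ in range(trials):
        x = x0
        while 0 < x < ceiling:
            x += 1 if rng.random() < p else -1
        ruined += (x == 0)
    return ruined / trials

rng = random.Random(3)
p, x0 = 0.6, 3
est = estimate_ruin(p, x0, 5000, rng)
exact = ((1 - p) / p) ** x0    # (q/p)**3 = (2/3)**3, about 0.296
```

Raising the ceiling only sharpens the agreement, since the probability of ruin from a large fortune decays geometrically when p > q.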
Motivated by consideration of certain physical models, we are led to the study of random walks on the set of the nonnegative integers. We classify the different processes by the nature of the zero state. Let us fix attention on the random walk described by (2.2). If p_0 = 1, and therefore r_0 = 0, we have a situation where the zero state acts like a reflecting barrier. Whenever the particle reaches the zero state, the next transition automatically returns it to state one. This corresponds to the physical process in which an elastic wall exists at zero, and the particle bounces off with no after-effects. If p_0 = 0 and r_0 = 1, then 0 acts as an absorbing barrier: once the particle reaches zero, it remains there forever. If p_0 > 0 and r_0 > 0, then 0 is a partially reflecting barrier.

When the random walk is restricted to a finite number of states S, say 0, 1, 2, ..., a, then both the states 0 and a, independently and in any combination, may be reflecting, absorbing, or partially reflecting barriers. We have already encountered a model (gambler's ruin, involving two adversaries with finite resources) of a random walk confined to the states S where 0 and a are absorbing [see (2.3)].

A classical mathematical model of diffusion through a membrane is the famous Ehrenfest model, namely, a random walk on a finite set of states whereby the boundary states are reflecting. The random walk is restricted to the states i = −a, −a + 1, ..., −1, 0, 1, ..., a, with transition probability matrix

P_{ij} = (a − i)/2a  if j = i + 1,
         (a + i)/2a  if j = i − 1,
         0           otherwise.

The physical interpretation of this model is as follows. Imagine two containers containing a total of 2a balls. Suppose the first container, labeled A, holds a + i balls when the process is in state i, and the second container, B, holds the remaining a − i balls. A ball is selected at random (all selections are equally likely) from among the totality of the 2a balls and moved to the other container.
Each selection generates a transition of the process. Clearly the balls fluctuate between the two containers, with a drift from the one with the larger concentration of balls to the one with the smaller concentration of balls. A physical system which in the main is governed by a set of restoring forces essentially proportional to the distance from an equilibrium position may sometimes be approximated by this Ehrenfest model.

The classical symmetric random walk in n dimensions admits the following formulation. The state space is identified with the set of all integral lattice points in E^n (Euclidean n space); that is, a state is an n-tuple k = (k_1, k_2, ..., k_n) of integers. The transition probability matrix is defined by

P_{k,l} = 1/2n  if Σ_{i=1}^n |l_i − k_i| = 1,
          0     otherwise.

Analogous to the one-dimensional case, the symmetric random walk in E^n represents a discrete version of n-dimensional Brownian motion.

C. A Discrete Queueing Markov Chain

Customers arrive for service and take their place in a waiting line. During each period of time a single customer is served, provided that at least one customer is present. If no customer awaits service, then during this period no service is performed. (We can imagine, for example, a taxi stand at which a cab arrives at fixed time intervals to give service. If no one is present, the cab immediately departs.) During a service period new customers may arrive. We suppose the actual number of arrivals in the nth period is a random variable ξ_n whose distribution function is independent of the period and is given by

Pr{k customers arrive in a service period} = Pr{ξ_n = k} = a_k,  k = 0, 1, 2, ...,    (2.4)

where a_k ≥ 0 and Σ_k a_k = 1. We also assume the random variables ξ_n are independent. The state of the system at the start of each period is defined to be the number of customers waiting in line for service. If the present state is i, then after the lapse of one period the state is

j = i − 1 + ξ  if i ≥ 1,
    ξ          if i = 0,    (2.5)

where ξ is the number of new customers having arrived in this period while a single customer was served.
In terms of the random variables of the process, we can express (2.5) formally as

X_{n+1} = (X_n − 1)⁺ + ξ_n,    (2.6)

where Y⁺ = max(Y, 0). In view of (2.4) and (2.5) the transition probability matrix may be trivially calculated, and we obtain

P = | a_0 a_1 a_2 a_3 ⋯ |
    | a_0 a_1 a_2 a_3 ⋯ |
    | 0   a_0 a_1 a_2 ⋯ |
    | 0   0   a_0 a_1 ⋯ |
    | 0   0   0   a_0 ⋯ |
    | ⋯⋯⋯⋯⋯⋯⋯⋯⋯⋯ |    (2.7)

It is intuitively clear that if the expected number of new customers, Σ_{k=0}^∞ k a_k, that arrive during a service period exceeds 1, then with the passage of time the length of the waiting line increases without limit. On the other hand, if Σ_k k a_k < 1, then we shall see that the length of the waiting line approaches an equilibrium (stationary state). If Σ_k k a_k = 1, a situation of gross instability develops. These statements will be formally elaborated after we have set forth the relevant theory of recurrence (see Section 5, Chapter 3).

D. Inventory Model

Consider a situation in which a commodity is stocked in order to satisfy a continuing demand. We assume that the replenishing of stock takes place at successive times t_1, t_2, ..., and we assume that the cumulative demand for the commodity over the interval (t_{n−1}, t_n) is a random variable ξ_n whose distribution function is independent of the time period,

Pr{ξ_n = k} = a_k,  k = 0, 1, 2, ...,    (2.8)

where a_k ≥ 0 and Σ_{k=0}^∞ a_k = 1. The stock level is examined at the start of each period. An inventory policy is prescribed by specifying two nonnegative critical values s and S > s. The implementation of the inventory policy is as follows: if the available stock quantity is not greater than s, then immediate procurement is done so as to bring the quantity of stock on hand up to the level S; if, however, the available stock is in excess of s, then no replenishment of stock is undertaken. Let X_n denote the stock on hand just prior to restocking at t_n. The states of the process {X_n} consist of the possible values of the stock size

S, S − 1, ..., +1, 0, −1, −2, ...,

where a negative value is interpreted as an unfulfilled demand for stock, to be satisfied immediately upon restocking.
According to the rules of the inventory policy, the stock levels at two consecutive periods are connected by the relation

X_{n+1} = X_n − ξ_{n+1}  if s < X_n ≤ S,
          S − ξ_{n+1}    if X_n ≤ s,    (2.9)

where ξ_n is the quantity of demand that arises in the nth period, based on the probability law (2.8). If we assume the ξ_n to be mutually independent, then the stock values X_0, X_1, X_2, ... plainly constitute a Markov chain whose transition probability matrix can be calculated in accordance with the relation (2.9).

E. Success Runs

Consider a Markov chain on the nonnegative integers with transition probability matrix of the form

P = | p_0 q_0 0   0   ⋯ |
    | p_1 0   q_1 0   ⋯ |
    | p_2 0   0   q_2 ⋯ |
    | p_3 0   0   0   ⋯ |
    | ⋯⋯⋯⋯⋯⋯⋯⋯⋯ |    (2.10)

where q_i > 0, p_i > 0, and p_i + q_i = 1, i = 0, 1, 2, .... The zero state plays a distinguished role here, in that it can be reached in one transition from any other state, while state i + 1 can be reached only from state i. This example is very easy to compute with, and we will therefore frequently illustrate concepts and results in terms of it.

A special case of this transition matrix arises when one is dealing with success runs resulting from repeated trials, each of which admits two possible outcomes, success (S) or failure (F). More explicitly, consider a sequence of trials with two possible outcomes, (S) or (F). Moreover, suppose that in each trial the probability of (S) is α and the probability of (F) is β = 1 − α. We say a success run of length r happened at trial n if the outcomes in the preceding r + 1 trials, including the present trial as the last, were, respectively, F, S, S, ..., S. Let us now label the present state of the process by the length of the success run currently under way. In particular, if the last trial resulted in a failure, then the state is zero. Similarly, when the preceding r + 1 trials in order had the outcomes F, S, S, ..., S, the state variable would carry the label r. The process is clearly Markovian (since the individual trials were independent of each other), and its transition matrix has the form (2.10), where

p_i = β,  q_i = α,  i = 0, 1, 2, ....
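The success-runs chain lends itself to quick numerical experiments. The sketch below is our own illustration, not part of the text: it simulates the special case p_i = β, q_i = α of (2.10) by tracking the current run length. For constant α one can verify directly that π_i = (1 − α)α^i satisfies the stationarity equations, so the chain should spend about a fraction 1 − α of its time in state 0, a fraction (1 − α)α in state 1, and so on.

```python
import random

def simulate_success_runs(alpha, steps, rng):
    """Simulate the success-runs chain: each trial is a success with
    probability alpha; the state is the length of the current success
    run, reset to 0 by a failure.  Returns occupation counts by state."""
    state, visits = 0, {}
    for _ in range(steps):
        visits[state] = visits.get(state, 0) + 1
        state = state + 1 if rng.random() < alpha else 0
    return visits

rng = random.Random(11)
alpha, steps = 0.5, 100_000
visits = simulate_success_runs(alpha, steps, rng)
frac0 = visits[0] / steps    # close to 1 - alpha = 0.5
frac1 = visits[1] / steps    # close to (1 - alpha) * alpha = 0.25
```

The geometric decay of the occupation fractions mirrors the fact that a run of length i requires i consecutive successes.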
F. Branching Processes

Suppose an organism at the end of its lifetime produces a random number ξ of offspring with probability distribution

Pr{ξ = k} = a_k,  k = 0, 1, 2, ...,    (2.11)

where, as usual, a_k ≥ 0 and Σ_{k=0}^∞ a_k = 1. We assume that all offspring act independently of each other and at the end of their lifetimes (for simplicity, the lifespans of all organisms are assumed to be the same) individually have progeny in accordance with the probability distribution (2.11), thus propagating their species. The process {X_n}, where X_n is the population size at the nth generation, is a Markov chain. In fact, the only relevant knowledge regarding the distribution of X_{n+1}, given the values of X_1, X_2, ..., X_n, is the population size X_n.

One of the questions of interest is to determine, under the condition X_0 = i, the probability that the population will attain fixation, i.e., become a pure population composed only of a-genes or A-genes. It is also pertinent to determine the rate of approach to fixation. We will examine such questions in our general analysis of absorption probabilities.

A more realistic model takes account of mutation pressures. We assume that, prior to the formation of the new generation, each gene has the possibility to mutate, that is, to change into a gene of the other kind. Specifically, we assume that for each gene the mutation a → A occurs with probability α_1, and A → a occurs with probability α_2. Again we assume that the composition of the next generation is determined by 2N independent binomial trials. The relevant values of p_j and q_j when the parent population consists of j a-genes are now taken to be

p_j = (j/2N)(1 − α_1) + (1 − j/2N)α_2,
q_j = 1 − p_j.    (2.14)

The rationale is as follows. We assume that the mutation pressures operate first, after which a new gene is determined by selecting at random from the population.
Now the probability of selecting an a-gene after the mutation forces have acted is just 1/2N times the number of a-genes present; hence the average probability (averaged with respect to the possible mutations) is simply 1/2N times the average number of a-genes after mutation. But this average number is clearly j(1 − α_1) + (2N − j)α_2, which leads at once to (2.14). The transition probabilities of the associated Markov chain are calculated by (2.13), using the values of p_j and q_j given in (2.14).†

† R. A. Fisher, "The Genetical Theory of Natural Selection." Oxford (Clarendon) Press, London and New York, 1962.

If α_1 α_2 > 0, then fixation will not occur in any state. Instead, as n → ∞, the distribution function of X_n will approach a steady state distribution of a random variable ξ, where Pr{ξ = k} = π_k, k = 0, 1, 2, ..., 2N (Σ_{k=0}^{2N} π_k = 1, π_k > 0). The distribution of ξ is called the steady state gene frequency distribution.

We return to the simple random mating model and discuss the concept of a selection force operating in favor of, say, a-genes. Suppose we wish to impose a selective advantage for a-genes over A-genes so that the relative numbers of offspring have expectations proportional to 1 + s and 1, respectively, where s is small and positive. We replace p_j = j/2N and q_j = 1 − j/2N by

p_j = (1 + s)j/(2N + sj),  q_j = 1 − p_j,

and build the next generation by binomial sampling as before. If the parent population consisted of j a-genes, then in the next generation the expected population sizes of a-genes and A-genes, respectively, are

2N(1 + s)j/(2N + sj),  2N(2N − j)/(2N + sj).

The ratio of the expected population size of a-genes to that of A-genes at the (n + 1)st generation is

(1 + s) · j/(2N − j),

that is, 1 + s times the corresponding ratio for the nth generation; this explains the meaning of selectivity.

H.
Genetic Model

The gene appears to be composed of a number of subunits, say for definiteness N. When a cell containing the gene prepares to split, each subunit doubles, and each of the two daughter cells receives a gene composed of N subunits. Mutant subunits produce mutant subunits, and nonmutant subunits produce nonmutant subunits. Moreover, the subunits are assumed to be divided between the two new genes randomly, as if by drawing from an urn. We shall trace a single line of descent rather than all of the population as it multiplies. To describe the history of the line, we consider a Markov chain whose state space is identified with the values 0, 1, 2, ..., N. The gene is said to be in state i if its composition includes exactly i mutant subunits. The transition probabilities are computed by the formula

P_{ij} = C(2i, j) C(2N − 2i, N − j)/C(2N, N).    (2.15)

The derivation of P_{ij} is as follows. Suppose the parent gene is in state i; then after doubling we obtain a totality of 2i mutant units and 2N − 2i normal units. The composition of the daughter gene is formed by selecting an arbitrary N units from this collection. In accordance with the hypergeometric probability law, the probability that the daughter gene is in state j is given by (2.15). The states j = 1, 2, ..., N − 1 are called mixed, and states 0 and N are called pure. State N is of interest in that a gene all of whose subunits are mutant may cause the death of its possessor, while state 0 implies that a gene of this type will produce no more of the mutant form. We will later determine the explicit probabilities that, starting from state i, the gene ultimately fixes in state 0 or state N.

3. Transition Probability Matrices of a Markov Chain

A Markov chain is completely defined by its one-step transition probability matrix and the specification of a probability distribution on the state of the process at time 0.
The analysis of a Markov chain concerns mainly the calculation of the probabilities of the possible realizations of the process. Central in these calculations are the n-step transition probability matrices P^(n) = ||P^n_ij||. Here P^n_ij denotes the probability that the process goes from state i to state j in n transitions. Formally,

    P^n_ij = Pr{X_{m+n} = j | X_m = i}.    (3.1)

Observe that we are dealing only with temporally homogeneous processes having stationary transition probabilities, since otherwise the left-hand side of (3.1) would also depend on m.

The Markovian assumption allows us to express (3.1) immediately in terms of ||P_ij||, as stated in the following theorem.

Theorem 3.1. If the one-step transition probability matrix of a Markov chain is P = ||P_ij||, then

    P^n_ij = Σ_k P^r_ik P^s_kj    (3.2)

for any fixed pair of nonnegative integers r and s satisfying r + s = n, where we define

    P^0_ij = 1 if i = j, and 0 if i ≠ j.

From the theory of matrices (see the appendix), we recognize relation (3.2) as just the formula for matrix multiplication, so that P^(n) = P^n; in other words, the numbers P^n_ij may be regarded as the entries in the matrix P^n, the nth power of P.

Proof. We carry out the argument in the case n = 2. The event of going from state i to state j in two transitions can be realized in the mutually exclusive ways of going to some intermediate state k (k = 0, 1, 2, ...) in the first transition and then going from state k to state j in the second transition. Because of the Markovian assumption the probability of the second transition is P_kj, and that of the first transition is clearly P_ik. If we use the law of total probability, Eq. (3.2) follows. The argument in the general case is identical.

If the probability of the process initially being in state k is p_k, i.e., the distribution law of X_0 is Pr{X_0 = k} = p_k, then the probability of the process being in state j at time n is
    Σ_k p_k P^n_kj = Pr{X_n = j}.    (3.3)

Besides determining the joint probability distributions of the process for all time, usually a formidable task, it is frequently of interest to find the asymptotic behavior of P^n_ij as n → ∞. One might expect that the influence of the initial state recedes in time and that consequently, as n → ∞, P^n_ij approaches a limit which is independent of i. In order to analyze precisely the asymptotic behavior of the process we need to introduce some principles of classifying the states of a Markov chain.

4: Classification of States of a Markov Chain

State j is said to be accessible from state i if for some integer n ≥ 0, P^n_ij > 0; i.e., state j is accessible from state i if there is positive probability that in a finite number of transitions state j can be reached starting from state i. Two states i and j, each accessible to the other, are said to communicate, and we write i ↔ j. If two states i and j do not communicate, then either

    P^n_ij = 0 for all n ≥ 0,  or  P^n_ji = 0 for all n ≥ 0,

or both relations are true. The concept of communication is an equivalence relation:

(i) i ↔ i (reflexivity), a consequence of the definition P^0_ii = 1;
(ii) if i ↔ j, then j ↔ i (symmetry), from the definition of communication;
(iii) if i ↔ j and j ↔ k, then i ↔ k (transitivity).

The proof of transitivity proceeds as follows: i ↔ j and j ↔ k imply that there exist integers n and m such that P^n_ij > 0 and P^m_jk > 0. Consequently, by (3.2) and the nonnegativity of each P^r_ij, we conclude that

    P^{n+m}_ik = Σ_r P^n_ir P^m_rk ≥ P^n_ij P^m_jk > 0.

A similar argument shows the existence of an integer ν such that P^ν_ki > 0, as desired.

We can now partition the totality of states into equivalence classes. The states in an equivalence class are those which communicate with each other. It may be possible, starting in one class, to enter some other class with positive probability; if so, however, it is clearly not possible to return, since otherwise the two classes would together form a single class. We say that the Markov chain is irreducible if the equivalence relation induces only one class.
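The equivalence-class decomposition just described can be carried out mechanically for a finite chain. The sketch below is our own illustration: it builds the accessibility relation (with n = 0 giving reflexivity) via Warshall's transitive closure and groups mutually accessible states, applied here to a small absorbing random walk.

```python
def communicating_classes(P):
    """Partition the states of a finite chain into communicating classes:
    i and j are in the same class iff each is accessible from the other."""
    n = len(P)
    # reach[i][j]: state j accessible from i in some number n >= 0 of steps
    reach = [[i == j or P[i][j] > 0 for j in range(n)] for i in range(n)]
    for k in range(n):                       # Warshall's transitive closure
        for i in range(n):
            for j in range(n):
                reach[i][j] = reach[i][j] or (reach[i][k] and reach[k][j])
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = {j for j in range(n) if reach[i][j] and reach[j][i]}
            classes.append(sorted(cls))
            seen |= cls
    return classes

# Random walk on {0, 1, 2, 3} with absorbing barriers at 0 and 3
P = [[1, 0, 0, 0],
     [0.5, 0, 0.5, 0],
     [0, 0.5, 0, 0.5],
     [0, 0, 0, 1]]
assert communicating_classes(P) == [[0], [1, 2], [3]]
```

From the middle class {1, 2} the barriers are accessible, but not conversely, so the chain is not irreducible.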
In other words, a process is irreducible if all states communicate with each other.

To illustrate this concept, consider the transition probability matrix

    P = | P₁   0 |
        | 0   P₂ |

where P₁ is an abbreviation for the matrix formed from the initial two rows and columns of P, and similarly for P₂. This Markov chain clearly divides into the two classes composed of states {1, 2} and states {3, 4, 5}. If the state of X_0 lies in the first class, then the state of the system thereafter remains in this class, and for all purposes the relevant transition matrix is P₁. Similarly, if the initial state belongs to the second class, then the relevant transition matrix is P₂. This is a situation where we have two completely unrelated processes labeled together.

In the random walk model with transition matrix (states 0, 1, ..., a)

        | 1  0  0  0  ...  0 |
        | q  0  p  0  ...  0 |
    P = | 0  q  0  p  ...  0 |
        | .................. |
        | 0  0  ...  0  0  1 |

we have the three classes {0}, {1, 2, ..., a − 1}, and {a}. This is an example where it is possible to reach the first class or third class from the second class, but it is not possible to return to the second class from either the first or the third class.

Direct inspection shows that the queueing Markov chain example C of Section 2 is irreducible when a_k > 0 for all k. Under the same condition we may easily verify that the inventory model (Example D) is irreducible; the success run Markov chain model (Example E) under the condition p_i > 0, q_i > 0, i = 0, 1, ..., is also irreducible.

PERIODICITY OF A MARKOV CHAIN

We define the period of state i, written d(i), to be the greatest common divisor (g.c.d.) of all integers n ≥ 1 for which P^n_ii > 0. [If P^n_ii = 0 for all n ≥ 1, define d(i) = 0.] If in the random walk [see (2.2)] all r_i = 0, then every state has period two.
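The period d(i) of a state in a small chain can be computed by examining powers of P (using P^(n) = P^n from Theorem 3.1) over a finite horizon and taking the g.c.d. of the return times observed. This is a sketch of ours, adequate for small examples only, since a finite horizon cannot prove that no further return times exist.

```python
from math import gcd

def period(P, i, horizon=50):
    """Period d(i): gcd of all n >= 1 with P^n_ii > 0, collected over a
    finite horizon (sufficient for the small illustrative chains here)."""
    n = len(P)
    d = 0
    M = [row[:] for row in P]                 # M holds P^step
    for step in range(1, horizon + 1):
        if M[i][i] > 0:
            d = gcd(d, step)
        M = [[sum(M[r][k] * P[k][c] for k in range(n)) for c in range(n)]
             for r in range(n)]
    return d

# Symmetric random walk on a 4-cycle: returns only at even times, so period 2
C = [[0, 0.5, 0, 0.5],
     [0.5, 0, 0.5, 0],
     [0, 0.5, 0, 0.5],
     [0.5, 0, 0.5, 0]]
assert all(period(C, i) == 2 for i in range(4))
```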
If for a single state i₀, P_{i₀i₀} > 0, then every state now has period one, since, regardless of the initial state j, the system can reach state i₀ and remain in this state any length of time before returning to state j.

In a finite Markov chain of n states with transition matrix

        | 0  1  0  ...  0 |
        | 0  0  1  ...  0 |
    P = | ............... |
        | 0  0  0  ...  1 |
        | 1  0  0  ...  0 |

each state has period n.

2. MARKOV CHAINS

We state, without proof, three basic properties of the period of a state. (In this connection, consult Problems 2-4 of this chapter.)

Theorem 4.1. If i ↔ j, then d(i) = d(j).

This assertion shows that the period is a constant in each class of communicating states.

Theorem 4.2. If state i has period d(i), then there exists an integer N depending on i such that for all integers n ≥ N,

    P^{n d(i)}_ii > 0.

This asserts that a return to state i can occur at all sufficiently large multiples of the period d(i).

Corollary 4.1. If P^m_ji > 0, then P^{m + n d(i)}_ji > 0 for all n (a positive integer) sufficiently large.

A Markov chain in which each state has period one is called aperiodic. The vast majority of Markov chain processes we deal with are aperiodic. Random walks usually typify the periodic cases arising in practice. Results will be developed for the aperiodic case, and the modified conclusions for the general case will be stated, usually without proof. The industrious reader can easily supply the required formal proofs.

5: Recurrence

Consider an arbitrary, but fixed, state i. We define, for each integer n ≥ 1,

    f^n_ii = Pr{X_n = i, X_ν ≠ i, ν = 1, 2, ..., n − 1 | X_0 = i}.

In other words, f^n_ii is the probability that, starting from state i, the first return to state i occurs at the nth transition. Clearly f^1_ii = P_ii, and f^n_ii may be calculated recursively according to

    P^n_ii = Σ_{k=0}^{n} f^k_ii P^{n−k}_ii,    n ≥ 1,    (5.1)

where we define f^0_ii = 0 for all i. Equation (5.1) is derived by decomposing the event from which P^n_ii is computed according to the time of the first return to state i. Indeed, consider all the possible realizations of the process for which X_0 = i, X_n = i, and the first return to state i occurs at the kth transition. Call this event E_k.
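The recursion (5.1) can be inverted: given the n-step return probabilities P^n_ii, the first-return probabilities f^n_ii are determined one at a time. A short sketch of ours, with an illustrative two-state chain P = [[1/2, 1/2], [1/2, 1/2]], for which P^n_00 = 1/2 for all n ≥ 1 and a direct path count gives f^n_00 = (1/2)^n:

```python
def first_return_probs(p_ii, nmax):
    """Invert the decomposition (5.1), P^n_ii = sum_{k=0}^{n} f^k_ii P^{n-k}_ii,
    to recover the first-return probabilities f^n_ii
    (conventions: f^0_ii = 0 and P^0_ii = 1, with p_ii[0] = 1)."""
    f = [0.0] * (nmax + 1)
    for n in range(1, nmax + 1):
        # solve (5.1) for the newest unknown f^n_ii
        f[n] = p_ii[n] - sum(f[k] * p_ii[n - k] for k in range(1, n))
    return f

p00 = [1.0] + [0.5] * 10           # P^0_00 = 1, P^n_00 = 1/2 for n >= 1
f = first_return_probs(p00, 10)
assert all(abs(f[n] - 0.5 ** n) < 1e-12 for n in range(1, 11))
```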
The events E_k (k = 1, 2, ..., n) are clearly mutually exclusive. The probability of the event that the first return is at the kth transition is by definition f^k_ii. In the remaining n − k transitions, we are dealing only with those realizations for which X_n = i. Using the Markov property, we have

    Pr{E_k} = Pr{first return is at kth transition | X_0 = i} Pr{X_n = i | X_k = i}
            = f^k_ii P^{n−k}_ii,

since by definition f^0_ii = 0.

We next introduce the related generating functions.

Definition. The generating function P_ij(s) of the sequence {P^n_ij} is

    P_ij(s) = Σ_{n=0}^∞ P^n_ij s^n    for |s| < 1.    (5.2)

In a similar manner we define the generating function of the sequence {f^n_ij} (for the definition of {f^n_ij} when i ≠ j, see immediately below Eq. (5.6)):

    F_ij(s) = Σ_{n=0}^∞ f^n_ij s^n    for |s| < 1.    (5.3)

Recall the property (see page 12 of Chapter 1) that if

    A(s) = Σ_{k=0}^∞ a_k s^k  and  B(s) = Σ_{k=0}^∞ b_k s^k,    (5.4)

then A(s)B(s) = Σ_{n=0}^∞ c_n s^n, where c_n = Σ_{k=0}^n a_k b_{n−k}.

If Σ_k a_k converges, then for any ε > 0 we can find an N(ε) such that |Σ_{k=N}^∞ a_k s^k| < ε/4 for all N ≥ N(ε) and 0 ≤ s ≤ 1. Choose such an N. Then write

    |Σ_{k=0}^∞ a_k s^k − Σ_{k=0}^∞ a_k| ≤ |Σ_{k=0}^{N} a_k (s^k − 1)| + |Σ_{k=N+1}^∞ a_k s^k| + |Σ_{k=N+1}^∞ a_k|.

Now, for 0 ≤ s < 1 the first term on the right tends to zero as s → 1, while each of the last two terms is less than ε/4.

Since i ↔ j, there exist m, w ≥ 1 such that P^m_ij > 0 and P^w_ji > 0. We obtain, by the usual argument (see page 60),

    P^{w+n+m}_jj ≥ P^w_ji P^n_ii P^m_ij,

and, on summing,

    Σ_n P^{w+n+m}_jj ≥ P^w_ji P^m_ij Σ_n P^n_ii.

Hence if Σ_n P^n_ii diverges, then Σ_n P^n_jj also diverges. This corollary proves that recurrence, like periodicity, is a class property; that is, all states in an equivalence class are either recurrent or nonrecurrent.

Remark. The expected number of returns to state i, given X_0 = i, is Σ_{n=1}^∞ P^n_ii. Thus Theorem 5.1 states that a state is recurrent if and only if the expected number of returns is infinite.

6: Examples of Recurrent Markov Chains

Example 1. Consider first the one-dimensional random walk on the positive and negative integers, where at each transition the particle moves with probability p one unit to the right and with probability q one unit to the left (p + q = 1). Hence

    P^{2n+1}_00 = 0  and  P^{2n}_00 = C(2n, n) p^n q^n,    n = 0, 1, 2, ....    (6.1)

We appeal now to Stirling's formula,

    n! ~ n^{n+1/2} e^{−n} √(2π).    (6.2)
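The behavior of the return probabilities (6.1) can be watched numerically. In the sketch below (ours, with illustrative cutoffs), `p2n` computes the exact value C(2n, n)(pq)^n factor by factor, to avoid forming the enormous binomial coefficient; the Stirling asymptotic is checked at one point, and the partial sums are seen to grow without bound only in the symmetric case p = 1/2.

```python
from math import pi, sqrt

def p2n(p, n):
    """Exact 2n-step return probability C(2n, n) (pq)^n for the simple walk,
    computed multiplicatively: C(2n, n) = prod_k (2k-1)(2k)/k^2."""
    pq = p * (1 - p)
    v = 1.0
    for k in range(1, n + 1):
        v *= (2 * k - 1) * (2 * k) / (k * k) * pq
    return v

# Stirling's formula (6.2) gives P^{2n}_00 ~ (4pq)^n / sqrt(pi n); at p = 1/2:
n = 200
assert abs(p2n(0.5, n) - 1 / sqrt(pi * n)) / p2n(0.5, n) < 0.01

# Partial sums of P^{2n}_00: slow unbounded growth at p = 1/2 (like sum 1/sqrt(k)),
# but a convergent, geometrically damped series at p = 0.4 (since 4pq = 0.96 < 1)
s_half = sum(p2n(0.5, k) for k in range(1, 2001))
s_bias = sum(p2n(0.4, k) for k in range(1, 2001))
assert s_half > 24 and s_bias < 6
```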
Applying (6.2) to (6.1) we obtain

    P^{2n}_00 = (2n)! / (n! n!) p^n q^n ~ (4pq)^n / √(πn).

It is readily verified that p(1 − p) = pq ≤ 1/4, with equality holding if and only if p = q = 1/2. Hence Σ_n P^{2n}_00 = ∞ if and only if p = 1/2. Thus, from Theorem 5.1, the one-dimensional random walk is recurrent if and only if p = q = 1/2. (Remember that recurrence is a class property.) Intuitively, if p ≠ q there is positive probability that a particle starting at the origin will drift to +∞ if p > q (to −∞ if p < q) without ever returning to the origin.

For the chain in which the process passes from state n to state n + 1 with probability 1 − p_n and returns to state 0 with probability p_n, the first-return probabilities to state 0 are

    f^1_00 = p_0,  f^2_00 = (1 − p_0)p_1,  ...,  f^n_00 = (1 − p_0)(1 − p_1) ⋯ (1 − p_{n−2})p_{n−1},  n ≥ 2.    (6.19)

Let

    u_n = Π_{i=0}^{n} (1 − p_i).

Then if we sum the f^n_00's we have

    Σ_{n=1}^{m} f^n_00 = p_0 + (1 − p_0)p_1 + ⋯ + (1 − p_0) ⋯ (1 − p_{m−2})p_{m−1} = 1 − u_{m−1}.

To complete our argument we need the following:

Lemma 6.1. If 0 ≤ p_i < 1, then u_n = Π_{i=0}^{n} (1 − p_i) → 0 as n → ∞ if and only if Σ_i p_i = ∞.

Proof. Since 1 − x ≤ e^{−x}, we have u_n ≤ exp(−Σ_{i=0}^{n} p_i), so that Σ p_i = ∞ implies u_n → 0. Conversely, if Σ p_i < ∞, choose N so that Σ_{i=N}^∞ p_i < 1/2. Then for n > N,

    Π_{i=N}^{n} (1 − p_i) ≥ 1 − Σ_{i=N}^{n} p_i > 1/2,

so that lim_{n→∞} u_n ≥ u_{N−1}/2 > 0, which contradicts u_n → 0.

Returning to (6.18) and applying Lemma 6.1, we deduce that Σ_n f^n_00 = 1 if and only if Σ_i p_i = ∞; that is, state 0 is recurrent if and only if the sum of the p_i's diverges.

We insert parenthetically the remark that, given any set {a_i} with a_i ≥ 0 and Σ_i a_i ≤ 1, we can exhibit a set of p_i's for which f^n_00 = a_n in the Markov chain discussed above. We let

    f^1_00 = p_0 = a_1,  f^2_00 = (1 − p_0)p_1 = a_2,

and then determine p_1 = a_2/(1 − a_1); then

    f^3_00 = (1 − p_0)(1 − p_1)p_2 = a_3

determines p_2 = a_3/(1 − a_1 − a_2). Proceeding in this manner we can derive an explicit set of p_i's satisfying 0 ≤ p_i < 1.

7: More on Recurrence

The next theorem shows that if recurrence is certain for a specified state, then the state will be occupied infinitely often with probability 1. We define

    Q_ii = Pr{particle starting in state i returns infinitely often to state i}.

Theorem 7.1. State i is recurrent or transient according to whether Q_ii = 1 or Q_ii = 0, respectively.

Proof. Let Q^N_ii be defined as

    Q^N_ii = Pr{particle starting in state i returns to state i at least N times}.

We can write

    Q^N_ii = Σ_{k=1}^∞ f^k_ii Q^{N−1}_ii = f_ii Q^{N−1}_ii,

where f_ii = Σ_{k=1}^∞ f^k_ii. The validity of this formula rests on decomposing the event of Q^N_ii according to the time of the first return. Proceeding recursively, we obtain

    Q^N_ii = f_ii Q^{N−1}_ii = (f_ii)² Q^{N−2}_ii = ⋯ = (f_ii)^{N−1} Q^1_ii.

But Q^1_ii
= f_ii by definition. Hence Q^N_ii = (f_ii)^N. Since lim_{N→∞} Q^N_ii = Q_ii, we have Q_ii = 1 or 0 according to whether f_ii = 1 or f_ii < 1, respectively, or equivalently, according to whether state i is recurrent or transient.

Theorem 7.2. If i ↔ j and the class is recurrent, then

    f_ij = Σ_{n=1}^∞ f^n_ij = 1.

We omit the simple proof.

We define the symbol Q_ij to be Pr{particle starting in state i visits state j infinitely often}. An immediate consequence of Theorem 7.2 is

Corollary 7.1. If i ↔ j and the class is recurrent, then Q_ij = 1.

Proof. It is easy to see that Q_ij = f_ij Q_jj. Since j is a recurrent state, by Theorem 7.1, Q_jj = 1. By Theorem 7.2, f_ij = 1; hence Q_ij = 1.

Elementary Problems

1. Determine the transition matrix ||P_ij|| for the following Markov chains:

(a) Consider a sequence of tosses of a coin with the probability of "heads" p. At time n (after n tosses of the coin) the state of the process is the number of heads in the n tosses minus the number of tails.

(b) N black balls and N white balls are placed in two urns so that each urn contains N balls. At each step one ball is selected at random from each urn and the two balls interchange. The state of the system is the number of white balls in the first urn.

(c) A white rat is put into the maze shown (figure: a maze of numbered compartments). The rat moves through the compartments at random, i.e., if there are k ways to leave a compartment, he chooses each of these with probability 1/k. He makes one change of compartment at each instant of time. The state of the system is the number of the compartment the rat occupies.

Solution: (a) With the state i equal to the number of heads minus the number of tails,

    P_ij = p if j = i + 1,  P_ij = q = 1 − p if j = i − 1,  P_ij = 0 otherwise.

(b) With the state j equal to the number of white balls in the first urn after n interchanges,

    P_{j,j−1} = (j/N)²,  P_{j,j+1} = ((N − j)/N)²,  P_{j,j} = 2j(N − j)/N²,

and P_{j,k} = 0 otherwise.

2. (a) Consider two urns A and B containing a total of N balls. An experiment is performed in which a ball is selected at random (all selections equally likely) at time t (t = 1, 2, ...) from among the totality of N balls.
Then an urn is selected at random (A is chosen with probability p and B is chosen with probability q) and the ball previously drawn is placed in this urn. The state of the system at each trial is represented by the number of balls in A. Determine the transition matrix for this Markov chain.

(b) Assume that at time t there are exactly k balls in A. At time t + 1 an urn is selected at random in proportion to its contents (i.e., A is chosen with probability k/N and B is chosen with probability (N − k)/N). Then a ball is selected from A with probability p or from B with probability q and placed in the previously chosen urn. Determine the transition matrix for this Markov chain.

(c) Now assume that at time t + 1 a ball and an urn are chosen with probability depending on the contents of the urn (i.e., a ball is chosen from A with probability k/N or from B with probability (N − k)/N; urn A is chosen with probability k/N or urn B is chosen with probability (N − k)/N). Determine the transition matrix of the Markov chain with states represented by the contents of A.

(d) Determine the equivalence classes in (a), (b), and (c).

Solution: (a) For 0 ≤ k ≤ N,

    P_{k,k+1} = p(N − k)/N,  P_{k,k−1} = qk/N,  P_{k,k} = pk/N + q(N − k)/N,

and P_{k,j} = 0 otherwise. One equivalence class: {0, 1, 2, ..., N}.

(b) For 0 < k < N,

    P_{k,k+1} = qk/N,  P_{k,k−1} = p(N − k)/N,  P_{k,k} = pk/N + q(N − k)/N;

P_{k,k} = 1 if k = 0 or k = N, and P_{k,j} = 0 otherwise. Equivalence classes are {0}, {N}, {1, 2, ..., N − 1}.

(c) P_{k,k+1} = P_{k,k−1} = k(N − k)/N²,  P_{k,k} = [k² + (N − k)²]/N², and P_{k,j} = 0 otherwise. Equivalence classes are {0}, {1, 2, ..., N − 1}, {N}.

3. (a) A psychological subject can make one of two responses A₁ and A₂. Associated with these responses are a set of N stimuli {S₁, S₂, ..., S_N}. Each stimulus is conditioned to one of the responses. A single stimulus is sampled at random (all possibilities equally likely) and the subject responds according to the stimulus sampled. Reinforcement occurs at each trial with probability
