
START: Selected Topics in Assurance Related Technologies

Volume 10, Number 2

The Applicability of Markov Analysis Methods to Reliability, Maintainability, and Safety

Table of Contents

Introduction
Who was Markov, and What is a Markov Analysis?
Markov Chains, Markov Process, and semi-Markov Process
Markov Models
Application of Coverage to Markov Modeling
International Standards and Markov Analysis
Analysis of Computerized Markov Tools
Markov Analysis Examples
Application of Markov Analysis Methods: the Pros and Cons
An Example of the Development of a Markov Model
Markov Model Reduction Techniques
Summary
References
About the Author
Other START Sheets Available

Introduction

For many years, Markov models and Markov analysis methods were relegated to that list of exotic but rarely used stochastic modeling techniques, at least for reliability and maintainability purposes. The promulgation of IEC standard 61508, Functional Safety of Electrical/Electronic/Programmable Electronic Safety-Related Systems, has significantly revitalized Markov analysis by requiring the analysis of various disparate failure modes from a safety perspective. The methods are also receiving more attention because today's software tools make computationally complex Markov analyses easier to perform than in the past.

What is stochastic modeling [1]? A quantitative description of a natural phenomenon is called a mathematical model of that phenomenon. A deterministic model predicts a single outcome from a given set of circumstances; a stochastic model predicts a set of possible outcomes weighted by their likelihoods or probabilities. The word "stochastic" derives from Greek ("to aim" or "to guess") and means random or chance; "sure," "deterministic," and "certain" are its antonyms.

The observer chooses to model a phenomenon as stochastic or deterministic. The choice depends on the observer's purpose; the criterion for judging this choice is always the model's usefulness for the intended purpose. To be useful, a stochastic model must reflect all those aspects of the phenomenon under study that are relevant to the question at hand. In addition, the model must allow the deduction of important predictions or implications about the phenomenon. In reliability, maintainability, and safety (RMS) engineering, stochastic modeling is used to describe a system's operation with respect to time; the component failure and repair times typically become the random variables.

Who was Markov, and What is a Markov Analysis?

Andrei A. Markov graduated from Saint Petersburg University in 1878 and subsequently became a professor there. His early work dealt mainly in number theory and analysis: continued fractions, limits of integrals, approximation theory, and the convergence of series. He later applied the method of continued fractions to probability theory. Markov is particularly remembered for his study of Markov chains: sequences of random variables in which the future variable is determined by the present variable but is independent of the way in which the present state arose from its predecessors. This work launched the theory of stochastic processes.

Markov analysis looks at a sequence of events and analyzes the tendency of one event to be followed by another. Using this analysis, we can generate a new sequence of random but related events that appears similar to the original. The Markov model assumes that the future is independent of the past, given the present. The random variable is indexed in time, which can be either discrete or continuous; such models are to be judged only on their usefulness for the intended purpose.

Many random events are affected by what has happened before. For example, today's weather does have an influence on what tomorrow's weather will be; the two are not totally independent events.
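The "tendency of one event to be followed by another" can be made concrete by counting consecutive pairs in an observed sequence. Below is a minimal sketch (the function name and the sample data are illustrative, not from the original sheet) that estimates a transition-probability table from a run of rainy and sunny days:

```python
from collections import Counter, defaultdict

def estimate_transitions(sequence):
    """Estimate P(next event | current event) by counting consecutive pairs."""
    pair_counts = defaultdict(Counter)
    for current, nxt in zip(sequence, sequence[1:]):
        pair_counts[current][nxt] += 1
    return {
        state: {nxt: n / sum(counts.values()) for nxt, n in counts.items()}
        for state, counts in pair_counts.items()
    }

observed = ["Sunny", "Sunny", "Rainy", "Rainy", "Rainy", "Rainy",
            "Sunny", "Rainy", "Rainy", "Sunny", "Sunny"]
probs = estimate_transitions(observed)
# In this sample, a rainy day is followed by more rain 4 times out of 6.
```

Each row of the resulting table sums to 1, so it can be used directly as the transition-probability matrix of a discrete-time Markov chain.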

A publication of the Reliability Analysis Center


Markov Chains, Markov Process, and semi-Markov Process

There are two basic Markov analysis methods: the Markov Chain and the Markov Process. The Markov Chain assumes discrete states and a discrete time parameter; with the Markov Process, states are discrete and time is continuous. The basic assumption of a Markov process is that the behavior of the system in each state is memoryless [2].

A memoryless system is characterized by the fact that the future state of the system depends only on its present state. A stationary system is one in which the probabilities that govern the transitions from state to state remain constant with time; in other words, the probability of transitioning from some state i to another state j is the same regardless of the point in time at which the transition occurs. A Markov Process is completely characterized by its transition probability matrix.

In RMS applications, the states of the model are defined by system element failures, and the transitional probabilities between states are a function of the failure rates of the various system elements. A set of first-order differential equations is developed by describing the probability of being in each state in terms of the transitional probabilities from and to each state; the number of first-order differential equations will equal the number of states of the model.

After observing a long sequence of rainy and sunny days, a Markov model could be used to analyze the likelihood that one kind of weather is followed by another. Assume that 25% of the time a sunny day follows a rainy day, and 75% of the time rain is followed by more rain. Also assume that sunny days are followed 50% of the time by rain and 50% of the time by sun.

Using this analysis, one could generate a new sequence of statistically similar weather by following these steps:

1. Start with today's weather.
2. Given today's weather, choose a random number to pick tomorrow's weather.
3. Make tomorrow's weather today's weather and go back to step 2.

The result is a particular sequence of days such as:

Sunny Sunny Rainy Rainy Rainy Rainy Sunny Rainy Rainy Sunny Sunny...
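The three-step generation procedure above can be sketched directly in code. This is a minimal illustration (the function names are mine, not from the sheet) using the transition probabilities stated in the example:

```python
import random

# Transition probabilities from the example: after a rainy day,
# 75% rain and 25% sun; after a sunny day, 50% each.
TRANSITIONS = {
    "Rainy": [("Rainy", 0.75), ("Sunny", 0.25)],
    "Sunny": [("Rainy", 0.50), ("Sunny", 0.50)],
}

def next_day(today, rng):
    """Step 2: use a random number to pick tomorrow's weather."""
    r = rng.random()
    cumulative = 0.0
    for state, p in TRANSITIONS[today]:
        cumulative += p
        if r < cumulative:
            return state
    return state  # guard against floating-point round-off

def generate(days, start="Sunny", seed=None):
    """Steps 1 and 3: start with today's weather, then iterate."""
    rng = random.Random(seed)
    chain = [start]
    for _ in range(days - 1):
        chain.append(next_day(chain[-1], rng))
    return chain

print(" ".join(generate(11, seed=1)))
```

Over a long run the generated chain spends about two-thirds of its days in the Rainy state, which is the steady-state probability implied by the stated transition percentages.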
In other words, the output chain reflects, statistically, the transition probabilities derived from the weather that we observed. This stream of events is called a Markov Chain. A Markov Chain, while similar to the source in the micro, is often nonsensical in the macro. This would be a poor way to predict weather, because the overall shape of the model bears little formal resemblance to the overall form of the source. However, the steady-state (long-run) probabilities of any day being in a specific state (e.g., rainy or sunny) are the useful and practical result.

For the system model, the mathematical problem then becomes one of solving the following equation:

    dP/dt = [A]P

where dP/dt and P are n x 1 column vectors, [A] is an n x n matrix, and n is the number of states in the system. The solution of this equation is:

    P(t) = exp([A]t) P(0)

where exp([A]t) is an n x n matrix and P(0) is the initial probability vector describing the initial state of the system. Two methods that are particularly well suited to the digital computer for computing the matrix exp([A]t) are the infinite series method and the eigenvalue/eigenvector method. Figure 1 presents a flow chart that illustrates the procedure used to develop a Markov model.
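The infinite series method mentioned above can be sketched with nothing but the standard library: exp([A]t) is approximated by the truncated Taylor series, the sum of ([A]t)^k / k!. The two-state matrix and its rates below are hypothetical, chosen only to make the example concrete:

```python
def mat_mul(x, y):
    """Multiply two square matrices stored as lists of rows."""
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(a, t, terms=60):
    """Approximate exp(a*t) by the truncated infinite (Taylor) series."""
    n = len(a)
    at = [[entry * t for entry in row] for row in a]
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[entry / k for entry in row] for row in mat_mul(term, at)]
        result = [[r + s for r, s in zip(rr, tr)]
                  for rr, tr in zip(result, term)]
    return result

# Hypothetical 2-state system: state 0 = working, state 1 = failed, with
# failure rate 0.01/h and repair rate 0.10/h. Entry [i][j] is the rate of
# flow into state i from state j, so each column sums to zero and
# dP/dt = [A]P holds for the column vector P.
A = [[-0.01, 0.10],
     [ 0.01, -0.10]]
P0 = [1.0, 0.0]  # start with the system working
E = mat_exp(A, t=50.0)
P = [sum(E[i][j] * P0[j] for j in range(2)) for i in range(2)]
# P[0] is the probability of still being in the working state at t = 50 h.
```

For matrices this small the series converges quickly; for large or stiff [A] matrices, production tools use more robust algorithms (e.g., scaling and squaring), which is one reason dedicated software tools are preferred in practice.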

A Markov chain may be described as Homogeneous or Non-Homogeneous. A Homogeneous Markov Chain is characterized by constant transition rates between the states. A Non-Homogeneous Markov Chain is characterized by the fact that the transition rates between the states are functions of a global clock, e.g., elapsed mission time.

Markov models are frequently used in RMS work where events, such as the failure or repair of a module, can occur at any point in time. The Markov model evaluates the probability of jumping from one known state into the next logical state (i.e., from everything working to the first item failed, from the first item failed to the second item's failed state, and so on) until, depending upon the configuration of the system being considered, the system has reached the final or totally failed state. Repairs can be addressed by using repair rates that account for the return from any given failed state to the preceding working state. All of this results in a complex diagram of bubbles, representing each state, and directed lines with arrows showing the movement from one state to the next or back to a preceding state.

Figure 2 shows a simple state transition (or "bubble") diagram with two states (A = operational, B = failed), a failure rate λ, and a repair rate µ (for the homogeneous case; for the non-homogeneous case these become λ(t) and µ(t)). Movements from left to right indicate failure, and movements from right to left indicate recovery.

As the Markov Diagram is drawn (see the subsequent example), the failure rate values and the repair rate numbers can be entered into an n x n matrix (where n is the number of states being considered), commonly called the transition matrix.
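For the two-state diagram just described, the transition matrix and its steady-state solution can be written out in a few lines. The rates in this sketch are hypothetical; the closed form µ/(λ + µ) for steady-state availability is the standard result for a two-state working/failed model:

```python
# Two-state Markov model: state A = operational, state B = failed, with
# failure rate lam (A -> B) and repair rate mu (B -> A). Rates are
# hypothetical, per hour.
lam, mu = 0.002, 0.05

# Transition-rate matrix for dP/dt = [A]P with P = [P_A, P_B] as a column
# vector; entry [i][j] is the rate of flow into state i from state j.
A_matrix = [
    [-lam,  mu],
    [ lam, -mu],
]

# Setting dP/dt = 0 gives the flow-balance equation lam * P_A = mu * P_B,
# which together with P_A + P_B = 1 yields the closed-form steady state.
p_operational = mu / (lam + mu)   # steady-state availability
p_failed = lam / (lam + mu)       # steady-state unavailability
print(f"steady-state availability = {p_operational:.4f}")
# -> steady-state availability = 0.9615
```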
