
SECOND REVIEW REPORT

MONTE CARLO SIMULATION

Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results. Monte Carlo methods are often used in simulating physical and mathematical systems. Because of their reliance on repeated computation of random or pseudo-random numbers, these methods are most suited to calculation by a computer and tend to be used when it is infeasible or impossible to compute an exact result with a deterministic algorithm.

Monte Carlo simulation methods are especially useful in studying systems with a large number of coupled degrees of freedom, such as fluids, disordered materials, strongly coupled solids, and cellular structures (see cellular Potts model). More broadly, Monte Carlo methods are useful for modeling phenomena with significant uncertainty in inputs, such as the calculation of risk in business. These methods are also widely used in mathematics: a classic use is the evaluation of definite integrals, particularly multidimensional integrals with complicated boundary conditions. Monte Carlo simulation is a widely successful method in risk analysis when compared with alternative methods or human intuition. When Monte Carlo simulations have been applied in space exploration and oil exploration, actual observations of failures, cost overruns and schedule overruns are routinely better predicted by the simulations than by human intuition or alternative "soft" methods.

There is no single Monte Carlo method; instead, the term describes a large and widely used class of approaches. However, these approaches tend to follow a particular pattern:
1. Define a domain of possible inputs.
2. Generate inputs randomly from the domain using a certain specified probability distribution.
3. Perform a deterministic computation using the inputs.
4. Aggregate the results of the individual computations into the final result.
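As a small illustration of the definite-integral use and of the four-step pattern above, the following MATLAB sketch estimates an integral by uniform random sampling (the integrand, limits, and sample count are arbitrary choices made for this illustration, not values taken from this report):

% Monte Carlo estimate of a definite integral, following the four steps above.
N = 100000;                    % number of random samples
a = 0; b = pi;                 % Step 1: the domain of possible inputs is [a, b]
f = @(x) sin(x).^2;            % integrand chosen only for illustration
x = a + (b - a)*rand(N, 1);    % Step 2: generate inputs uniformly at random
fx = f(x);                     % Step 3: deterministic computation on each input
I_mc = (b - a)*mean(fx);       % Step 4: aggregate into the final estimate
fprintf('MC estimate %.4f, exact value %.4f\n', I_mc, pi/2);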

For example, the value of π can be approximated using a Monte Carlo method:
1. Draw a square on the ground, then inscribe a circle within it.
2. Uniformly scatter some objects of uniform size, for example grains of rice or sand, throughout the square.
3. Count the number of objects inside the circle and divide by the total number of objects in the square. From plane geometry, the ratio of the area of an inscribed circle to that of the surrounding square is π/4, and uniformly scattered objects should fall in the two areas in approximately the same ratio as the areas themselves, so this count ratio yields an approximation for π/4. Multiplying the result by 4 will then yield an approximation for π itself.

Notice how the π approximation follows the general pattern of Monte Carlo algorithms. First, we define a domain of inputs: in this case, it is the square which circumscribes our circle. Next, we generate inputs randomly (scatter individual grains within the square), then perform a computation on each input (test whether it falls within the circle). At the end, we aggregate the results into our final result, the approximation of π.

Note, also, two other common properties of Monte Carlo methods: the computation's reliance on good random numbers, and its slow convergence to a better approximation as more data points are sampled. If grains are purposefully dropped into only, for example, the center of the circle, they will not be uniformly distributed, and so our approximation will be poor. An approximation will also be poor if only a few grains are randomly dropped into the whole square. Thus, the approximation of π will become more accurate both as the grains are dropped more uniformly and as more are dropped.
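A minimal MATLAB sketch of this experiment, with the grains replaced by pseudo-random points and an assumed sample size:

% Approximating pi by scattering random points over the square [-1,1] x [-1,1].
N = 100000;                         % number of grains dropped (assumed)
x = 2*rand(N, 1) - 1;               % random point coordinates inside the square
y = 2*rand(N, 1) - 1;
inside = (x.^2 + y.^2) <= 1;        % test: does the grain fall inside the circle?
pi_est = 4*sum(inside)/N;           % the count ratio estimates pi/4, so multiply by 4
fprintf('pi estimate with %d grains: %.4f\n', N, pi_est);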

What is Monte Carlo Simulation?

The Monte Carlo method was invented by scientists working on the atomic bomb in the 1940s, who named it for the city in Monaco famed for its casinos and games of chance. Its core idea is to use random samples of parameters or inputs to explore the behavior of a complex system or process. The scientists faced physics problems, such as models of neutron diffusion, that were too complex for an analytical solution, so they had to be evaluated numerically. They had access to one of the earliest computers, MANIAC, but their models involved so many dimensions that exhaustive numerical evaluation was prohibitively slow. Monte Carlo simulation proved to be surprisingly effective at finding solutions to these problems. Since that time, Monte Carlo methods have been applied to an incredibly diverse range of problems in science, engineering, and finance, and to business applications in virtually every industry.

Why Should I Use Monte Carlo Simulation?

Most business activities, plans and processes are too complex for an analytical solution, just like the physics problems of the 1940s. But you can build a spreadsheet model that lets you evaluate your plan numerically: you can change numbers, ask 'what if' and see the results. This is straightforward if you have just one or two parameters to explore. But many business situations involve uncertainty in many dimensions, for example variable market demand, unknown plans of competitors, uncertainty in costs, and many others, just like the physics problems in the 1940s. If your situation sounds like this, you may find that the Monte Carlo method is surprisingly effective for you as well.

EXAMPLE: Computer simulation has to do with using computer models to imitate real life or make predictions. When you create a model with a spreadsheet like Excel, you have a certain number of input parameters and a few equations that use those inputs to give you a set of outputs (or response variables). This type of model is usually deterministic, meaning that you get the same results no matter how many times you re-calculate. [Example 1: A Deterministic Model for Compound Interest]
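Example 1 itself is not reproduced in this extract; a deterministic compound-interest model of the kind it refers to might look like the following MATLAB sketch, where the principal, rate, and horizon are hypothetical values:

% A deterministic model: the same inputs always produce the same output.
P = 1000;               % initial principal (hypothetical)
r = 0.05;               % annual interest rate (hypothetical)
Y = 10;                 % number of years (hypothetical)
F = P*(1 + r)^Y;        % future value under annual compounding
fprintf('Future value after %d years: %.2f\n', Y, F);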

Monte Carlo simulation is a method for iteratively evaluating a deterministic model using sets of random numbers as inputs. This method is often used when the model is complex, nonlinear, or involves more than just a couple of uncertain parameters. A simulation can typically involve over 10,000 evaluations of the model, a task which in the past was only practical using super computers.

Figure 1: A parametric deterministic model maps a set of input variables to a set of output variables.

Before describing the steps of the general MC simulation in detail, a little word about uncertainty propagation: the Monte Carlo method is just one of many methods for analyzing uncertainty propagation, where the goal is to determine how random variation, lack of knowledge, or error affects the sensitivity, performance, or reliability of the system that is being modeled. Monte Carlo simulation is categorized as a sampling method because the inputs are randomly generated from probability distributions to simulate the process of sampling from an actual population. So, we try to choose a distribution for the inputs that most closely matches data we already have, or best represents our current state of knowledge. The data generated from the simulation can be represented as probability distributions (or histograms) or converted to error bars, reliability predictions, tolerance zones, and confidence intervals (see Figure 2).

By using random inputs, you are essentially turning the deterministic model into a stochastic model. Example 2 demonstrates this concept with a very simple problem; there, we used simple uniform random numbers as the inputs to the model. However, a uniform distribution is not the only way to represent uncertainty.

Figure 2: Schematic showing the principle of stochastic uncertainty propagation (the basic principle behind Monte Carlo simulation).

If you have made it this far, congratulations! Now for the fun part! The steps in Monte Carlo simulation corresponding to the uncertainty propagation shown in Figure 2 are fairly simple, and can be easily implemented in Excel for simple models. All we need to do is follow the five simple steps listed below (a worked sketch follows the list):

Step 1: Create a parametric model, y = f(x1, x2, ..., xq).
Step 2: Generate a set of random inputs, xi1, xi2, ..., xiq.
Step 3: Evaluate the model and store the results as yi.
Step 4: Repeat steps 2 and 3 for i = 1 to n.
Step 5: Analyze the results using histograms, summary statistics, confidence intervals, etc.
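As a worked sketch of the five steps (an illustration assumed for this report, not the original Example 2), the deterministic compound-interest model above can be made stochastic by treating the interest rate as an uncertain input; the chosen distribution and its parameters are assumptions made only for the illustration:

% Steps 1-5 applied to the compound-interest model with an uncertain rate.
P = 1000; Y = 10;
model = @(r) P*(1 + r).^Y;              % Step 1: parametric model y = f(r)

n = 10000;                              % number of Monte Carlo iterations
r = 0.03 + 0.04*rand(n, 1);             % Step 2: random inputs, uniform on [0.03, 0.07]
y = model(r);                           % Steps 3 and 4: evaluate and store y1..yn

% Step 5: analyze the results.
fprintf('mean %.2f, standard deviation %.2f\n', mean(y), std(y));
ys = sort(y);                           % empirical percentiles of the outcomes
fprintf('middle 95%% of outcomes: [%.2f, %.2f]\n', ...
        ys(round(0.025*n)), ys(round(0.975*n)));
histogram(y);                           % distribution of simulated future values
xlabel('Future value'); ylabel('Count');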

Some Advantages of Simulation
• Often the only type of model possible for complex systems
  – Analytical models frequently infeasible
• Process of building simulation can clarify understanding of real system
  – Sometimes more useful than actual application of final simulation
• Allows for sensitivity analysis and optimization of real system without need to operate real system
• Can maintain better control over experimental conditions than real system
• Time compression/expansion: can evaluate system on slower or faster time scale than real system

Some Disadvantages of Simulation
• May be very expensive and time consuming to build simulation
• Easy to misuse simulation by “stretching” it beyond the limits of credibility
  – Problem especially apparent when using commercial simulation packages due to ease of use and lack of familiarity with underlying assumptions and restrictions
  – Slick graphics, animation, tables, etc. may tempt user to assign unwarranted credibility to output
• Monte Carlo simulation usually requires several (perhaps many) runs at given input values
  – Contrast: analytical solution provides exact values

WHEN TO USE THE SEMIANALYTIC TECHNIQUE

The semianalytic technique works well for certain types of communication systems, but not for others. The semianalytic technique is applicable if a system has all of these characteristics:
• Any effects of multipath fading, quantization, and amplifier nonlinearities must precede the effects of noise in the actual channel being modeled.
• The receiver is perfectly synchronized with the carrier, and timing jitter is negligible. Because phase noise and timing jitter are slow processes, they reduce the applicability of the semianalytic technique to a communication system.
• The noiseless simulation has no errors in the received signal constellation. Distortions from sources other than noise should be mild enough to keep each signal point in its correct decision region. If this is not the case, then the semianalytic technique is not suitable to predict system performance. For instance, if the modeled system has a phase rotation that places the received signal points outside their proper decision regions, then the calculated BER will be too low.

PROCEDURE FOR THE SEMIANALYTIC TECHNIQUE

The procedure below describes how you would typically implement the semianalytic technique using the semianalytic function:

Generate a message signal containing at least M^L symbols, where M is the alphabet size of the modulation and L is the length of the impulse response of the channel, in symbols. A common approach is to start with an augmented binary pseudo noise (PN) sequence of total length (log2 M)M^L. An augmented PN sequence is a PN sequence with an extra zero appended, which makes the distribution of ones and zeros equal.

Modulate a carrier with the message signal using baseband modulation. Supported modulation types are listed on the reference page for semianalytic. Shape the resultant signal with rectangular pulse shaping, using the oversampling factor that you will later use to filter the modulated signal.

Filter the modulated signal with a transmit filter. This filter is often a square-root raised cosine filter, but you can also use a Butterworth, Bessel, Chebyshev type 1 or 2, elliptic, or more general FIR or IIR filter. If you use a square-root raised cosine filter, use it on the non-oversampled modulated signal and specify the oversampling factor in the filtering function. If you use another filter type, you can apply it to the rectangular pulse shaped signal. Store the result of this step as txsig for later use.
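The transmit-side steps above might be sketched in MATLAB roughly as follows. This is an assumed illustration rather than this report's actual code: it uses the Communications Toolbox helpers pskmod and rectpulse, substitutes an ordinary random symbol stream for the augmented PN sequence, applies a Butterworth transmit filter to the rectangular pulse shaped signal (as the text allows), and picks arbitrary parameter values:

% Assumed sketch of the transmit side of the semianalytic procedure.
M = 4;                              % alphabet size of the modulation
L = 1;                              % channel memory, in symbols
Nsamp = 16;                         % oversampling factor
numSym = 1000;                      % number of symbols (>= M^L, as required)

msg    = randi([0 M-1], numSym, 1); % message symbols (stand-in for an augmented PN sequence)
modsig = pskmod(msg, M);            % baseband PSK modulation
pulsed = rectpulse(modsig, Nsamp);  % rectangular pulse shaping at Nsamp samples per symbol

[num, den] = butter(3, 0.25);       % example transmit filter (Butterworth)
txsig = filter(num, den, pulsed);   % store the filtered signal as txsig for later use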

Run the filtered signal through a noiseless channel. This channel can include multipath fading effects, phase shifts, amplifier nonlinearities, quantization, and additional filtering, but it must not include noise. Store the result of this step as rxsig for later use.

Invoke the semianalytic function using the txsig and rxsig data from earlier steps. Specify a receive filter as a pair of input arguments, unless you want to use the function's default filter. The function filters rxsig and then determines the error probability of each received signal point by analytically applying the Gaussian noise distribution to each point. The function averages the error probabilities over the entire received signal to determine the overall error probability. If the error probability calculated in this way is a symbol error probability, then the function converts it to a bit error rate, typically by assuming Gray coding. The function returns the bit error rate (or, in the case of DQPSK modulation, an upper bound on the bit error rate).
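To make the averaging idea concrete, the sketch below works the principle by hand for BPSK only; it is an assumed illustration of the calculation the text describes, not the toolbox implementation, and the distorted noiseless points are synthetic stand-ins for a real rxsig:

% Semianalytic idea for BPSK: assign each noiseless received point an analytical
% error probability under Gaussian noise, then average over all points.
bits  = randi([0 1], 1000, 1);                    % transmitted bits (synthetic)
rxsig = (2*bits - 1).*(0.9 + 0.1*rand(1000, 1));  % noiseless points with mild
                                                  % deterministic distortion (synthetic)
EbNo_dB = 0:2:10;                                 % Eb/N0 values to evaluate
ber = zeros(size(EbNo_dB));
for k = 1:numel(EbNo_dB)
    EbNo  = 10^(EbNo_dB(k)/10);
    sigma = sqrt(1/(2*EbNo));             % noise std dev per dimension, with Eb = 1
    d  = real(rxsig).*(2*bits - 1);       % signed distance of each point from the
                                          % decision boundary Re = 0
    Pe = 0.5*erfc((d/sigma)/sqrt(2));     % per-point error probability Q(d/sigma)
    ber(k) = mean(Pe);                    % average over the entire received signal
end
disp([EbNo_dB(:) ber(:)]);                % Eb/N0 (dB) versus predicted BER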

PHASE-SHIFT KEYING

Phase-shift keying (PSK) is a digital modulation scheme that conveys data by changing, or modulating, the phase of a reference signal (the carrier wave). Any digital modulation scheme uses a finite number of distinct signals to represent digital data. PSK uses a finite number of phases, each assigned a unique pattern of binary bits. Usually, each phase encodes an equal number of bits. Each pattern of bits forms the symbol that is represented by the particular phase. The demodulator, which is designed specifically for the symbol-set used by the modulator, determines the phase of the received signal and maps it back to the symbol it represents, thus recovering the original data. This requires the receiver to be able to compare the phase of the received signal to a reference signal; such a system is termed coherent (and referred to as CPSK).

Alternatively, instead of using the bit patterns to set the phase of the wave, they can be used to change it by a specified amount. The demodulator then determines the changes in the phase of the received signal rather than the phase itself. Since this scheme depends on the difference between successive phases, it is termed differential phase-shift keying (DPSK). DPSK can be significantly simpler to implement than ordinary PSK, since there is no need for the demodulator to have a copy of the reference signal to determine the exact phase of the received signal (it is a non-coherent scheme). In exchange, it produces more erroneous demodulations.

There are three major classes of digital modulation techniques used for transmission of digitally represented data:
• Amplitude-shift keying (ASK)
• Frequency-shift keying (FSK)
• Phase-shift keying (PSK)
All convey data by changing some aspect of a base signal, the carrier wave (usually a sinusoid), in response to a data signal. In the case of PSK, the phase is changed to represent the data signal. There are two fundamental ways of utilizing the phase of a signal in this way:
• By viewing the phase itself as conveying the information, in which case the demodulator must have a reference signal to compare the received signal's phase against; or
• By viewing the change in the phase as conveying the information (differential schemes, some of which do not need a reference carrier, to a certain extent).
The exact requirements of the particular scenario under consideration determine which scheme is used.

A convenient way to represent PSK schemes is on a constellation diagram. This shows the points in the Argand plane where, in this context, the real and imaginary axes are termed the in-phase and quadrature axes respectively, due to their 90° separation. The amplitude of each point along the in-phase axis is used to modulate a cosine (or sine) wave, and the amplitude along the quadrature axis to modulate a sine (or cosine) wave. Such a representation on perpendicular axes lends itself to straightforward implementation.
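As an assumed illustration of the two ways of using phase described above (not code from this report), the snippet below maps QPSK symbols directly onto absolute phases and, alternatively, encodes them differentially so that only phase changes carry the data:

% Conventional versus differential phase mapping for QPSK (illustrative only).
M = 4;
data = randi([0 M-1], 10, 1);            % symbols to transmit

phasePSK = 2*pi*data/M;                  % ordinary PSK: the symbol sets the absolute phase
sPSK = exp(1j*phasePSK);

phaseDPSK = cumsum(2*pi*data/M);         % DPSK: the symbol sets the change in phase
sDPSK = exp(1j*phaseDPSK);

% Differential demodulation needs no phase reference: compare successive symbols.
dphi = angle(sDPSK(2:end).*conj(sDPSK(1:end-1)));
recovered = mod(round(dphi*M/(2*pi)), M);   % recovers data(2:end)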

Since the data to be conveyed are usually binary, the PSK scheme is usually designed with the number of constellation points being a power of 2. In PSK, the constellation points chosen are usually positioned with uniform angular spacing around a circle. This gives maximum phase-separation between adjacent points and thus the best immunity to corruption. The points are positioned on a circle so that they can all be transmitted with the same energy. In this way, the moduli of the complex numbers they represent will be the same, and thus so will the amplitudes needed for the cosine and sine waves. Two common examples are "binary phase-shift keying" (BPSK), which uses two phases, and "quadrature phase-shift keying" (QPSK), which uses four phases, although any number of phases may be used.

Definitions

For determining error-rates mathematically, some definitions will be needed:
• Eb = energy per bit
• Es = energy per symbol = kEb, with k bits per symbol
• Tb = bit duration
• Ts = symbol duration
• N0/2 = noise power spectral density (W/Hz)
• Pb = probability of bit error
• Ps = probability of symbol error

Q(x) gives the probability that a single sample taken from a random process with a zero-mean and unit-variance Gaussian probability density function will be greater than or equal to x. It is a scaled form of the complementary Gaussian error function:

Q(x) = (1/√(2π)) ∫ from x to ∞ of exp(−t²/2) dt = (1/2) erfc(x/√2)
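A brief numerical aside (assumed, with an arbitrary Eb/N0 grid): in MATLAB, Q(x) can be evaluated from the base erfc function, which is one common way the theoretical BPSK bit-error probability Pb = Q(√(2Eb/N0)) is computed:

% Q-function via the complementary error function, and the BPSK bit-error curve.
Q = @(x) 0.5*erfc(x/sqrt(2));
EbNo_dB = 0:10;
EbNo = 10.^(EbNo_dB/10);
Pb_bpsk = Q(sqrt(2*EbNo));               % Gray-coded QPSK has the same Pb per bit
semilogy(EbNo_dB, Pb_bpsk); grid on;
xlabel('E_b/N_0 (dB)'); ylabel('P_b');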

CONSTELLATION DIAGRAM

• A constellation diagram is a representation of a signal modulated by a digital modulation scheme such as quadrature amplitude modulation or phase-shift keying. It displays the signal as a two-dimensional scatter diagram in the complex plane at symbol sampling instants. In a more abstract sense, it represents the possible symbols that may be selected by a given modulation scheme as points in the complex plane.
• By representing a transmitted symbol as a complex number and modulating a cosine and sine carrier signal with the real and imaginary parts (respectively), the symbol can be sent with two carriers on the same frequency. They are often referred to as quadrature carriers. A coherent detector is able to independently demodulate these carriers. This principle of using two independently modulated carriers is the foundation of quadrature modulation. In pure phase modulation, the phase of the modulating symbol is the phase of the carrier itself.
• As the symbols are represented as complex numbers, they can be visualized as points on the complex plane. The real and imaginary axes are often called the in-phase, or I-axis, and the quadrature, or Q-axis. Plotting several symbols in a scatter diagram produces the constellation diagram. The points on a constellation diagram are called constellation points. They are a set of modulation symbols which comprise the modulation alphabet.
• Also, a diagram of the ideal positions, or signal space diagram, in a modulation scheme can be called a constellation diagram. In this sense the constellation is not a scatter diagram but a representation of the scheme itself. The example shown here is for 8-PSK, which has also been given a Gray coded bit assignment.
• Measured constellation diagrams can be used to recognize the type of interference and distortion in a signal.
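As a small assumed sketch tying this to the complex-plane picture, the ideal 8-PSK signal space diagram can be drawn directly from the symbol alphabet (here using the phase convention θm = (2m + 1)π/M adopted in the next section):

% Plotting the ideal 8-PSK constellation (signal space diagram).
M = 8;
m = (0:M-1).';
points = exp(1j*(2*m + 1)*pi/M);          % M unit-energy constellation points
plot(real(points), imag(points), 'o');    % in-phase versus quadrature components
axis equal; grid on;
xlabel('In-phase (I)'); ylabel('Quadrature (Q)');
title('Ideal 8-PSK constellation');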

PSK CONSTELLATION WITH ISI AND AWGN

• The M signal waveforms for ideal M-ary PSK are represented as
  sm(t) = g(t) cos(ωct + θm), m = 0, 1, ..., M − 1
  where g(t) is a pulse waveform used to shape the spectrum of the transmitted signal and θm is the information-bearing phase angle, which takes the M possible values
  θm = (2m + 1)π/M, m = 0, 1, ..., M − 1.
• If an ideal PSK signal is optimally demodulated, then, using complex phasor notation, each of the complex decision variables takes one of the following M values:
  Sm = ε e^(jθm), m = 0, 1, ..., M − 1
  where ε is the energy of the spectrum shaping pulse and is given in terms of the bit energy Eb by ε = log2(M) Eb. A signal space diagram shows the ideal symbol locations Sm and decision regions Rm for 8-PSK.
• When distortions due to channel effects or modem imperfections are present, the received decision variables will differ from the M ideal points, and their locations will be data dependent due to ISI. In this context, ISI will refer to the effects of both linear and non-linear time-invariant distortions with memory.
• Assuming equiprobable symbols, in order to completely characterize the ISI of a channel with L symbol periods of memory, it is sufficient to consider all possible sequences of L symbols. A maximal length pseudorandom M^L symbol sequence will satisfy this property. For M = 2, linear feedback shift registers can be used to generate maximal length pseudorandom bit sequences; for M > 2, efficient methods for generating maximal length pseudorandom symbol sequences have also been proposed. A simulation using one cycle of an M^L length pseudorandom symbol sequence is sufficient to emulate equiprobable data symbols.
• Therefore, performing a simulation using M^L + 2L symbols from a maximal length M^L symbol sequence, with the addition of cyclic prefixes and postfixes of L symbols each, and discarding the first and last L demodulated and detected decision variable points, the resulting M^L decision variable points will completely characterize the effect of the system ISI on the signal.
• This set of decision variables can be defined in terms of their respective magnitudes and phases, or in-phase and quadrature components:

.. 1.. .ML − 1 When AWGN is present at the receiver input. 1.ML − 1 For a receiver having an arbitrary discrete time detection filter with impulse response h(n).. the noise component nk at the filter output is a sequence of complex Gaussian distributed random variables .• sk = rkejθk = ik + jqk k = 0.. the decision variables are yk = sk + nk k = 0...

OUR MATLAB SIMULATION

[Figure: simulated PSK constellation scatter plot; the in-phase and quadrature axes span −2 to 2.]
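The plot itself is not reproduced in this extract; the following assumed sketch shows the kind of MATLAB script that could produce such a scatter plot, with illustrative (not this report's) choices for the modulation order, ISI channel, and noise level:

% 8-PSK constellation with ISI and AWGN, plotted at the symbol sampling instants.
M = 8;
N = 2000;                                 % number of symbols (assumed)
m = randi([0 M-1], N, 1);
s = exp(1j*(2*m + 1)*pi/M);               % ideal decision points Sm with unit energy

h = [1 0.25];                             % short ISI channel, L = 1 symbol of memory (assumed)
sISI = filter(h, 1, s);                   % decision variables distorted by ISI

EsNo_dB = 15;                             % symbol SNR for the added noise (assumed)
sigma = sqrt(10^(-EsNo_dB/10)/2);         % noise std dev per dimension, with Es = 1
nk = sigma*(randn(N, 1) + 1j*randn(N, 1));
y = sISI + nk;                            % y_k = s_k + n_k, as in the text above

plot(real(y), imag(y), '.'); axis equal; grid on;
xlabel('In-phase'); ylabel('Quadrature');
title('8-PSK constellation with ISI and AWGN');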