
**Roughly speaking, TCP operates as follows:**

- Data packets reaching a destination are acknowledged by sending an appropriate message to the sender.
- Upon receipt of the acknowledgement, data sources increase their send rate, thereby probing the network for available bandwidth, until congestion is encountered.
- Network congestion is deduced through the loss of data packets (receipt of duplicate ACKs or non-receipt of ACKs), and results in sources reducing their send rate drastically (by half).
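The additive-increase/multiplicative-decrease behaviour described above can be sketched as a toy simulation. This is illustrative only: the threshold test below is a hypothetical stand-in for real loss feedback, and the unit increment is an arbitrary choice.

```python
def aimd(steps, capacity, incr=1.0):
    """Toy AIMD trace: the send rate grows additively each RTT and is
    halved whenever it exceeds a (hypothetical) link capacity."""
    rate, trace = 1.0, []
    for _ in range(steps):
        if rate > capacity:   # loss inferred from duplicate/missing ACKs
            rate *= 0.5       # multiplicative decrease
        else:
            rate += incr      # additive increase: probe for bandwidth
        trace.append(rate)
    return trace

trace = aimd(50, capacity=20.0)
```

The resulting trace is the familiar TCP sawtooth: linear climbs separated by sudden halvings.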

Hamilton Institute

**TCP congestion control**

Congestion control is necessary for a number of reasons, so that:

- catastrophic collapse of the network is avoided under heavy loads;
- each data source receives a fair share of the available bandwidth;
- the available bandwidth B is utilised in an optimal fashion;
- interactions of the network sources do not cause destabilising side effects such as oscillations or instability.


**TCP congestion control**

Hespanha's hybrid model of TCP traffic:

- Loss of packets is caused by queues filling at the bottleneck link.
- TCP sources have two modes of operation: additive increase and multiplicative decrease.

[Diagram: data sources 1, 2, …, n sending through routers into a bottleneck link l]

- Packet loss is detected at the sources one RTT after the loss of the packet.


**TCP congestion control**

[Diagram: data sources 1, 2, …, n feed a bottleneck link l through routers. Two modes are shown: packets not being dropped, and packets dropped; when a packet drop is detected, the source rate is halved.]

**TCP congestion control**

[Diagram: the same network, with the two modes labelled "queue not full" and "queue full"; when a packet drop is detected, the source rate is halved.]

**Modelling the 'queue not full' state**

The rate at which the queue grows is easy to determine:

dQ/dt = Σ_i w_i/RTT − B,  with RTT = T_p + Q/B

While the queue is not full:

dw_i/dt = 1/RTT

**Modelling the 'queue full' state**

When the queue is full:

dQ/dt = 0,  dw_i/dt = 1/RTT

One RTT later the sources are informed of congestion.

**TCP congestion control**

The hybrid model switches between two states:

- Queue not full: dQ/dt = Σ_i w_i/RTT − B and dw_i/dt = 1/RTT. When the queue fills, the model switches to the queue-full state.
- Queue full: dQ/dt = 0 and dw_i/dt = 1/RTT. One RTT later each source halves its window, w_i → 0.5 w_i, and the model returns to the queue-not-full state.


**TCP congestion control: Example (Hespanha)**

Parameters: B = 1250 packets/sec, Q_max = 250 packets, T_p = 0.4 seconds.

[Plot: simulation results over 0–600 seconds]
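With the stated parameters, the hybrid model can be simulated with a simple Euler scheme. This is a sketch under simplifying assumptions: a single source, and the one-RTT detection delay approximated by halving the window one RTT after the queue first fills.

```python
def simulate(T=600.0, dt=0.01, B=1250.0, qmax=250.0, tp=0.4):
    """Euler integration of the single-source hybrid model:
    dQ/dt = w/RTT - B (Q clipped to [0, qmax]), dw/dt = 1/RTT,
    with the window halved one RTT after the queue fills."""
    w, q, drop_time, t = 1.0, 0.0, None, 0.0
    ws, qs = [], []
    while t < T:
        rtt = tp + q / B
        if q >= qmax and drop_time is None:
            drop_time = t + rtt        # sources learn of the loss one RTT later
        if drop_time is not None and t >= drop_time:
            w *= 0.5                   # multiplicative decrease
            drop_time = None
        q = min(max(q + (w / rtt - B) * dt, 0.0), qmax)
        w += dt / rtt                  # additive increase
        ws.append(w)
        qs.append(q)
        t += dt
    return ws, qs

ws, qs = simulate()
```

The trace shows the queue filling, the delayed window halving, and the queue draining, repeating in a sawtooth.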


**TCP congestion control: Example (Fairness)**

Parameters: B = 1250 packets/sec, Q_max = 250 packets, T_p = 0.4 seconds.

[Plot: simulation over 0–1200 seconds illustrating fairness between competing sources]


Modelling of dynamic systems, Part 3: System Identification

Robert N. Shorten & Douglas Leith, The Hamilton Institute, NUI Maynooth

**Building our first model**

Example: Malthus's law of population growth. Government agencies use population models to plan. What would be a good simple model for population growth? Malthus's law states that the rate of growth of an unperturbed population Y is proportional to the population present:

dY/dt = kY

[Plot: US Population Growth (millions) v. Year, 1800–1980]

[Plots: US Population Growth (millions) v. Year, and ln(Pop) v. Year; the log plot is a straight line with slope = k and intercept = ln y_0]

[Plot: US Population Growth (millions) v. Year with the fitted model overlaid]

**Modelling**

Modelling is usually necessary for two reasons: to predict and to control. However, to build models we need to do a lot of work:

- Postulate the model structure (most physical systems can be classified as belonging to the system classes that you have already seen).
- Identify the model parameters (two steps: experiment design, then parameter estimation).
- Solve the equations to use the model for prediction and analysis (now).
- Validate the parameters (later).


**What is parameter estimation?**

Parameter identification is the identification of the unknown parameters of a given model. Usually this involves two steps. The first step is concerned with obtaining data to allow us to identify the model parameters. The second step usually involves using some mathematical technique to infer the parameters from the observed data.

**Linear in parameter model structures**

The parameter estimation task is simple when the model is of the linear-in-parameters form: the unknown parameters appear as coefficients of the variables (and offset), as in the equation y = ax + b. The parameters of such equations are estimated using the principle of least squares.

**The principle of least squares**

Carl Friedrich Gauss (the greatest mathematician after Hamilton) invented the principle of least squares to determine the orbits of planets and asteroids. Gauss stated that the parameters of the models should be chosen such that 'the sum of the squares of the differences between the actually observed and the computed values is a minimum'. For linear-in-parameters models this principle can be applied easily.


**The principle of least squares**

Given the data points (x_1, y_1), (x_2, y_2), …, (x_k, y_k), choose the parameters to minimise

V(a, b) = Σ_{i=1}^{k} (y_i − ŷ_i)²

**The principle of least squares: the algebra**

For our example we want to minimise

V(a, b) = Σ_{i=1}^{m} (y_i − ŷ_i)² = Σ_{i=1}^{m} (y_i − a x_i − b)²

Hence, we need to solve:

∂V(a, b)/∂a = 0,  ∂V(a, b)/∂b = 0

**The principle of least squares: the algebra**

∂V(a, b)/∂a = −2 Σ_{i=1}^{m} (y_i − a x_i − b) x_i = 0
∂V(a, b)/∂b = −2 Σ_{i=1}^{m} (y_i − a x_i − b) = 0

Hence, we need to solve the following equations for the parameters a and b:

a Σ_{i=1}^{m} x_i² + b Σ_{i=1}^{m} x_i = Σ_{i=1}^{m} x_i y_i
a Σ_{i=1}^{m} x_i + m b = Σ_{i=1}^{m} y_i

**A linear model**

Example: find the least squares line that fits the following data points.

X: −1  0  1  2  3  4  5  6
Y: 10  9  7  5  4  3  0  −1

[Scatter plot of the data]

**A linear model**

Example: find the least squares line that fits the following data points.

X: −1  0  1  2  3  4  5  6
Y: 10  9  7  5  4  3  0  −1

a Σ_{i=1}^{m} x_i² + b Σ_{i=1}^{m} x_i = Σ_{i=1}^{m} x_i y_i
a Σ_{i=1}^{m} x_i + m b = Σ_{i=1}^{m} y_i

**A linear model**

Example: find the least squares line that fits the following data points.

X: −1  0  1  2  3  4  5  6
Y: 10  9  7  5  4  3  0  −1

Solving the normal equations gives:

ŷ = −1.607 x + 8.643

**A linear model**

ŷ = −1.607 x + 8.643

[Plot: the data points with the fitted line]

**A polynomial model**

Least squares can be used whenever we suspect a linear-in-parameters model. Find the least squares polynomial ŷ_k = c_1 + c_2 x_k² that fits the following data points.

X: 1  2  3  4  5  6  7  8  9  10
Y: 2.9218  5.9218  10.9218  17.9218  26.9218  37.9218  50.9218  65.9218  82.9218  101.9218

[Plot of the data]

**A polynomial model**

By proceeding exactly as before:

ŷ_k = 1.9218 + x_k²

[Plot: the data with the fitted polynomial]
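The quadratic fit is still linear in the parameters c_1 and c_2, so the same normal-equations machinery applies with regressors [1, x²]. A dependency-free sketch on the tabulated data:

```python
xs = list(range(1, 11))
ys = [x * x + 1.9218 for x in xs]   # reproduces the tabulated data

# The model y = c1 + c2*x^2 is linear in c1, c2, so the regressors are [1, x^2].
n = len(xs)
s2 = sum(x ** 2 for x in xs)
s4 = sum(x ** 4 for x in xs)
sy = sum(ys)
s2y = sum((x ** 2) * y for x, y in zip(xs, ys))

# Normal equations: [n s2; s2 s4][c1; c2] = [sy; s2y], solved by Cramer's rule.
det = n * s4 - s2 * s2
c1 = (s4 * sy - s2 * s2y) / det
c2 = (n * s2y - s2 * sy) / det
```

Because the tabulated values contain no noise, the fit recovers c_1 = 1.9218 and c_2 = 1 essentially exactly.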

**Building our first model**

Example: Malthus's law of population growth. Government agencies use population models to plan. What would be a good simple model for population growth? Malthus's law states that the rate of growth of an unperturbed population Y is proportional to the population present:

dY/dt = kY

**An exponential model (the first lecture)**

The solution to the differential equation is not linear in parameters:

Y = A e^{kt}

However, a change of variables makes it linear in parameters:

ln Y = ln(A e^{kt}) = ln A + ln e^{kt} = ln A + kt
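The change of variables turns the exponential fit into a straight-line fit: regress ln Y on t to recover k and ln A. A sketch on synthetic data (the values A = 5.3 and k = 0.03 below are arbitrary illustrations, not the US-census figures):

```python
import math

# Synthetic data from Y = A*exp(k*t); A and k are illustrative values only.
A_true, k_true = 5.3, 0.03
ts = list(range(0, 180, 10))
ys = [A_true * math.exp(k_true * t) for t in ts]

# Change of variables: ln Y = ln A + k*t is a straight line in t.
ls = [math.log(y) for y in ys]
n = len(ts)
st, stt = sum(ts), sum(t * t for t in ts)
sl, stl = sum(ls), sum(t * l for t, l in zip(ts, ls))
det = n * stt - st * st
k = (n * stl - st * sl) / det        # slope estimates the growth rate k
lnA = (stt * sl - st * stl) / det    # intercept estimates ln A
```

On noiseless data the slope and intercept recover k and ln A exactly, which is precisely the "slope = k, intercept = ln y_0" reading of the log plot.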

[Plot: US Population Growth (millions) v. Year, 1800–1980]

[Plots: US Population Growth (millions) v. Year, and ln(Pop) v. Year; slope = k, intercept = ln y_0]

**Matrix formulation of least squares**

The least squares parameters can be derived by solving a set of simultaneous linear equations. This technique is effective but tedious for complicated linear-in-parameters models. A much more effective solution to the least squares problem can be found using matrices. Suppose that we wish to find the parameters of the following linear-in-parameters model, and that we have m measurements:

y = ax + bz + c

**Matrix formulation of least squares**

All m measurements can be written in matrix form as follows:

Y = [y_1; y_2; …; y_m],  Φ = [x_1 z_1 1; x_2 z_2 1; …; x_m z_m 1],  θ = [a; b; c]

or, more compactly:

Y = Φθ

**Matrix formulation of least squares**

The matrix Φ is known as the matrix of regressors. This matrix (here an m×3 matrix) is usually not invertible. To find the least squares solution we multiply both sides of the equation by the transpose of the regressor matrix:

ΦᵀY = ΦᵀΦθ

It can be shown that the least squares solution is given by:

θ = (ΦᵀΦ)⁻¹ΦᵀY

**A linear model**

Example: find the least squares line that fits the following data points.

X: −1  0  1  2  3  4  5  6
Y: 10  9  7  5  4  3  0  −1

[Scatter plot of the data]

**A linear model**

The regressor matrix is given by

reg = [-1 1; 0 1; 1 1; 2 1; 3 1; 4 1; 5 1; 6 1];

Hence

reg'*reg = [92 20; 20 8]
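The batch solution θ = (ΦᵀΦ)⁻¹ΦᵀY can be reproduced without any linear-algebra library; for a two-parameter model the 2×2 inverse is available in closed form. A sketch on this data set:

```python
xs = [-1, 0, 1, 2, 3, 4, 5, 6]
ys = [10, 9, 7, 5, 4, 3, 0, -1]

# Regressor matrix: one row [x, 1] per measurement.
Phi = [[float(x), 1.0] for x in xs]

# Phi' * Phi and Phi' * Y, written out as explicit sums.
PtP = [[sum(r[i] * r[j] for r in Phi) for j in range(2)] for i in range(2)]
PtY = [sum(r[i] * y for r, y in zip(Phi, ys)) for i in range(2)]

# theta = (Phi' Phi)^{-1} Phi' Y, using the closed-form 2x2 inverse.
(p, q), (r, s) = PtP
det = p * s - q * r
a = (s * PtY[0] - q * PtY[1]) / det
b = (-r * PtY[0] + p * PtY[1]) / det
print(PtP)                       # [[92.0, 20.0], [20.0, 8.0]]
print(round(a, 3), round(b, 3))  # -1.607 8.643
```

The intermediate product reproduces reg'*reg above, and the solution reproduces the fitted line ŷ = −1.607x + 8.643.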

**Summary: linear least squares**

To do a least squares fit we start by expanding the unknown function as a linear sum of basis functions:

y(x) = a f_1(x) + b f_2(x) + …

We have seen that the basis functions can be linear or non-linear. The linear parameters can be found using:

ΦᵀY = ΦᵀΦθ,  θ = (ΦᵀΦ)⁻¹ΦᵀY

**Discrete time dynamic systems**

Our examples work beautifully for static systems. What about identifying the parameters of dynamic systems? Dynamic systems are in principle not any different to static systems. Consider the following problem: we wish to build a model of the relationship between the throttle and the speed of an automobile. We begin by collecting data from an experiment, then we define our regressors and solve the regression problem.

[Diagram: THROTTLE → CAR DYNAMICS → VELOCITY]

**Discrete time dynamic systems**

[Plots: throttle input U and measured speed against time]

**Discrete time dynamic systems**

A good choice for the model structure is first order:

v_{k+1} = a v_k + b u_k + c

We can solve for the parameters using θ = (ΦᵀΦ)⁻¹ΦᵀY, yielding:

v_{k+1} = 0.95 v_k + 3.8 u_k + 0.03
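The same regression machinery identifies the dynamic model: stack regressor rows [v_k, u_k, 1] against targets v_{k+1}. A sketch on simulated data (the generating coefficients below are the ones quoted above, used here only to manufacture test data, and the sinusoidal input is an arbitrary choice of excitation):

```python
import math

# Simulate v[k+1] = a*v[k] + b*u[k] + c to manufacture data.
a_true, b_true, c_true = 0.95, 3.8, 0.03
u = [1.0 + 0.5 * math.sin(0.1 * k) for k in range(200)]   # exciting input
v = [7.0]
for k in range(199):
    v.append(a_true * v[k] + b_true * u[k] + c_true)

# Regressor rows [v_k, u_k, 1] against targets v_{k+1}.
Phi = [[v[k], u[k], 1.0] for k in range(199)]
Y = v[1:]

# Solve the 3x3 normal equations (Phi'Phi) theta = Phi'Y by Gaussian elimination.
n = 3
M = [[sum(r[i] * r[j] for r in Phi) for j in range(n)]
     + [sum(r[i] * y for r, y in zip(Phi, Y))] for i in range(n)]
for col in range(n):
    piv = max(range(col, n), key=lambda row: abs(M[row][col]))
    M[col], M[piv] = M[piv], M[col]
    for row in range(col + 1, n):
        f = M[row][col] / M[col][col]
        M[row] = [x - f * y for x, y in zip(M[row], M[col])]
theta = [0.0] * n
for i in reversed(range(n)):
    theta[i] = (M[i][n] - sum(M[i][j] * theta[j] for j in range(i + 1, n))) / M[i][i]
a, b, c = theta
```

On noiseless data the estimates recover the generating coefficients.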

**Recursive identification**

The algorithms that we have looked at so far are called batch algorithms. Sometimes we want to estimate model parameters recursively, so that the parameters can be estimated online. Also, if system parameters change over time, then we need to continually estimate and verify the model parameters.

**Recursive least squares**

The least squares algorithm invented by Gauss can be arranged in such a way that the results obtained at time index k−1 can be used to obtain the parameter estimates at time index k. To see this we use

θ = (ΦᵀΦ)⁻¹ΦᵀY

and note that

Φ_mᵀΦ_m = Σ_{i=1}^{m} φ_i φ_iᵀ,  Φ_mᵀY_m = Σ_{i=1}^{m} φ_i y_i

**Recursive least squares**

With a little manipulation (show) we get:

θ_k = θ_{k−1} + P_k φ_k (y_k − φ_kᵀ θ_{k−1})

where:

P_k⁻¹ = P_{k−1}⁻¹ + φ_k φ_kᵀ

More complicated versions of the algorithm are available that avoid matrix inversion.

**Recursive least squares (car example)**

[Plots: recursive estimates of a, b and c against time index k, 0–100]

**Recursive least squares (car example)**

[Plots: recursive estimates of a, b and c, zoomed in on time index 50–100]

**The matrix inversion lemma**

One not-so-nice feature of the RLS formula is the presence of a matrix inversion at each step. This can be removed using the matrix inversion lemma (the Sherman-Morrison formula). Let A and C be invertible square matrices, and B and D matrices of compatible dimensions. Then A + BCD is invertible and

(A + BCD)⁻¹ = A⁻¹ − A⁻¹B(C⁻¹ + DA⁻¹B)⁻¹DA⁻¹
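The identity is easy to sanity-check numerically; a sketch with small hand-picked 2×2 matrices:

```python
def matmul(X, Y):
    """Plain matrix product of nested-list matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv2(M):
    """Closed-form inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def add(X, Y, s=1):
    """Entrywise X + s*Y."""
    return [[x + s * y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

A = [[4.0, 1.0], [2.0, 3.0]]
B = [[1.0, 0.0], [0.0, 2.0]]
C = [[2.0, 1.0], [0.0, 1.0]]
D = [[1.0, 1.0], [0.0, 1.0]]

# Left-hand side: (A + B C D)^{-1}
lhs = inv2(add(A, matmul(B, matmul(C, D))))
# Right-hand side: A^{-1} - A^{-1} B (C^{-1} + D A^{-1} B)^{-1} D A^{-1}
Ai = inv2(A)
mid = inv2(add(inv2(C), matmul(D, matmul(Ai, B))))
rhs = add(Ai, matmul(Ai, matmul(B, matmul(mid, matmul(D, Ai)))), s=-1)
```

Both sides agree entrywise to machine precision.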

**The RLS algorithm**

Application of the lemma results in the standard RLS algorithm:

θ_k = θ_{k−1} + G_k (y_k − φ_kᵀ θ_{k−1})
G_k = P_{k−1} φ_k / (1 + φ_kᵀ P_{k−1} φ_k)
P_k = (I − G_k φ_kᵀ) P_{k−1}
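The three update equations map line-for-line onto code. A sketch for a two-parameter line model, with P initialised large to encode a vague prior (the data below are made up for illustration):

```python
def rls_step(theta, P, phi, y):
    """One RLS step: G = P*phi/(1 + phi'P*phi), theta += G*err, P = (I - G*phi')P."""
    n = len(phi)
    Pphi = [sum(P[i][j] * phi[j] for j in range(n)) for i in range(n)]
    denom = 1.0 + sum(p * f for p, f in zip(Pphi, phi))
    G = [p / denom for p in Pphi]                      # gain G_k
    err = y - sum(f * t for f, t in zip(phi, theta))   # prediction error
    theta = [t + g * err for t, g in zip(theta, G)]    # parameter update
    phiP = [sum(phi[k] * P[k][j] for k in range(n)) for j in range(n)]
    P = [[P[i][j] - G[i] * phiP[j] for j in range(n)] for i in range(n)]
    return theta, P

# Estimate the line y = 2x + 1 sample by sample.
theta, P = [0.0, 0.0], [[1e6, 0.0], [0.0, 1e6]]
for x in range(10):
    theta, P = rls_step(theta, P, [float(x), 1.0], 2.0 * x + 1.0)
```

After the ten samples the recursive estimate agrees with the batch least squares answer.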

**Time-varying systems**

Much of the appeal of the RLS algorithm is that we can potentially deal with time-varying systems. Example: suppose that a rocket ascends from the surface of the earth, propelled by a thrust force generated through the ejection of mass. If we assume that the rate of change of mass of the fuel is u_m and the exhaust velocity is v_e, then the physical equation governing the rocket is:

m(t) dv/dt = −m(t) g + u_m v_e

**Forgetting factors**

For time-varying systems we must estimate the parameters recursively. How can we modify the basic RLS algorithm?

θ_k = θ_{k−1} + P_k φ_k (y_k − φ_kᵀ θ_{k−1}),  P_k⁻¹ = P_{k−1}⁻¹ + φ_k φ_kᵀ

To estimate time-varying parameters we would like to forget past data points. The only place in the above formula that depends on past data points is the covariance matrix.

**Forgetting factors**

θ_k = θ_{k−1} + P_k φ_k e_k,  P_k⁻¹ = λ P_{k−1}⁻¹ + φ_k φ_kᵀ

This corresponds to minimising the time-varying cost function:

V(θ, k) = Σ_{i=1}^{k} λ^{k−i} (y_i − ŷ_i)²

**The RLS algorithm**

Application of the matrix inversion lemma results in the standard RLS algorithm with a forgetting factor:

θ_k = θ_{k−1} + G_k (y_k − φ_kᵀ θ_{k−1})
G_k = P_{k−1} φ_k / (λ + φ_kᵀ P_{k−1} φ_k)
P_k = (I − G_k φ_kᵀ) P_{k−1} / λ
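In the scalar case the forgetting-factor updates collapse to three lines, which makes the tracking behaviour easy to see. A sketch in which a (made-up) gain parameter jumps halfway through the data:

```python
def rls_forget(data, lam=0.9):
    """Scalar RLS with forgetting factor lam for the model y = a*u."""
    a, P, trace = 0.0, 1e6, []
    for u, y in data:
        G = P * u / (lam + u * P * u)     # gain
        a += G * (y - u * a)              # parameter update
        P = (1.0 - G * u) * P / lam       # covariance update with forgetting
        trace.append(a)
    return trace

# The true gain jumps from 2 to 5 halfway through; forgetting lets the estimate follow.
data = [(1.0, 2.0)] * 50 + [(1.0, 5.0)] * 50
trace = rls_forget(data)
```

With λ = 0.9 the influence of old data decays geometrically, so the estimate settles near 2 before the jump and re-converges towards 5 after it; with λ = 1 (no forgetting) it would instead drift to a weighted average of the two regimes.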

**Example**

Consider the dynamic system y_{k+1} = a_k y_k + b_k u_k, where the parameters a_k, b_k vary as shown.

[Plots: the time-varying parameters a_k and b_k against time, 0–300 seconds]

**Example**

[Plots: the recursive estimates tracking a_k and b_k against time, 0–300 seconds]

**Numerical issues**

The RLS algorithm is of great theoretical importance. However, it suffers from one very big disadvantage: it is numerically unstable. The numerical instability stems from the equation:

P_k⁻¹ = P_{k−1}⁻¹ + φ_k φ_kᵀ

If no information enters the system, P becomes singular and the estimator returns garbage.

**Numerical issues**

[Plots: estimates of a and b, and the input signal, against time, 0–700 seconds]

**Persistence of excitation**

One final thought: persistence of excitation. Persistence of excitation has a strict mathematical definition. Roughly speaking, PE means that the input signal has been chosen such that the least squares estimate is unique. The really interested student should consult Astrom for more on this topic.

**Error surfaces and gradient methods**

All the examples that we have looked at so far involved linear-in-parameter models. In this case finding the least squares solution was easy because the error surface is quadratic. Huh! What is meant by a quadratic cost function? Consider the example of line fitting. We were trying to minimise:

V(a, b) = Σ_{i=1}^{m} (y_i − ŷ_i)² = Σ_{i=1}^{m} (y_i − a x_i − b)²

**Least mean squares and gradient methods**

To make life simple, let's assume that we have two observations (m = 2) and that b = 0. Then

V(a) = (y_1 − a x_1)² + (y_2 − a x_2)² = (x_1² + x_2²) a² − (2 y_1 x_1 + 2 y_2 x_2) a + y_1² + y_2²

Remember we are trying to find the parameter a that minimises this function. The function is quadratic in a.

**Least mean squares and gradient methods**

The quadratic surface looks like the following for a single parameter.

[Plot: V(a) against a, a parabola]

**Least mean squares and gradient methods**

With two parameters we get something like:

[Plot: a bowl-shaped quadratic surface over the (a, b) plane]

**A word on gradient methods**

Another way of estimating the best parameters is to update the parameters in an iterative manner in the direction of the negative gradient:

θ(k+1) = θ(k) − η ∇V(θ)

For linear-in-parameter structures the batch version of least squares is better. However, the above idea can be extended to deal with model structures that are not linear in parameters (Doug will tell you all about this).
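For the two-observation, b = 0 example above, the gradient update can be sketched directly (the data values and the step size η = 0.01 are arbitrary illustrative choices):

```python
# Two observations, b = 0: V(a) = (y1 - a*x1)^2 + (y2 - a*x2)^2.
x1, y1 = 1.0, 2.0
x2, y2 = 2.0, 4.1
eta = 0.01                     # step size, an arbitrary choice

def grad(a):
    # dV/da = -2*(y1 - a*x1)*x1 - 2*(y2 - a*x2)*x2
    return -2.0 * (y1 - a * x1) * x1 - 2.0 * (y2 - a * x2) * x2

a = 0.0
for _ in range(2000):
    a -= eta * grad(a)         # step down the gradient

# For this quadratic cost the minimiser is available in closed form.
a_star = (x1 * y1 + x2 * y2) / (x1 ** 2 + x2 ** 2)
```

Because V(a) is quadratic, the iteration contracts geometrically towards the closed-form minimiser, which is why batch least squares is preferred when it is available.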

