

1 A randomly scaled logistic function

(a) Each sample path is a translation and scaling in time of the logistic function φ(x) = 1/(1 + e^{-x}). Thus each sample path is an increasing function with limit zero at t = -∞ and limit one at t = +∞.

(b) All sample paths converge to one as t → ∞. So lim_{t→∞} X_t = 1 in the a.s. sense, and therefore in the p. and d. senses too. Since X is bounded, the limit holds also in the m.s. sense.

(c) For any a, b ∈ R with a < b, P{X_b - X_a > 0} = 1. Thus, E[X_b - X_a] > 0, or E[X_b] > E[X_a]. That is, E[X_t] is strictly increasing in t.

Note that

φ(x) - 1/2 = (e^{x/2} - e^{-x/2}) / (2(e^{x/2} + e^{-x/2})) = (1/2) tanh(x/2),

which is an odd function of x. Thus,

X_0 - 1/2 = (1/2) tanh(-U/(2V)),

which is an odd function of U/V. Since U and -U have the same distribution, and U and V are independent, it follows that U/V and -U/V have the same distribution. So U/V has an even pdf. Thus, E[tanh(-U/(2V))] = 0, or E[X_0] = 1/2. Since lim_{t→∞} X_t = 1 in the m.s. sense, and for an m.s. convergent sequence the mean of the limit is the limit of the means, it follows that lim_{t→∞} E[X_t] = 1. Similarly, lim_{t→-∞} E[X_t] = 0. (Note: It can also be shown that E[X_t] - 0.5 is an odd function of t.)

(d) No; in fact X is not even wide sense stationary because, for example, E[X_t] depends on t.

(e) No. Given r < s < t and constants a and b with 0 < a < b < 1, the condition (X_r = a, X_s = b) determines U and V, and therefore it also determines X_t. On the other hand, given only X_s = b there is a nondegenerate conditional density of X_t. So the conditional distribution of X_t given X_s = b is not equal to the conditional distribution of X_t given (X_r = a, X_s = b). So X is not a Markov process.
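The conclusion E[X_0] = 1/2 in part (c) depends only on U being symmetric and independent of the positive scale V. A Monte Carlo sketch (the particular distributions of U and V below are assumptions for illustration, not from the problem statement):

```python
import math
import random

# Monte Carlo check that E[X_0] = 1/2, where X_0 = phi(-U/V) and
# phi(x) = 1/(1 + e^{-x}) is the logistic function.
# Assumed for illustration: U standard normal (symmetric),
# V = |N(0,1)| + 0.1 (positive), with U and V independent.
random.seed(0)
n = 200_000
total = 0.0
for _ in range(n):
    u = random.gauss(0.0, 1.0)
    v = abs(random.gauss(0.0, 1.0)) + 0.1
    total += 1.0 / (1.0 + math.exp(u / v))  # phi(-u/v)
print(total / n)  # ≈ 0.5
```

Any other symmetric U and positive V would do; the estimate should land near 0.5 regardless.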

2 A simple model of a neural spike train

(a) Let T denote the time elapsed after a given spike until the next spike. The failure rate function of T is

h(t) = { 1/4,  0 ≤ t < 1
       { 1,    t ≥ 1

Therefore,

1 - F_T(t) = exp(-∫_0^t h(s) ds) = { e^{-t/4},            0 ≤ t < 1
                                   { e^{-(1/4 + t - 1)},  t ≥ 1

and

f_T(t) = h(t)(1 - F_T(t)) = { (1/4) e^{-t/4},      0 ≤ t < 1
                            { e^{-(1/4 + t - 1)},  t ≥ 1.

(b) By the area rule for expectations, E[T] = ∫_0^∞ (1 - F_T(t)) dt = 4 - 3e^{-1/4} ≈ 1.6636.
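As a sanity check on (a) and (b), the area rule can be evaluated numerically from the survival function (a sketch; the truncation horizon of 40 and the step size are arbitrary choices):

```python
import math

def survival(t):
    # 1 - F_T(t) from part (a)
    return math.exp(-t / 4) if t < 1 else math.exp(-(0.25 + t - 1))

# E[T] = integral of the survival function (area rule), truncated at t = 40,
# where the tail e^{-(t - 3/4)} is negligible
dt = 1e-4
mean = sum(survival(i * dt) * dt for i in range(int(40 / dt)))
print(mean, 4 - 3 * math.exp(-0.25))  # both ≈ 1.6636
```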

3 A compound Poisson process with mean zero

(a) E[Y_t | N_t = n] = E[Σ_{i=1}^n J_i] = Σ_{i=1}^n E[J_i] = 0 for all n, so E[Y_t | N_t] is identically zero. Thus, E[Y_t] = E[E[Y_t | N_t]] = 0.

(b) The increment of Y over an interval [a, b] is the sum of N_b - N_a independent random variables, all with the same distribution as J_1. By the independent increment property of the Poisson process N, for m disjoint intervals [a_1, b_1], ..., [a_m, b_m], the corresponding m increments of Y are given by sums of independent numbers of J's, which are all independent and with the same distribution as J_1. Those increments of Y are thus independent.

(c) E[Y_t² | N_t = n] = E[(Σ_{i=1}^n J_i)²] = Var(Σ_{i=1}^n J_i) = n σ_J², so E[Y_t²] = E[N_t σ_J²] = λ t σ_J².

Similarly, E[e^{juY_t} | N_t = n] = E[e^{ju(J_1 + ... + J_n)} | N_t = n] = Φ_J(u)^n, so Φ_{Y_t}(u) = E[Φ_J(u)^{N_t}] = e^{λt(Φ_J(u) - 1)}.
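The second moment formula E[Y_t²] = λtσ_J² can be checked by simulation (a sketch; the jump distribution J_i = ±1, which gives E[J_i] = 0 and σ_J² = 1, and the values λ = 2, t = 3 are assumptions for illustration):

```python
import math
import random

def poisson(mean):
    # Knuth's method for sampling a Poisson random variable
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

random.seed(1)
lam, t, trials = 2.0, 3.0, 100_000
total = 0.0
for _ in range(trials):
    n = poisson(lam * t)                                 # N_t
    y = sum(random.choice((-1, 1)) for _ in range(n))    # Y_t given N_t = n
    total += y * y
print(total / trials, lam * t * 1.0)  # estimate of E[Y_t^2] vs lambda*t*sigma_J^2 = 6
```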


(d) Let c = λσ_J², so that E[Z_t] = E[Z_0]. (If any choice of c works, it must be this one.) Let t_1 < ... < t_{n+1}. Think of t_n as the present time, and let H_{t_n} represent the past information: H_{t_n} = σ(Y_{t_1}, ..., Y_{t_n}). Then

E[Z_{t_{n+1}} | H_{t_n}] = E[Y_{t_{n+1}}² - c t_{n+1} | H_{t_n}]
  = E[Y_{t_n}² | H_{t_n}] + 2 E[(Y_{t_{n+1}} - Y_{t_n}) Y_{t_n} | H_{t_n}] + E[(Y_{t_{n+1}} - Y_{t_n})² | H_{t_n}] - λσ_J² t_{n+1}
  = Y_{t_n}² + 2 E[Y_{t_{n+1}} - Y_{t_n} | H_{t_n}] Y_{t_n} + (t_{n+1} - t_n) λσ_J² - λσ_J² t_{n+1}
  = Y_{t_n}² - λσ_J² t_n = Z_{t_n},

where the middle term vanishes because E[Y_{t_{n+1}} - Y_{t_n} | H_{t_n}] = 0: the increment is independent of the past and has mean zero. Therefore, E[Z_{t_{n+1}} | Z_{t_1}, ..., Z_{t_n}] = E[E[Z_{t_{n+1}} | H_{t_n}] | Z_{t_1}, ..., Z_{t_n}] = Z_{t_n}. Thus, Z is a martingale.

4 Hitting the corners of a triangle

(a) Let h_i be the mean time to hit {3, 4, 5} from initial state i. We are interested in h_1 = E[τ_B]. By symmetry, h_2 = h_6. By conditioning on the first step, we find:

h_1 = 1 + h_2
h_2 = 1 + (1/2) h_1

Solving yields (h_1, h_2) = (4, 3). Thus, E[τ_B] = 4.

(b) Redefine h_i to be the mean time to hit state 3 from initial state i. By symmetry, h_1 = h_5 and h_2 = h_4. By conditioning on the first step, we find

h_1 = 1 + (h_2 + h_6)/2
h_2 = 1 + (1/2) h_1
h_6 = 1 + h_1

Solving yields (h_1, h_2, h_6) = (8, 5, 9). Thus, E[τ_3] = 8.
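The two first-step systems are small enough to solve by iterating the equations themselves as a fixed point (a sketch; no linear-algebra library needed, and both iteration maps are contractions, so the iterates converge to the unique solution):

```python
# Part (a): h1 = 1 + h2, h2 = 1 + h1/2
h1, h2 = 0.0, 0.0
for _ in range(2000):
    h1, h2 = 1 + h2, 1 + h1 / 2
print(h1, h2)  # 4.0 3.0

# Part (b): h1 = 1 + (h2 + h6)/2, h2 = 1 + h1/2, h6 = 1 + h1
h1, h2, h6 = 0.0, 0.0, 0.0
for _ in range(2000):
    h1, h2, h6 = 1 + (h2 + h6) / 2, 1 + h1 / 2, 1 + h1
print(h1, h2, h6)  # 8.0 5.0 9.0
```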

(c) The mean time to reach {3, 4, 5} is four. When {3, 4, 5} is first reached, the state is either 3 or 5. If 3 is reached first, then the process continues until state 5 is reached. If 5 is reached first, then the process continues until state 3 is reached. Once either state 3 or 5 is reached, by symmetry, the mean amount of additional time needed to reach the other is the same as E[τ_3] in part (b). So E[τ_C] = 4 + 8 = 12.

(d) The difference τ_R - τ_C is the time needed to reach state 1 starting from either state 3 or 5, and by symmetry, the mean of this difference is E[τ_3]. Thus, E[τ_R] = 4 + 8 + 8 = 20.

5 Marginal distributions for a continuous-time three state Markov process

(a) Q =

[ -α    α    0  ]
[  1   -2    1  ]
[  0    α   -α  ]

(b) Solving π(∞)Q = 0 for the probability vector π(∞) gives π(∞) = ( 1/(2+α), α/(2+α), 1/(2+α) ).

(c) We shall solve the forward Kolmogorov equations ∂π(t)/∂t = π(t)Q. The second of the equations, upon substituting π_0(t) + π_2(t) = 1 - π_1(t), becomes

∂π_1(t)/∂t = α π_0(t) - 2 π_1(t) + α π_2(t) = α - (2 + α) π_1(t)

which, with the initial condition π_1(0) = 0, yields

π_1(t) = (α/(2 + α)) ( 1 - e^{-(2+α)t} ).

The other two equations, ∂π_0(t)/∂t = -α π_0(t) + π_1(t) and ∂π_2(t)/∂t = -α π_2(t) + π_1(t), can be combined to yield ∂(π_0(t) - π_2(t))/∂t = -α (π_0(t) - π_2(t)). Solving, with the initial condition π_0(0) - π_2(0) = 1, yields π_0(t) - π_2(t) = e^{-αt}. Also, π_0(t) + π_2(t) = 1 - π_1(t). Putting it together yields

π_0(t) = 1/(2 + α) + (1/2) ( α e^{-(2+α)t}/(2 + α) + e^{-αt} )
π_2(t) = 1/(2 + α) + (1/2) ( α e^{-(2+α)t}/(2 + α) - e^{-αt} )
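The closed-form marginals can be checked against a direct Euler integration of the forward equations ∂π/∂t = πQ (a sketch; the value α = 1.5, the initial state 0, the horizon 0.7, and the step size are all assumptions for illustration):

```python
import math

a = 1.5                            # illustrative alpha
Q = [[-a,   a,   0.0],
     [1.0, -2.0, 1.0],
     [0.0,  a,  -a]]
pi = [1.0, 0.0, 0.0]               # start in state 0, matching pi_0(0) - pi_2(0) = 1
t, dt = 0.0, 1e-5
while t < 0.7 - 1e-12:
    # forward Euler step of d(pi_j)/dt = sum_i pi_i Q_{ij}
    pi = [pi[j] + dt * sum(pi[i] * Q[i][j] for i in range(3)) for j in range(3)]
    t += dt
p1 = a / (2 + a) * (1 - math.exp(-(2 + a) * t))
p0 = 1 / (2 + a) + 0.5 * (a * math.exp(-(2 + a) * t) / (2 + a) + math.exp(-a * t))
print(abs(pi[1] - p1), abs(pi[0] - p0))  # both small
```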

6 Conditioning a Gauss-Markov process

(a) Since E[X_t] = 0 for all t, Cov(X_t, X_0) = R_X(t) and

μ^o_X(t) = E[X_t | X_0 = 0] = E[X_t] + (Cov(X_t, X_0)/Var(X_0)) (0 - E[X_0]) = 0.

(b) The desired quantity, R^o_X(s, t), is the upper right entry of the conditional covariance matrix of (X_s, X_t)^T given X_0. By the formula for the covariance of error for MMSE estimation, said matrix is given by:

Cov( (X_s, X_t)^T | X_0 ) = Cov( (X_s, X_t)^T ) - Cov( (X_s, X_t)^T, X_0 ) Var(X_0)^{-1} Cov( X_0, (X_s, X_t)^T )

  = [ 1            e^{-|s-t|} ]  -  [ e^{-|s|} ] [ e^{-|s|}   e^{-|t|} ]
    [ e^{-|s-t|}   1          ]     [ e^{-|t|} ]

which yields R^o_X(s, t) = e^{-|s-t|} - e^{-(|s|+|t|)}.
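A Monte Carlo spot-check of this formula for s, t > 0 (a sketch; it uses the Gaussian transition X_t | X_s ~ N(e^{-(t-s)} X_s, 1 - e^{-2(t-s)}) implied by R_X(τ) = e^{-|τ|}, with s = 0.4, t = 1.3 as arbitrary test points):

```python
import math
import random

# Simulate (X_s, X_t) conditioned on X_0 = 0:
#   X_s | X_0 = 0  ~  N(0, 1 - e^{-2s})   (from R^o_X(s, s))
#   X_t | X_s      ~  N(e^{-(t-s)} X_s, 1 - e^{-2(t-s)})
random.seed(3)
s, t, n = 0.4, 1.3, 200_000
acc = 0.0
for _ in range(n):
    xs = math.sqrt(1 - math.exp(-2 * s)) * random.gauss(0, 1)
    xt = (math.exp(-(t - s)) * xs
          + math.sqrt(1 - math.exp(-2 * (t - s))) * random.gauss(0, 1))
    acc += xs * xt
print(acc / n, math.exp(-abs(s - t)) - math.exp(-(s + t)))  # both ≈ 0.224
```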

(c) Yes, the conditional distribution has the Markov property. Note that C^o_X(s, t) = R^o_X(s, t) = 0 if st ≤ 0 (i.e. if s and t don't have the same sign), so that under the conditional distribution, the process for positive time is independent of the process for negative time, and the correlation structure is symmetric in time. It hence suffices to prove that the process restricted to times in (0, +∞) is Markov. To that end, we find the correlation coefficient between X_s and X_t under the conditional distribution, for 0 < s ≤ t:

ρ^o_X(s, t) = ( e^{-(t-s)} - e^{-(s+t)} ) / ( √(1 - e^{-2s}) √(1 - e^{-2t}) )
            = e^{-(t-s)} √(1 - e^{-2s}) / √(1 - e^{-2t})
            = √( (e^{2s} - 1) / (e^{2t} - 1) ).

Therefore, for 0 < r < s < t, ρ^o_X(r, t) = ρ^o_X(r, s) ρ^o_X(s, t), so that the process under the conditional distribution is Markov.
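The algebra above can be spot-checked numerically: the original and simplified expressions for ρ^o_X agree, and the product identity holds (a sketch; r, s, t are arbitrary test points):

```python
import math

def rho_direct(s, t):
    # (e^{-(t-s)} - e^{-(s+t)}) / sqrt((1 - e^{-2s})(1 - e^{-2t}))
    num = math.exp(-(t - s)) - math.exp(-(s + t))
    return num / math.sqrt((1 - math.exp(-2 * s)) * (1 - math.exp(-2 * t)))

def rho(s, t):
    # simplified form sqrt((e^{2s} - 1)/(e^{2t} - 1))
    return math.sqrt((math.exp(2 * s) - 1) / (math.exp(2 * t) - 1))

r, s, t = 0.3, 0.9, 2.0
print(abs(rho_direct(s, t) - rho(s, t)))       # ≈ 0
print(abs(rho(r, t) - rho(r, s) * rho(s, t)))  # ≈ 0
```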

Alternative argument: Another way to show that the process restricted to positive times is Markov is to refer directly to the Markov property of X itself. Given 0 < r < s < t, by the Markov property of X, the conditional distribution of X_t given (X_0, X_r, X_s) is the same as the conditional distribution of X_t given X_s, which is the same as the conditional distribution of X_t given (X_0, X_s). Thus, for the conditional distributions given X_0 = 0, the conditional distribution of X_t given (X_r, X_s) is the same as the conditional distribution of X_t given X_s. This, together with the fact that the conditional distribution is Gaussian, implies it has the Markov property.

