
Math 727, Fall 2017

Spencer Dang

(1) Proof.
\begin{align*}
\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2
&= \sum_{i=1}^{n}\left(x_i^2 - 2x_i\bar{x} + \bar{x}^2\right)\\
&= \sum_{i=1}^{n}x_i^2 - 2\bar{x}\sum_{i=1}^{n}x_i + \bar{x}^2\sum_{i=1}^{n}1\\
&= \sum_{i=1}^{n}x_i^2 - 2\bar{x}\cdot n\bar{x} + n\bar{x}^2\\
&= \sum_{i=1}^{n}x_i^2 - n\bar{x}^2\\
&= \sum_{i=1}^{n}x_i^2 - n\left(\frac{1}{n}\sum_{i=1}^{n}x_i\right)^2\\
&= \sum_{i=1}^{n}x_i^2 - \frac{1}{n}\left(\sum_{i=1}^{n}x_i\right)^2
\end{align*}
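As a quick numerical check of the identity (the sample here is chosen arbitrarily), take $x = (1, 2, 3)$, so $\bar{x} = 2$: the left-hand side is $(1-2)^2 + (2-2)^2 + (3-2)^2 = 2$, and the right-hand side is $\sum_i x_i^2 - \frac{1}{n}\bigl(\sum_i x_i\bigr)^2 = 14 - \frac{36}{3} = 2$, as expected.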

(2) Proof.
\begin{align*}
T_n &= \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \frac{S_n}{n}\right)^2\\
&= \frac{1}{n-1}\sum_{i=1}^{n}\left((x_i - \mu) + \left(\mu - \frac{S_n}{n}\right)\right)^2\\
&= \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \mu)^2 + 2\left(\mu - \frac{S_n}{n}\right)\left[\frac{1}{n-1}\sum_{i=1}^{n}x_i - \frac{n}{n-1}\mu\right] + \frac{n}{n-1}\left(\mu - \frac{S_n}{n}\right)^2\\
&= \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \mu)^2 + 2\left(\mu - \frac{S_n}{n}\right)\left[\frac{n}{n-1}\cdot\frac{S_n}{n} - \frac{n}{n-1}\mu\right] + \frac{n}{n-1}\left(\mu - \frac{S_n}{n}\right)^2\\
&= \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \mu)^2 + 2\,\frac{n}{n-1}\left(\mu - \frac{S_n}{n}\right)\left(\frac{S_n}{n} - \mu\right) + \frac{n}{n-1}\left(\mu - \frac{S_n}{n}\right)^2\\
&= \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \mu)^2 - 2\,\frac{n}{n-1}\left(\frac{S_n}{n} - \mu\right)^2 + \frac{n}{n-1}\left(\frac{S_n}{n} - \mu\right)^2\\
&= \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \mu)^2 + \left(\frac{n}{n-1} - \frac{2n}{n-1}\right)\left(\frac{S_n}{n} - \mu\right)^2\\
&= \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \mu)^2 - \frac{n}{n-1}\left(\frac{S_n}{n} - \mu\right)^2\\
&= \frac{n}{n-1}\cdot\frac{1}{n}\sum_{i=1}^{n}(x_i - \mu)^2 - \frac{n}{n-1}\left(\frac{S_n}{n} - \mu\right)^2
\end{align*}
Note that by the Strong Law of Large Numbers, both $\frac{S_n}{n} \to \mu$ almost surely and $\frac{1}{n}\sum_{i=1}^{n}(x_i - \mu)^2 \to E\bigl((x - \mu)^2\bigr) = \sigma^2$ almost surely. Since $\frac{n}{n-1} \to 1$, it follows that
\[
\lim_{n\to\infty} T_n = \sigma^2 \quad \text{almost surely.}
\]

(3) Proof. Let $m > 0$; we want to show that $\lim_{n\to\infty} P(S_n > m) = 1$. Since $\binom{n}{k} \le \binom{n}{n/2} = \frac{n!}{\left(\left(\frac{n}{2}\right)!\right)^2}$,
\begin{align*}
P(S_n > m) &= 1 - \sum_{k=0}^{M}\binom{n}{k}\left(\frac{2}{3}\right)^{k}\left(\frac{1}{3}\right)^{n-k}\\
&\ge 1 - \sum_{k=0}^{M}\frac{n!\,2^{k}}{\left(\left(\frac{n}{2}\right)!\right)^{2} 3^{n}}\\
&\ge 1 - \frac{2^{m+1}\,n!}{\left(\left(\frac{n}{2}\right)!\right)^{2} 3^{n}}
\end{align*}
Taking limits and applying Stirling's approximation $n! \approx n^{n}e^{-n}\sqrt{2\pi n}$,
\begin{align*}
\lim_{n\to\infty} P(S_n > m) &\ge 1 - 2^{m+1}\lim_{n\to\infty}\frac{n^{n}e^{-n}\sqrt{2\pi n}}{3^{n}\left(\left(\frac{n}{2}\right)^{n/2}e^{-n/2}\sqrt{2\pi\frac{n}{2}}\right)^{2}}\\
&= 1 - 2^{m+1}\lim_{n\to\infty}\frac{n^{n}e^{-n}\sqrt{2\pi n}}{3^{n}\left(\frac{n}{2}\right)^{n}e^{-n}\,2\pi\frac{n}{2}}\\
&= 1 - 2^{m+1}\lim_{n\to\infty}\frac{2^{n}\sqrt{2\pi n}}{3^{n}\,2\pi\frac{n}{2}}\\
&= 1 - 2^{m+1}\cdot 0 = 1
\end{align*}
Therefore $\lim_{n\to\infty} P(S_n > m) = 1$.
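For a concrete sense of how quickly the last bound approaches $1$ (the values of $n$ below are chosen only for illustration): the quantity inside the final limit equals $\left(\frac{2}{3}\right)^{n}\frac{\sqrt{2\pi n}}{\pi n}$, which is roughly $4\times 10^{-3}$ at $n = 10$ and roughly $5\times 10^{-5}$ at $n = 20$; the geometric factor $(2/3)^{n}$ overwhelms the polynomial factor, which is why the limit is $0$.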

(4) (a) The series diverges.

Proof. Note that our summand is a positive, decreasing, continuous function, so the series converges if and only if the corresponding improper integral does (integral test). With the substitution $u = \log x$,
\[
\int_{2}^{\infty}\frac{1}{x\log x}\,dx = \int_{\log 2}^{\infty}\frac{du}{u} = \log\log x\,\Big|_{2}^{\infty} \to \infty
\]
Since the integral diverges, our series diverges.

(7) (a) The CDF of $X_n$ will look something like this:

[Figure: graph of $F_n(x)$ against $x$, plotted for $x$ between $-2$ and $2$.]

This can be described mathematically as
\[
F_n(x) =
\begin{cases}
0 & x < 0\\
nx & 0 < x < \frac{1}{n}\\
1 & x > \frac{1}{n}
\end{cases}
\]
However, taking limits as $n \to \infty$, we get $F_n(x) \to 0$ for $x \le 0$ and $F_n(x) \to 1$ for $x > 0$, so $X_n \xrightarrow{d} 0$.

(b) $X_n \sim N(0, \frac{1}{n})$. Consider the CDF of $X_n$, given by
\[
P(X_n \le z) = \int_{-\infty}^{z}\frac{\sqrt{n}}{\sqrt{2\pi}}\exp\!\left(\frac{-x^{2}n}{2}\right)dx
\]
Making the change of variables
\[
u = \frac{x\sqrt{n}}{2}, \qquad du = \frac{\sqrt{n}}{2}\,dx
\]
we get
\[
P(X_n \le z) = \int_{-\infty}^{z\sqrt{n}/2}\frac{2}{\sqrt{2\pi}}\exp\!\left(-2u^{2}\right)du
\]
Notice that when $z$ is positive and $n \to \infty$, the integral is just $1$; when $z$ is negative and $n \to \infty$, the integral goes to $0$. This tells us that
\[
P(X_n \le z) \to H(z) =
\begin{cases}
1 & z > 0\\
0 & z < 0
\end{cases}
\]
so $X_n \xrightarrow{d} 0$.

(c) $X_n$ has the pdf $1 - \cos(2\pi n x)$, with $x \in (0, 1)$. Claim that $X_n \xrightarrow{d} \mathrm{Unif}(0, 1)$. For $z \in (0, 1)$,
\begin{align*}
P(X_n \le z) &= \int_{0}^{z}\bigl(1 - \cos(2\pi n x)\bigr)\,dx\\
&= x - \frac{1}{2\pi n}\sin(2\pi n x)\,\Big|_{0}^{z}\\
&= z - \frac{1}{2\pi n}\sin(2\pi n z)
\end{align*}
Taking limits,
\[
\lim_{n\to\infty}\left(z - \frac{1}{2\pi n}\sin(2\pi n z)\right) = z
\]
So $X_n \xrightarrow{d} \mathrm{Unif}(0, 1)$.

(d) We have $Y_n \sim \mathrm{Pois}(n)$ and $X_n = \frac{Y_n - n}{\sqrt{n}}$. The CDF of $X_n$ is
\[
P(X_n \le z) = P\!\left(\frac{Y_n - n}{\sqrt{n}} \le z\right) = P\!\left(Y_n \le z\sqrt{n} + n\right)
\]
Let $Z_1, Z_2, \dots$ be independent with $Z_k \sim \mathrm{Pois}(1)$, and note that in the sense of distributions
\[
Y_n = \sum_{k=1}^{n} Z_k
\]
Also note that
\[
X_n = \frac{\sum_{k=1}^{n} Z_k - n}{\sqrt{n}}
\]
Then by the central limit theorem, $X_n \xrightarrow{d} N(0, 1)$.
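As a quick check that the normalization in (d) is the standard one for the central limit theorem: each $Z_k \sim \mathrm{Pois}(1)$ has $E[Z_k] = \operatorname{Var}(Z_k) = 1$, so
\[
X_n = \frac{\sum_{k=1}^{n} Z_k - n\,E[Z_1]}{\sqrt{n\operatorname{Var}(Z_1)}},
\]
which is exactly the standardized sum to which the central limit theorem applies.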

(8) Proof. Let $X_n$ and $Y_n$ be sequences of random variables converging in distribution to $X$ and $Y$ respectively, such that for each $n$, $X_n$ and $Y_n$ are independent. To prove that the joint distribution of $(X_n, Y_n)$ converges to that of $(X, Y)$, let the CDFs of $X_n$, $X$, $Y_n$ and $Y$ be $F_{X_n}$, $F_X$, $F_{Y_n}$ and $F_Y$ respectively, and let $F_{X_n,Y_n}(x, y)$ denote the joint CDF. Then
\[
F_{X_n,Y_n}(x, y) = P(X_n \le x,\, Y_n \le y) = P(X_n \le x)\,P(Y_n \le y) = F_{X_n}(x)\,F_{Y_n}(y)
\]
This follows from the definition of the CDF and independence. Let us restrict our attention to the set of points of continuity of the limiting joint CDF $F_{X,Y}$, and call it $S$. Taking $n \to \infty$, since the separate sequences $X_n$ and $Y_n$ converge in distribution, for $(x, y) \in S$ we have
\[
F_{X_n}(x)\,F_{Y_n}(y) \to F_X(x)\,F_Y(y) = F_{X,Y}(x, y)
\]
Hence $(X_n, Y_n) \xrightarrow{d} (X, Y)$.
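As a concrete illustration of the argument (the particular limits here are chosen only as an example): if $X_n$ and $Y_n$ are independent with $X_n \xrightarrow{d} N(0,1)$ and $Y_n \xrightarrow{d} N(0,1)$, then the factorization above gives $F_{X_n,Y_n}(x, y) \to \Phi(x)\Phi(y)$, the joint CDF of a pair of independent standard normal random variables.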