VIJAYA CHANDRAN RAMASAMI (KUID 698659)


Contents

List of Figures
1. Introduction
   1.1. Motivation for MUD
   1.2. Overview of the Report
2. System Model
   2.1. The Matched Filter
3. Optimal Multiuser Detection
   3.1. The Maximum-Likelihood Criterion
4. Generalized MMSE Multiuser Detection
   4.1. Other Variations
   4.2. The Gradient Projection Method
   4.3. Kuhn-Tucker Conditions
   4.4. Gradient Projection - Movement on the Constraint Edge
   4.5. Exception - Out of Bounds
   4.6. Movement inside the Constraint Space
   4.7. Stopping Criteria
   4.8. Computational Complexity
   4.9. Advantage
5. Results - Generalized MMSE MUD
   5.1. Test Case I
   5.2. Test Case II
   5.3. Test Case III - A Higher Dimensional Problem
   5.4. Observations
   5.5. Conclusion
6. MATLAB Code for Generalized MMSE Detection
   6.1. The Gradient Projection Method
   6.2. Experiment with 2 Users
   6.3. Experiment with 50 Users
7. Linear MUD
   7.1. Derivation of the MSE
   7.2. Method of Steepest Descent
   7.3. Method of Steepest Descent - Noisy Gradient
   7.4. Time-Varying Step Size (µ)
   7.5. "Noisy" Line Search
   7.6. Convergence
8. Results for Linear MMSE MUD
   8.1. Using Exact Gradient
   8.2. Convergence Rate - Using Noisy Gradient
9. Code for Linear MMSE MUD
   9.1. Using Exact Gradient
   9.2. Using Noisy Gradient
References

EECS 967 - PROJECT MMSE MULTIUSER DETECTION


List of Figures

1. The Matched Filter
2. Exception for the Gradient Projection Algorithm
3. Gradient Projection Algorithm Execution
4. Error between the objective functions (optimal and k-th iteration)
5. α_k Variation
6. Convergence of the Gradient Projection Algorithm for Test Case 2
7. α plot for Gradient Projection (Test Case III)
8. Accuracy vs. Iterations
9. Linear Transformation on the statistic (y_1, ..., y_M)
10. Convergence Rate vs. Eigenvalue Spread
11. Convergence Rate vs. Eigenvalue Spread (fixed α_k)
12. Weight Vector Convergence
13. Line Search α_k Variation
14. Residual Error with Fixed and Varying Step Sizes
15. Residual Error with Noisy Line Search

1. Introduction

Multiuser Detection (MUD) techniques are one of the most important recent advances in communications technology. MUD deals with the optimal detection of mutually interfering digital streams of information that occur in various communication systems based on TDMA, FDMA, CDMA etc. In this project, the specific case of CDMA MUD based on pseudo-random signature sequences is considered. Specifically, the MMSE optimization criterion is considered in detail and the convergence characteristics of various iterative techniques are studied. A gradient projection algorithm for implementing generalized MMSE multiuser detection is also presented.

1.1. Motivation for MUD. The conventional wisdom for demodulating mutually interfering digital streams of information was the matched filter followed by a detector, where the cross-talk between users is neglected as additive white Gaussian noise. In this technique, each matched filter is matched to its corresponding signature sequence. In reality, the cross-talk between users is highly structured, with the structure being described in some sense by the cross-correlation matrix of the signature sequences. Thus a detector that takes into account the structure of the cross-talk can provide better performance than one that ignores it as noise. This is the basic motivation behind the multiuser detection techniques: the exploitation of the information contained in the cross-talk to provide optimal detection.

1.2. Overview of the Report.
• The second section of the report describes the multiple access model used for the project.
• The third section of the report describes Optimal Multiuser Detection and the problems associated with it.
• The fourth section describes the Generalized Multiuser Detector and an implementation of it using the gradient projection method.
• The fifth and sixth sections present the results and code for the Gradient Projection algorithm respectively.
• The seventh section describes the problem of Linear Multiuser Detection and various solution methods.
• The eighth and ninth sections present the results and code for Linear MMSE MUD respectively.

2. System Model

The model presented here is the discrete-time synchronous CDMA model [1]. Consider a CDMA multiple-access system with M users, with the k-th user transmitting a bit-stream b_k over an AWGN channel. Here, a one-shot model is used, since only one bit duration is considered; this one-shot model is enough for analysis if synchronous CDMA is considered. Let the k-th user be assigned a signature sequence s_k[n] and let the received amplitude of the k-th user be A_k. The combined received signal from the channel is given by the sum:

(1)  y(n) = Σ_{k=1}^{M} A_k b_k s_k(n) + σℵ(n)

where:
• b_k ∈ {+1, −1} is the input bit corresponding to the k-th user,
• A_k is the received amplitude of the k-th user,
• ℵ(n) is a white Gaussian noise sequence with unit PSD,
• σ is the standard deviation of the noise present in the channel.

Let the normalized cross-correlations of the signature waveforms be defined as:

(2)  ρ_ij = <s_i, s_j> = Σ_{l=1}^{N} s_i(l) s_j(l)

where N is the length of the signature sequence. The signature waveforms are selected such that ρ_ii = 1 (so that the cross-correlations are normalized). Further, we can define a cross-correlation matrix R as:

(3)  R = [ ρ_11 ρ_12 ... ρ_1M ;
           ρ_21 ρ_22 ... ρ_2M ;
           ...
           ρ_M1 ρ_M2 ... ρ_MM ]

The R-matrix has the following properties:
• It is symmetric.
• The diagonal elements are equal to 1 (normalized).
• In general, the matrix is non-negative definite.

2.1. The Matched Filter. The matched filter in a digital communication system is used to generate sufficient statistics for signal detection. In the case of a multiple-access system, the receiver consists of a bank of matched filters, each matched to the corresponding user's signature waveform, as shown in fig. (1). The output of the j-th correlator is given by:

(4)  y_j = Σ_{n=1}^{N} y(n) s_j(n)

When expanded, the above equation becomes:

(5)  y_j = Σ_{k=1}^{M} A_k b_k Σ_{n=1}^{N} s_k(n) s_j(n) + Σ_{n=1}^{N} ℵ(n) s_j(n)

(6)      = Σ_{k=1}^{M} A_k b_k ρ_jk + n_j

The above expression can be written in a compact matrix notation form:

(7)  y_j = r_j^T A b + n_j

where:
• r_j = [ρ_j1, ρ_j2, ..., ρ_jM]^T is the cross-correlation vector of the j-th user with all the other users,
• A = diag(A_1, ..., A_M) is the matrix of received signal amplitudes,
• b = [b_1, ..., b_M]^T is the vector of the received bits.

If the outputs of all the users are stacked up, we get:

(8)  [y_1; ...; y_M] = [ρ_11 ... ρ_1M; ...; ρ_M1 ... ρ_MM] diag(A_1, ..., A_M) [b_1; ...; b_M] + [n_1; ...; n_M]

In compact matrix notation, the above equation can be represented as:

(9)  y = R A b + n

The noise at the output of the matched filter, n, can be statistically characterized as follows:

(10)  E{n_i n_j} = E{ ( Σ_{n=1}^{N} σℵ(n) s_i(n) ) ( Σ_{l=1}^{N} σℵ(l) s_j(l) ) }

Each output noise sample can be written as an inner product:

(11)  n_j = σ (ℵ(1), ..., ℵ(N)) (s_j(1), ..., s_j(N))^T

Using the fact that the input noise is a unit-variance white Gaussian random sequence (so that its correlation matrix is an identity matrix), the above equation simplifies to:

(12)  E{n_i n_j} = σ² Σ_{n=1}^{N} s_i(n) s_j(n) = σ² ρ_ij

Thus, the covariance matrix of the output of the matched filter is given by:

(13)  E{n n^T} = [E{n_i n_j}]_{(i,j)} = σ² R

Figure 1. The Matched Filter

Using the model (eqn. (9)) developed in this section, the problem of optimal multiuser detection is presented in the next section.
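As a quick numerical check of this model, the following NumPy sketch (all parameter values and variable names here are illustrative assumptions, not taken from the report) builds normalized random signatures, forms R, and verifies that the bank of correlators reproduces the stacked form y = RAb + n of eqn. (9):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 64                       # users, chip length (assumed values)
S = rng.standard_normal((M, N))
S /= np.linalg.norm(S, axis=1, keepdims=True)   # rho_ii = 1, eq (2)

R = S @ S.T                        # cross-correlation matrix, eq (3)
A = np.diag([1.0, 0.8, 1.2, 0.9])  # received amplitudes (assumed)
b = np.array([1.0, -1.0, 1.0, -1.0])
sigma = 0.1

noise = sigma * rng.standard_normal(N)   # sigma * aleph(n)
r = (A @ b) @ S + noise            # received chip sequence y(n), eq (1)

y = S @ r                          # matched-filter bank outputs, eq (4)
n = S @ noise                      # correlated output noise
assert np.allclose(y, R @ A @ b + n)    # stacked form, eq (9)
```

The assertion holds exactly because the correlator bank is the same linear map S applied to both the signal and the noise part of the received sequence.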

3. Optimal Multiuser Detection

Using the model developed in the previous section, the problem of optimal multiuser detection can be stated as follows: Given the statistic (y_1, ..., y_M) from the output of the matched filter, find an estimate of the transmitted bit-vector (b_1, ..., b_M) that optimizes (minimizes) the probability of error. The probability of error is chosen as the optimization criterion since it is the most important criterion for measuring the efficiency of a digital communication system.

3.1. The Maximum-Likelihood Criterion. It is a well-known fact [4] that, in the case of detecting signals corrupted by additive white Gaussian noise (AWGN), the decoder that minimizes the probability of error is the Maximum-Likelihood (ML) decoder. The ML criterion is based on selecting the input bit-vector that minimizes the Euclidean distance between the transmitted symbol (corresponding to the input bits) and the received symbol. In the case of multiuser detection, the Euclidean distance between a transmitted symbol vector corresponding to the input bit-vector b and the received signal is given by [1]:

(14)  d²(b) = Σ_{n=1}^{N} ( y(n) − Σ_{k=1}^{M} A_k b_k s_k(n) )²

Expanding the above expression, we get:

(15)  d²(b) = Σ_{n=1}^{N} y(n)² − 2 Σ_{k=1}^{M} A_k b_k Σ_{n=1}^{N} y(n) s_k(n) + Σ_{n=1}^{N} ( Σ_{k=1}^{M} A_k b_k s_k(n) )²

The first term in the expression is independent of b and so it can be removed from the minimization; instead we define a likelihood function Ω(b) that differs from −d²(b) by a constant. Using the definition of y_j in equation (4) and the definitions of A and b, the b-dependent part of (15) is:

(16)  d²(b) − Σ_{n=1}^{N} y(n)² = −2 b^T A y + b^T A R A b

Since minimizing a function is the same as maximizing its negative, the problem of optimal multiuser detection can be stated as:

(17)  Maximize Ω(b) = 2 b^T A y − b^T A R A b
(18)  Subject to b ∈ {+1, −1}^M

The maximization problem stated above is a combinatorial optimization problem, since the variables of the optimization problem are limited to a finite set.

The straightforward method for solving such a combinatorial optimization problem is an exhaustive search over all the possibilities. In the above case, there are 2^M possibilities, so the complexity required for decoding M bits of data is O(2^M). (For Q-ary modulation, there are Q^M possibilities!) Thus the search space increases in a geometric fashion with the number of users. It has been shown by Verdú [1] that this combinatorial optimization problem is NP-hard, so no algorithm whose computational complexity is a polynomial in the number of users is known.
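The exhaustive search can be sketched in a few lines; the detector below simply scores all 2^M candidate bit vectors with Ω(b) from (17)-(18). The demo values are assumptions for illustration, not one of the report's test cases:

```python
import itertools
import numpy as np

def ml_detect(y, R, A):
    """Exhaustive ML search: score all 2^M bit vectors with
    Omega(b) = 2 b'Ay - b'ARAb, eqs (17)-(18)."""
    best, best_val = None, -np.inf
    for bits in itertools.product([1.0, -1.0], repeat=len(y)):
        b = np.array(bits)
        val = 2 * b @ A @ y - b @ A @ R @ A @ b
        if val > best_val:
            best, best_val = b, val
    return best

# demo with assumed (not the report's) values
R = np.array([[1.0, 0.5], [0.5, 1.0]])
A = np.eye(2)
b_true = np.array([1.0, -1.0])
y = R @ A @ b_true + np.array([0.05, -0.02])  # low-noise matched-filter output
b_hat = ml_detect(y, R, A)
```

The loop makes the O(2^M) cost explicit: doubling the number of users doubles the number of candidates to score.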

4. Generalized MMSE Multiuser Detection

The constraint space in the case of optimal ML multiuser detection is the set of vertices of an M-dimensional hypercube. If this constraint space is extended to the M-dimensional sphere that passes through all the vertices, we arrive at the following optimization problem:

(19)  Maximize Ω(b) = 2 b^T A y − b^T A R A b
(20)  Subject to b^T b ≤ M

Note that the solution to this problem does not, in general, belong to {+1, −1}^M; the sign of the solution represents the actual solution to the original problem. This detector is called the generalized MMSE detector since its closed-form solution is a generalized version of all the other MMSE detectors (with additional restrictions), as shown by [2]. It is important to note that in the optimal detector and in the generalized MMSE detector, there is no restriction on the types of operations allowed on the input statistic y_1, ..., y_M. If the operations allowed on the input statistic are restricted to be linear, the optimization leads to the linear MUD.

4.1. Other Variations. Other variations of the constraint space are possible, including one in which the whole hypercube (a convex polytope) whose vertices correspond to the b vectors is used for optimization. This type of detector leads to the well-known Soft Interference Canceller [2].

4.2. The Gradient Projection Method. To begin, the given problem is converted to a minimization problem and scaled (for mathematical convenience):

(21)  Minimize Ω(b) = (1/2) b^T A R A b − b^T A y
(22)  Subject to b^T b ≤ M

Further, we define the following constants:

(23)  Q = A R A
(24)  q = A y

so that the given problem becomes:

(25)  Minimize (1/2) b^T Q b − b^T q
(26)  Subject to b^T b − M ≤ 0

Since the minimization is done on a convex function over a convex set, a unique minimum exists, and the above problem can be solved using the gradient projection method with non-linear constraints [3]. The steepest descent method can be used to arrive at the solution once the constraint becomes inactive.

4.3. Kuhn-Tucker Conditions. The Kuhn-Tucker conditions for Constrained Non-Linear Programming can be stated as follows [3]:

(27)  ∇Ω(b*) + µ ∇h(b*) = 0
(28)  µ h(b*) = 0
(29)  µ ≥ 0

For the given problem in hand, with h(b) = b^T b − M, the Kuhn-Tucker conditions are given by:

(30)  (Q b* − q) + 2µ b* = 0
(31)  µ (b*^T b* − M) = 0
(32)  µ ≥ 0

A possible solution to the Kuhn-Tucker conditions can be obtained by assuming the only constraint to be inactive (µ = 0) and then solving the resulting equation to get:

(33)  b_opt = Q^{-1} q
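The inactive-constraint candidate (33), together with a check of the constraint b^T b ≤ M, can be sketched as below. All numeric values here are assumptions for illustration:

```python
import numpy as np

# Assumed illustrative values (A = I as in the report's 2-user setting).
R = np.array([[1.0, 0.5], [0.5, 1.0]])
A = np.eye(2)
y = np.array([0.2, -0.4])        # assumed sample matched-filter output
M = 2

Q = A @ R @ A                    # eq (23)
q = A @ y                        # eq (24)

b_unc = np.linalg.solve(Q, q)    # candidate from eq (33), mu = 0
if b_unc @ b_unc <= M:           # constraint b'b <= M satisfied?
    b_opt = b_unc                # interior optimum; constraint inactive
else:
    b_opt = None                 # constraint active: use gradient projection

bits = np.sign(b_opt)            # detected bits are the signs of the solution
```

Using `solve` instead of forming Q^{-1} explicitly is the standard numerically safer choice for (33).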

The feasibility of this solution can be tested by substituting this value into the constraint:

(34)  (Q^{-1} q)^T (Q^{-1} q) ≤ M

Using the symmetric nature of Q and some matrix algebra, the above equation can be reduced to:

(35)  q^T Q^{-2} q ≤ M

This condition has to be satisfied if a solution to this problem exists such that the only constraint is inactive. The following few sections describe the Gradient Projection method as applied to MMSE Multiuser Detection.

4.4. Gradient Projection - Movement on the Constraint Edge. Let us assume that we start at a point on the constraint edge (i.e., the surface of the hypersphere). Finding such a starting point is trivial, since the point at which all the elements of b are 1's is one such point. Thus, the only constraint forms the working set to begin with. We now have to move in a direction given by the negative of the gradient, so as to minimize the function:

(36)  −g_k = −∇Ω(b_k)^T = −(Q b_k − q)

where k represents the iteration number. Since we have to satisfy the constraint, we first find the projection of this gradient onto the tangent plane of the surface represented by the working set. The projection matrix for such a projection is given by:

(37)  P_k = I − ∇h(b_k)^T ( ∇h(b_k) ∇h(b_k)^T )^{-1} ∇h(b_k)

In this context we have:

(38)  ∇h(b_k)^T = 2 b_k,  and thus  P_k = I − (b_k b_k^T)/(b_k^T b_k) = I − (b_k b_k^T)/M

The direction to move is now given by the negative of the gradient projected onto the tangent plane using the projection matrix computed:

(39)  d_k = −P_k g_k

The next thing to find is "how far" one should move in the projected direction so that the function is minimized as much as possible in that given direction. We have to find a value α that is a solution to the following unconstrained non-linear optimization problem:

(40)  Min f(α_k) = Ω(b_k + α_k d_k) = (1/2)(b_k + α_k d_k)^T Q (b_k + α_k d_k) − (b_k + α_k d_k)^T q

The problem is "unconstrained" because we ignore the constraints at this point. A closed-form solution can be obtained by using the First-Order Necessary Conditions (FONC), avoiding the use of line search techniques. The value of α_k that minimizes the above equation is given by [3]:

(41)  α_k = (d_k^T d_k) / (d_k^T Q d_k)

and thus the iteration for the gradient projection method can be represented as:

(42)  x_{k+1} = b_k + α_k d_k

In general, x_{k+1} may not fall in the constraint space; because of the non-linear nature of the constraints, x_{k+1} always falls outside the hypersphere unless d_k is zero. Thus we must find the projection of this new point x_{k+1} so that it falls back again on the constraint surface (of the working set). To do this we must solve the non-linear equations given by:

(43)  b_{k+1} = x_{k+1} + α ∇h(b_k)^T
(44)  h(b_{k+1}) = 0

Using the expression for ∇h(b_k) in the first equation and substituting the first equation into the second, we get:

(45)  (x_{k+1} + 2α b_k)^T (x_{k+1} + 2α b_k) = M
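One movement along the constraint surface, eqs. (36)-(42), can be sketched as follows. This is a minimal NumPy sketch with assumed values; the re-projection onto the sphere and the exception handling are deliberately omitted here:

```python
import numpy as np

def gp_step(b, Q, q, M):
    """One gradient-projection step on the sphere b'b = M (eqs 36-42).
    Returns x_{k+1}, which still has to be projected back onto the sphere."""
    g = Q @ b - q                              # gradient, eq (36)
    P = np.eye(len(b)) - np.outer(b, b) / M    # projection matrix, eq (38)
    d = -P @ g                                 # projected direction, eq (39)
    alpha = (d @ d) / (d @ Q @ d)              # exact minimizing step, eq (41)
    return b + alpha * d                       # x_{k+1}, eq (42)

# tiny check with assumed values: the move stays in the tangent plane
M = 2
Q = np.array([[1.0, 0.5], [0.5, 1.0]])
q = np.array([0.2, -0.4])
b = np.array([1.0, 1.0])       # starting point with b'b = M
x = gp_step(b, Q, q, M)
```

Because P_k projects onto the tangent plane, the displacement x − b is orthogonal to b, which is exactly why x lands slightly outside the sphere and must be projected back.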

Exception . then we can continue the unconstrained steepest descent method till we reach a speciﬁed accuracy. since the value of α1 and α2 at this situation will be imaginary. the value of bk+1 can be found as: (49) bk+1 = xk+1 + αp h(bk )T Once the value of bk+1 is found.Out of bounds. the above procedure is iterated until we arrive at the solution (or) we encounter some exceptions. if |α1 | ≤ |α2 | if |α1 | ≥ |α2 | Using the value of αp calculated above. we can declare that we are done at this point. On the other hand. Thus. • c1 = 4bT bk k • c2 = 4bT xk + 1 k • c3 = xT xk+1 − M k+1 The obvious choice for α is the one that is corresponds to the nearest point on the surface of the hyper-sphere. For the steepest decsent method. where the value of dk was reduced by half. this problem can be detected in this specﬁc case. This happens when we move far enough (on the tangent plane). Thus we have: (48) αp = α1 . The back-oﬀ used in this project was a binary back-oﬀ. 4.6. we drop the only constraint in the working set and start using the unconstrained steepest descent method. α2 = −c2 ± c2 − 4c1 c3 2 2c1 Where. Now. This is expected. in order to avoid numerical instabilities. we get the following equation in terms of α: (46) (4bT bk )α2 + (4bT xk+1 )α + (xT xk+1 − M ) = 0 k k k+1 There are two values of α that satisfy the above equation. Thus. These two points can be found by solving the above two equations as: (47) α1 . Movement inside the Constraint Space. Thus. we can “back-oﬀ” a speciﬁc distance and try again. that a projection of the new point on the constraint surface does not exist. Once detected. 2 While αp is imaginary 4. we have the back-oﬀ algorithm given by: (50) dk = dk . since a straight line intersects a sphere at two points.5. in the neighbourhood of the optimal point. The kinds of exceptions and methods to handle them are given in the following sections. This situation is illustrated in ﬁgure (2). 
this implies that we have already reached the point which miminizes the objective based on the given constraints. (52) αk = T gk g T Qg gk k bk+1 = bk − αk gk It must be noted that. This is true because of the quadratic (unimodal) nature of the objective function. before trying the next iteration. We are done once we reach that condition. An “exception” occur while proceeding with the projection algorithm as explained in the above section.10 VIJAYA CHANDRAN RAMASAMI (KUID 698659) Simplifying the above expression. gk ≈ 0. α2 . Fortunately. if the direction of movement happens to be inside the constraint space. the k th iteration is given by: (51) Where αk is now given by. . we must choose to stop the gradient algorithm once we reach a small enough value of gk . If the new direction of movement happens to be outside the constraint space. The possibility of movement inside the constraint space starts when dk = 0 (or a very small value).
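The projection back onto the sphere (46)-(49), the nearest-root choice (48), and the binary back-off (50) can be sketched together. This is a NumPy sketch with assumed helper names, not the report's MATLAB:

```python
import numpy as np

def project_back(b, x, M):
    """Project x (= b + alpha*d) back onto the sphere along grad h = 2b,
    choosing the nearest root of the quadratic (46)-(48). Returns None
    when both roots are complex (the 'out of bounds' exception, sec 4.5)."""
    c1 = 4 * b @ b                        # coefficients of eq (46)
    c2 = 4 * b @ x
    c3 = x @ x - M
    disc = c2 * c2 - 4 * c1 * c3
    if disc < 0:
        return None                       # no real projection exists
    r = np.sqrt(disc)
    a1, a2 = (-c2 + r) / (2 * c1), (-c2 - r) / (2 * c1)
    ap = a1 if abs(a1) <= abs(a2) else a2 # nearest point, eq (48)
    return x + 2 * ap * b                 # eq (49)

def safe_step(b, d, alpha, M):
    """Binary back-off, eq (50): halve d until the projection is real."""
    nxt = project_back(b, b + alpha * d, M)
    while nxt is None:
        d = d / 2
        nxt = project_back(b, b + alpha * d, M)
    return nxt

# example with assumed values: step from a point on the sphere b'b = 2
b = np.array([1.0, 1.0])
d = np.array([0.3, -0.3])                 # a tangent direction (b'd = 0)
b_next = safe_step(b, d, 1.0, 2.0)        # lands back on the sphere
```

Returning `None` for a complex root mirrors the report's detection of the exception via an imaginary α_p.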

Figure 2. Exception for the Gradient Projection Algorithm

4.7. Stopping Criteria. The Gradient Projection algorithm can be stopped once d or g have reached a sufficiently small value (provided some other conditions also hold). The question of "how small" d or g can become before the algorithm has to be stopped (or changed) must be answered. Let ε be a sufficiently small value, and let us decide to stop (or change) the gradient projection algorithm once the value of ||d|| or ||g|| reaches ε. A small value of ε results in good accuracy but a large number of iterations (and computation), while a large value of ε results in poor accuracy but a small number of iterations. Fortunately, in our case, since the output variables of the gradient projection algorithm will anyhow be quantized to bits, we can relax the accuracy: the value of ε can be as high as 0.1 without significant degradation in the performance.

4.8. Computational Complexity. The solution specified in [2] for the above optimization problem is given by:

(53)  b* = (Q + λ* I)^{-1} q

where λ* = max(0, λ̄), with λ̄ being the solution to the following optimization problem:

(54)  Maximize  −y^T A (Q + λI)^{-1} A y − λM,  subject to λ ≥ 0

Obviously, the above solution involves the computation of the inverse of an M × M matrix, which becomes impractical for extremely large values of M. The iterations involved in finding λ* are one-dimensional and hence computationally less complex, but the matrix inverse remains. The proposed gradient projection method, in contrast, involves only linear operations and no matrix inverses. Further, the algorithm is numerically robust, as seen from the flow-chart.

4.9. Advantage. The Generalized MMSE multiuser detector makes no assumptions on the properties (statistical and other) of the parameters b and y. Further, the algorithm does not require the value of the noise variance (σ²). Thus it is much more general than the Linear MMSE detector (which is discussed next) and other similar linear detectors.
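For comparison, the one-dimensional search for λ* behind the closed-form solution (53) can be sketched with a simple bisection on ||(Q + λI)^{-1} q||² = M. Bisection is this sketch's own assumption (the report and [2] do not specify the search method); the norm is decreasing in λ for λ ≥ 0, which is what makes the bracketing valid:

```python
import numpy as np

def lam_star(Q, q, M, hi=1e6, tol=1e-10):
    """Bisection sketch for lambda* in eq (53): find lambda >= 0 with
    ||(Q + lambda I)^{-1} q||^2 = M, returning 0 when the unconstrained
    solution is already feasible (lambda* = max(0, lambda_bar))."""
    def nrm2(lam):
        b = np.linalg.solve(Q + lam * np.eye(len(q)), q)
        return b @ b
    if nrm2(0.0) <= M:
        return 0.0                      # constraint inactive
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if nrm2(mid) > M:               # still outside the sphere
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# demo with assumed values where the constraint is active
Q = np.array([[1.0, 0.5], [0.5, 1.0]])
q = np.array([3.0, -3.0])
lam = lam_star(Q, q, M=2.0)
b_star = np.linalg.solve(Q + lam * np.eye(2), q)   # eq (53)
```

Each bisection step still requires an M × M solve, which is the cost the gradient projection method avoids.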

5. Results - Generalized MMSE MUD

The following test cases were used for the Gradient Projection algorithm.

5.1. Test Case I. The parameters were chosen such that the optimal point is an interior point:

(55)  R = [ 1 0.5 ; 0.5 1 ]

• A = I
• b = [1, −1]^T
• n = [−0.3, 0.1]^T
• y = [0.2, −0.4]^T

The optimal solution for this case is:

(56)  b* = R^{-1} y = [0.5333, −0.6667]^T

The following results were obtained from the iterations:
• Initial point: b_0 = [1, 1]^T.
• Number of iterations for convergence = 12.
• Final value: b_k = [0.5333, −0.6667]^T.
The final value obtained using the gradient projection method was the same as the one obtained using AMPL. The values of the different parameters (b_k, d_k, the gradient-projection and steepest-descent step sizes α_k, and the error) at each iteration are tabulated in Table 1. The plots (3, 4, 5) illustrate the convergence behaviour of the Gradient Projection Algorithm.

Table 1. Values of the parameters of the Gradient Projection Algorithm at each iteration

Figure 3. Gradient Projection Algorithm Execution

Figure 4. Error between the objective functions (optimal and k-th iteration). The flat region occurs because the value of d was allowed to converge to 1e−06 before the constraint was dropped.

Figure 5. α_k Variation

5.2. Test Case II. This time the parameters were chosen such that the optimal point lies on the constraint surface. The parameters that were changed from Test Case I are:
• y = [1.5, 0.5]^T
As expected, the constraint was never dropped and the gradient algorithm continued along the constraint surface till it converged. The convergence is illustrated by figure (6).

Figure 6. Convergence of the Gradient Projection Algorithm for Test Case 2

5.3. Test Case III - A Higher Dimensional Problem. The Gradient Projection method was attempted with a 50-user synchronous CDMA system model. The following were the characteristics of the model (the parameters were chosen so that the optimal solution was inside the sphere):
• The spreading sequences were random Gaussian waveforms of chip length 200.
• The system had 50 users with equal power.
• The noise in the channel was AWGN with a variance of 0.0525 per bit.
• ε = 10^−3.
For the above case, we have q^T Q^{-2} q = 49.8, which is less than M = 50. Thus the optimal solution can be found as b* = Q^{-1} q. The following results were obtained for one specific run:
• No errors (because of the low noise variance).
• Number of iterations = 89. (This varied widely for different runs of the program, i.e., different combinations of input bits.)
• The value of α (gradient descent) is plotted in figure (7).

Figure 7. α plot for Gradient Projection (Test Case III)

5.4. Observations.

5.4.1. Number of Iterations. The above problem (Test Case I) was solved using AMPL, and it took 28 iterations compared to the 12 iterations using the coded MATLAB version. The reason for the faster convergence may be that the coded MATLAB version took into account more features of the problem (such as the quadratic nature of the problem, the convex nature of the hyper-surface, the single constraint, etc.) than the AMPL version (which is more general). The Gradient Projection method that was implemented (in Test Case I) stayed around the same point on the constraint edge for a large number of iterations before the only constraint in the working set was dropped. This situation can be improved if we choose ε to be greater than 10^−6, because the optimal solution will anyhow be quantized to the nearest vertex of the hypercube. For example, Test Case I was repeated with ε = 0.001 and convergence was obtained in 7 iterations compared to 12 in the previous case, with the optimal solution given by b* = [0.3973, −0.4028]. Increasing the value of ε does not sacrifice accuracy in this specific case.

5.4.2. Accuracy vs. Iterations. A small test with 10 users was performed to measure the trade-off between accuracy and the number of iterations, and the graph (8) indicated an exponential decrease in the number of

iterations with increasing the value of ε. Bit errors did not occur for an increase of ε up to 0.1 for this specific case. The large number of iterations in this case (compared to the 50-user case) is because of using a different correlation matrix and a different value of the noise variance.

Figure 8. Accuracy vs. Iterations

5.5. Conclusion. A Gradient Projection algorithm was formulated and implemented to solve the problem of Multiuser Detection. This algorithm is much more efficient (in the computational sense) than optimal multiuser detection when the number of users is very large.

6. MATLAB Code for Generalized MMSE Detection

6.1. The Gradient Projection Method.

% MATLAB Routine for Generalized Multiuser Detection using the
% Gradient Projection Algorithm
%
% Parameters :
% -----------
% M        - # of Users.
% Q        - Q = A*R*A, where R = Cross-Correlation Matrix of the
%            Signature Sequences and A = User Power Distribution Matrix.
% q        - q = A*y, where y = Matched Filter Output.
% epsilon  - Stopping Criterion (Accuracy).
% maxIters - Maximum Number of Iterations (algorithm stopped after maxIters).
%
% Return Values :
% ---------------
% b         - Values of the Output bits (Variables) at every iteration.
% end_point - Iteration # in which the Gradient Projection algorithm
%             terminated.

function [b, end_point] = GradientProjection(M, Q, q, epsilon, maxIters)

% This flag will keep track of whether we are right now on the
% Constraint Surface (or) inside it.
theflag = 0;

% Allocate and initialize the values of the variables (b).
b = zeros(M, maxIters);
b(:,1) = ones(M, 1);

% Distance travelled in every iteration (just for plotting).
dist = zeros(M, maxIters);
alpha_grad = zeros(1, maxIters);
alpha_steep = zeros(1, maxIters);
Error = zeros(1, maxIters);

% Compute the actual solution (using Matrix Inversion),
% only for comparison purposes.
b_opt = inv(Q)*q;
ObjOpt = 0.5*b_opt'*Q*b_opt - b_opt'*q;

% End Point - initialize to maxIters.
end_point = maxIters;

I = eye(M, M);

for index = 1:maxIters,

    % The Gradient.
    g_k = Q*b(:,index) - q;

    % Compute the Projection Matrix.
    P_k = I - (b(:,index)*b(:,index)')/M;

    % Project the gradient onto the tangent plane.
    d_k = -P_k*g_k;

    % Check if the constraint should be dropped in this iteration.
    if (norm(d_k) <= epsilon | theflag == 1)

        % Evaluate the gradient once again (for steepest descent).
        g_k = Q*b(:,index) - q;

        % If the gradient value is less than epsilon, we are done !!
        if (norm(g_k) <= epsilon)
            end_point = index;
            break;
        end

        % Ignoring the constraints, find the maximum possible distance
        % that can be moved in the direction of the negative gradient,
        % so that the given function is minimized (note : Convexity).
        alpha_k = (g_k'*g_k)/(g_k'*Q*g_k);
        alpha_steep(index) = alpha_k;

        % Move in that direction and find the new point.
        b(:,index+1) = b(:,index) + alpha_k*(-g_k);
        dist(:,index) = -g_k;

        % We are now inside the constraint surface; set the flag to 1
        % so that we remember it next time.
        theflag = 1;

    else

        % Else, proceed with the ordinary gradient projection method.
        % Find the maximum distance to move in the projected direction,
        % so that the given function is minimized.
        alpha_k = (d_k'*d_k)/(d_k'*Q*d_k);
        alpha_grad(index) = alpha_k;

        % Move in the projected direction.
        x_k = b(:,index) + alpha_k*d_k;
        dist(:,index) = d_k;

        % Compute the projection of the new point on the constraint
        % surface. Two choices of alpha, since a straight line
        % intersects a circle (sphere) at two points.
        c1 = 4*b(:,index)'*b(:,index);
        c2 = 4*b(:,index)'*x_k;
        c3 = x_k'*x_k - M;
        alpha_1 = (-c2 + sqrt(c2*c2 - 4*c1*c3))/(2*c1);
        alpha_2 = (-c2 - sqrt(c2*c2 - 4*c1*c3))/(2*c1);

        % Ofcourse, you have to choose the nearest point.
        if (abs(alpha_1) > abs(alpha_2))
            alpha = alpha_2;
        else
            alpha = alpha_1;
        end

        % If the point x_k is too far away to provide a "real" alpha,
        % then back-off and try again (a binary back-off is used).
        while (real(alpha) ~= alpha)
            d_k = d_k/2;
            x_k = b(:,index) + alpha_k*d_k;

            % Do everything again.
            c1 = 4*b(:,index)'*b(:,index);
            c2 = 4*b(:,index)'*x_k;
            c3 = x_k'*x_k - M;
            alpha_1 = (-c2 + sqrt(c2*c2 - 4*c1*c3))/(2*c1);
            alpha_2 = (-c2 - sqrt(c2*c2 - 4*c1*c3))/(2*c1);

            % Choose the nearest point again.
            if (abs(alpha_1) > abs(alpha_2))
                alpha = alpha_2;
            else
                alpha = alpha_1;
            end
        end

        % Zoom over to that point and iterate.
        b(:,index+1) = x_k + 2*alpha*b(:,index);
    end

    % Compute the Error (for plotting).
    Error(index) = ObjOpt - (0.5*b(:,index+1)'*Q*b(:,index+1) - b(:,index+1)'*q);
end

% Trace the path for diagnostics (?!)
%plot(b(1,1:end_point), b(2,1:end_point), '*-');
%axis([-2 2 -2 2]);

b = b(:,1:end_point);

q.3]’. Error] = GradientProjection(M.5.Q.5. R = [1 0. end_point = 0.:)+b(2. %for theta = 0:0. %[b(1. [b. epsilon = 1e-02.01:2*pi. Error(maxIters) =0. y = R*[1 -1]’+[-0. % plot(x.3 0.*b(1.0. % Generate Random Bits. % Parameters. nUsers = 50. clear.b(2. M = 2. % x = r*cos(theta).*b(2. maxIters = 13. %ylabel(’b_k(2) --->’). clear.epsilon. nLength = 250.:)’. % y = r*sin(theta). noiseVarPerBit = 0. nLength). 6. % Parameters.:)’.nBits)>0.Error’] 6.(b(1.maxIters). % Generate Gaussian Signature Sequences.dist(2. alpha_grad(maxIters) = 0. .0425. for index = 1:nUsers. b(:. Q =R. dist(maxIters) = 0.:)’. alpha_steep(maxIters) = 0. b = 2*b-1.alpha_steep’.:)’. nBits = 1. Experiment with 2 Users. %xlabel(’b_k(1) --->’). b = zeros(2. b = rand(nUsers.EECS 967 .:). %r = sqrt(2). % Draw the constraint circle. q =y.1) = [1..maxIters).:))’.alpha_grad’.PROJECT MMSE MULTIUSER DETECTION 21 %hold on. Experiment with 50 Users.1]’.:).y).2.5 1]..3. S = zeros(nUsers. % . %end..dist(1.

end_point] = GradientProjection(nUsers.:).:)). Q = R.:)/rmsS. noiseSamples = (sqrt(noiseVarPerBit)/nLength)*randn(1. for index = 1:nUsers. end % Compute the output. for index = 1:length(epsilon).22 VIJAYA CHANDRAN RAMASAMI (KUID 698659) S(index.:). epsilon = 1e-03. rmsS = sqrt(mean(S(index. R = zeros(nUsers. R = toeplitz(R).epsilon(index). Epsilon varied. bout = sign(br(:.5000). % Since A = I. % Epsilon.:) = randn(1.1). % Generate Noise Samples and find the Matched filter % response to those. end % Calculate the Correlation Matrix R. for index = 1:nUsers.. y = R*b + n. q = y. % Gradient Projection.:)). n = zeros(nUsers.*S(index.nLength).nLength).end)) errors = sum(b ~= bout) end_pt(index) = end_point end . n(index) = sum(noiseSamples.1). S(index.^2)). R(index) = sum(S(1.q. end R = R/max(R)..*S(index.:) = S(index. [br.Q.
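The core geometric step in the listing above — projecting a trial point back onto the constraint surface ||b||² = M by solving the quadratic in α with coefficients c1, c2, c3 — can be sketched in a few lines. The following is an illustrative NumPy translation (Python is used here only for convenience; the names mirror the MATLAB variables, and the example point values are assumptions, not taken from the report):

```python
import numpy as np

def project_back(x_k, b_k, M):
    # Solve ||x_k + 2*alpha*b_k||^2 = M for alpha (the c1, c2, c3
    # quadratic above), then keep the nearer of the two roots.
    c1 = 4 * b_k @ b_k
    c2 = 4 * b_k @ x_k
    c3 = x_k @ x_k - M
    disc = c2 * c2 - 4 * c1 * c3
    if disc < 0:
        # No real alpha: the trial point is too far away, so the
        # caller must back off the step (the binary back-off above).
        raise ValueError("no real projection; back off the step")
    a1 = (-c2 + np.sqrt(disc)) / (2 * c1)
    a2 = (-c2 - np.sqrt(disc)) / (2 * c1)
    alpha = a2 if abs(a1) > abs(a2) else a1   # nearest point
    return x_k + 2 * alpha * b_k

b_k = np.array([1.0, 1.0])      # current point on ||b||^2 = 2
x_k = np.array([1.5, 0.2])      # trial point after the gradient step
b_new = project_back(x_k, b_k, 2.0)
print(b_new @ b_new)            # -> 2.0 (up to floating-point rounding)
```

Because the projection moves along the current point b_k, it lands the iterate exactly back on the sphere, which is what keeps the constraint active across iterations.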

7. Linear MUD

The optimization problem in the case of the Linear MMSE detector can be stated as follows: Given the statistic (y1, . . . , yM) from the output of the matched filter, find an estimate of the transmitted bit-vector (b1, . . . , bM) that minimizes the mean-squared error between the detected and the transmitted bits, using only linear operations on the given statistic. The restriction to linear operations is in place to guarantee ease of implementation.

Figure 9. Linear Transformation on the statistic (y1, . . . , yM)

7.1. Derivation of the MSE. Since only linear operations are allowed on the statistic y = (y1, . . . , yM)^T, we can define the MSE (the objective for optimization) for the j-th user as follows:

(57)  f_j(w) = E[(b_j - Σ_{i=1}^{M} w_i y_i)²]

where w = (w1, . . . , wM)^T are the weights attached to the statistic y. The weights w must be optimized to minimize the mean-squared error between the input bits and the weighted combination of the input statistic. The above expression can be expanded as follows:

(58)  f_j(w) = E{b_j²} − 2 Σ_{i=1}^{M} w_i E{b_j y_i} + E{w^T y y^T w}

Since the bits are either +1 or −1, we have E{b_j²} = 1. To simplify the second term in (58), define the vector q such that:

(59)  q = E{b_j y} = E{b_j RAb} = RA E{b_j b}

In the above equation it is assumed that the users' bits are uncorrelated with each other (a reasonable assumption), so that E{b_j b} = e_j, the j-th unit vector. We thus get:

(60)  q = RA e_j = A_j [ρ_{1j}, ρ_{2j}, . . . , ρ_{Mj}]^T

Further, since it is reasonable to assume that the input bits are uncorrelated with the noise, the last term in (58) can be simplified as:

(61)  E{y y^T} = E{RAbb^T AR} + E{nn^T}
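The moment calculations above can be sanity-checked numerically. The sketch below, under assumed example values (a 2-user R, A = I, σ² = 0.1 — none of these are mandated by the derivation), compares the empirical moments of y = RAb + n with the closed forms (60) and (62):

```python
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[1.0, 0.5], [0.5, 1.0]])   # assumed example correlation
A = np.eye(2)                            # A = I (unit amplitudes)
sigma2 = 0.1                             # assumed noise variance
j = 0                                    # desired user index
N = 200_000                              # Monte-Carlo trials

# n has covariance sigma^2 * R (the matched-filter noise statistics).
L = np.linalg.cholesky(sigma2 * R)
b = rng.choice([-1.0, 1.0], size=(N, 2))
n = rng.standard_normal((N, 2)) @ L.T
y = b @ (R @ A).T + n                    # y = RAb + n, one row per trial

q_hat = (b[:, j:j + 1] * y).mean(axis=0)   # empirical E{b_j y}
Q_hat = y.T @ y / N                        # empirical E{y y^T}

q = R @ A[:, j]                            # (60): A_j [rho_1j, rho_2j]^T
Qm = R @ A @ A @ R + sigma2 * R            # (62): R A^2 R + sigma^2 R
print(np.allclose(q_hat, q, atol=0.05), np.allclose(Q_hat, Qm, atol=0.05))
# -> True True
```

The agreement confirms that the cross terms between bits and noise average out, which is exactly the independence assumption used to drop them in (61).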

Using the independence of the users' input bits, we have E{bb^T} = I. Thus, using the expression for the noise covariance matrix in (13), we get:

(62)  E{y y^T} = RA²R + σ²R

Combining the results obtained in (62) and (60) in equation (58), we get:

(63)  f_j(w) = 1 − 2q^T w + w^T (RA²R + σ²R) w

Thus, the MSE comes out to be quadratic in w, as expected.

7.2. Method of Steepest Descent. The optimal solution can be readily found from the FONC as follows:

(64)  ∇_w f_j(w) = −2q + 2(RA²R + σ²R)w = 0

which gives the following solution:

(65)  w_opt = (RA²R + σ²R)^{−1} q

An iterative solution to the LMMSE optimization problem can also be obtained using the method of steepest descent. To apply the method, we re-write equation (63) as:

(66)  f(w) = (1/2) w^T Q w − q^T w

where
• Q = RA²R + σ²R
• q^T = [ρ_{1j}, . . . , ρ_{Mj}] A_j

The iterations in the steepest descent method are then given by [3]:

(67)  w_{k+1} = w_k + α_k (−g_k)
(68)  α_k = (g_k^T g_k)/(g_k^T Q g_k)

where the gradient g_k is given by:

(69)  g_k = ∇f(w_k)^T = Q w_k − q

7.3. Method of Steepest Descent - Noisy Gradient. In the previous derivations, it was assumed that the values of the auto-correlation matrix Q = R_yy = E[yy^T] and the cross-correlation vector q = r_{b_j y} = E[b_j y] were known (specifically, the values of R, A and σ² must be known). In many cases, these values cannot be found a priori, so time-averaged approximations need to be used instead of statistical averages. A "one-shot" solution as explained in the previous section is not possible in this case, and each iteration of the gradient algorithm now occurs in a bit-interval. The samples are then represented as functions of the time index, as y(i), b_j(i), etc. A time-averaged approximation (over p samples of y) for the values of Q and q is given by:

(70)  Q̂ = Σ_{i=k−p}^{k} y(i) y(i)^T
(71)  q̂ = Σ_{i=k−p}^{k} b_j(i) y(i)

Alternatively, the instantaneous values of these parameters can be used. The case in which the instantaneous values are used is called the method of steepest descent with Noisy gradient. In this case, the gradient vector g_k is given by:

(72)  g_k = Qw − q ≈ y(k)y(k)^T w_k − b_j(k) y(k) = y(k)(y(k)^T w_k − b_j(k))

The gradient algorithm then becomes:

(73)  w_{k+1} = w_k + µ(b_j(k) − y(k)^T w_k) y(k)

The value µ represents the step-size of the iterations; a proper line-search is not performed, for the sake of simplicity. To apply this method, the values of b_j(i) are needed for the iterations. Normally, a training sequence with known values of b_j(i) is used for the initial convergence of the algorithm, after which the detected bits can be fed back.
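As a concrete check of iterations (67)-(69), the following sketch runs the exact-gradient recursion and compares it against the one-shot solution (65). The example values (R from (79), A = I, σ² = 0.1, desired user 1) are assumptions made for the illustration:

```python
import numpy as np

R = np.array([[1.0, 0.5], [0.5, 1.0]])
sigma2 = 0.1
Q = R @ R + sigma2 * R        # Q = R A^2 R + sigma^2 R with A = I
q = R[:, 0]                   # q = A_1 [rho_11, rho_21]^T, eq. (60)

w = np.zeros(2)
for _ in range(100):
    g = Q @ w - q                      # gradient, eq. (69)
    if g @ g < 1e-24:                  # already at the minimum
        break
    alpha = (g @ g) / (g @ Q @ g)      # exact line search, eq. (68)
    w = w + alpha * (-g)               # update, eq. (67)

w_opt = np.linalg.solve(Q, q)          # "one-shot" solution, eq. (65)
print(np.allclose(w, w_opt))           # -> True
```

Because f(w) is a convex quadratic, the exact line-search iterates converge to w_opt for any starting point, which is why the comparison succeeds.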

7.4. Time-Varying Step Size (µ). The step-size parameter determines the convergence rate of the gradient algorithm. If µ is chosen to be large, then the algorithm converges rapidly, but the residual noise (and thus the residual MSE) is very high. On the other hand, if the value of µ is chosen to be very small, the residual MSE is low, but the algorithm takes a long time to converge. A solution to this problem is to use a time-varying step-size, i.e., a large value of µ initially, reduced as the algorithm progresses. A simple time-variation is given by:

(74)  µ(i) = µ0 γ^i,  i ≥ 0

where γ is a number very close to 1 (but less than one) and µ0 is the initial value of µ. The choice of the parameters µ0 and γ is problem specific.

7.5. "Noisy" Line Search. An approximation of the line search can also be performed. In this case, an approximation of α_k can be obtained as follows:

(75)  α_k = (g_k^T g_k)/(g_k^T Q g_k) ≈ [y(k)^T y(k) (y(k)^T w_k − b_j)²] / [y(k)^T (y(k)y(k)^T) y(k) (y(k)^T w_k − b_j)²]

Simplifying, we get:

(76)  α_k = 1/(y(k)^T y(k))

Thus the gradient algorithm becomes:

(77)  w_{k+1} = w_k + (b_j − y(k)^T w_k) y(k) / (y(k)^T y(k)),  y(k)^T y(k) ≠ 0

Obviously, the above algorithm is susceptible to numerical errors because of the noisy line-search parameter.

7.6. Convergence. The convergence of the steepest descent method (using both the exact and noisy gradients) depends on the eigen-structure of the correlation matrix Q. For the steepest descent method using the exact gradient, the convergence rate (the ratio of the errors before and after an iteration) is given by [3]:

(78)  rate = (r − 1)² / (r + 1)²

where r is the eigen-value spread of Q. In this case of Linear MUD, the following behaviour was noticed with respect to the parameters influencing the convergence of steepest descent:
• When there is a wide discrepancy in the received powers of the users, the eigen-values of the correlation matrix become wide-spread, and this leads to slow convergence.
• When the variance of the noise is increased, the eigen-values of the correlation matrix tend to be closer to each other. This is because an increased noise variance adds a strong diagonal component to the correlation matrix.
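A minimal simulation of the noisy-gradient recursion (73) with the decaying schedule (74) looks as follows. The scenario here (R, σ², µ0, γ, 5000 training bits) is assumed for illustration only and is not taken from the report's experiments:

```python
import numpy as np

rng = np.random.default_rng(1)
R = np.array([[1.0, 0.5], [0.5, 1.0]])
sigma2 = 0.1
L = np.linalg.cholesky(sigma2 * R)       # noise covariance sigma^2 R
mu0, gamma = 0.1, 0.999                  # assumed schedule parameters

w = np.zeros(2)
for k in range(5000):
    b = rng.choice([-1.0, 1.0], size=2)       # known training bits
    y = R @ b + L @ rng.standard_normal(2)    # matched-filter statistic
    mu = mu0 * gamma**k                       # step size, eq. (74)
    w = w + mu * (b[0] - y @ w) * y           # noisy-gradient step (73)

# Compare against the exact LMMSE solution (65).
w_opt = np.linalg.solve(R @ R + sigma2 * R, R[:, 0])
print(np.linalg.norm(w - w_opt))              # small residual error
```

The large early steps give fast initial convergence, while the decayed late steps shrink the stochastic fluctuation around w_opt, which is the trade-off section 7.4 describes.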

8. Results for Linear MMSE MUD

8.1. Using Exact Gradient. The Steepest Descent method using the exact gradient ("one-shot" optimization) was implemented using the following parameters:

(79)  R = [1 0.5; 0.5 1]

• A = I.
• Noise Variance σ² = 5 and 0.001.

8.1.1. Effect of Eigen-Value Spread. The results obtained for both values of the noise variance are plotted in figure (10). It can be clearly seen that an increased noise variance leads to faster convergence, because of the decrease in the eigen-value spread of the correlation matrix Q. In fact, the eigen-value spread for σ² = 5 was 3.5455, while the eigen-value spread for σ² = 0.001 was 8.9880. When a proper line-search is not performed (a fixed α_k = 0.1 was used), the degradation in the convergence behaviour due to the eigen-value spread is still worse, as seen in figure (11).

8.1.2. Weight Vectors. The final weight vector obtained for the σ² = 0.001 case is [1.3191, −0.6589], which is close to the value [1.3311, −0.6649] obtained using w = Q^{−1}q.

8.1.3. Convergence. The convergence behaviour of steepest descent with σ² = 0.001 is plotted in figure (12), along with the value of α (figure (13)). It can be seen that the distance moved decreases with each iteration.

Figure 10. Convergence Rate Vs. Eigen-Value Spread
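The quoted spreads and the optimal weight vector follow directly from Q = RA²R + σ²R with the R of (79) and A = I, and can be reproduced in a few lines (NumPy sketch):

```python
import numpy as np

R = np.array([[1.0, 0.5], [0.5, 1.0]])   # R from (79), A = I

def spread(sigma2):
    # Eigen-value spread of Q = R A^2 R + sigma^2 R.
    lam = np.linalg.eigvalsh(R @ R + sigma2 * R)
    return lam.max() / lam.min()

print(round(spread(5.0), 4))       # -> 3.5455
print(round(spread(0.001), 4))     # -> 8.988

# Optimal weights for sigma^2 = 0.001, eq. (65) with q = R[:, 0].
w_opt = np.linalg.solve(R @ R + 0.001 * R, R[:, 0])
print(np.round(w_opt, 4))          # -> [ 1.3311 -0.6649]
```

Since R and R² share eigenvectors, the eigenvalues of Q are λ² + σ²λ for each eigenvalue λ of R; the σ² term inflates the smaller eigenvalue proportionally more, which is why a larger noise variance shrinks the spread.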

Figure 11. Convergence Rate Vs. Eigen-Value Spread (fixed α_k)

Figure 12. Weight Vector Convergence

Figure 13. Line Search α_k Variation

8.2. Convergence Rate - Using Noisy Gradient. The Steepest Descent method was implemented using a Noisy Gradient. The only parameter changed from the previous case is the value of the noise variance (σ² = 1). Further, a fixed value of the step-size parameter µ = 3 × 10⁻³ was used. The resulting error vs. iteration is plotted in figure (14). The error plotted in the following figures represents the residual error between the optimal MUD outputs (using the matrix inverse) and the actual outputs of the iterative Multiuser Detector. The procedure was repeated for a variable value of µ such that:

(80)  µ_i = (3 × 10⁻⁴) × (0.995)^i

The resulting error vs. iteration plot is also given in figure (14). The results illustrate the potential advantage gained by using a time-varying step-size instead of a stationary one. Further, a case in which the noisy line search was used is illustrated in figure (15). It can be seen that the noisy line search converges much more rapidly than the case with the fixed step-size, but has a much higher residual error.

Figure 14. Residual Error with Fixed and Varying Step Sizes
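The fixed-vs-decaying trade-off shown in figure (14) can be illustrated with a small simulation. Note that the scenario below (equal user powers, σ² = 1, step sizes 0.05 and 0.2 × 0.998^i) is assumed for the sketch and deliberately differs from the report's exact parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
R = np.array([[1.0, 0.5], [0.5, 1.0]])
sigma2 = 1.0
L = np.linalg.cholesky(sigma2 * R)
Q = R @ R + sigma2 * R
w_opt = np.linalg.solve(Q, R[:, 0])       # exact LMMSE weights (65)

def run(mu_of_k, n_bits=2000):
    # Noisy-gradient adaptation (73) with a caller-supplied step rule.
    w = np.zeros(2)
    for k in range(n_bits):
        b = rng.choice([-1.0, 1.0], size=2)
        y = R @ b + L @ rng.standard_normal(2)
        w = w + mu_of_k(k) * (b[0] - y @ w) * y
    return np.linalg.norm(w - w_opt)      # residual weight error

err_fixed = run(lambda k: 0.05)               # stationary step size
err_decay = run(lambda k: 0.2 * 0.998**k)     # schedule of form (80)
print(err_fixed, err_decay)
```

With the decaying schedule, the final steps are tiny, so the residual error is typically much smaller, at the cost of slower late adaptation — the same trade-off figure (14) illustrates.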

Figure 15. Residual Error with Noisy Line Search

9. Code for Linear MMSE MUD

9.1. Using Exact Gradient.

function [eigenspread, Wopt, W, alpha, Error, Objective, ObjOpt] = ...
         SteepestDescent(R, noiseVar, Au, dUser)
% Parameters
% ----------
% R        - Correlation Matrix of the signature sequences.
% noiseVar - Noise Variance.
% Au       - User Power Distribution.
% dUser    - Desired User.

% Try till a max of 10 iterations.
nIters = 10;

% User Power Matrix.
A = diag(Au);

% Compute Q and q (called g here).
Q = R*A*A*R + noiseVar*R;
g = R(:,dUser)*Au(dUser);

% Compute the Eigen Spread.
lamb = eig(Q);
eigenspread = max(lamb)/min(lamb);

% Optimal Weights (using Matrix Inverse).
Wopt = inv(Q)*g;
ObjOpt = 0.5*Wopt'*Q*Wopt - g'*Wopt;

% Initialize Weights and Gradient.
W = zeros(2,nIters);
delf = zeros(2,nIters);

% Iterate.
for index = 1:nIters,
    % Compute the Gradient at the given point.
    delf(:,index) = Q*W(:,index) - g;
    % Compute the value of alpha -- line search.
    alpha(index) = (delf(:,index)'*delf(:,index)) ...
                   /(delf(:,index)'*Q*delf(:,index));
    % alpha(index) = 0.1;   % fixed step size, for comparison
    % Weight Adaptation, towards the minimum.
    W(:,index+1) = W(:,index) + alpha(index)*(g - Q*W(:,index));
    % Compute the Objective function at this point and the error in
    % the Objective Function.
    Objective(index) = 0.5*W(:,index)'*Q*W(:,index) - g'*W(:,index);
    Error(index) = abs(ObjOpt - Objective(index));
end
Error = Error/max(Error);

9.2. Using Noisy Gradient.

clear;

% Parameters.
nUsers = 2;
nBits = 2000;
nLength = 4;
noiseVar = 1;
Au = [1 10];
A = diag(Au);

% Generate Bits and Signature Sequences.
b = rand(nUsers,nBits) > 0.5;
b = 2*b - 1;
S = zeros(nUsers,nLength);
S(1,:) = 0.5*[1 1 1 1];
S(2,:) = 0.5*[1 1 1 -1];
% These signature sequences give a correlation
% matrix of R = [1 0.5; 0.5 1].

% Compute R, g and Q. Note - R has a Toeplitz structure.
R = zeros(nUsers,1);
for index = 1:nUsers,
    R(index) = sum(S(1,:).*S(index,:));
end
R = toeplitz(R);
Q = R*A*A*R + noiseVar*R;
g = R(:,1);
w = inv(Q)*g
ActMSE = 1 - 2*g'*w + w'*Q*w

% Actual Spreading. Weigh the User Bits based on their
% Power distribution.
y = zeros(nUsers,nBits*nLength);
for u_index = 1:nUsers,
    for b_index = 1:nBits,
        y(u_index,(b_index-1)*nLength+1:b_index*nLength) = ...
            Au(u_index)*b(u_index,b_index)*S(u_index,:);
    end
end

% The received signal is the sum of all the spread waveforms,
% plus noise.
y_rx = sum(y) + sqrt(noiseVar)*randn(1,nBits*nLength);

% Despreading - the Matched Filter bank.
y_hat = zeros(nUsers,nBits);
for u_index = 1:nUsers,
    for b_index = 1:nBits,
        y_hat(u_index,b_index) = sum(y_rx((b_index-1)*nLength ...
            +1:b_index*nLength).*S(u_index,:));
    end
end

% Compute the actual value of Q obtained, just to check.
%Ryy = zeros(nUsers,nUsers);
%for b_index = 1:nBits,
%    Ryy = Ryy + y_hat(:,b_index)*y_hat(:,b_index)';
%end
%Ryy = Ryy/nBits

% Initialize W.
W = ones(nUsers,nBits);
Mu = 0.00003;
alpha = 1;   % alpha < 1 implements a time-varying step size.

% Iterate, using a "training" sequence b(1,:).
for b_index = 1:nBits,
    % Compute the output.
    b_hat = W(:,b_index)'*y_hat(:,b_index);
    % The Error in the output.
    Error(b_index) = b_hat - b(1,b_index);
    % Mu, using the (noisy) line search:
    % Mu = 1/(y_hat(:,b_index)'*y_hat(:,b_index));
    % Mu, time-varying:
    Mu = Mu*(alpha)^b_index;
    % Weight adaptation.
    W(:,b_index+1) = W(:,b_index) - Mu*Error(b_index)*y_hat(:,b_index);
    % Current MSE and residual error, for plotting purposes only.
    CurMSE = 1 - 2*g'*W(:,b_index) + W(:,b_index)'*Q*W(:,b_index);
    objError(b_index) = ActMSE - CurMSE;
end
objError = abs(objError);
objError = objError/max(objError);

figure(1);
plot(abs(Error)/max(abs(Error)),'b');
hold on;
%plot(abs(objError)/max(abs(objError)),'g');

References

[1] Sergio Verdu, "Multiuser Detection", Cambridge University Press.
[2] Aylin Yener, Roy D. Yates and Sennur Ulukus, "A Nonlinear Programming Approach to CDMA Multiuser Detection", Proceedings of the 33rd Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, October 1999.
[3] David G. Luenberger, "Linear and Nonlinear Programming", Addison Wesley.
[4] K. Sam Shanmugan, "Random Signals: Detection, Estimation and Data Analysis", John Wiley and Sons.
