Resource Allocation and Design Issues in Wireless Systems


Dissertation

Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the Graduate School of The Ohio State University

By

Rohit Aggarwal, B.Tech., M.S.

Graduate Program in Electrical and Computer Engineering

The Ohio State University 2012

Dissertation Committee:

Phil Schniter, Advisor

Can Emre Koksal, Co-Advisor

Ness B. Shroff

© Copyright by Rohit Aggarwal 2012

Abstract

Present wireless communication systems are required to support a variety of high-speed data communication services for their users, such as video streaming and cloud-based services. As the users' demands for such services grow, more efficient wireless systems need to be designed that can support high-speed data and, at the same time, serve the users in a fair manner. One method to achieve this is by using efficient resource allocation schemes at the transmitters of wireless communication systems. Here, the term "resources" refers to the fundamental physical and network-layer quantities that limit the amount of data that can be transmitted over a communication link, such as available bandwidth and power. With the above fact in mind, we study, in this dissertation, physical and network-layer resource allocation problems in three different communication systems, and propose various optimal and sub-optimal solutions. First, we consider point-to-point links and propose greedy low- and high-complexity rate-assignment schemes using degraded feedback (ACK/NAK) to maximize the goodput of the system. Here, goodput is defined as the amount of data that a transmitter can send to the receiver without any error. Second, we propose optimal and sub-optimal schemes for simultaneous user-scheduling, power-allocation, and rate-selection in an Orthogonal Frequency Division Multiple Access (OFDMA) downlink, with the goal of maximizing expected sum-utility under a sum-power constraint and the availability of imperfect channel-state information at the

transmitter. Finally, we consider allocation of network resources in the downlink of large multi-cellular OFDMA networks and answer, from the viewpoint of a service provider, complex questions related to network design, such as: How many base-stations need to be installed, how much bandwidth should be purchased given a particular revenue-target, and what pricing scheme must be employed to meet the Quality-of-Service constraints of users and, at the same time, maximize revenue for the service provider? Using our analysis, we also propose a near-optimal resource allocation scheme for multi-cellular OFDMA systems under a truncated path-loss model and different fading models (Rayleigh, Nakagami-m, Weibull, and LogNormal) that is optimal in the scaling sense.


Dedicated to Buaji and my family.

Acknowledgments

I had a wonderful time both inside and outside my workplace, the Information Processing Systems (IPS) lab at The Ohio State University (OSU). OSU is truly an amazing university and I am grateful to God for this unforgettable experience.

First and foremost, I thank my advisors, Prof. Philip Schniter and Prof. Can Emre Koksal, for their invaluable guidance, support, and belief in me that made this dissertation possible. I am grateful to them for providing me the opportunity of research and encouraging me to pursue my own ideas. Phil and Emre inspired me by example and showed how to be creative and perfectionist in work. Their creativity and fundamental thinking made complex ideas so simple and invariably gave rise to new interesting ideas. Often, I used to come out of our meetings thinking, "Why did I not think of that?" Many thanks go to Prof. Ness Shroff for being on my candidacy and dissertation committees and for his constructive comments. I also thank Phil and Prof. Lee Potter for giving me an opportunity to develop and teach the communication lab course ECE 508 for undergraduate students.

I want to express my gratitude to my Buaji and my family - Papaji, Mummy, Dadiji, Taiji, Tauji, Bhaiya, Mahak, and Chintu - for their unconditional love, support, and inspiration to constantly strive for improvement in every aspect and in all stages of life. Moments spent with them will always be remembered and cherished by me.

During my doctoral education, I saw many friends come to OSU and leave with different degrees. I want to thank all my friends at OSU with whom I spent time and had lots of fun during the past 4 years. I thank my labmates - Justin "the boy with the cup" Ziniel, Subhojit Som, Sung-Jun Hwang, Ahmed Fasih, Brian Caroll, and Mohammad Shahmohammadi - for our intense discussions, doubt-clearing sessions, lunch, happy-hour, and (sometimes) tennis breaks, and comedy in IPS lab. Their friendliness and spontaneity made the lab a fun-place to work, and discussions with them were always helpful in furthering research whenever it seemed to get stuck. I also thank the newbies, Brian Day and Jeremy Vila, for being my students in the undergraduate class ECE 508 and now colleagues in IPS lab. It is extremely satisfying to teach talented students and see them pursue the path of research. I thank Carl Rossler, Suryarghya Chakrabarti, Rahul Srivastava, Subhash Lakshminarayana, Liza Toher, and Sugumar Murugesan for our discussions and casual conversations. Thanks also to Sibasish Das, Young-Han Nam, Bo Ji, Arun Sridharan, Harsha Gangammanavar, and Wenzhuo Owen Ouyang for the homework solving sessions. Special thanks go to my best friends and roommates - Anupam Vivek, Rahul, Arun, Justin, and Vinod Khare - for an amazing time spent together in parties, fun-trips, games of cricket, soccer, tennis and poker, and for not complaining (too much) about the food cooked by the worst chef of the house (me) and making 237 West 9th Ave an all-time "party" house. I also thank La Familia, members of which were my friends living near my house. I thank Jeri McMichael for her friendliness, her outstanding organizational skills in getting the nerds of IPS lab to participate in fun-picnics and lab-cleaning exercises, and her bearing up with my

mistakes in financial activities such as ordering items for labs, travel reservations, conference registrations, and reimbursements. Finally, I want to thank Ankita "AK47" Agarwal for her warmth, encouragement, and emotional support. Her charming personality and her cooking always made my day!

Vita

March 15, 1986 . . . . . Born - Ranchi, Jharkhand, India

2007 . . . . . . . . . . B.Tech., Indian Institute of Technology Kanpur

2011 . . . . . . . . . . M.S., The Ohio State University

Publications

Research Publications

R. Aggarwal, P. Schniter, and C. E. Koksal, "Rate Adaptation via Link-Layer Feedback for Goodput Maximization over a Time-Varying Channel," IEEE Transactions on Wireless Communications, Volume 8, Issue 8, pp. 4276-4285, August 2009.

R. Aggarwal, M. Assaad, C. E. Koksal, and P. Schniter, "Joint Scheduling and Resource Allocation in OFDMA Downlink Systems with Imperfect Channel-State Information," IEEE Transactions on Signal Processing, Volume 59, Issue 11, pp. 5589-5604, November 2011.

R. Aggarwal, C. E. Koksal, and P. Schniter, "Performance Bounds and Associated Design Principles for Multi-Cellular Wireless OFDMA Systems," IEEE INFOCOM, 2012.

R. Aggarwal, C. E. Koksal, and P. Schniter, "Joint Scheduling and Resource Allocation in OFDMA Downlink Systems via ACK/NAK Feedback," IEEE Transactions on Signal Processing, Volume 60, Issue 6, pp. 3217-3227, June 2012.

R. Aggarwal, C. E. Koksal, and P. Schniter, "On the Design of Large Scale Wireless Systems," IEEE Journal on Selected Areas in Communications - Large Scale Multiple Antenna Wireless Systems, submitted.

R. Aggarwal, P. Schniter, and C. E. Koksal, "Rate Adaptation via ARQ Feedback for Goodput Maximization over Time-Varying Channels," Proc. IEEE Conference on Information Sciences and Systems (Princeton University), 2008.

R. Aggarwal, M. Assaad, C. E. Koksal, and P. Schniter, "OFDMA Downlink Resource Allocation via ARQ Feedback," Proc. Asilomar Conf. on Signals, Systems, and Computers (Pacific Grove, CA), Nov. 2009.

R. Aggarwal, M. Assaad, C. E. Koksal, and P. Schniter, "Optimal Resource Allocation in OFDMA Downlink Systems With Imperfect CSI," Proc. IEEE Workshop on Signal Processing Advances in Wireless Communications (San Francisco, CA), June 2011.

Fields of Study

Major Field: Electrical and Computer Engineering

Table of Contents

Abstract
Dedication
Acknowledgments
Vita
List of Tables
List of Figures

1. Introduction
   1.1 Attacked Problems and Our Contributions

2. Rate Adaptation via Link-Layer Feedback in Point-to-Point Links
   2.1 Motivation
   2.2 Overview and Related Work
   2.3 System Model
   2.4 Optimal Rate Adaptation
   2.5 The Greedy Rate Adaptation Algorithm
       2.5.1 Packet-Rate Algorithm
       2.5.2 Block-Rate Algorithm
   2.6 Numerical Results
       2.6.1 Setup
       2.6.2 Results
   2.7 Summary

3. Joint Scheduling and Resource Allocation in the OFDMA Downlink under Imperfect Channel-State Information
   3.1 Introduction
   3.2 Past Work
   3.3 System Model
   3.4 Optimal Scheduling and Resource Allocation with subchannel sharing
       3.4.1 Optimizing over total powers, x, for a given µ and user-MCS allocation matrix I
       3.4.2 Optimizing over user-MCS allocation matrix I for a given µ
       3.4.3 Optimizing over µ
       3.4.4 Algorithmic implementation
       3.4.5 Some properties of the CSRA solution
   3.5 Scheduling and Resource Allocation without subchannel sharing
       3.5.1 Brute-force algorithm
       3.5.2 Proposed DSRA algorithm
       3.5.3 Discussion
   3.6 Numerical Evaluation
   3.7 Conclusion

4. Joint Scheduling and Resource Allocation in OFDMA Downlink Systems via ACK/NAK Feedback
   4.1 Introduction
   4.2 Related Work
   4.3 System Model
   4.4 Greedy Scheduling and Resource Allocation
       4.4.1 Brute-Force Algorithm
       4.4.2 Proposed Algorithm
   4.5 Updating the Posterior Distributions from ACK/NAK Feedback
   4.6 Numerical Results
   4.7 Summary

5. Large Scale Wireless OFDMA System Design
   5.1 Introduction
   5.2 Past Work
   5.3 System Model
   5.4 Proposed General Bounds on Achievable Sum-Rate
       5.4.1 The "Causal Global Genie" Upper Bound
   5.5 Maximum Sum-Rate Achievability Scheme
       5.5.1 Scaling Laws and Their Applications in Network Design
   5.6 A Note on MISO vs SISO Systems
   5.7 Conclusion

6. Conclusions and Future Work

Appendices

A. Proofs in Chapter 2
   A.1 Proof of Theorem 1

B. Proofs in Chapter 3
   B.1 Proof for convexity of CSRA problem
   B.2 Proof of Lemma 2
   B.3 Proof of Lemma 3
   B.4 Proof of Lemma 4
       B.4.1 Case I: |Sn(µ̃)| ≤ 1 for all n
       B.4.2 Case II: For some n, |Sn(µ̃)| > 1 but no two combinations in Sn(µ̃) have the same allocated power
       B.4.3 Case III: For some n, |Sn(µ̃)| > 1 and at least two combinations in Sn(µ̃) have the same allocated power
   B.5 Proof of Lemma 5
   B.6 Proof of Lemma 6

C. Proofs in Chapter 5
   C.1 Proof of a Property of OPc,h(K)
   C.2 Proof of Theorem 2 and Theorem 3
   C.3 Proof of Lemma 9 and Lemma 10
       C.3.1 Nakagami-m
       C.3.2 Weibull
       C.3.3 LogNormal
   C.4 Proof of Theorem 4
   C.5 Proof of Theorem 5

Bibliography

List of Tables

3.1 Algorithmic implementations of the proposed algorithms
4.1 Brute-force steps for a given I
4.2 Proposed greedy algorithm
4.3 Recursive update of channel posteriors
4.4 Particle filtering steps

List of Figures

2.1 System model.
2.2 Goodput contours versus SNR γt and constellation size mt for packet size p = 100. The goodput-maximizing constellation size, as a function of SNR, is shown by the dash-dot line.
2.3 Steady-state goodput versus mean SNR E{γt} for α = 0.01, block size n = 1 packet, and delay d = 1 packet.
2.4 Steady-state goodput versus α for E{γt} = 25 dB, block size n = 1 packet, and delay d = 1 packet.
2.5 Steady-state goodput versus block size n for E{γt} = 25 dB, α = 0.001, and delay d = 1 packet.
2.6 Steady-state goodput versus delay d for E{γt} = 25 dB, α = 0.001, and block size n = 1 packet.
2.7 Steady-state goodput versus α for Markov arrivals with average rate = 0.5 packets/interval, E{γt} = 25 dB, buffer size = 30 packets, block size n = 1 packet, and delay d = 1 packet.
2.8 Average buffer occupancy versus α for Markov arrivals with average rate = 0.5 packets/interval, E{γt} = 25 dB, buffer size = 30 packets, block size n = 1 packet, and delay d = 1 packet.
2.9 Average drop rate versus α for Markov arrivals with average rate = 0.5 packets/interval, E{γt} = 25 dB, buffer size = 30 packets, block size n = 1 packet, and delay d = 1 packet.
3.1 System model of a downlink OFDMA system with N subchannels and K users.
3.2 Prototypical plot of p∗n,k,m(µ) as a function of µ. Here, n is the subchannel index.
3.3 Prototypical plot of X∗tot(µ) and L(µ, x∗(µ, I∗(µ)), I∗(µ)) as a function of µ for N = K = 5. The red vertical lines in the top plot show that a change in I∗(µ) occurs at that µ.
3.4 Average goodput per subchannel versus SNRpilot.
3.5 The top plot shows sum utility versus w1 when w2 = 1. The bottom plot shows the sum-utility versus SNR when w1 = 0.85 and w2 = 1.
3.6 Average goodput per subchannel versus number of users, K.
3.7 The top plot shows the mean deviation of the estimated dual variable µ from µ∗ as a function of the number of µ-updates. The bottom plot shows the average bound on the optimality gap between the proposed and exact DSRA solutions (given in (3.44)), i.e., the average value of (µ∗ − µmin)(Pcon − X∗tot(Imin, µ∗))/N.
3.8 The top plot shows the average goodput per subchannel as a function of SNR, and the bottom plot shows the average sum-utility.
4.1 Typical instantaneous sum-goodput versus time t.
4.2 Average sum-goodput versus the number of particles used to update the channel posteriors.
4.3 Average sum-goodput versus fading rate α.
4.4 Average sum-goodput versus number of subchannels N.
4.5 Average sum-goodput versus number of users, K.
4.6 The top plot shows the average sum-goodput as a function of SNR. The bottom plot shows the average bound on the optimality gap between the proposed and optimal greedy solutions (given in (4.35)), i.e., the average value of (µ∗ − µmin)(Xcon − X∗tot(Imin, µ∗)).
4.7 Average sum-goodput versus number of subchannels N. In this plot, Xcon does not scale with N and is chosen such that SNR = 10 dB for N = 32.
5.1 OFDMA downlink system with K users and B base-stations.
5.2 A regular extended network setup.
5.3 Optimal user-density, ρ∗(λ), as a function of λ.
5.4 LHS and RHS of (5.19) as a function of ρ.
5.5 LHS and RHS of (5.21) as a function of ρ.
C.1 OFDMA downlink system with K users and B transmitters.
C.2 System Layout. O is assumed to be the origin. The BS i is located at a distance of d from the center with the coordinates (ai, bi), and the user is stationed at (xk, yk).
C.3 Cumulative distribution function of Gi,k.
. . . . . . . . . . . .5 LHS and RHS of (5. . . 118 4. . . . 160 xvi .21) as a function of ρ. . . . . . and S = 30. . . . . i. . . α = 10−3. . . . . . . The bottom plot shows the average bound on the optimality gap between the proposed and optimal greedy solutions (given in (4. SNR = 10dB. . 124 C. In this plot. . . . . . . . . . . .7 5. . . . . . . .k . . 104 The top plot shows the average sum-goodput as a function of SNR.e. .. 105 OFDMA downlink system with K users and B transmitters. and S = 30. .3 5. . . . . . . . . . . µ∗)). . . . .1 5. and S = 30. . . K = 8. . . . as a function of λ. . .3 Cumulative distribution function of Gi. . . . . . . . . .1 OFDMA downlink system with K users and B base-stations. . 159 C. . . .4 Average sum-goodput versus number of subchannels N . . Here. . bi ). . . .19) as a function of ρ. .e. . . . . α = 10 . . . . . N = 32. . . . . . . . .5 4. Xcon does not scale with N and it is chosen such that SNR = 10dB for N = 32. . . the ∗ average value of (µ∗ − µmin )(Xcon − Xtot (Imin . 103 Average sum-goodput versus number of users. . . . . . . 122 LHS and RHS of (5.2 5. 158 C. . . . . . . . and S = 30. N = 32. . .35)). . . . . . . . . . . . . . and the user is stationed at (xk . 121 Optimal user-density.6 4. . . O is assumed to be the origin. . . . . .2 System Layout. . In this plot. . i. . . . . . SNR = 10dB. . . . . . . . . . K = 8. . yk ). The BS i is located at a distance of d from the center with the coordinates (ai . . −3 K = 8. . . . . . . . 111 A regular extended network setup. . . Here. . . α = 10−3. . 102 Average sum-goodput versus number of subchannels N . . .4 5. α = 10−3 . . . . ρ∗ (λ).4. . . . .

The most common wireless communication system in present-day world is the mobile communication system which supports communication between users having wireless devices. and computers. and under-water communication systems.Chapter 1: Introduction A wireless communication system is a system that enables communication between two or more users (or people or devices). namely 1 . typically in the form of voice-data or web-based data. we will focus on one of its aspects. banking etc. This growth is attributed to the rise in number of wireless devices. Examples of such systems include mobile communication systems. such as smartphones. we will explore some issues that arise in the design of such systems. for example. Here. video streaming. there has been a tremendous growth in the usage of mobile communication systems for information (or. In this dissertation. A complete study of a such big systems is quite complicated and is out of the scope of this dissertation. cloud-computing. satellite communication systems (GPS). via a network of service-nodes. better communication systems need to be designed. To meet this increasing data-demand of a growing number of users. data) transfer. such as base-stations. Over the last decade. AM/FM radio systems. particularly smartphones. and an increase in the number of web-based services. femtocells. and relays. tablets.

and propose various optimal and/or near-optimal resource allocation schemes for each system. 1. eﬃcient resource allocation schemes must be developed to exploit available resources in the best possible manner and provide ubiquitous high-data-rate to all users in a fair manner. and multi-cell OFDMA). along with a summary of contributions towards each problem. these resources are power and bandwidth. we also provide design guidelines for service providers to design large multi-cellular communication systems while satisfying Quality-of-Service (QoS) requirements of users and revenue-targets of service providers. single-cell OFDMA. better data-compression algorithms.1 Attacked Problems and Our Contributions Brief descriptions of the problems considered in this dissertation. In this dissertation. Using our analyses. The amount of information that can be transferred from transmitter(s) to receiver(s) over this link is limited by the amount of resources available at the transmitter(s). are listed below: 2 .“resource allocation”. Further. Examples of some other methods that are useful in supporting the increasing data-demand of users include developing better application-speciﬁc protocols. and better channel-coding schemes. these resources (power and bandwidth) are limited in almost every communication system. using which eﬃciency/performance of the system can be enhanced. Typically. From a systems standpoint. the fundamental basis of any wireless communication system is the wireless link between transmitter(s) and receiver(s) that supports data transfer from transmitter(s) to receiver(s). we study three diﬀerent types of mobile-communication systems (point-to-point. Therefore.

with ACK/NAK feedback based resource allocation. In fact. ACK/NAK provides an indicator of the channel gain relative to the chosen user/power/rate. an ACK implies that the channel was good enough to support the user (with allocated power and rate) while a NAK implies otherwise. that the proposed rate adaptation scheme is a considerable improvement over ﬁxed code-rate schemes with minimal extra computation. which is impractical to implement. In fact. We show. with ACK/NAK feedback. The sequence of ACK/NAKs gives a distributional estimate of the channel state. our scheme achieves up to 90% of the goodput gain achieved by 3 . In particular.1. From the viewpoint of resource allocation. We use this estimate to achieve rate adaptation to maximize goodput. ACK/NAK feedback is quite diﬀerent than pilot-aided feedback. We propose a greedy code-rate adaptation algorithm and analyze the impact of limited (link-layer) feedback on per-packet information transfer on the “adaptiveness” of code-rate controllers. In Chapter 2. we study the problem of code-rate adaptation in point-to-point links. While pilot-aided feedback provides an indicator of the absolute channel gain. The metric associated with the performance is goodput. the chosen user/power/rate aﬀects not only the subsequent utility but also the quality of the subsequent feedback. we consider packetized time-varying continuous-state channels with link-layer feedback available at the transmitter in the form of perpacket ACK/NAKs (acknowledgements/negative acknowledgements). which in turn will aﬀect future utilities through future resource assignments. via simulations. As a result. optimal resource allocation assignment for communication over Markov channels turns out to be a POMDP policy with uncountable number of states and actions.

our greedy rate-adaptation scheme based on a continuous-Markov channel model is more appealing. base-station). In Chapter 3. Moreover. we propose an optimal algorithm to simultaneously schedule users across OFDM subchannels. 2. We also show that the performance of the proposed (greedy) scheme outperforms the POMDP (Partially Observable Markov Decision Process) based optimal rate adaptation under discretized channel-models with up to 7 discrete states. Our algorithms are bisection-based and are faster than other state-of-the-art algorithms (subgradient or golden-section based algorithms) that address resource allocation problems in OFDMA systems. In other cases where subchannel-sharing is not allowed. 3. theoretical performance guarantees as a function of any ﬁnite number of iterations are provided for our algorithms. Since POMDP-based optimal rateadaptation for discrete-Markov channels with 7 or more states is computationally intensive. we study the downlink of a single-cell OFDMA (Orthogonal Frequency Division . and allocate them powers and code-rates for system-wide utility maximization. we propose a near-optimal algorithm and bound its performance. we consider an application of the previously considered joint scheduling and resource-allocation problem for utility maximization in singlecell OFDMA downlink systems.Multiple Access) system under the availability of probabilistic CSI for all users at the transmitter (or. In cases where subchannel-sharing among users is allowed. In Chapter 4. Here.optimal genie-based schemes that use perfect causal/non-causal CSI for rateadaptation over ﬁxed code-rate schemes. binary ACK/NAKs are used to obtain 4 . unlike other algorithms.

We show that a signiﬁcant portion of the goodput gain achieved by the perfect-CSI based optimal algorithm over no-feedback can be achieved by our algorithm. a resource block is a collection of subcarriers such that all such disjoint collections have associated independently fading channels. and the number. and allocate powers and code-rates. of available resource-blocks. We propose a sub-optimal greedy resource allocation algorithm to schedule users. Nakagami-m. we propose a practical scheme that achieves the same sum-rate scaling 5 . the number. of users. we study multi-cellular OFDMA-based downlink systems and propose. Here. 4. Finally. K . we provide a computationally-eﬃcient iterative algorithm based on particle ﬁlters.channel-state information (CSI) of the scheduled users at each time-slot. bounds on the achievable sum-rate as a function of the number. of base-stations. In order to compute these distributional estimates at each time-slot. Weibull. and 2) a situation where users that are not scheduled in a given time-slot do not send any feedback to the transmitting basestation. We evaluate the bounds for dense networks and regular-extended networks under uniform spatial distribution of users using extreme-value theory. Our algorithm infers distributional estimates of channel-gains of every user using the available feedback (ACK/NAK) and uses them for scheduling. for a general spatial geometry of transmitters and end-users. which assumes: 1) usage of ACK/NAK feedback that is coarse. We then provide design principles for the service providers that guarantee users’ QoS constraints while maximizing revenue. and derive scaling laws for a truncated path-loss model and a variety of fading models (Rayleigh. B . N . In Chapter 5. and LogNormal).

law as that achieved by the optimal resource allocation policy for a wide range of parameters (K, B, N).

Chapter 6 gives conclusions, contributions, and directions for future work. Proofs of various results in Chapters 2, 3, and 5 are provided in Appendix A, Appendix B, and Appendix C, respectively.

Chapter 2: Rate Adaptation via Link-Layer Feedback in Point-to-Point Links

2.1 Motivation

In point-to-point links, one transmitter wirelessly transmits data to one receiver. In practice, a limited amount of resources, i.e., power and bandwidth, are available at the transmitter. Under an instantaneous power constraint, the transmitter will use all available power and bandwidth in each channel use to maximize the reliability and amount of transmitted data. Rate adaptation [1–15] achieves higher goodput by combating channel variability, which is common to all wireless communication systems due to factors such as fading, mobility, and multiuser interference. The idea is that, based on the predicted channel state, the transmitter optimizes the data rate in an effort to maximize the goodput, defined as the maximum amount of data that can be transmitted on average after discounting errors. However, even in such cases, the maximum achievable goodput may not be achieved. This is because most modulation and coding schemes that are used in practical systems are not capacity-achieving. Therefore, as mentioned in the previous chapter, we are motivated to develop rate-adaptation schemes for point-to-point links in the presence of practical modulation and coding schemes that enable higher-goodput achievability.

2.2 Overview and Related Work

In this chapter, we focus on rate adaptation schemes to improve performance of point-to-point systems wherein channel state knowledge is inferred by monitoring packet acknowledgments/negative-acknowledgments (ACK/NAKs) [5–15]. Rate adaptation would be relatively straightforward if the transmitter could perfectly predict the channel. For example, when the channel quality is below average, the data rate should be decreased to avoid reception errors, while, when the channel quality is above average, the data rate should be increased to prevent the channel from being underutilized. In practice, however, maintaining accurate transmitter channel state information is a nontrivial task that can consume valuable resources. Fortunately, error rate feedback in the form of single-bit ACK/NAKs is a standard provision in most networks by means of ARQ (Automatic Repeat reQuest). Since ARQ feedback is a standard provision of the link layer, its use by the physical layer comes essentially "for free." As a result, rate adaptation using single-bit error-rate feedback comes at zero additional load on downlink/uplink channels and is of interest.

From the viewpoint of rate adaptation, ACK/NAK feedback is quite different than channel-state feedback. While channel-state feedback provides an indicator of the absolute channel gain, ACK/NAK feedback provides an indicator of the channel gain relative to the chosen data rate, i.e., an ACK implies that the channel was good enough to support the rate while a NAK implies otherwise. As a result, with ACK/NAK-feedback based rate adaptation, the chosen data rate affects not only the subsequent goodput

but also the quality of the subsequent feedback, which in turn will affect future goodputs through future rate assignments. In fact, optimal rate assignment for communication over Markov channels can be recognized as a dynamic program [14], in particular, a partially observable Markov decision process (POMDP) [16].

Compared to the previous works [5–13], which are ad hoc in nature, we take a more principled approach to cross-layer rate adaptation. We consider the general problem of adapting the transmission rate via delayed and degraded error-rate feedback (in particular, ACK/NAK feedback) in order to maximize long-term expected goodput. In order to circumvent the sub-optimality of finite-state channel approximations [17], we assume a Markov channel indexed by a continuous parameter. Compared to the POMDP-based work [14], our work differs in the following key aspects: 1) we employ a continuous-state Markov channel model, 2) we consider delay in the feedback channel, and 3) we propose simpler greedy heuristics.

Because the optimal solution of the POMDP is too difficult to obtain, we consider the use of greedy rate adaptation, which we study analytically as well as numerically. First, we establish that the optimal rate assignment is itself greedy when the error-rate feedback is not degraded. Second, we establish that the greedy non-degraded scheme can be used to upper bound the optimal degraded scheme in terms of long-term goodput. Furthermore, for the example case of binary (i.e., ACK/NAK) degraded error-rate feedback, a Rayleigh-fading channel, and uncoded QAM modulation, we show (numerically) that the long-term goodput achieved by our greedy rate assignment scheme is close to the upper bound. In the next few sections, we outline a novel implementation of the greedy rate assignment scheme with ACK/NAK feedback.

Though our adaptation objective—goodput maximization—does not explicitly consider the input buffer state, one could argue that, since only successfully communicated packets are removed from the input queue, the maximization of short-term goodput—our greedy objective—leads simultaneously to the minimization of buffer occupancy. In fact, we show (numerically) that finite buffer effects (e.g., packet delay and drop rate) are handled gracefully by our greedy algorithms.¹

¹ For algorithm design, we assume an infinitely back-logged queue, as does the one in [14].

This chapter is organized as follows. In Section 2.3, we outline our system model. In Section 2.4, we consider optimal rate adaptation and suboptimal greedy approaches. In Section 2.5, we detail a novel implementation of the greedy rate-assignment scheme, which we then analyze numerically in Section 2.6 for the case of uncoded QAM transmission. We summarize our findings in Section 2.7.

2.3 System Model

We consider a packetized transmission system in which the transmitter receives delayed and degraded feedback on the success of previous packet transmissions (e.g., binary ACK/NAK feedback), which it uses to adapt the subsequent transmission parameters. Figure 2.1 shows the system model. The time-varying wireless channel is modeled by an SNR process {γt}, where t denotes the packet index and where the SNR γt ≥ 0 is assumed to be constant over the packet duration. Notice that γt is not assumed to be a discrete parameter. For simplicity, we assume a fixed transmission power and a fixed packet length of p channel uses. In particular, we assume the use of a transmission scheme parameterized by a data rate of rt bits per packet. Since the

transmission power is fixed, {γt} is an exogenous process that does not depend on the transmission parameters. The transmitter uses ǫ̂t−d, an estimated version of the (d ≥ 1 delayed) error rate ǫ(rt−d, γt−d), to choose the time-t rate parameter rt.

[Figure 2.1: System model. A controller maps the delayed error-rate estimate ǫ̂t−d, obtained through the feedback channel, to the rate rt of the comm system, which operates at SNR γt with error rate ǫ(rt, γt) and goodput G(rt, γt).]

The instantaneous goodput G(rt, γt), defined as the number of successfully communicated bits per channel use, is then defined as

    G(rt, γt) = (rt/p) (1 − ǫ(rt, γt)),   (2.1)

where ǫ(rt, γt) denotes the error rate at time t. The instantaneous packet error rate ǫ(rt, γt) varies with the rate rt and SNR γt according to the particular modulation/demodulation scheme in use. We will assume that, for each rt, the function ǫ(rt, γt) is monotonically decreasing in γt, so that γt can be uniquely determined given rt and the true error rate ǫ(rt, γt).

Example 1. As an illustrative example, we now consider uncoded quadrature amplitude modulation (QAM) using a square constellation of size m and minimum-distance

decision making, with γ describing the ratio of received symbol power to additive noise power. Under an AWGN channel, the symbol error rate for minimum-distance decision making is [18, p. 280]

    1 − [ 1 − 2 (1 − 1/√m) Q( √( 3γ/(m − 1) ) ) ]²,   (2.2)

where Q(·) denotes the Q-function [18]. At the link layer, symbols are grouped to form packets. If we assume that the constellation size is fixed over the packet duration, then the data rate equals rt = p log₂ mt and the packet error rate equals

    ǫ(rt, γt) = 1 − [ 1 − 2 (1 − 1/√(2^{rt/p})) Q( √( 3γt/(2^{rt/p} − 1) ) ) ]^{2p}.   (2.3)

Plugging (2.3) into (2.1) yields the instantaneous goodput expression, which identifies a particular one-to-one mapping between rate rt and goodput for a fixed SNR γt. A fixed number of extra cyclic redundancy check (CRC) bits are appended to each packet for the purpose of error detection. We will assume the probability of undetected error is negligible and the associated ACK/NAK error feedback is sent back to the transmitter over an error-free reverse channel. We will also assume that the number of CRC bits is small compared to the packet size, allowing us to ignore them in goodput calculations.

Thus, for this uncoded communication scheme, if the SNR was known perfectly, then the goodput could be maximized by appropriate choice of constellation size. Figure 2.2 plots instantaneous goodput contours versus SNR γt and constellation size mt for the case of p = 100 symbols per packet. Here the finite set of allowed constellation choices (and hence rates) is apparent. Figure 2.2 also plots the (unique) goodput-maximizing constellation size as a function of SNR. Note that the SNR must be relatively high

to facilitate rate adaptation: as long as the SNR remains below 14 dB, the goodput-maximizing constellation size remains at m = 4 (i.e., QPSK). Coded transmission, on the other hand, could facilitate rate adaptation at lower SNRs.

[Figure 2.2: Goodput contours versus SNR γt (in dB) and constellation size mt for packet size p = 100. The goodput-maximizing constellation size, as a function of SNR, is shown by the dash-dot line.]

With ACK/NAK error feedback, the estimated error-rate ǫ̂t is a Bernoulli random variable generated from ǫ(rt, γt) according to the conditional probability mass function

    p(ǫ̂t = k) = ǫ(rt, γt) for k = 1;  1 − ǫ(rt, γt) for k = 0;  0 else.   (2.4)

✷

While the previous example focuses on a particular modulation/demodulation scheme and a particular error feedback model, we emphasize that the principal results

in the sequel are general; no particular modulation/demodulation scheme and error feedback model are assumed.

2.4 Optimal Rate Adaptation

In this section we formalize the problem of finite-horizon goodput maximization. For convenience, we assume that the process {γt, rt} has been initiated at time t = −∞, though we consider only the finite sequence of packet indices {0, . . . , T} for goodput maximization. Also, we use the abbreviation ǫt = ǫ(rt, γt).

For every packet index t, we assume that the rate controller has access to the estimated error-rate feedback ǫ̂t−d = [. . . , ǫ̂−2, ǫ̂−1, ǫ̂0, . . . , ǫ̂t−d] and the past rates rt−d = [. . . , r−2, r−1, r0, . . . , rt−d], where d ≥ 1 denotes the causal feedback delay. Formally, we consider ǫ̂t−d to be degraded relative to the true error-rate vector ǫt−d if

    E{G(rt, γt) | ǫ̂t−d, ǫt−d, rt−d} = E{G(rt, γt) | ǫt−d, rt−d}   (2.5)
                                    = E{G(rt, γt) | γt−d},        (2.6)

where γt−d = [. . . , γ−2, γ−1, γ0, . . . , γt−d]. Equation (2.6) follows because each SNR γk in γt−d can be uniquely determined from the pair (ǫk, rk).

At packet index t, the optimal controller uses the degraded error-rate sequence ǫ̂t−d (as well as knowledge of the previously chosen rates rt−d) to choose the rate rt from a set R of admissible rates in order to maximize the total expected goodput for the current and remaining packets:

    r*t = arg max_{rt ∈ R} E{ G(rt, γt) + Σ_{k=t+1}^{T} G(r*k, γk) | ǫ̂t−d, rt−d }   for t = 0, . . . , T.   (2.7)

The optimal expected sum goodput for packets {t, . . . , T} can then be written (for t ≥ 0) as

    G*t(ǫ̂t−d, rt−d) = E{ Σ_{k=t}^{T} G(r*k, γk) | ǫ̂t−d, rt−d }.   (2.8)

For a unit² delay system (i.e., d = 1), the following Bellman equation [19] specifies the associated finite-horizon dynamic programming problem:

    G*t(ǫ̂t−1, rt−1) = max_{rt ∈ R} [ E{G(rt, γt) | ǫ̂t−1, rt−1} + E{G*t+1([ǫ̂t−1, ǫ̂t], [rt−1, rt]) | ǫ̂t−1, rt−1} ],   (2.9)

where the second expectation is over ǫ̂t. The solution to this problem is sometimes referred to as a partially observable Markov decision process (POMDP) [16]. Because both terms on the right side of (2.9) are dependent on rt, the solution of the rate assignment problem at time t also depends on the solution of the rate assignment problem at time t + 1, which in turn depends on the solution of the rate assignment problem at time t + 2, and so on. For an intuitive understanding of this phenomenon, notice from (2.9) that the solution of the rate assignment problem at every time t depends on the optimal rate assignments up to time t − 1.

² For the d > 1 case, the Bellman equation is more complicated, and so we omit it for brevity.
³ Though a quantized channel approximation could—with few enough states—yield a tractable POMDP solution, we show in Section 2.6 that channel quantization leads to significant loss in goodput.

In fact, it is known that POMDPs are PSPACE-complete, i.e., they require both complexity and memory that grow exponentially with the horizon T [20]. For practical horizons T, optimal rate selection based on (2.9) is intractable, in part due to the continuous-state nature of the channel.³ Consequently, the much simpler greedy

γt ) + rt rt ∈R k =t+1 cg . . . While the latter is diﬃcult to compute. . consider the rate assignment that maximizes the total expected goodput for the current and remaining packets using knowledge of the nondegraded feedback ǫt−d : T arg max E{G(rt . .7) only in that ǫt−d is used in place of ˆ ǫt−d .12) 16 .7)? Since it is too diﬃcult to compute the optimal goodput (which depends on the optimal rate assignment r∗ T ).rate assignment scheme r ˆt is suboptimal. . rt−d ).10) relative to the optimal scheme (2. rt−d G ( rk for t = 0. . γk ) ǫt−d . T .11) . the causal genie can also be written as T cg = arg max E G(rt .11) diﬀers from (2. T (2. rt ∈R (2. T . γt ) | ˆ ǫt−d . . we instead compare the greedy scheme (2. . the former is not. rt−d } for t = 0. At packet index t. (2. . We now detail the rate assignment scheme that leads to our total-goodput upper bound. . The question of principal interest is then: What is the loss in goodput with the greedy scheme (2. We refer to this scheme as the causal genie. Because γt−d can be uniquely determined from (ǫt−d . . γk ) γt−d G ( rk for t = 0.10) to an upper bound on the optimal goodput. γt ) + rt ∈R k =t+1 cg . Note that (2. .10) rt cg arg max E G(rt . To establish the upper bound. we show that greedy rate assignment using non-degraded error-rate feedback yields a total goodput that is no less than that of optimal rate assignment using degraded error-rate feedback.

Since the choice of {rk^cg}_{k=t+1}^{T} will not depend on the choice of rt, the optimal expected sum goodput for packets {t, . . . , T} can be written (for t ≥ 0) as

    Gt^cg(ǫt−d, rt−d) = max_{rt ∈ R} E{ G(rt, γt) + Σ_{k=t+1}^{T} G(rk^cg, γk) | γt−d }   (2.13)
                      = E{ Σ_{k=t+1}^{T} G(rk^cg, γk) | γt−d } + max_{rt ∈ R} E{ G(rt, γt) | γt−d },   (2.14)

i.e.,

    rt^cg = arg max_{rt ∈ R} E{ G(rt, γt) | ǫt−d, rt−d }   (2.15)
          = arg max_{rt ∈ R} E{ G(rt, γt) | γt−d },        (2.16)

which shows that optimal rate assignment under non-degraded causal error-rate feedback can be accomplished greedily.

We now establish that the causal genie controller upper bounds the optimal controller with degraded error-rate feedback in the sense of total goodput. Though the result may be intuitive, the proof provides insight into the relationship between degradation of the feedback and reduction of the total expected goodput.

Lemma 1. Given arbitrary past rates r−d and corresponding degraded error-rate feedback ǫ̂−d,

    E{ Σ_{t=0}^{T} G(r*t, γt) | ǫ̂−d, r−d } ≤ E{ Σ_{t=0}^{T} G(rt^cg, γt) | ǫ̂−d, r−d }.   (2.17)

In other words, the expected total goodput for optimal rate allocation under degraded feedback is no higher than the expected total goodput for the causal-genie rate allocation under non-degraded feedback.

Proof. For any t ∈ {0, . . . , T} and any realization of (ǫ̂t−d, rt−d), we can write

    E{G(r*t, γt) | ǫ̂t−d, rt−d} ≤ max_{rt ∈ R} E{G(rt, γt) | ǫ̂t−d, rt−d}   (2.18)
        = max_{rt ∈ R} E{ E{G(rt, γt) | ǫ̂t−d, ǫt−d, rt−d} | ǫ̂t−d, rt−d }   (2.19)
        ≤ E{ max_{rt ∈ R} E{G(rt, γt) | ǫ̂t−d, ǫt−d, rt−d} | ǫ̂t−d, rt−d }   (2.20)
        = E{ max_{rt ∈ R} E{G(rt, γt) | γt−d} | ǫ̂t−d, rt−d }   (2.21)
        = E{ G(rt^cg, γt) | ǫ̂t−d, rt−d },   (2.22)

where (2.18) follows since r*t is chosen to maximize the long-term goodput—not the instantaneous goodput, (2.20) follows since max_{rt} E{f(rt)} ≤ E{max_{rt} f(rt)} for any f(·), (2.21) follows by definition of degraded feedback, and (2.22) follows by definition of the greedy genie. Taking the expectation over (ǫ̂t−d, rt−d), conditional on (ǫ̂−d, r−d), we find

    E{ G(r*t, γt) | ǫ̂−d, r−d } ≤ E{ G(rt^cg, γt) | ǫ̂−d, r−d }.   (2.23)

Finally, summing both sides of (2.23) over t = {0, . . . , T} yields (2.17).

2.5 The Greedy Rate Adaptation Algorithm

In this section, we study the greedy rate assignment scheme (2.10) in depth. In Section 2.5.1, we detail the implementation of greedy rate assignment (2.10) assuming continuous Markov SNR variation and conditionally independent error-rate estimates; in particular, we detail a procedure for packet-rate adaptation. In Section 2.5.2, we consider adapting the rate once per block of n packets. In Section 2.6, we study (numerically) the particular case in which {ǫ̂t}t≥0 is constructed from link-layer ACK/NAKs.
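Before turning to the implementation, the pivotal interchange in the proof of Lemma 1, step (2.20), can be sanity-checked numerically: a maximum of expectations never exceeds the expectation of the maximum. The SNR law, rate set, and goodput stand-in below are our own toy choices for illustration, not the system model of this chapter:

```python
import numpy as np

rng = np.random.default_rng(0)
gammas = rng.exponential(scale=10.0, size=100_000)   # hypothetical SNR samples
rates = np.array([1.0, 2.0, 4.0, 6.0])               # toy rate set R

def f(r, g):
    # toy stand-in for the conditional-goodput integrand: rate times a success probability
    return r * np.exp(-r / g)

vals = np.stack([f(r, gammas) for r in rates])  # shape (|R|, N)
lhs = vals.mean(axis=1).max()    # max_r E{f(r, gamma)}  -- left side of (2.20)
rhs = vals.max(axis=0).mean()    # E{max_r f(r, gamma)}  -- right side of (2.20)
assert lhs <= rhs                # the inequality used in (2.20)
```

The gap is strict whenever the best rate varies with the channel state, which is precisely why the genie's non-degraded feedback helps.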

2.5.1 Packet-Rate Algorithm

Assuming a feedback delay of d ≥ 1 packets, the greedy rate assignment (2.10) can be rewritten as

    r̂t = arg max_{rt ∈ R} ∫ G(rt, γt) p(γt | ǫ̂t−d, rt−d) dγt   for t = 0, . . . , T.   (2.24)

We now derive a recursive implementation of the greedy rate assignment (2.24). Expanding the inferred SNR distribution via Bayes rule, we find

    p(γt | ǫ̂t−d, rt−d) = ∫ p(γt | γt−d, ǫ̂t−d, rt−d) p(γt−d | ǫ̂t−d, rt−d) dγt−d   (2.25)
                        = ∫ p(γt | γt−d) p(γt−d | ǫ̂t−d, rt−d) dγt−d,   (2.26)

where we used the assumption of Markov SNR variation to write (2.26). Furthermore,

    p(γt−d | ǫ̂t−d, rt−d) = p(γt−d | ǫ̂t−d, ǫ̂t−d−1, rt−d)   (2.27)
        = p(ǫ̂t−d | γt−d, ǫ̂t−d−1, rt−d) p(γt−d | ǫ̂t−d−1, rt−d) / p(ǫ̂t−d | ǫ̂t−d−1, rt−d).   (2.28)

With conditionally independent error estimates (i.e., p(ǫ̂t | ǫt, ǫ̂t−1) = p(ǫ̂t | ǫt)), this becomes

    p(γt−d | ǫ̂t−d, rt−d)
        = p(ǫ̂t−d | ǫ(rt−d, γt−d)) p(γt−d | ǫ̂t−d−1, rt−d−1) / p(ǫ̂t−d | ǫ̂t−d−1, rt−d−1)   (2.29)
        = p(ǫ̂t−d | ǫ(rt−d, γt−d)) p(γt−d | ǫ̂t−d−1, rt−d−1) / ∫ p(ǫ̂t−d | ǫ(rt−d, γ′t−d)) p(γ′t−d | ǫ̂t−d−1, rt−d−1) dγ′t−d.   (2.30)
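As a minimal sketch (our own illustration, not the dissertation's software), the update (2.30) and the prediction (2.26) can be implemented on a discretized SNR grid, with the single-bit ACK/NAK likelihood p(ǫ̂ | ǫ) treated as Bernoulli:

```python
import numpy as np

def bayes_update(prior, grid, eps_fn, r_prev, nak):
    """Measurement update (2.30) on an SNR grid: the ACK/NAK for the packet
    sent at rate r_prev is Bernoulli with failure probability eps(r_prev, gamma)."""
    eps = eps_fn(r_prev, grid)              # eps(r_{t-d}, gamma) over the grid
    likelihood = eps if nak else 1.0 - eps  # nak = 1 for NAK, 0 for ACK
    post = likelihood * prior
    return post / post.sum()                # normalization plays the role of the denominator of (2.30)

def predict(post, trans):
    """Markov prediction step (2.26)/(2.32); trans[i, j] approximates
    p(gamma_t = grid[j] | gamma_{t-d} = grid[i])."""
    return post @ trans
```

Here eps_fn would be the modulation-specific error-rate map, e.g., (2.3) for the uncoded QAM of Example 1.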

Similar to (2.26), we can also write

    p(γt−d+1 | ǫ̂t−d, rt−d) = ∫ p(γt−d+1 | γt−d, ǫ̂t−d, rt−d) p(γt−d | ǫ̂t−d, rt−d) dγt−d   (2.31)
                            = ∫ p(γt−d+1 | γt−d) p(γt−d | ǫ̂t−d, rt−d) dγt−d.   (2.32)

Equations (2.26), (2.30), and (2.32) lead to the following recursive implementation of the greedy rate assignment (2.24). Assuming the availability⁴ of p(γt−d | ǫ̂t−d−1, rt−d−1) when calculating rt, the rate assignment procedure for packet indices t = 0, . . . , T is:

1. Measure ǫ̂t−d, compute p(ǫ̂t−d | ǫ(rt−d, γt−d)) as a function of γt−d, and then calculate the distribution p(γt−d | ǫ̂t−d, rt−d) using (2.30).
2. Calculate p(γt | ǫ̂t−d, rt−d) using the Markov prediction step (2.26).
3. Calculate r̂t via (2.24).
4. If⁵ d > 1, then calculate p(γt−d+1 | ǫ̂t−d, rt−d) via (2.32) for use in the next iteration.

⁴ For the initial packet indices t ∈ {0, . . . , d}, if the pdf p(γt−d | ǫ̂t−d−1, rt−d−1) is unknown, then we suggest to use the prior p(γt−d) in its place.
⁵ Notice that, if d = 1, then p(γt−d+1 | ǫ̂t−d, rt−d) was already computed in step 2.

2.5.2 Block-Rate Algorithm

Since it may be impractical for the transmitter to adapt the rate on a per-packet basis, we now propose a modification of the algorithm detailed in Section 2.5.1 that adapts the rate only once per block of n packets. The main idea behind our block-rate algorithm is that the SNR {γt} and error-rate estimates {ǫ̂t} are treated as if they

were constant over the block, thereby allowing a straightforward application of the method from Section 2.5.1. Though this treatment is suboptimal, our intention is to trade performance for reduced complexity.

The details of our block-rate algorithm are now given. Denoting the block index by i, the block versions of the degraded error-rate estimate and SNR are defined as

    ǫ̂i = (1/n) Σ_{t=in}^{(i+1)n−1} ǫ̂t   (2.33)
    γi = γ_{in+⌊n/2⌋},   (2.34)

and the assigned rates {rt} are related to the calculated rates {ri} as

    rt = r_{⌊t/n⌋}.   (2.35)

Notice that, when n = 1 (so that the block index i coincides with the packet index t), the block-rate quantities reduce to the packet-rate quantities. Here, we use d to denote the delay in blocks.

Borrowing the packet-rate adaptation approach from Section 2.5.1, the block-rate greedy implementation goes as follows. Assuming the availability⁶ of p(γi−d | ǫ̂i−d−1, ri−d−1) when calculating ri, the rate assignment procedure for block indices i = 0, . . . , ⌈T/n⌉ is:

1. Measure {ǫ̂t}_{t=(i−d)n}^{(i−d+1)n−1}, compute ǫ̂i−d via (2.33), compute p(ǫ̂i−d | ǫ(ri−d, γi−d)) as a function of γi−d, and then calculate the inferred SNR distribution p(γi−d | ǫ̂i−d, ri−d) using

    p(γi−d | ǫ̂i−d, ri−d) = p(ǫ̂i−d | ǫ(ri−d, γi−d)) p(γi−d | ǫ̂i−d−1, ri−d−1) / ∫ p(ǫ̂i−d | ǫ(ri−d, γ′i−d)) p(γ′i−d | ǫ̂i−d−1, ri−d−1) dγ′i−d.   (2.36)

⁶ For the initial block indices i ∈ {0, . . . , d}, if the pdf p(γi−d | ǫ̂i−d−1, ri−d−1) is unknown, then we suggest to use the prior p(γi−d) in its place.

2. Calculate p(γi | ǫ̂i−d, ri−d) using the Markov prediction step

    p(γi | ǫ̂i−d, ri−d) = ∫ p(γi | γi−d) p(γi−d | ǫ̂i−d, ri−d) dγi−d.   (2.37)

3. Calculate r̂i via

    r̂i = arg max_{ri ∈ R} ∫ G(ri, γi) p(γi | ǫ̂i−d, ri−d) dγi.   (2.38)

4. If⁷ d > 1, then calculate p(γi−d+1 | ǫ̂i−d, ri−d) as follows for use in the next iteration:

    p(γi−d+1 | ǫ̂i−d, ri−d) = ∫ p(γi−d+1 | γi−d) p(γi−d | ǫ̂i−d, ri−d) dγi−d.   (2.39)

⁷ Notice that, if d = 1, then p(γi−d+1 | ǫ̂i−d, ri−d) was already computed in step 2.

As the adaptation-block size n increases, we expect the packet error rate estimate ǫ̂i to become more accurate (since it is estimated from, e.g., n ACK/NAKs), the SNR model to get less accurate (since a block-fading approximation is being applied to a process that is continuously fading), and the per-packet implementation complexity of the algorithm to decrease.

We note that the block-rate modification proposed here is suboptimal in the sense that the SNR of each packet in a block could have been predicted individually, rather than predicting only the SNR of the packet in the middle of the block. Likewise, individual rates could have been assigned for each packet in the block, rather than a uniform rate for all packets in the block. However, joint optimization of intra-block rates appears to be prohibitively complex and thus goes against our primary motivation for the block-rate algorithm, i.e., simplicity.
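To make steps 1-3 concrete, the following sketch runs one block-rate greedy iteration (d = 1 block) on an SNR grid. For the block likelihood p(ǫ̂i | ǫi) it assumes, as in the setup used later in Section 2.6, that the n ACK/NAKs in a block are conditionally i.i.d., so that the NAK count is Binomial; the error-rate map eps_fn and all helper names are our own illustration:

```python
import math
import numpy as np

def binom_pmf(k, n, eps):
    """p(eps_hat = k/n | eps): probability of k NAKs out of n packets."""
    return math.comb(n, k) * eps ** k * (1.0 - eps) ** (n - k)

def block_greedy_step(prior, grid, rates, r_prev, n_nak, n, eps_fn, p=100):
    # Step 1: Bayes update (2.36) with the Binomial block likelihood.
    eps = eps_fn(r_prev, grid)
    like = np.array([binom_pmf(n_nak, n, e) for e in eps])
    post = like * prior
    post = post / post.sum()
    # Step 2: Markov prediction (2.37); the identity kernel is used here for
    # brevity -- substitute a discretized p(gamma_i | gamma_{i-1}) in practice.
    pred = post
    # Step 3: greedy block-rate choice (2.38), scoring each rate by the
    # expected goodput (r/p)(1 - eps) per (2.1).
    scores = [(r / p) * np.sum((1.0 - eps_fn(r, grid)) * pred) for r in rates]
    return rates[int(np.argmax(scores))], post
```

With a NAK-heavy block, the posterior shifts toward low SNR and the chosen rate backs off, mirroring the qualitative behavior described above.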

Finally, we note that a similar block-rate modification can also be applied to the causal genie scheme (2.16), which has been recognized as a non-degraded-feedback version of the greedy scheme (2.10). However, doing so would spoil the total-goodput optimality of the packet-rate causal genie that was identified in Lemma 1.

2.6 Numerical Results

We now describe the results of numerical experiments in which we assume uncoded square-QAM modulation, a Gauss-Markov fading channel, and minimum variance unbiased (MVU) estimation of the error-rate, as detailed below. While other examples of modulation, error-rate estimation, and fading could have been employed, we feel that our choices are sufficient to illustrate the essential behaviors of the generic rate adaptation schemes discussed in Sections 2.4 and 2.5.

2.6.1 Setup

For our numerical experiments, we used the uncoded QAM modulation/demodulation scheme described in Example 1, which yields the packet error-rate given in (2.3). We used squared-integer constellation sizes, i.e., 4-QAM, 9-QAM, 16-QAM, etc. In addition, we used causal degraded error-rate feedback in the form of one ACK/NAK per transmitted packet, using 0 for an ACK and 1 for a NAK. Thus, in a block⁸ of n packets, there were n ACK/NAKs. Given this setup, if ǫi denotes the value of the true packet error rate over the i-th block, it can be shown that the MVU estimate [21] of the average packet error rate over the i-th block can be computed by a simple arithmetic average of the n ACK/NAKs. Notice that this MVU estimate corresponds exactly to the block error-rate estimate ǫ̂i specified in (2.33).

⁸ The results here also hold for packet-rate adaptation through the choice n = 1.

Notice that α = 1 corresponds to i. .41). the SNR γt is exponentially distributed with mean value 2Kα .40) Thus. We then generate a packet-rate SNR process {γt } by scaling the squared magnitude of gt : γ t = K |gt |2 . ǫi ) and the error estimate ǫ p(ˆ ǫi = k n | ǫi ) = n k 0 n −k ǫk i (1 − ǫi ) for k = 0. we ﬁrst notice from (2. whereas α = 0 corresponds to a time-invariant gain. we can calculate ǫi = ǫ(r i . . (2. γ i ) as a function of γ i using (2. . it is possible to independently control the (steady-state) mean and coherence time of the SNR process {γt }.i. n else. gains. for steady-state indices t. we ﬁnd that gt = (1 − α)nd gt−nd + α nd−1 j =0 (1 − α)j wt−j . To generate the Markov block-rate SNR process {γ i }. . Thus. (2.34) that p(γ i |γ i−d ) = p(γt | γt−nd ).3) and plug the results into p(ˆ ǫi | ǫi ) from (2.ˆi obeys then the number of NAKs per block is Binomial(n. (2.41) where {wt } is a zero-mean unit-variance white circular Gaussian driving process and 0 ≤ α ≤ 1.43) 24 .d. Then. In fact.40) in order to compute (2. we ﬁrst generate a packetrate complex-valued Gauss-Markov “channel gain” [22] process {gt } using gt = (1 − α)gt−1 + αwt .42) The scaling parameter K in (2. it can be shown that.42) is essential because α aﬀects both the (steadystate) coherence time and the mean-squared value of the gain {gt }. by using the two parameters K and α. (2. 2−α To evaluate p(γ i | γ i−d ). from (2.36).

× Io Kα(1 − (1 − α)2nd ) (2. and noncausal genie. both with and without ﬁnite-buﬀer constraints at the transmitter. Since this scheme has access to 25 .e. current. As shown in Section 2.where nd−1 j j =0 (1 − α) wt−j (1 − (1 − α)2nd ) .e. as the feedback delay d and/or the block size n increases..5 relative to three reference schemes: ﬁxed rate. arg maxrt G(rt . and uses this information to choose the goodput-maximizing rate. constellation size) that maximizes expected goodput under the prior SNR distribution.2 Results Numerical experiments were conducted to investigate the steady-state performance of the greedy algorithm from Section 2. the causal genie’s ability to predict the SNR decreases.44) 2. this ﬁxed rate would be optimal. The so-called non-causal genie reference scheme assumes perfect knowledge of SNR γt for all past.e. causal genie.4 adapts the rate to maximize expected goodput under perfect causal feedback of the error rate ǫt or. total-goodput maximizing. we show ∼ CN 0. The so-called ﬁxed-rate reference scheme chooses the ﬁxed rate (i.. γt )p(γt )dγt .. However. the SNR γt .4. and thus its goodput suﬀers. i. The causal genie reference scheme deﬁned in Section 2. From this fact. equivalently. 1−(11 −α)2 in Appendix A that p(γt | γt−nd ) = 2−α 2Kα(1 − (1 − α)2nd ) −(γt + (1 − α)2nd γt−nd )(2 − α) × exp 2Kα(1 − (1 − α)2nd ) √ (1 − α)nd γt γt−nd (2 − α) .6. In the absence of feedback. and future packets. the goodput attained by the causal genie upper bounds that of optimal rate selection under degraded feedback. i.

more information than the causal-genie and greedy algorithms, it upper bounds them in terms of goodput.

The steady-state goodputs (per symbol per packet) reported in the figures were calculated by averaging instantaneous goodputs over the packets in 1000 channel realizations for Figs. 2.3-2.4 and 500 channel realizations for Figs. 2.5-2.6. For each channel realization, 200 packets (each consisting of p = 100 symbols) were transmitted. To ensure that steady-state performance was reported, the algorithms were initialized at the goodput-maximizing rate for each new channel realization.

Infinite Buffer Experiments

For the first set of experiments, we assumed an infinitely back-logged queue at the transmitter. Unless otherwise noted, the following parameters were used: block size n = 1 packet, feedback delay d = 1 packet, mean SNR E{γt} = 25 dB, and fading-rate parameter α = 0.001.

Figure 2.3 plots steady-state goodput as a function of mean SNR E{γt}. To vary E{γt}, we varied the parameter K while keeping α = 0.01. The plot shows that the greedy algorithm exhibits an increasing gain over the fixed-rate algorithm as mean SNR increases. At low mean SNR, little gain is observed because the optimal constellation size is almost always the smallest one, as can be inferred from Fig. 2.2. But the SNR gap between the greedy and fixed-rate schemes grows as mean SNR increases. Furthermore, at higher mean SNRs, the greedy algorithm performs about 1 dB worse (in SNR) than the causal genie, whereas the fixed-rate scheme performs about 5 dB worse. Since the steady-state goodput achieved by the causal genie upper bounds that achievable by any causal-feedback-based rate adaptation algorithm, one can infer that greedy adaptation based on 1-bit ACK/NAK feedback is sufficient to attain a major fraction of the gain achievable by any causal feedback scheme.
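For readers who wish to reproduce the channel setup of Section 2.6.1, the Gauss-Markov gain (2.41) and SNR scaling (2.42) can be simulated in a few lines. This is our own sketch, not the dissertation's code; we draw wt with unit variance per real dimension, which reproduces the stated steady-state mean 2Kα/(2 − α):

```python
import numpy as np

def simulate_snr(T, K, alpha, rng):
    """Simulate gamma_t = K |g_t|^2 (2.42) with g_t = (1-alpha) g_{t-1} + alpha w_t (2.41)."""
    # Draw g_0 from the steady state, where E|g|^2 = 2*alpha/(2-alpha).
    g = np.sqrt(alpha / (2.0 - alpha)) * (rng.standard_normal() + 1j * rng.standard_normal())
    gamma = np.empty(T)
    for t in range(T):
        w = rng.standard_normal() + 1j * rng.standard_normal()  # circular Gaussian driving noise
        g = (1.0 - alpha) * g + alpha * w                       # (2.41)
        gamma[t] = K * np.abs(g) ** 2                           # (2.42)
    return gamma

rng = np.random.default_rng(1)
gamma = simulate_snr(200_000, K=1.0, alpha=0.5, rng=rng)  # empirical mean approaches 2*K*alpha/(2-alpha)
```

In the experiments above, K would then be tuned so that 2Kα/(2 − α) equals the desired mean SNR (e.g., 25 dB), while α sets the coherence time.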

[Figure 2.3: Steady-state goodput versus mean SNR E{γt} for α = 0.01, block size n = 1 packet, and delay d = 1 packet. Curves: log₂(1 + SNR_mean), non-causal genie, causal genie, greedy algorithm, and fixed-rate algorithm.]

Figure 2.4 shows steady-state goodput versus fading-rate parameter α for mean SNR E{γt} = 25 dB. From the plot, the following can be observed: as α decreases, both the causal genie and the greedy algorithm approach the non-causal genie, whereas, as α increases, both the causal genie and the greedy algorithm approach the fixed-rate algorithm. This is because lower α corresponds to slower channel variation and thus more accurate prediction of instantaneous SNR. The non-causal genie and fixed-rate algorithms yield essentially constant⁹ steady-state goodput versus α. For a wide range of α, it can be seen that the greedy algorithm performs closer to the causal genie than it does to the fixed-rate algorithm. Thus, we conclude that the greedy scheme captures a dominant fraction (e.g., ≈ 90% at low α) of the goodput gain achievable under causal feedback.

⁹ Deviations from constant are due to finite averaging effects.

[Figure 2.4: Steady-state goodput versus α for E{γt} = 25 dB, block size n = 1 packet, and delay d = 1 packet.]

Figure 2.5 plots steady-state goodput versus feedback delay d for packet-rate adaptation, i.e., n = 1. By definition, the non-causal genie has access to all past, current, and future SNRs, so its performance is unaffected by delay. As for the causal genie and greedy algorithms, as the delay d increases, their causally predicted SNR distributions converge to the prior SNR distribution, so that the causal genie and greedy algorithms eventually perform no better than the fixed-rate algorithm. Still, when d = 1, their steady-state goodputs measure 30% and 20% above that of the fixed-rate algorithm, respectively. For all delays, the greedy algorithm performs closer to the causal genie than to the fixed-rate algorithm, i.e., the simple greedy scheme captures a dominant fraction of the goodput gain achievable under causal feedback.

Figure 2.6 plots steady-state goodput versus block size n packets for delay d = 1 packet and α = 0.001. For all tested block sizes, the greedy algorithm performs much closer to the causal genie than to the fixed-rate algorithm, implying that the greedy

The performances of all adaptive schemes decrease with block size. This is for two reasons: first, as the block length increases, the SNR must be predicted farther into the future; and, second, a uniform rate is applied across the block, whereas the optimal rate varies across the block. Notice that even the performance of the non-causal genie degrades as n increases, due to the sub-optimality of its uniform rate assignment across the block.

Figure 2.5: Steady-state goodput versus delay d for E{γt} = 25 dB, α = 0.001, and block size n = 1 packet. [Plot: steady-state goodput versus delay for the non-causal genie, causal genie, quantized genie with 2, 4, and 7 states, greedy algorithm, and fixed-rate algorithm.]

Figures 2.5 and 2.6 also plot the performance of the so-called quantized genie reference scheme, which adapts the rate to maximize goodput under quantized, but otherwise perfect, knowledge of SNR γt−d.

The goodput attained by the quantized genie upper bounds (Footnote 10) the goodput attained by a transmitter that assumes a finite-state Markov SNR model and employs optimal POMDP-based rate assignment. To construct the corresponding finite-state Markov model, we quantized the SNR using the Lloyd-Max algorithm [23] and calculated the state transition probability matrix via [24, eq. (15)-(16)]. Apart from the finite-state SNR model, rate assignment for the quantized genie is identical to that for the causal genie.

Footnote 10: The fact that the quantized genie yields an upper bound in the case of a finite-state Markov channel follows directly from Lemma 1, which holds for both continuous and finite-state Markov channels.

Figure 2.6: Steady-state goodput versus block size n for E{γt} = 25 dB, α = 0.001, and delay d = 1 packet. [Plot: steady-state goodput versus block size for the non-causal genie, causal genie, quantized genie with 2, 4, and 7 states, greedy algorithm, and fixed-rate algorithm.]

Figures 2.5 and 2.6 show that the greedy algorithm outperforms the 2- and 4-state quantized genies throughout most of the examined range of d and n, and performs on par with the 7-state quantized genie. Thus, we conclude that the greedy algorithm outperforms the optimal POMDP-based rate adaptation scheme based on a finite-state Markov SNR model with 7 states or less.
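The finite-state model construction described above (Lloyd-Max SNR quantization plus a state transition probability matrix) can be sketched as follows. This is an illustrative reimplementation, not the cited routines of [23] and [24]: the Lloyd-Max step is written as plain 1-D k-means-style alternation, and the transition matrix is estimated from sample counts of an assumed exponential (Rayleigh-fading) SNR sequence.

```python
import numpy as np

def lloyd_max(samples, n_levels, iters=100):
    """1-D Lloyd-Max quantizer (equivalently, k-means on scalars):
    alternate nearest-level assignment with centroid updates."""
    levels = np.quantile(samples, (np.arange(n_levels) + 0.5) / n_levels)
    for _ in range(iters):
        edges = (levels[:-1] + levels[1:]) / 2            # decision thresholds
        idx = np.searchsorted(edges, samples)             # nearest-level index
        for j in range(n_levels):
            if np.any(idx == j):
                levels[j] = samples[idx == j].mean()      # centroid update
    return levels, np.searchsorted((levels[:-1] + levels[1:]) / 2, samples)

def transition_matrix(states, n_levels):
    """Empirical state-transition probabilities from a state sequence."""
    P = np.zeros((n_levels, n_levels))
    for s, s_next in zip(states[:-1], states[1:]):
        P[s, s_next] += 1
    return P / np.maximum(P.sum(axis=1, keepdims=True), 1)

rng = np.random.default_rng(1)
snr = rng.exponential(scale=10.0, size=50_000)   # Rayleigh fading -> exp. SNR
levels, states = lloyd_max(snr, n_levels=4)
P = transition_matrix(states, 4)
```

With a correlated (rather than i.i.d.) SNR input, the same counting step yields the Markov transition matrix that a 2-, 4-, or 7-state quantized genie would use.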

This is notable because the computational complexity of optimal POMDP-based rate adaptation is significant for channel models with more than a few states, under typical horizons.

Finite Buffer Experiments

For this second set of experiments, a finite data buffer was employed at the transmitter. The packet arrival rate followed a 2-state Markov model with ON and OFF states. In the ON state, a single packet arrives in the buffer (queue), and, in the OFF state, no packets arrive. The self-transition probability in both ON and OFF states was set to 0.9 in order to mimic bursty traffic; consequently, the steady-state probability of each state is 0.5 and the long-term arrival rate is 0.5 packets/interval. The size of an arriving packet was set equal to the number of bits transmitted (per packet interval) by the fixed-rate reference scheme under backlogged conditions, and the size of the buffer was set equal to 30 such packets of data. Thus, if packets were arriving persistently, then, in the absence of ACKs, the buffer would go from totally empty to totally full after 30 arrivals, while, in the absence of NAKs, the fixed-rate scheme would yield a fixed buffer occupancy. Bits are removed from the buffer when an ACK arrives, confirming their successful transmission, or when the buffer overflows. The following parameters were used: block size n = 1 packet, feedback delay d = 1 packet, and mean SNR E{γt} = 25 dB. For each channel realization, 1000 packets were transmitted (each consisting of p = 100 symbols) and the buffer was initialized at half-full. The values reported in the figures represent the average of all packets in 1000 channel realizations, where a buffer occupancy of "b" is to be interpreted as b arrival-packets worth of bits.

Figure 2.7 plots average buffer occupancy versus fading-rate parameter α.
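The arrival and buffer model above can be sketched as follows. Only the arrival side (ON/OFF Markov bursts with self-transition probability 0.9 and a 30-packet buffer) follows the text; the service process here is a simplifying assumption, a fixed per-interval success probability standing in for the rate-adaptive link and its ACK/NAK feedback.

```python
import numpy as np

def simulate_arrivals_and_buffer(n_steps, p_stay=0.9, buf_size=30,
                                 serve_prob=0.5, seed=2):
    """Simulate the 2-state ON/OFF Markov arrival process feeding a finite
    buffer.  One packet arrives per ON interval; a buffered packet departs
    with probability `serve_prob` (a hypothetical stand-in for an ACKed
    transmission by the adaptive link)."""
    rng = np.random.default_rng(seed)
    on, buf, drops, arrivals = True, buf_size // 2, 0, 0   # start half-full
    occupancy = np.empty(n_steps)
    for t in range(n_steps):
        if rng.random() > p_stay:      # leave the current state w.p. 1 - 0.9
            on = not on
        if on:
            arrivals += 1
            if buf < buf_size:
                buf += 1
            else:
                drops += 1             # arrival to a full buffer is lost
        if buf > 0 and rng.random() < serve_prob:
            buf -= 1                   # successful (ACKed) transmission
        occupancy[t] = buf
    return occupancy, arrivals, drops

occ, arrivals, drops = simulate_arrivals_and_buffer(100_000)
on_fraction = arrivals / 100_000       # should approach the 0.5 steady state
```

Because the chain is symmetric, the long-run ON fraction (and hence the arrival rate) converges to 0.5 packets/interval, matching the setup described above.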

It can be seen that the buffer occupancy achieved by the greedy algorithm is very close to that achieved by the causal and non-causal genie algorithms, whereas the buffer occupancy achieved by the fixed-rate scheme is much higher, especially at lower values of α. Recall that, when α is low, the SNR can remain below average for prolonged periods of time, during which fixed-rate transmissions are more likely to yield NAKs and hence fill the buffer. Figure 2.8 plots a related statistic: the fraction of packets that are dropped due to buffer overflows. Here again, the drop rate achieved by the greedy algorithm is very close to that achieved by the causal and non-causal genie algorithms, whereas the drop rate achieved by the fixed-rate algorithm is more than 10 times higher.

Figure 2.7: Average buffer occupancy versus α for Markov arrivals with average rate = 0.5 packets/interval, buffer size = 30 packets, E{γt} = 25 dB, block size n = 1 packet, and delay d = 1 packet. [Plot: average buffer occupancy (1 to 9) versus α for the non-causal genie, causal genie, greedy algorithm, and fixed-rate schemes.]

Figure 2.8: Average drop rate versus α for Markov arrivals with average rate = 0.5 packets/interval, buffer size = 30 packets, E{γt} = 25 dB, block size n = 1 packet, and delay d = 1 packet. [Plot: drop rate (10^-7 to 10^0) versus α for the non-causal genie, causal genie, greedy algorithm, and fixed-rate schemes.]

Figure 2.9 shows steady-state goodput versus fading-rate parameter α for Markov arrivals and finite buffer size. The steady-state goodput achieved by the greedy scheme is very close to that of the causal and non-causal genie schemes, whereas the steady-state goodput achieved by the fixed-rate scheme is much lower, especially when α is small. The increase of steady-state goodput with α is directly related to the decrease in drop rate with α observed in Fig. 2.8, since dropped packets do not contribute to goodput.

2.7 Summary

In this chapter, we studied rate adaptation schemes that use degraded error-rate feedback (e.g., packet-rate ACK/NAKs) to maximize finite-horizon expected goodput over continuous Markov flat-fading wireless channels.

First, we specified the POMDP that leads to the optimal rate schedule and showed that its solution is computationally impractical. Then, we proposed a simple greedy alternative and showed that, while generally suboptimal, the greedy approach is optimal when the error-rate feedback is non-degraded. We then detailed an implementation of the greedy rate-adaptation scheme in which the SNR distribution is estimated online (from degraded error-rate feedback) and combined with offline-calculated goodput-versus-SNR curves to find the expected-goodput-maximizing transmission rate. In addition to the packet-rate greedy adaptation scheme, a block-rate greedy adaptation scheme was also proposed that offers the potential for significant reduction in complexity with only moderate sacrifice in performance.

Figure 2.9: Steady-state goodput versus α for Markov arrivals with average rate = 0.5 packets/interval, buffer size = 30 packets, E{γt} = 25 dB, block size n = 1 packet, and delay d = 1 packet. [Plot: steady-state goodput (≈1.5 to 1.95) versus α for the non-causal genie, causal genie, greedy algorithm, and fixed-rate schemes.]

For the particular case of uncoded square-QAM transmission, packet-rate ACK/NAK feedback, and Rayleigh fading, the greedy scheme was numerically compared to three reference schemes: the optimal fixed-rate scheme, a genie-aided scheme with perfect causal SNR knowledge, and a genie-aided scheme with perfect non-causal SNR knowledge. First, the effects of mean SNR, channel fading rate, and feedback delay on steady-state goodput were investigated in the context of an infinitely backlogged transmission queue. The results suggest that the simple packet-rate greedy scheme captures a dominant fraction of the achievable goodput under causal feedback, whereas the optimal fixed-rate scheme captures significantly less. Comparisons were also made to a "quantized genie" scheme that upper bounds optimal adaptation under a finite-state Markov SNR model, and there it was found that the proposed greedy scheme outperformed the quantized genie scheme with up to 7 states. In this case, the causal genie reference is especially meaningful because it upper bounds the performance of the optimal POMDP scheme, which is too complex to implement directly. Since POMDP-based optimal rate-adaptation for discrete-Markov channels with 7 or more states would be computationally intensive, greedy rate-adaptation based on a continuous-Markov channel model is more appealing. Second, a finite transmission buffer was considered, and the effects of channel fading rate on buffer occupancy, drop rate, and steady-state goodput were investigated. Similarly, the drop rate and average buffer occupancy of the greedy scheme were nearly equal to those of the causal and non-causal genie-aided schemes, whereas the drop rate and average buffer occupancy of the fixed-rate scheme were much higher (e.g., an order-of-magnitude higher in the case of drop rate).

Chapter 3: Joint Scheduling and Resource Allocation in the OFDMA Downlink under Imperfect Channel-State Information

3.1 Introduction

In the downlink of a wireless orthogonal frequency division multiple access (OFDMA) system, the base station (BS) delivers data to a pool of users whose channels vary in both time and frequency. Since bandwidth and power resources are limited, the BS would like to allocate them most effectively, e.g., by pairing users with strong subchannels and distributing power in the best possible manner. At the same time, the BS may need to maintain per-user quality-of-service (QoS) constraints, such as a minimum reliable rate for each user. Overall, the BS faces a resource allocation problem where the goal is to maximize an efficiency-related quantity (e.g., a function of goodput) under particular (e.g., power) constraints [25]. For resource allocation, one would ideally like to have access to instantaneous channel state information (CSI). Such CSI is difficult to obtain in practice, however, and so resource allocation must be accomplished under imperfect CSI. Thus, in this chapter, we consider simultaneous user-scheduling, power-allocation, and rate-selection in an OFDMA downlink, with the goal of maximizing expected sum-utility under a sum-power constraint, given only a generic distribution for the subchannel signal-to-noise ratios (SNRs).

In doing so, we consider relatively generic goodput-based utilities, facilitating, e.g., throughput-based pricing (e.g., [26–28]), quality-of-service enforcement, and/or the treatment of practical modulation-and-coding schemes (MCS). We consider the above scheduling and resource allocation (SRA) problem under two scenarios. In the first scenario, we allow multiple users (and/or MCSs) to time-share any given subchannel and time-slot. This scenario occurs, e.g., in OFDMA systems where several users are multiplexed within a time-slot, such as in the Dedicated Traffic Channel (DTCH) mode of UMTS-LTE [32]. Although the resulting optimization problem is non-convex, we show that it can be converted into a convex problem and solved exactly using a dual optimization approach. Based on a detailed analysis of the optimal solution, we propose a novel bisection-based algorithm that is faster than state-of-the-art golden-section based approaches (e.g., [31]) and that admits finite-iteration performance guarantees. In the second scenario, we allow at most one combination of user and MCS to be used on any given subchannel and time-slot. This scenario occurs widely in practice, such as in IEEE 802.16/WiMAX [29] and 3GPP LTE [30], and results in a mixed-integer optimization problem. Based on a detailed analysis of the optimal solution to this problem and its relationship to that in the first scenario, we propose a novel suboptimal algorithm that is faster than state-of-the-art golden-section and subgradient based approaches (e.g., [31, 33]), and we derive a novel tight bound on the optimality gap of our algorithm. Finally, we simulate our algorithms under various OFDMA system configurations, comparing against state-of-the-art approaches and genie-aided performance bounds.

The remainder of this chapter is organized as follows. In Section 3.2, we discuss some past work that is related to our problem. In Section 3.3, we outline the system model and frame our optimization problems. In Section 3.4, we consider the "continuous" problem, where each subchannel can be shared by multiple users and rates, and find its exact solution. In Section 3.5, we consider the "discrete" problem, where each subchannel can support at most one combination of user and rate per time slot. In Section 3.6, we compare the performance of the proposed algorithms to reference algorithms under various settings. Finally, in Section 3.7, we conclude.

3.2 Past Work

The problem of OFDMA downlink SRA under perfect CSI has been studied in several papers, notably [34–39]. In [34], a weighted-sum capacity maximization problem with/without subchannel sharing was formulated to allocate subcarriers and powers. In [35], a subchannel, rate, and power allocation algorithm was developed to minimize power consumption while maintaining a total rate-allocation requirement for every user. In [38], non-convex optimization problems regarding weighted sum-rate maximization and weighted sum-power minimization were solved using a Lagrange dual decomposition method, and efficient subgradient-based algorithms were proposed. In [39], a utility maximization framework for discrete allocation was formulated to balance system efficiency and fairness. Compared to the above works, we propose bisection-based algorithms that are faster for both the discrete and continuous allocation scenarios. Unlike [34–39], our utility framework can be

applied to problems with/without fixed rate-power functions (Footnote 11), as well as those based on pricing models; e.g., it can be applied to pricing-based utilities (such as responsive pricing and proportional fairness pricing) [26].

Footnote 11: By a "fixed rate-power function" we mean that, for a given SNR, the achievable rate is a known function of the power.

The problem of OFDMA downlink SRA under imperfect CSI was also studied in several papers, notably [31, 33, 40, 41]. In [40], the effect of heterogeneous delay requirements and outdated CSI on a particular discrete resource allocation problem was studied; both optimal and heuristic algorithms were then proposed to implement the obtained solution. In [41], the problem of total transmit power minimization, subject to strict constraints on conditional expected user capacities, was investigated. Compared to these two works, we consider a general utility maximization problem that allows us to attack problems that may or may not be based on fixed rate-power functions, under a more general class of channel estimators, and for both discrete and continuous subchannel allocations. In [31], a deterministic optimization problem was formulated using an upper bound on system capacity (via Jensen's inequality) as the objective. In [33], the authors considered the problem of discrete ergodic weighted sum-rate maximization for user scheduling and resource allocation, and studied the impact of channel estimation error due to pilot-aided MMSE channel estimation. Relative to these works, we propose faster algorithms, applicable to a general utility maximization framework (of which the objectives in [31, 33] are special cases). Furthermore, we study the relationship between the discrete and continuous allocation scenarios, and provide a tight bound on the duality gap of our proposed discrete-allocation scheme. Our algorithms are inspired by a rigorous analysis of the optimal solutions to the discrete and continuous problems. In addition, we propose

faster algorithms for both continuous and discrete allocation problems, with provable bounds on their performances.

3.3 System Model

Consider a downlink OFDMA system with N subchannels and K active users (N, K ∈ Z+), as shown in Fig. 3.1. We assume that, for each user, there is an infinite backlog of data at the base-station, so that there is always data available to be transmitted. During every channel use, the base-station transmits codeword(s) from a generic signaling scheme, which propagate to the intended mobile recipient(s) through their respective fading channels. The OFDMA subchannels are assumed to be non-interfering, with gains that are time-invariant over each codeword duration and statistically independent across OFDMA subchannels and users. For a given user k, power p, and modulation and coding scheme (MCS) m, the successful reception of a transmitted codeword depends on the corresponding subchannel's SNR γ. Here, the subchannel SNR γ is treated as an exogenous parameter, so that pγ is the effective received SNR. We assume that, for user k, the MCS indexed by m ∈ {1, ..., M} corresponds to a transmission rate of rk,m bits per codeword and a codeword error probability of ǫk,m(pγ) = ak,m e^{−bk,m pγ} for known constants ak,m and bk,m (see, e.g., [33]). The scheduler-and-resource-allocator at the base-station uses the imperfect CSI to send data to the users in a way that maximizes utility. To precisely state our scheduling and resource allocation (SRA) problem, some additional notation is useful. To indicate how subchannels are partitioned among users and rates in each time-slot, we will use the proportionality indicator In,k,m

∈ [0, 1], where In,k,m = 1 means that subchannel n is fully dedicated to user k at MCS m, and In,k,m = 0 means that subchannel n is totally unavailable to user k at MCS m. Here, n is the subchannel index. The subchannel resource constraint is then expressed as Σ_{k,m} In,k,m ≤ 1 for all n. We will use pn,k,m ≥ 0 as the power that would be expended on subchannel n if it were fully allocated to the user/rate combination (k, m). With this definition, the total expended power becomes Σ_{n,k,m} In,k,m pn,k,m. We will also use γn,k to denote the nth subchannel's SNR for user k. Although we assume that the BS does not know the SNR realizations {γn,k}, we assume that it does know the (marginal) distribution of each γn,k. In the sequel, we consider two flavors of the SRA problem: a "continuous" one, where each subchannel can be shared among multiple users and/or rates per time slot (i.e., In,k,m ∈ [0, 1]), and a "discrete" one, where each subchannel can be allocated to at most one user/rate combination per time slot (i.e., In,k,m ∈ {0, 1}).

Figure 3.1: System model of a downlink OFDMA system with N subchannels and K users. [Diagram: a partial-CSI scheduler and resource allocator at the base station maps the user messages onto subchannels n = 1, ..., N for users 1, ..., K.]

Finally, when subchannel n is fully dedicated to user k with MCS m and power pn,k,m, the goodput gn,k,m = (1 − ak,m e^{−bk,m γn,k pn,k,m}) rk,m quantifies the expected number of

bits, per codeword, transmitted without error. In the sequel, we focus on maximizing goodput-based utilities of the form Un,k,m(gn,k,m), where Un,k,m(·) is any generic real-valued function that is twice differentiable, strictly-increasing, and concave, with U′n,k,m(·) > 0 and U″n,k,m(·) ≤ 0. (These conditions imply Un,k,m(0) < ∞.) In particular, we aim to maximize the expected sum utility, where the expectation is taken over the subchannel-SNRs {γn,k} hidden within the goodputs. Incorporating a sum-power constraint of Pcon, our SRA problem becomes

SRA = max_{{In,k,m}, {pn,k,m ≥ 0}} E{ Σ_{n=1}^{N} Σ_{k=1}^{K} Σ_{m=1}^{M} In,k,m Un,k,m(gn,k,m) }   (3.1)
      s.t. Σ_{k,m} In,k,m ≤ 1 ∀n  and  Σ_{n,k,m} In,k,m pn,k,m ≤ Pcon.

The above formulation is sufficiently general to address a wide class of objectives. For example, to maximize sum-goodput, one would simply use Un,k,m(g) = g, and, for weighted sum-goodput, one would instead choose Un,k,m(g) = wk g with appropriately chosen weights {wk}. To maximize weighted sum capacity, one would choose M = 1, ak,1 = bk,1 = rk,1 = 1, and set Un,k,1(g) = wk log(1 − log(1 − g)) for g ∈ [0, 1), which yields Un,k,1(gn,k,1) = wk log(1 + pn,k,1 γn,k). Commonly used utilities constructed from concave functions of capacity log(1 + pn,k,m γn,k), such as max-min fairness and the utilities in [34] and [31], can also be handled by our formulation, as can various pricing models [26], such as flat-pricing, responsive pricing, proportional fairness pricing, and effective-bandwidth pricing. For example, the utility Un,k,m(g) = 1 − e^{−wk g} (for some positive {wk}) is appropriate for "elastic" applications such as file transfer [27, 28]. Next, in Section 3.4, we study the SRA problem for the continuous case In,k,m ∈ [0, 1], and, in Section 3.5, we study it for the discrete case In,k,m ∈ {0, 1}.
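The weighted sum-capacity reduction above can be checked numerically: with ak,1 = bk,1 = rk,1 = 1, the goodput is g = 1 − e^{−γp}, so log(1 − g) = −γp and Un,k,1(g) collapses to wk log(1 + pγ). A minimal sketch, with arbitrary illustrative values of p, γ, and wk:

```python
import math

def goodput(p, gamma, a=1.0, b=1.0, r=1.0):
    """Goodput g = (1 - a*exp(-b*gamma*p)) * r from the system model."""
    return (1 - a * math.exp(-b * gamma * p)) * r

def capacity_utility(g, w=1.0):
    """U(g) = w * log(1 - log(1 - g)), valid for g in [0, 1)."""
    return w * math.log(1 - math.log(1 - g))

p, gamma, w = 2.0, 0.7, 1.3
u = capacity_utility(goodput(p, gamma), w)
cap = w * math.log(1 + p * gamma)   # weighted capacity it should reproduce
```

The two quantities agree exactly (up to floating-point rounding), confirming that the goodput-based utility family covers capacity-style objectives.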

k.k.2) k.m(In.k.m γn.k /In.k.3) where Fn.m Fn.m I∈ICSRA xn. We refer to this problem as the “continuous scheduling and resource allocation” (CSRA) problem.t. Recall that this problem arises when sharing of any subchannel by multiple users and/or multiple MCS combinations is allowed.k. 1]N ×K ×M .k.m if In.m γn.k.k.m e−bk.m (3.k. m)th element as In. Moreover.t.m(In.m pn.m In.m ≤ Pcon .m ≥0} n. Deﬁning I as the N × K × M matrix with (n.m ∈ [0.m {xn.k. making it a non-convex optimization problem. (3.k.m = 0 otherwise.m.k.k.k.k.k.3) is a convex optimization problem with a convex objective function and linear inequality constraint.3.m.m pn.k.k. xn. ·) is given by Fn.k.k.k.m e−bk.k )rk.m ) = 0 − E Un. In order to convert it into a convex optimization problem.4 Optimal Scheduling and Resource Allocation with subchannel sharing In this section.m s. Slater’s condition is satisﬁed 43 .k.m ≤ Pcon .m = In.m.(3.m) s. we address the SRA problem in the case where In. k.k.m In.m (1 − ak.k.k. the CSRA problem can be stated as CSRA := min − In.m(·. we write the “actual” power allocated to user k at MCS m on subchannel n as xn.m and the domain of I as ICSRA := I : I ∈ [0.m )rk. the problem becomes CSRA = min In. 1] ∀(n.k. n. xn. m).m pn. k.m (1 − ak.m ≤ 1 ∀n .k. n. {pn.k.k.k.4) The modiﬁed problem in (3.m xn.m E Un. Then. This problem has a non-convex constraint set.m ≥0} I∈ICSRA n.

Hence, the solution of (3.3) is the same as that of its dual problem (i.e., zero duality gap) [42]. Writing the dual formulation, using µ as the dual variable, the Lagrangian of (3.3) is

L(µ, I, x) = Σ_{n,k,m} Fn,k,m(In,k,m, xn,k,m) + µ ( Σ_{n,k,m} xn,k,m − Pcon ),   (3.5)

where we use x to denote the N × K × M matrix [xn,k,m], and x ⪰ 0 means that xn,k,m ≥ 0 ∀n, k, m. The corresponding unconstrained dual problem, then, becomes

max_{µ ≥ 0} min_{I ∈ I_CSRA, x ⪰ 0} L(µ, I, x) = max_{µ ≥ 0} L(µ, I*(µ), x*(µ, I*(µ))) = L(µ*, I*(µ*), x*(µ*, I*(µ*))),   (3.6)

where I*(µ) ∈ I_CSRA denotes the optimal I for a given µ, x*(µ, I) denotes the optimal x for a given µ and I, and µ* denotes the optimal µ. Let us denote the optimal I and x for (3.3) by I*_CSRA and x*_CSRA, respectively, and let p*_CSRA be the corresponding p. In the next few subsections, we will optimize the Lagrangian according to (3.6) w.r.t. x, I, and µ in Sections 3.4.1, 3.4.2, and 3.4.3, respectively. Finally, we discuss some important properties of the CSRA solution and propose an iterative algorithm to solve the CSRA problem in Section 3.4.4.

3.4.1 Optimizing over total powers, x, for a given µ and user-MCS allocation matrix I

The Lagrangian in (3.5) is a convex function of x. Therefore, any local minimum of the function is a global minimum. Calculating the derivative of L(µ, I, x) w.r.t. xn,k,m,

we get, for In,k,m ≠ 0,

∂L(µ, I, x)/∂xn,k,m = µ − ak,m bk,m rk,m E{ U′n,k,m((1 − ak,m e^{−bk,m γn,k xn,k,m/In,k,m}) rk,m) γn,k e^{−bk,m γn,k xn,k,m/In,k,m} }.   (3.7)

If In,k,m = 0, then L(·, ·, ·) is an increasing (Footnote 12) function of xn,k,m since µ ≥ 0; therefore, x*n,k,m(µ, I) = 0. But if In,k,m ≠ 0, then ∂L(µ, I, x)/∂xn,k,m is an increasing function of xn,k,m, since U′n,k,m(·) is a decreasing function of its (increasing) argument and e^{−bk,m γn,k xn,k,m/In,k,m} is a decreasing function of xn,k,m. Therefore, if a positive x̃n,k,m(µ, I) exists satisfying

∂L(µ, I, x)/∂xn,k,m = 0,   (3.8)

then it is the minimizer, and we have

x*n,k,m(µ, I) = x̃n,k,m(µ, I) if 0 ≤ µ ≤ ak,m bk,m rk,m U′n,k,m((1 − ak,m) rk,m) E{γn,k}, and x*n,k,m(µ, I) = 0 otherwise,   (3.9)

where x̃n,k,m(µ, I) satisfies

µ = ak,m bk,m rk,m E{ U′n,k,m((1 − ak,m e^{−bk,m γn,k x̃n,k,m(µ,I)/In,k,m}) rk,m) γn,k e^{−bk,m γn,k x̃n,k,m(µ,I)/In,k,m} }.   (3.10)

From (3.10), we observe that x̃n,k,m(µ, I) = In,k,m p̃n,k,m(µ), where p̃n,k,m(µ) satisfies

µ = ak,m bk,m rk,m E{ U′n,k,m((1 − ak,m e^{−bk,m γn,k p̃n,k,m(µ)}) rk,m) γn,k e^{−bk,m γn,k p̃n,k,m(µ)} }.   (3.11)

Footnote 12: We use the terms "increasing" and "decreasing" interchangeably with "non-decreasing" and "non-increasing", respectively. The terms "strictly-increasing" and "strictly-decreasing" are used when appropriate.

Combining the above observations, we can write, for any I ∈ I_CSRA and (n, k, m), that

x*n,k,m(µ, I) = In,k,m p*n,k,m(µ),   (3.12)

where

p*n,k,m(µ) = p̃n,k,m(µ) if 0 ≤ µ ≤ ak,m bk,m rk,m U′n,k,m((1 − ak,m) rk,m) E{γn,k}, and p*n,k,m(µ) = 0 otherwise.   (3.13)

Note that, if a p̃n,k,m(µ) exists that satisfies (3.11), then it is unique. This is because U′n,k,m(·) is a continuous decreasing positive function and e^{−bk,m γn,k p̃n,k,m(µ)} is a strictly-decreasing continuous function of p̃n,k,m(µ), which makes the right side of (3.11) a strictly-decreasing continuous function of p̃n,k,m(µ). Consequently, in the domain of its existence, p̃n,k,m(µ) is unique and decreases continuously with increase in µ; in other words, p*n,k,m(µ) is a decreasing continuous function of µ. Figure 3.2 shows an example of the variation of p*n,k,m(µ) w.r.t. µ.

3.4.2 Optimizing over user-MCS allocation matrix I for a given µ

Substituting x*(µ, I) from (3.12) into (3.5), we get the Lagrangian

L(µ, I, x*(µ, I)) = −µPcon + Σ_n Ln(µ, In),   (3.14)

where In = {In,k,m ∀(k, m)},

Ln(µ, In) = Σ_{k,m} In,k,m Vn,k,m(µ, p*n,k,m(µ)),

and

Vn,k,m(µ, p*n,k,m(µ)) = µ p*n,k,m(µ) − E{ Un,k,m((1 − ak,m e^{−bk,m γn,k p*n,k,m(µ)}) rk,m) }.

Since the above Lagrangian contains the sum of Ln(µ, In) over n, minimizing Ln(µ, In) for every n (over all possible In) minimizes the Lagrangian. Recall that Ln(µ, In) is a linear function of {In,k,m ∀(k, m)}.
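Because the right side of (3.11) is continuous and strictly decreasing in p̃n,k,m(µ), it can be inverted numerically by a simple bisection. The sketch below assumes the sum-goodput utility (so U′ = 1) and approximates E{·} by a sample average; the zero-power threshold mirrors (3.13).

```python
import numpy as np

rng = np.random.default_rng(3)
gam = rng.exponential(scale=5.0, size=20_000)   # SNR samples for E{.}
a, b, r = 1.0, 0.5, 4.0                          # illustrative MCS constants

def rhs(p):
    """Right side of (3.11) for the sum-goodput utility (U' = 1):
    a*b*r*E{ gamma * exp(-b*gamma*p) }, strictly decreasing in p."""
    return a * b * r * np.mean(gam * np.exp(-b * gam * p))

def p_star(mu, p_hi=1e3, tol=1e-10):
    """Solve mu = rhs(p) by bisection; return 0 when mu exceeds rhs(0),
    i.e. when the threshold in (3.13) says no power is worthwhile."""
    if mu >= rhs(0.0):
        return 0.0
    lo, hi = 0.0, p_hi
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if rhs(mid) > mu:     # rhs decreasing: still above mu, move right
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

p1, p2 = p_star(1.0), p_star(2.0)   # powers at two dual prices
```

Consistent with Figure 3.2, the recovered power is positive below the threshold price and decreases as µ grows.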

Figure 3.2: Prototypical plot of p*n,k,m(µ) as a function of µ. The system parameters are the same as those used in Section 3.6. [Plot: p*n,k,m(µ) decreasing continuously from ≈45 to 0 as µ increases from 10^-2 to 10^0.]

Therefore, Ln(µ, In) is minimized by the In that gives the maximum possible weight to the (k, m) combination with the most negative value of Vn,k,m(µ, p*n,k,m(µ)). To write this mathematically, let us define, for each µ and subchannel n, a set Sn(µ) of participating user-MCS combinations that yield the same most-negative value of Vn,k,m(µ, p*n,k,m(µ)) over all (k, m), as follows:

Sn(µ) := { (k, m) : (k, m) = argmin_{(k′,m′)} Vn,k′,m′(µ, p*n,k′,m′(µ)), and Vn,k,m(µ, p*n,k,m(µ)) ≤ 0 }.   (3.15)

If Sn(µ) is a null or a singleton set, then the optimal allocation on subchannel n is given by

I*n,k,m(µ) = 1 if (k, m) ∈ Sn(µ), and 0 otherwise.   (3.16)
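The selection rule (3.15)-(3.16) for a single subchannel can be sketched as follows. The powers here are random stand-ins for the output of (3.13), the expectation is a sample average, and the sum-goodput utility U(g) = g is assumed; the subchannel goes to the (k, m) pair with the most negative Vk,m, if one exists.

```python
import numpy as np

rng = np.random.default_rng(4)
K, M = 3, 2
mu = 0.8
a = np.full((K, M), 1.0)
b = rng.uniform(0.2, 1.0, size=(K, M))
r = np.array([[2.0, 4.0]] * K)                        # rates per MCS
gam = rng.exponential(scale=4.0, size=(K, 10_000))    # per-user SNR samples
p_star = rng.uniform(0.0, 3.0, size=(K, M))           # stand-in for (3.13)

# V[k, m] = mu*p* - E{ U((1 - a e^{-b gamma p*}) r) } with U(g) = g.
exp_goodput = np.array(
    [[np.mean((1 - a[k, m] * np.exp(-b[k, m] * gam[k] * p_star[k, m]))
              * r[k, m]) for m in range(M)] for k in range(K)])
V = mu * p_star - exp_goodput

# (3.15)-(3.16): give the subchannel to the most-negative V, if any.
k_best, m_best = np.unravel_index(np.argmin(V), V.shape)
I = np.zeros((K, M))
if V[k_best, m_best] < 0:
    I[k_best, m_best] = 1.0
```

The resulting indicator respects the subchannel resource constraint by construction: at most one (k, m) pair receives the whole subchannel.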

However, if |Sn(µ)| > 1 (where |Sn(µ)| denotes the cardinality of Sn(µ)), then multiple (k, m) combinations contribute equally towards the minimum value of Ln(µ, In), and the optimum can be reached by sharing subchannel n. In particular, let us suppose that Sn(µ) = {(k1(n), m1(n)), ..., (k|Sn(µ)|(n), m|Sn(µ)|(n))}. Then, the optimal allocation of subchannel n is given by

I*n,k,m(µ) = In,ki(n),mi(n) if (k, m) = (ki(n), mi(n)) for some i ∈ {1, ..., |Sn(µ)|}, and 0 otherwise,   (3.17)

where the vector (In,k1(n),m1(n), ..., In,k|Sn(µ)|(n),m|Sn(µ)|(n)) is any point in the unit-(|Sn(µ)| − 1) simplex, i.e., it belongs to the space [0, 1]^{|Sn(µ)|} and satisfies

Σ_{i=1}^{|Sn(µ)|} In,ki(n),mi(n) = 1.   (3.18)

3.4.3 Optimizing over µ

In order to optimize over µ, we can calculate the Lagrangian optimized for a given value of µ as

L(µ, I*(µ), x*(µ, I*(µ))) = Σ_{n,k,m} I*n,k,m(µ) ( µ p*n,k,m(µ) − E{ Un,k,m((1 − ak,m e^{−bk,m γn,k p*n,k,m(µ)}) rk,m) } ) − µPcon,   (3.19)

and then maximize it over all possible values of µ ≥ 0 to find µ*. Notice from (3.16)-(3.18) that we have Σ_{k,m} I*n,k,m(µ*) = 1 for at least one n; otherwise, I*(µ*) = 0, which, clearly, is not the optimal solution. Therefore, µ* ≥ µmin > 0, where

µmin = min_{n,k,m} ak,m bk,m rk,m E{ U′n,k,m((1 − ak,m e^{−bk,m Pcon γn,k}) rk,m) γn,k e^{−bk,m Pcon γn,k} }.   (3.20)

We can also obtain an upper bound µ* ≤ µmax, where

µmax = max_{n,k,m} ak,m bk,m rk,m U′n,k,m((1 − ak,m) rk,m) E{γn,k}.   (3.21)

Here, (3.20) is obtained by taking p̃n,k,m(µ) → Pcon for all (n, k, m), and (3.21) is obtained by taking p̃n,k,m(µ) → 0, in the right side of (3.11). Since p*n,k,m(µ) is a decreasing continuous function of µ (as seen in Section 3.4.1), we have Σ_{n,k,m} x*n,k,m(µ, I) > Pcon for all I ≠ 0 when µ < µmin, whereas x*n,k,m(µ, I) = 0 ∀n, k, m when µ > µmax. Thus, µ* ∈ [µmin, µmax] ⊂ (0, ∞).

At the optimal µ*, if we have |Sn(µ*)| ≤ 1 ∀n, then the optimal CSRA allocation, I*_CSRA, equals I*(µ*) and can be calculated using (3.16). The optimal power allocation p*_CSRA then allocates

p*n,k,m,CSRA = p*n,k,m(µ*) if I*n,k,m(µ*) ≠ 0, and 0 otherwise,   (3.22)

to every possible (n, k, m) combination, i.e., x*n,k,m = I*n,k,m(µ*) p*n,k,m(µ*). At the optimal µ*, we also have

Σ_{n,k,m} x*n,k,m(µ*, I*(µ*)) = Σ_{n,k,m} I*n,k,m(µ*) p*n,k,m(µ*) = Pcon.   (3.23)

This is because µ* ≥ µmin > 0 (shown earlier) and the complementary slackness condition gives µ* ( Σ_{n,k,m} x*n,k,m(µ*, I*(µ*)) − Pcon ) = 0. Here, we use the fact that the CSRA problem in (3.3) is a convex optimization problem whose exact solution satisfies the sum-power constraint with equality, since the expected sum-utility is certainly not maximized when zero power is allocated on all subchannels.

However, if, for some n, we have |Sn(µ*)| > 1, then ambiguity arises due to multiple possibilities of I*(µ*) obtained via (3.17). In order to find the optimal user-MCS allocation in such cases, recall that the total power allocated to any subchannel n at µ* is Σ_{i=1}^{|Sn(µ*)|} In,ki(n),mi(n) p*n,ki(n),mi(n)(µ*), where {In,ki(n),mi(n)}_{i=1}^{|Sn(µ*)|} satisfies (3.18).
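The bracketing µ* ∈ [µmin, µmax] and the monotone decrease of total allocated power in µ motivate a one-dimensional search for the budget-tight µ*. The sketch below is not the chapter's algorithm (that is Section 3.4.4, which also handles ties via time-sharing): it assumes one user/MCS per subchannel, the sum-goodput utility (U′ = 1), and exponentially distributed SNRs, for which (3.11) inverts in closed form.

```python
import numpy as np

# One candidate user/MCS per subchannel for simplicity.
N = 16
rng = np.random.default_rng(7)
a = np.ones(N); b = rng.uniform(0.2, 1.0, N); r_ = rng.uniform(1.0, 4.0, N)
gbar = rng.uniform(1.0, 10.0, N)       # mean SNR per subchannel
P_con = 10.0

def p_of_mu(mu):
    """Closed-form (3.11) for U(g) = g and gamma ~ Exp(mean gbar):
    E{gamma e^{-b gamma p}} = gbar/(1 + gbar*b*p)^2, so
    p(mu) = (sqrt(a*b*r*gbar/mu) - 1)/(gbar*b), clipped at zero."""
    return np.maximum((np.sqrt(a * b * r_ * gbar / mu) - 1) / (gbar * b), 0.0)

# Bracket: total power decreases continuously from huge (mu small) to 0
# at mu = max(a*b*r*gbar), which is (3.21) with U' = 1.
mu_lo, mu_hi = 1e-8, np.max(a * b * r_ * gbar)
for _ in range(200):                    # bisection on the dual price mu
    mu = np.sqrt(mu_lo * mu_hi)         # geometric midpoint
    if p_of_mu(mu).sum() > P_con:
        mu_lo = mu                      # budget exceeded: raise the price
    else:
        mu_hi = mu
mu_star = mu_hi
total = p_of_mu(mu_star).sum()          # should meet Pcon, as in (3.23)
```

At convergence, the allocated power meets the budget with (numerical) equality, the continuous analogue of the complementary-slackness argument above.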

n where {In,ki(n),mi (n) }i=1

|S (µ∗ )|

**satisﬁes (3.18). This quantity is dependent on the choice
**

|S (µ∗ )|

n of values for {In,ki (n),mi (n) }i=1

and takes on any value between an upper and lower

**bound given by the following equation: min
**

i ∗ p∗ n,ki (n),mi (n) (µ )

≤

|Sn (µ∗ )| i=1

∗ ∗ ∗ In,ki(n),mi (n) p∗ n,ki (n),mi (n) (µ ) ≤ max pn,ki (n),mi (n) (µ ). i

**(3.24) Note that the existence of at least one I = I∗ (µ∗ ) satisfying
**

∗ In,ki (n),mi (n) p∗ n,ki (n),mi (n) (µ ) = Pcon n i

(3.25)

is guaranteed by the optimality of the dual solution (of our convex CSRA problem over a closed constraint set). Therefore, we necessarily have Pcon , and

n n ∗ mini p∗ n,ki (n),mi (n) (µ ) ≤

∗ maxi p∗ n,ki (n),mi (n) (µ ) ≥ Pcon . In addition, all choices of user-MCS allo∗ ∗ ∗ ∗ n,k,m In,k,m (µ ) pn,k,m(µ )

cations, I∗ (µ∗ ), given by (3.17) that satisfy the equality Pcon , are optimal for the CSRA problem.

=

In the case that the optimal solution I∗ (µ∗ ) is non-unique, i.e., |Sn (µ∗ )| > 1 for some n, then one instance of I∗ (µ∗ ) can be found as follows. For each subchannel n, deﬁne

(kmax(n, µ*), mmax(n, µ*)) := argmax_i p*_{n,ki(n),mi(n)}(µ*),   (3.26)

(kmin(n, µ*), mmin(n, µ*)) := argmin_i p*_{n,ki(n),mi(n)}(µ*),   (3.27)

and find the value of λ ∈ [0, 1] for which

λ Σ_n p_{n,kmin(n,µ*),mmin(n,µ*)}(µ*) + (1 − λ) Σ_n p_{n,kmax(n,µ*),mmax(n,µ*)}(µ*) = Pcon,   (3.28)

i.e.,

λ = [Σ_n p_{n,kmax(n,µ*),mmax(n,µ*)}(µ*) − Pcon] / [Σ_n p_{n,kmax(n,µ*),mmax(n,µ*)}(µ*) − Σ_n p_{n,kmin(n,µ*),mmin(n,µ*)}(µ*)].   (3.29)

Now, defining two specific allocations, Imin(µ*) and Imax(µ*), as

Imin_{n,k,m}(µ*) = 1 if (k, m) = (kmin(n, µ*), mmin(n, µ*)), and 0 otherwise,
Imax_{n,k,m}(µ*) = 1 if (k, m) = (kmax(n, µ*), mmax(n, µ*)), and 0 otherwise,   (3.30)

respectively, the optimal user-MCS allocation is given by I*_CSRA = λ Imin(µ*) + (1 − λ) Imax(µ*). The corresponding optimal power allocation is then given by (3.22). It can be seen that this solution satisfies the subchannel constraint as well as the sum-power constraint with equality, i.e.,

Σ_{n,k,m} I*_{n,k,m}(µ*) p*_{n,k,m}(µ*) = Σ_{n,k,m} x*_{n,k,m}(µ*, I*(µ*)) = Pcon.

Two interesting observations can be made from the above discussion. First, for any choice of concave utility functions U_{n,k,m}(·), there exists an optimal scheduling and resource allocation strategy that allocates each subchannel to at most 2 user-MCS combinations. Therefore, when allocating N subchannels, even if more than 2N user-MCS options are available, at most 2N such options will be used. Second, if Imin(µ*) = Imax(µ*), then the exact CSRA solution allocates power to at most one (k, m) combination for every subchannel, i.e., no subchannel is shared among two or more user-MCS combinations. This observation will motivate the SRA problem's solution without subchannel sharing in Section 3.5.
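The convex-combination step above can be sketched numerically. The totals below are hypothetical values chosen only to bracket the power budget; they are not taken from the chapter's simulations:

```python
def mixing_weight(p_min_tot, p_max_tot, P_con):
    """Solve (3.28)-(3.29): lambda * p_min_tot + (1 - lambda) * p_max_tot = P_con,
    where p_min_tot / p_max_tot are the total powers of Imin(mu*) / Imax(mu*)."""
    return (p_max_tot - P_con) / (p_max_tot - p_min_tot)

# hypothetical totals bracketing the budget P_con = 100
lam = mixing_weight(p_min_tot=80.0, p_max_tot=120.0, P_con=100.0)
assert 0.0 <= lam <= 1.0
print(lam)                             # 0.5
print(lam * 80.0 + (1 - lam) * 120.0)  # 100.0, i.e., the budget is met exactly
```

Since Pcon lies between the two totals by (3.24)-(3.25), the resulting λ always falls in [0, 1].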

3.4.4 Algorithmic implementation

In practice, it is not possible to search exhaustively over µ ∈ [µmin , µmax ]. Thus, we propose an algorithm to reach solutions in close (and adjustable) proximity to

51

the optimal. The algorithm first narrows down the location of µ* using a bisection-search over [µmin, µmax] for the optimum total power allocation, and then finds a set of resource allocations (I, x) that achieves a total utility close to the optimal. To proceed in this direction, with the aim of developing a framework for bisection-search over µ, let us define the total optimal allocated power for a given value of µ as follows:

X*_tot(µ) := Σ_{n,k,m} x*_{n,k,m}(µ, I*(µ)),   (3.31)

where I*(µ) and x*(µ, I*(µ)) (defined in (3.6)) minimize the Lagrangian (defined in (3.5)) for a given µ. The following lemma relates the variation of X*_tot(µ) with respect to µ.

Lemma 2. The total optimal power allocation, X*_tot(µ), is a monotonically decreasing function of µ.

Proof. The proof is given in Appendix B.2.

A sample plot of X*_tot(µ) and L(µ, I*(µ), x*(µ, I*(µ))) as a function of µ is shown in Figure 3.3. From the figure, three observations can be made. First, as µ increases, the optimal total allocated power decreases, as expected from Lemma 2. Second, as expected, the Lagrangian is maximized for the value of µ at which X*_tot(µ) = Pcon. Third, the optimal total power allocation varies continuously in the region of µ where the optimal allocation, I*(µ), remains constant, and takes a (negative) jump when I*(µ) changes. This happens for the following reason. We know, for any (n, k, m), that p*_{n,k,m}(µ) is a continuous function of µ. Thus, when the optimal allocation remains constant over a range of µ, the total power allocated, Σ_{n,k,m} I*_{n,k,m}(µ) p*_{n,k,m}(µ), also varies continuously with µ. However, at a point of discontinuity (say, µ̃), multiple optimal allocations achieve the same optimal value of the Lagrangian; in other words, |Sn(µ̃)| > 1 for some n. In that case, X*_tot(µ̃) can take any value in the interval

[ Σ_n p*_{n,kmin(n),mmin(n)}(µ̃), Σ_n p*_{n,kmax(n),mmax(n)}(µ̃) ]

while achieving the same minimum value of the Lagrangian at µ̃. Applying Lemma 2, we have

X*_tot(µ̃ − ∆1) ≥ Σ_n p*_{n,kmax(n),mmax(n)}(µ̃) ≥ X*_tot(µ̃) ≥ Σ_n p*_{n,kmin(n),mmin(n)}(µ̃) ≥ X*_tot(µ̃ + ∆2)

for any ∆1, ∆2 > 0, causing a (negative) jump of Σ_n p*_{n,kmin(n),mmin(n)}(µ̃) − Σ_n p*_{n,kmax(n),mmax(n)}(µ̃) in the total optimal power allocation at µ̃.

[Figure 3.3 about here: top plot, X*_tot(µ) versus µ; bottom plot, L(µ, I*(µ), x*(µ, I*(µ))) versus µ.]

Figure 3.3: Prototypical plot of X*_tot(µ) and L(µ, I*(µ), x*(µ, I*(µ))) as a function of µ for N = K = 5 and Pcon = 100. (See Section 3.6 for details.) The red vertical lines in the top plot show the values of µ at which a change in I*(µ) occurs.

Lemma 2 allows us to do a bisection-search over µ, since X*_tot(µ) is a decreasing function of µ and the optimal µ is the one at which X*_tot(µ) = Pcon. In particular,


if µ* ∈ [µ, µ̄] for some µ and µ̄, then µ* ∈ [(µ + µ̄)/2, µ̄] if X*_tot((µ + µ̄)/2) > Pcon, and µ* ∈ [µ, (µ + µ̄)/2] otherwise. Using this concept, we propose an algorithm in Table 3.1 (see end of chapter) that finds an interval [µ, µ̄] such that µ* ∈ [µ, µ̄] and µ̄ − µ ≤ κ, where κ (> 0) is a tuning parameter, and then allocates resources based on the optimal resource allocations at µ and µ̄. The following lemma characterizes the relationship between the tuning parameter κ and the accuracy of the obtained solution.

Lemma 3. Let µ* ∈ [µ, µ̄] for some µ and µ̄ with µ̄ − µ ≤ κ, let µ ∈ [µ, µ̄] be the point where the proposed CSRA algorithm stops, and let Û_CSRA(µ, µ̄) and U*_CSRA be the total utility obtained by the proposed algorithm and by the exact CSRA solution, respectively. Then,

0 ≤ U*_CSRA − lim_{µ→µ̄} Û_CSRA(µ, µ̄) ≤ (µ̄ − µ) Pcon.

Proof. For the proof, see Appendix B.3.

Since our algorithm stops when µ̄ − µ ≤ κ, the gap between the obtained utility and the optimal utility is bounded by Pcon κ. Moreover, for a given κ, the number of steps taken by the proposed bisection algorithm is proportional to log2((µmax − µmin)/κ): the algorithm requires at most log2((µmax − µmin)/κ) iterations of µ in order to find µ̄. Therefore, measuring the complexity of the algorithm by the number of times (3.11) must be solved for a given (n, k, m, µ), the proposed algorithm takes at most

NKM log2((µmax − µmin)/κ)   (3.32)

steps. We use this method of measuring complexity because it allows us to easily compare all algorithms in this chapter.
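The bisection loop just described can be sketched as follows. The toy map X_tot(µ) = 100/µ is a stand-in, assumed only for illustration, for the true total-power curve, each evaluation of which costs NKM solves of (3.11):

```python
def bisect_mu(X_tot, P_con, mu_min, mu_max, kappa):
    """Bisection-search for mu*, exploiting that X_tot is decreasing in mu
    (Lemma 2): shrink [mu_lo, mu_hi] until mu_hi - mu_lo <= kappa."""
    mu_lo, mu_hi = mu_min, mu_max
    while mu_hi - mu_lo > kappa:
        mid = 0.5 * (mu_lo + mu_hi)
        if X_tot(mid) > P_con:   # too much power allocated -> mu* is larger
            mu_lo = mid
        else:
            mu_hi = mid
    return mu_lo, mu_hi

# toy decreasing power map with X_tot(mu*) = P_con at mu* = 10
lo, hi = bisect_mu(lambda mu: 100.0 / mu, P_con=10.0,
                   mu_min=0.1, mu_max=100.0, kappa=1e-6)
```

Each halving of the interval is one evaluation of X*_tot, which gives the step count in (3.32).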

3.4.5 Some properties of the CSRA solution

In this subsection, we study a few properties of the CSRA solution that yield valuable insights into the optimal resource allocation strategy for any given value of the Lagrange multiplier, µ.

Lemma 4. Let us fix a µ̃ ∈ [µmin, µmax]. If |Sn(µ̃)| ≤ 1 for all n, then the optimal allocation at µ̃, I*(µ̃), is given by (3.16), which reveals that I*(µ̃) ∈ {0, 1}^{N×K×M}. Moreover, for any µ̃ > 0, there exists a δ > 0 such that, for all µ ∈ (µ̃ − δ, µ̃ + δ)\{µ̃}, there exists an optimal allocation I*(µ) ∈ I_CSRA that satisfies I*(µ) ∈ {0, 1}^{N×K×M}. In particular, if µ1, µ2 ∈ (µ̃ − δ, µ̃), then there exist I*(µ1), I*(µ2) ∈ {0, 1}^{N×K×M} such that I*(µ1) = I*(µ2), and the same property holds if both µ1, µ2 ∈ (µ̃, µ̃ + δ).

Proof. The proof is given in Appendix B.4.

Let us now consider the case where it is possible that |Sn(µ̃)| > 1 for some n. Since, around every point on the horizontal axis, there is a small region over which X*_tot(µ) is continuous, the above lemma implies that the discontinuities in Fig. 3.3 are isolated and that the number of such discontinuities is, at most, countable.

3.5 Scheduling and Resource Allocation without subchannel sharing

In this section, we will solve the scheduling and resource allocation (SRA) problem (3.1) under the constraint that I_{n,k,m} ∈ {0, 1} for all n, k, m, i.e., that each subchannel can be allocated to at most one combination of user and MCS per time slot. In conjunction with (3.12), this implies that every subchannel is allocated to at most one user-MCS combination. Note that this is precisely the constraint we impose in the later part of this chapter. We will refer to this problem as the "discrete scheduling and resource allocation" (DSRA) problem.

3.5.1 Brute-force algorithm

Using (3.1), the DSRA problem can be stated as

U*_DSRA := max_{I ∈ I_DSRA} max_{{p_{n,k,m} ≥ 0}} Σ_{n,k,m} I_{n,k,m} E{ U_{n,k,m}((1 − a_{k,m} e^{−b_{k,m} p_{n,k,m} γ_{n,k}}) r_{k,m}) }
s.t. Σ_{n,k,m} I_{n,k,m} p_{n,k,m} ≤ Pcon,   (3.33)

where, storing the values of I_{n,k,m} in the N × K × M matrix I, the DSRA subchannel constraint can be expressed as I ∈ I_DSRA with

I_DSRA := { I : I ∈ {0, 1}^{N×K×M}, Σ_{k,m} I_{n,k,m} ≤ 1 ∀n }.

Let us denote the optimal I and p for (3.33) by I*_DSRA and p*_DSRA, respectively. The DSRA problem is a mixed-integer programming problem, and mixed-integer programming problems are generally NP-hard, meaning that polynomial-time solutions do not exist [43]. Fortunately, in some cases, one can exploit the problem structure to design polynomial-complexity algorithms that reach solutions in close vicinity of the exact solution. We first describe an approach that solves the DSRA mixed-integer programming problem exactly by exhaustively searching over all possible user-MCS allocations in order to arrive at the optimal user, rate, and power allocation; we will see that this "brute-force" approach has a complexity that grows exponentially in the number of subchannels. Later, we will exploit the DSRA problem structure, and its relation to the CSRA problem, to design an algorithm with near-optimal performance and polynomial complexity.
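To make the size of the discrete feasible set concrete, the sketch below enumerates I_DSRA for a deliberately tiny, hypothetical system; for realistic N, K, and M, the (KM + 1)^N hypotheses are exactly what make brute force impractical:

```python
from itertools import product

def enumerate_dsra_allocations(N, K, M):
    """Each subchannel gets either no assignment (None) or exactly one
    (user, MCS) pair, so |I_DSRA| = (K*M + 1)**N."""
    per_subchannel = [None] + [(k, m) for k in range(K) for m in range(M)]
    return product(per_subchannel, repeat=N)

hyps = list(enumerate_dsra_allocations(N=3, K=2, M=2))
print(len(hyps))   # (2*2 + 1)**3 = 125
```

Even this toy system produces 125 hypotheses; doubling N squares the count.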

Consider that, if we attempted to solve our DSRA problem via brute-force (i.e., by solving the power allocation sub-problem for every possible choice of I ∈ I_DSRA), we would solve the following sub-problem for every given I:

max_{{p_{n,k,m} ≥ 0}} Σ_{n,k,m} I_{n,k,m} E{ U_{n,k,m}((1 − a_{k,m} e^{−b_{k,m} p_{n,k,m} γ_{n,k}}) r_{k,m}) }
s.t. Σ_{n,k,m} I_{n,k,m} p_{n,k,m} ≤ Pcon.   (3.34)

Borrowing our approach to the CSRA problem, we transform the variable p_{n,k,m} into x_{n,k,m} via the relation x_{n,k,m} = I_{n,k,m} p_{n,k,m}. The problem in (3.34) can, therefore, be written as

max_{{x_{n,k,m} ≥ 0}} Σ_{n,k,m} F_{n,k,m}(I_{n,k,m}, x_{n,k,m})  s.t. Σ_{n,k,m} x_{n,k,m} ≤ Pcon,   (3.35)

where F_{n,k,m}(I_{n,k,m}, x_{n,k,m}) is defined in (3.4). This problem is a convex optimization problem that satisfies Slater's condition [42], e.g., at x_{n,k,m} = Pcon/2NKM for all (n, k, m); therefore, its solution is equal to the solution of its dual problem (i.e., zero duality gap) [42]. To formulate the dual problem, we write the Lagrangian of the primal problem (3.35) as

L_I(µ, x) = Σ_{n,k,m} F_{n,k,m}(I_{n,k,m}, x_{n,k,m}) − µ ( Σ_{n,k,m} x_{n,k,m} − Pcon ),   (3.36)

where µ is the dual variable and x is the N × K × M matrix containing the actual powers allocated to all (n, k, m) combinations. Note that the Lagrangian in (3.36) is exactly the same as the Lagrangian for the CSRA problem in (3.5). Using (3.36), the dual of the brute-force problem can be written as

min_{µ ≥ 0} max_{x ⪰ 0} L_I(µ, x) = L_I(µ*_I, x*(µ*_I))   (3.37)

for optimal solutions µ*_I and x*(µ*_I). Maximizing L_I(µ, x) over x by equating the differential of L_I(µ, x) w.r.t. x_{n,k,m} to zero (which is identical to the approach taken in Section 3.4.1 for the CSRA problem), we get that, for any n, k, m,

x*_{n,k,m}(µ) = I_{n,k,m} p*_{n,k,m}(µ),   (3.38)

where p*_{n,k,m}(µ) is the CSRA power allocation of (3.12), repeated as (3.40) for convenience:

p*_{n,k,m}(µ) = p̃_{n,k,m}(µ) if 0 ≤ µ ≤ a_{k,m} b_{k,m} r_{k,m} U′_{n,k,m}((1 − a_{k,m}) r_{k,m}) E{γ_{n,k}}, and 0 otherwise,   (3.40)

and p̃_{n,k,m}(µ) is the unique^13 value satisfying (3.11). Note that the Lagrangian as well as the power allocation in (3.36) and (3.38) are identical to those obtained for the CSRA problem in (3.5) and (3.21), respectively. Also recall that (3.19)-(3.21) hold even when I*(µ) is replaced by an arbitrary I; therefore, we have µ*_I ∈ [µmin, µmax], where µmin and µmax are as defined for the CSRA problem. As discussed in Section 3.4, p*_{n,k,m}(µ) is a strictly-decreasing continuous function of µ. Let us now define

X*_tot(I, µ) := Σ_{n,k,m} I_{n,k,m} p*_{n,k,m}(µ)   (3.41)

as the total optimal power allocation for allocation I at µ; it, too, is a decreasing continuous function of µ. This reduces our problem, for a given I, to finding the minimum value of µ ∈ [µmin, µmax] for which X*_tot(I, µ) = Pcon. Such a problem structure (i.e., finding the minimum Lagrange multiplier satisfying a sum-power constraint) yields a water-filling solution (e.g., [33, 44]). While there are many ways to find µ, we focus on bisection-search for easy comparison to the CSRA algorithm; to obtain such a solution (in our case, µ*_I), one can use the bisection-search algorithm given in Table 3.1 (see end of chapter).

^13 By assumption, U′_{n,k,m}(·) is a decreasing positive function, and e^{−b_{k,m} p̃_{n,k,m}(µ) γ_{n,k}} is a strictly-decreasing positive function of p̃_{n,k,m}(µ), which makes the right side of (3.11) a strictly-decreasing positive function of p̃_{n,k,m}(µ) and hence makes p̃_{n,k,m}(µ) unique.
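For intuition about the threshold structure in (3.40), consider the special case of sum-goodput utility and a known (rather than estimated) gain γ, where maximizing r(1 − a e^{−bpγ}) − µp admits a closed form. This is a sketch under those simplifying assumptions; the chapter's p̃_{n,k,m}(µ) solves (3.11), whose expectation over γ_{n,k} generally requires a numerical root-find:

```python
import math

def p_star(mu, gamma, a, b, r):
    """Maximize r*(1 - a*exp(-b*p*gamma)) - mu*p over p >= 0.
    Positive power is allocated only below the threshold mu < a*b*r*gamma,
    mirroring the threshold condition in (3.40)."""
    if mu >= a * b * r * gamma:
        return 0.0
    return math.log(a * b * r * gamma / mu) / (b * gamma)

p = p_star(mu=0.1, gamma=1.0, a=1.0, b=0.5, r=4.0)
# at the optimum, the marginal goodput gain equals the power price mu:
#   r * a * b * gamma * exp(-b * p * gamma) == mu
```

The logarithmic form makes the water-filling character of the solution explicit: subchannels with larger a·b·r·γ receive power first as µ decreases.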

Given I ∈ I_DSRA, suppose the dual problem is solved via bisection to yield µ̂_I such that |µ̂_I − µ*_I| < κ. Since a given I ∈ I_DSRA activates at most one (k, m) pair per subchannel, solving the resource allocation problem for that I requires solving (3.11) at most n_I log2((µmax − µmin)/κ) times, where n_I is the number of active subchannels. Since the brute-force algorithm examines |I_DSRA| = (KM + 1)^N hypotheses of I, the corresponding complexity needed to find the exact DSRA solution is

log2((µmax − µmin)/κ) × Σ_{n=1}^{N} (N choose n) (KM)^n n

or, equivalently,

log2((µmax − µmin)/κ) × (KM + 1)^{N−1} NKM.   (3.42)

Because this "brute-force" algorithm may be impractical to implement for practical values of K, M, and N, we focus, in the sequel, on lower-complexity DSRA approximations.

3.5.2 Proposed DSRA algorithm

The development surrounding (3.30) demonstrated that there exists an optimal user-MCS allocation for the CSRA problem that either lies in the domain of the DSRA problem, i.e., I*(µ*) ∈ I_DSRA, or is a convex combination of two points from the domain of the DSRA problem, i.e., I*(µ*) = λ Imin(µ*) + (1 − λ) Imax(µ*), where Imin(µ*) ≠ Imax(µ*) and Imin(µ*), Imax(µ*) ∈ I_DSRA. (Note that if I ∈ I_CSRA and I ∈ {0, 1}^{N×K×M}, then I ∈ I_DSRA.) This observation motivates us to attack the DSRA problem using the CSRA algorithm; in doing so, we exploit insights previously gained from our study of the CSRA problem. In this section, we provide the details of such an approach. The following lemma will be instrumental in understanding the relationship between the CSRA and DSRA problems and will serve as the basis for allocating resources in the DSRA problem setup.

Lemma 5. Suppose that the solution of the Lagrangian dual (3.6) of the CSRA problem for a given µ is such that I*(µ) ∈ {0, 1}^{N×K×M}, i.e., I*(µ) ∈ I_DSRA, and the corresponding total power is

X*_tot(µ), defined as in (3.31). Then the solution to the optimization problem

(P*, I*) = argmax_{P ⪰ 0, I ∈ I_DSRA} Σ_{n,k,m} I_{n,k,m} E{ U_{n,k,m}((1 − a_{k,m} e^{−b_{k,m} P_{n,k,m} γ_{n,k}}) r_{k,m}) }
s.t. Σ_{n,k,m} I_{n,k,m} P_{n,k,m} ≤ X*_tot(µ)   (3.43)

satisfies I* = I*(µ) and, for every (n, k, m), P*_{n,k,m} = x*_{n,k,m}(µ, I*(µ)) if I*_{n,k,m}(µ) ≠ 0, and P*_{n,k,m} = 0 otherwise.

Proof. The proof is given in Appendix B.5.

From the above lemma, we conclude that if a µ exists such that I*(µ) ∈ I_DSRA and X*_tot(µ) = Pcon, then the DSRA problem is solved exactly by the CSRA solution (I*(µ), x*(µ, I*(µ))): the optimal user-MCS allocation I*_DSRA equals I*(µ), and the optimal power allocation is p*_{n,k,m,DSRA} = x*_{n,k,m}(µ, I*(µ)) if I*_{n,k,m}(µ) ≠ 0, and p*_{n,k,m,DSRA} = 0 otherwise.

Recall that the optimal total power achieved for a given value of the Lagrange multiplier µ, X*_tot(µ) as in (3.31), is piece-wise continuous, and that a discontinuity (or "gap") occurs at values of µ where multiple allocations achieve the same optimal value of the Lagrangian. When the sum-power constraint, Pcon, lies in one of those "gaps," the optimal allocation for the CSRA problem equals a convex combination of two elements from the set I_DSRA, and the CSRA solution is not admissible for DSRA. In such cases, we are motivated to choose the approximate DSRA solution Î_DSRA ∈ {Imin(µ), Imax(µ)} yielding the highest utility. In Table 3.2 (see end of chapter), we detail an implementation of our proposed DSRA algorithm that has significantly lower complexity than brute-force; the numerical simulations in Section 3.6 show that its performance is very close to optimal. Moreover, the following lemma bounds

the asymptotic difference in utility achieved by the exact DSRA solution and that produced by our proposed DSRA algorithm.

Lemma 6. Let µ* be the optimal µ for the CSRA problem, and let µ and µ̄ be such that µ* ∈ [µ, µ̄]. Let U*_DSRA and Û_DSRA(µ, µ̄) be the utilities achieved by the exact DSRA solution and the proposed DSRA algorithm, respectively. Then,

0 ≤ U*_DSRA − lim_{µ→µ̄} Û_DSRA(µ, µ̄) ≤ (µ* − µmin)(Pcon − X*_tot(Imin(µ*), µ*))   (3.44)
≤ 0 if |Sn(µ*)| ≤ 1 ∀n, and ≤ (µmax − µmin) Pcon otherwise.   (3.45)

Proof. The proof is given in Appendix B.6.

It is interesting to note that the bound (3.45) does not scale with the number of users K or subchannels N. In units of solving (3.11) for a given (n, k, m, µ), the DSRA complexity is at most

N(KM + 2) log2((µmax − µmin)/κ),   (3.46)

which is considerably less than that of the brute-force algorithm (i.e., exponential in N); the complexity of the proposed DSRA algorithm is only marginally greater than that of the CSRA algorithm, since an additional comparison of two possible user-MCS allocation choices is involved. Comparing (3.42) and (3.46), we find that the complexity of the proposed DSRA algorithm is polynomial in N, K, and M.

3.5.3 Discussion

Before concluding this section, we make some remarks about our approach to DSRA and its connections to CSRA. First, we note that the DSRA problem is an integer-programming problem due to the discrete domain {0, 1} assumed for I_{n,k,m}.
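To see how different the counts in (3.42) and (3.46) are, the sketch below evaluates both for a deliberately small, hypothetical system (the parameter values are made up for illustration):

```python
import math

def brute_force_steps(N, K, M, ratio):
    """Total (3.11)-solves for exact brute-force DSRA, per (3.42);
    ratio = (mu_max - mu_min) / kappa."""
    return (K * M + 1) ** (N - 1) * N * K * M * math.log2(ratio)

def proposed_dsra_steps(N, K, M, ratio):
    """Total (3.11)-solves for the proposed DSRA algorithm, per (3.46)."""
    return N * (K * M + 2) * math.log2(ratio)

# hypothetical small system: N = 6, K = 4, M = 3, (mu_max - mu_min)/kappa = 1024
print(brute_force_steps(6, 4, 3, 1024))    # ~2.7e8 solves
print(proposed_dsra_steps(6, 4, 3, 1024))  # 840 solves
```

Even at these toy sizes the exponential term dominates by several orders of magnitude, which is the motivation for the polynomial-complexity approximation.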

Because integer programming problems are generally NP-hard (recall our "brute-force" DSRA solution), one is strongly motivated to find a polynomial-complexity method whose performance is as high as possible. One possible approach is based on "relaxation," whereby the discrete domain is relaxed to an interval domain, the relaxed problem is solved (with polynomial complexity), and the obtained solution is mapped back to the discrete domain. For example, relaxation has been widely used to solve linear integer programs (LIPs) [45-47], and relaxation was previously employed for OFDMA frequency-scheduling in [33, 35]; the CSRA problem, and the DSRA approximation that we propose in Section 3.5.2, can also be interpreted as a form of relaxation. The optimization literature suggests that relaxation is successful in some, but not all, cases. The DSRA problem, however, is a mixed-integer nonlinear program (MINLP), and for such problems relaxation does not always perform well [47, 48]. To suggest that the DSRA problem can be well approximated by its relaxed counterpart, one could cite the analysis in [49, p. 371], which shows that, for a broad class of integer programming problems, the duality gap goes to zero as the number of integer variables goes to infinity, i.e., as the number of OFDMA subchannels N → ∞. However, in practice, the number of subchannels N is often quite small, preventing the application of this argument. For example, in LTE systems [30, 50], each subchannel consists of 12 subcarriers, so that only 25 subchannels are used for 5 MHz bandwidths, and only 6 are used for 1.4 MHz bandwidths. This implies that relaxation-based OFDMA algorithms must be designed with care. The above considerations have motivated us to investigate, in detail, the relationship between the continuous and discrete resource allocation scenarios. The results of

our investigation include insights into the dissimilarity between CSRA and DSRA solutions (e.g., Lemma 4 and Lemma 5), and an efficient polynomial-complexity DSRA approximation that (as we shall see in Section 3.6) performs near-optimally for all N and admits the tight performance bound (3.45).

3.6 Numerical Evaluation

In this section, we analyze the performance of an OFDMA downlink system that uses the proposed CSRA and DSRA algorithms for scheduling and resource allocation under different system parameters. We use the standard OFDM model [52] to describe the (instantaneous) frequency-domain observation made by the kth user on the nth subchannel:

y_{n,k} = h_{n,k} x_n + ν_{n,k}, for n ∈ {1, ..., N} and k ∈ {1, ..., K}.   (3.47)

In (3.47), x_n denotes the QAM symbol broadcast by the BS on the nth subchannel, h_{n,k} the gain of the nth subchannel between the kth user and the BS, and ν_{n,k} a corresponding complex Gaussian noise sample. We assume that {ν_{n,k}} is unit-variance and white across (n, k), and we recall that the exogenous subchannel-SNR satisfies γ_{n,k} = |h_{n,k}|².

For downlink transmission, the BS employs an uncoded 2^{m+1}-QAM signaling scheme with MCS index m ∈ {1, ..., 15}, so that we have r_{k,m} = m + 1 bits per symbol and one symbol per codeword. In the error-rate model ε_{k,m}(pγ) = a_{k,m} e^{−b_{k,m} pγ}, we choose a_{k,m} = 1 and b_{k,m} = 1.5/(2^{m+1} − 1), because the actual symbol error rate of a 2^{m+1}-QAM system is proportional to exp(−1.5pγ/(2^{m+1} − 1)) in the high-(pγ) regime [51] and is ≈ 1 when pγ = 0. Unless otherwise specified, we use the sum-goodput utility U_{n,k,m}(g) = g.

We furthermore assume that the kth user's frequency-domain channel gains h_k = (h_{1,k}, ..., h_{N,k})^T ∈ C^N are related to its channel impulse response g_k = (g_{1,k}, ..., g_{L,k})^T ∈ C^L via h_k = F g_k, where F ∈ C^{N×L} contains the first L (< N) columns of the N-DFT matrix, and where {g_{l,k}} are i.i.d. over (l, k) and drawn from a zero-mean complex Gaussian distribution with variance σ_g² chosen so that E{γ_{n,k}} = 1. Since the total available power for all subchannels at the base station is Pcon, the average available SNR per subchannel will be denoted by SNR = (Pcon/N) E{γ_{n,k}}.

To model imperfect CSI, we assume that there is a channel-estimation period during which the mobiles take turns to each broadcast one pilot OFDM symbol, from which the BS estimates the corresponding subchannel gains; furthermore, we assume that the channels do not vary between the pilot and data periods. To estimate h_k, we assume that the BS observes ỹ_k = √p_pilot h_k + ν̃_k ∈ C^N, so that the average SNR per subchannel under pilot transmission is SNR_pilot = p_pilot E{γ_{n,k}}. The channel h_k and the pilot observations ỹ_k are zero-mean jointly Gaussian, so that, conditioned on the pilot observations, h_k|ỹ_k is Gaussian with mean E{h_k|ỹ_k} = R_{h_k,ỹ_k} R_{ỹ_k}^{−1} ỹ_k and covariance Cov(h_k|ỹ_k) = R_{h_k} − R_{h_k,ỹ_k} R_{ỹ_k}^{−1} R_{ỹ_k,h_k}, where R_{z1,z2} denotes the cross-correlation of random vectors z1 and z2 [53, pp. 155]. Here, R_{h_k} = σ_g² FF′, R_{h_k,ỹ_k} = √p_pilot σ_g² FF′, and R_{ỹ_k} = p_pilot σ_g² FF′ + I (where I denotes the identity matrix). The quantity E{h_k|ỹ_k} can be recognized as the pilot-aided MMSE estimate of h_k, and it is straightforward to show that the elements on the diagonal of Cov(h_k|ỹ_k) are equal. In summary, conditioned on the pilot observations, h_{n,k} is Gaussian with mean ĥ_{n,k} given by the nth element of E{h_k|ỹ_k} and with variance σ_e² given by the first diagonal element of Cov(h_k|ỹ_k). Thus, conditioned on the pilot observations, γ_{n,k} has a non-central chi-squared distribution with two degrees of freedom.
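The pilot-aided MMSE estimator described above can be sketched directly from the stated covariances. The dimensions below are toy values chosen for illustration (the chapter's simulations use N = 64 and L = 2):

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, p_pilot, sigma_g2 = 8, 2, 0.1, 0.5

# F: first L columns of the N-point DFT matrix
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(L)) / N)
g = np.sqrt(sigma_g2 / 2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
h = F @ g                                              # channel gains h_k = F g_k
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
y = np.sqrt(p_pilot) * h + noise                       # pilot observation

R_hy = np.sqrt(p_pilot) * sigma_g2 * (F @ F.conj().T)  # cross-covariance
R_yy = p_pilot * sigma_g2 * (F @ F.conj().T) + np.eye(N)
h_hat = R_hy @ np.linalg.solve(R_yy, y)                # MMSE estimate E{h|y}
cov = sigma_g2 * (F @ F.conj().T) - R_hy @ np.linalg.solve(R_yy, R_hy.conj().T)
sigma_e2 = cov[0, 0].real   # the diagonal entries of cov are all equal
```

Because FF′ is circulant for DFT columns, every function of it has a constant diagonal, which is why a single error variance σ_e² suffices for all subchannels.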

We will refer to the proposed CSRA and DSRA algorithms implemented under imperfect CSI as "CSRA-ICSI" and "DSRA-ICSI," respectively. Their performances will be compared to that of "CSRA-PCSI," i.e., CSRA implemented under perfect CSI, which serves as a performance upper bound, and to fixed-power random-user scheduling (FP-RUS), which serves as a performance lower bound. Under FP-RUS, on each subchannel, one user is selected uniformly from {1, ..., K}, to which the scheduler allocates power Pcon/N and the fixed MCS m that maximizes expected goodput. In all plots, goodput values were empirically averaged over 1000 realizations. Unless specified otherwise, the number of OFDM subchannels is N = 64, the number of users is K = 16, the impulse response length is L = 2, the average SNR per subchannel is SNR = 10 dB, the pilot SNR is SNR_pilot = −10 dB, and the DSRA/CSRA tuning parameter is κ = 0.3/Pcon (recall Table 3.1 at the end of this chapter).

Figure 3.4 plots the subchannel-averaged goodput achieved by the above-described scheduling and resource-allocation schemes for different grades of CSI. In this curve, SNR_pilot is varied so as to obtain estimates of subchannel SNR with different grades of accuracy, while all other parameters remain unchanged. The plot shows that, with increasing SNR_pilot, the performance of the proposed schemes (under the availability of imperfect CSI) increases from that of FP-RUS to that achieved by CSRA-PCSI. This is expected because, as SNR_pilot is increased, the BS uses more accurate channel-state information for scheduling and resource allocation, and thus achieves higher goodput. The plot also shows that, even though the proposed CSRA algorithm exactly solves the CSRA problem and the proposed DSRA algorithm only approximately solves the DSRA problem, their performances almost coincide. In particular, although the goodput achieved by the CSRA-ICSI scheme exceeded that of the DSRA-ICSI scheme in

up to 49% of the realizations, the maximum difference in the subchannel-averaged goodput was merely 4 × 10⁻³ bits per channel-use (bpcu). Since the DSRA-ICSI scheme cannot achieve a sum-goodput higher than that achieved by the CSRA-ICSI scheme, it can be deduced that the proposed DSRA algorithm is exhibiting near-optimal performance.

Figure 3.4: Average goodput per subchannel versus SNR_pilot.

Figure 3.5 plots the subchannel-averaged goodput versus the number of available users, K, ranging between 1 and 32. It shows that, as K increases, the goodput per subchannel achieved by the proposed schemes increases under both perfect and imperfect CSI, whereas that achieved by the FP-RUS scheme remains constant. This is because, in the former case, the availability of more users can be exploited to schedule users with stronger subchannels, whereas, in the FP-RUS scheme, users are

scheduled without regard to the instantaneous channel conditions. Similar to the observations in the previous plots, the performance difference between the proposed CSRA and DSRA algorithms remains negligible: although the goodput achieved by CSRA-ICSI exceeded that of DSRA-ICSI in up to 29% of the realizations, the maximum difference in the subchannel-averaged goodput was merely 7 × 10⁻⁴ bpcu.

Figure 3.5: Average goodput per subchannel versus number of users, K, for N = 64, SNR = 10 dB, and SNR_pilot = −10 dB.

In Figure 3.6, the top plot shows the subchannel-averaged goodput, and the bottom plot shows the subchannel- and realization-averaged value of the bound (3.44) on the DSRA-ICSI optimality gap, as a function of SNR. In the top plot, it can be seen that, as SNR increases, the difference between CSRA-PCSI and CSRA-ICSI (or DSRA-ICSI) increases. However, the difference grows slower than the difference between

CSRA-PCSI and FP-RUS. Moreover, even for high values of SNR, the performance of CSRA-ICSI and DSRA-ICSI remains almost identical: although the goodput achieved by the CSRA-ICSI scheme exceeded that of the DSRA-ICSI scheme in up to 28% of the realizations, the maximum difference in the subchannel-averaged goodput was merely 4 × 10⁻⁵ bpcu. The bottom plot, which illustrates the average value of (µ* − µmin)(Pcon − X*_tot(Imin, µ*))/N over all realizations and subchannels w.r.t. SNR, shows that the loss in sum-goodput over all subchannels due to the sub-optimality of the proposed DSRA solution under imperfect CSI is bounded by 7 × 10⁻³ bpcu, even when the subchannel-averaged goodput of DSRA-ICSI is on the order of tens of bpcu. These results confirm that the bound (3.44) is quite tight at high SNR.

Figure 3.6: The top plot shows the average goodput per subchannel as a function of SNR, for K = 16, N = 64, and SNR_pilot = −10 dB. The bottom plot shows the average bound on the optimality gap between the proposed and exact DSRA solutions (given in (3.44)).

Figure 3.7 shows the performance of the proposed DSRA algorithm under a sum-utility criterion that is motivated by a common pricing model for an elastic application such as file-transfer [27, 28]. In particular, we partitioned the K = 16 users into two classes: k ∈ {1, ..., 8} =: K1 is "Class 1" and k ∈ {9, ..., 16} =: K2 is "Class 2," and we ran DSRA with the utility

U_k(g) := (1 − e^{−w1 g}) 1_{k∈K1} + (1 − e^{−w2 g}) 1_{k∈K2},

where 1_E denotes the indicator of event E. The utility can be regarded as the revenue earned by the operator: when w_i > w_j, Class-i users pay more (for a given goodput g) than Class-j users in exchange for priority service. We show the resulting DSRA-maximized utility summed over all users, as well as that summed over each individual user class. For comparison, we also show the utility (summed over all users) when DSRA is "naively" used to maximize sum-goodput instead of sum-utility.

The top plot in Fig. 3.7 shows the above-described sum-utilities as a function of SNR, for fixed w1 = 0.85 and w2 = 1. There it can be seen that, at low SNR, the two classes achieve proportional utilities while, at high SNR, the utility of Class-1 users tends to zero. The bottom plot in Fig. 3.7 shows performance as a function of w1, for fixed w2 = 1 and SNR = 0 dB. There the behavior is as expected: when w1 ≪ w2 = 1 (i.e., Class-1 users pay much less), DSRA allocates the overwhelming majority of the resources to Class-2 users, whereas, when w1 ≫ w2 = 1, the overwhelming majority of resources are allocated to Class-1 users, in an effort to earn more revenue. Moreover, it is evident that the naive goodput-maximizing scheme does not earn the operator as much revenue as the utility-maximizing scheme (outside of the trivial case that w1 = w2).

The behavior over SNR can be explained as follows. At low SNR, the goodputs g are small, in which case 1 − e^{−w_i g} ≈ w_i g, so that U_k(g) ≈ w_i g 1_{k∈K_i} and the two classes achieve utilities in proportion to their weights. At high SNR, this approximation does not hold because

the goodputs g are usually large, and this particular pricing-based utility becomes increasingly unfair.

Figure 3.7: Sum-utility for the two-class pricing example, plotted versus SNR (for w1 = 0.85 and w2 = 1) and versus the weight w1 (for w2 = 1 and SNR = 0 dB), shown for each class individually, for all users, and for the naive goodput-maximizing scheme.

In Figure 3.8, we compare the performances of our proposed algorithms to the state-of-the-art algorithms in [31, 33]. First, we compare the golden-section-search based algorithm from [31] to our CSRA algorithm; here, we choose the utility function and the SNR distributions to maximize the upper bound on capacity computed via the effective SNR from [31, Eq. (4)]. Second, we compare the subgradient-based algorithm proposed for discrete allocation in [33] to our DSRA algorithm; for DSRA, as in [33], we choose the utility U_{n,k,m}(g) = (1/K) log(1 − log(1 − g)) for all n, k, m, so that we maximize (1/K) Σ_{n,k} E{log(1 + p_{n,k} γ_{n,k})}. The top plot in Fig. 3.8 shows the mean deviation of the estimated value of the dual variable µ from the optimum (i.e., µ*),

The top plot in Fig. 3.8 shows the mean deviation of the estimated value of the dual variable µ from the optimum (i.e., µ*), and the bottom plot shows the total utility achieved, each as a function of the number of µ-updates. Here, N = 64, K = 16, SNR = 10 dB, and SNRpilot = −10 dB, and, for the subgradient-based algorithm in [33], we set the step-size in the ith µ-update to be 1/i. In the top plot, it can be seen that the proposed algorithms converge toward µ* at a much faster rate than the algorithms in [31, 33]. (Note that the golden-section algorithm only provides estimates of µ* at even numbers of µ-updates.) The bottom plot shows that the proposed algorithms achieve a much higher utility than the algorithms in [31, 33] for the first few µ-updates, illustrating the speed of our approaches.

Figure 3.8: The top plot shows the mean deviation of the estimated dual variable µ from µ*, and the bottom plot shows average sum-utility (in bpcu), as a function of the number of µ-updates, for the proposed CSRA and DSRA algorithms and the algorithms of [31] and [33].

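For context, the diminishing-step-size subgradient iteration used by the benchmark of [33] can be sketched as follows. This is a minimal standalone illustration with a hypothetical scalar power-vs-µ model (total_power(µ) = 2 − µ), not the exact routine of [33]: the dual variable is pushed up when the power budget is exceeded and down otherwise, with step-size 1/i in the ith update.

```python
def subgradient_mu(total_power, p_budget, mu0, iters=200):
    """Subgradient iteration on the dual variable mu with step-size 1/i.
    total_power(mu): total allocated power for a given mu (decreasing in mu)."""
    mu = mu0
    for i in range(1, iters + 1):
        # Subgradient of the dual w.r.t. mu is (budget - allocated power);
        # project back onto mu >= 0 after each step.
        mu = max(0.0, mu - (1.0 / i) * (p_budget - total_power(mu)))
    return mu

# Hypothetical monotone model: total_power(mu) = 2 - mu, budget 1.5,
# so the fixed point satisfies 2 - mu = 1.5, i.e., mu* = 0.5.
mu_star = subgradient_mu(lambda mu: 2.0 - mu, p_budget=1.5, mu0=2.0)
print(mu_star)  # converges to 0.5 for this model
```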
3.7 Conclusion

In this chapter, we considered the problem of joint scheduling and resource allocation (SRA) in downlink OFDMA systems under imperfect channel-state information. We considered two scenarios: 1) when subchannel sharing is allowed, and 2) when it is not. Both cases were framed as optimization problems that maximize a utility function subject to a sum-power constraint. Although the optimization problem in the first scenario (the so-called "continuous" or CSRA case) was found to be nonconvex, we showed that it can be converted to a convex optimization problem and solved using a dual optimization approach with zero duality gap. An algorithmic implementation of the CSRA solution was also provided. The optimization problem faced in the second scenario (the so-called "discrete" or DSRA case) was found to be a mixed-integer programming problem. To attack it, we linked the DSRA problem to the CSRA problem, and showed that, in some cases, the DSRA solution coincides with the CSRA solution. For the case that the solutions do not coincide, we proposed a practical DSRA algorithm and bounded its performance. Numerical results were then presented under a variety of settings. In all cases, our DSRA bound was numerically evaluated and found to be extremely tight. The performance of the proposed CSRA and DSRA algorithms under imperfect CSI was compared to that under perfect CSI and under no instantaneous CSI (i.e., fixed-power random scheduling); it was found that the proposed imperfect-CSI-based algorithms offer a significant advantage over schemes that do not use instantaneous CSI. We also demonstrated an application of DSRA to maximization of a pricing-based utility. Finally, our CSRA and DSRA algorithms were compared to the state-of-the-art golden-section-search [31] and subgradient [33] based algorithms and shown to yield significant improvements in convergence rate.

Table 3.1: Algorithmic implementations of the proposed algorithms

Proposed CSRA algorithm:
1. Initialize µ = µmin and µ̄ = µmax.
2. Set µ = (µ + µ̄)/2.
3. For each subchannel n = 1, ..., N: (a) for each (k, m), use (3.13) to calculate p*_{n,k,m}(µ) and (3.14) to calculate V_{n,k,m}(µ); (b) calculate S_n(µ) using (3.15).
4. If |S_n(µ)| ≤ 1 ∀n, find I*(µ) using (3.16); otherwise, use (3.30) and set I*(µ) = I_min(µ).
5. Find x*(µ, I*(µ)) using (3.38)-(3.40), and calculate X*_tot(µ) = Σ_{n,k,m} x*_{n,k,m}(µ). If X*_tot(µ) ≥ Pcon, set the lower endpoint of the interval to µ; otherwise set µ̄ = µ.
6. If µ̄ − µ > κ, go to step 2); otherwise proceed.
7. Now we have µ* ∈ [µ, µ̄] and µ̄ − µ < κ. If X*_tot(µ) = X*_tot(µ̄), set λ = 0; otherwise set λ = (X*_tot(µ) − Pcon)/(X*_tot(µ) − X*_tot(µ̄)). The optimal user-MCS allocation is given by Î_CSRA = λ I*(µ̄) + (1 − λ) I*(µ), and the corresponding optimal x is given by x̂_CSRA = λ x*(µ̄, I*(µ̄)) + (1 − λ) x*(µ, I*(µ)). Notice that the obtained solution satisfies the sum-power constraint with equality.
8. The optimal power allocation is then given by p̂_{n,k,m,CSRA} = x̂_{n,k,m,CSRA}/Î_{n,k,m,CSRA} if Î_{n,k,m,CSRA} ≠ 0, and p̂_{n,k,m,CSRA} = 0 otherwise, where Î_{n,k,m,CSRA} and x̂_{n,k,m,CSRA} denote the (n, k, m)th components of Î_CSRA and x̂_CSRA, respectively.

Brute-force algorithm for a given I:
1. Initialize µ = µmin and µ̄ = µmax.
2. Set µ = (µ + µ̄)/2.
3. Use (3.11) and (3.12) to calculate x*(µ) and X*_tot(I, µ).
4. If X*_tot(I, µ) > Pcon, set the lower endpoint of the interval to µ; otherwise set µ̄ = µ.
5. If µ̄ − µ > κ, go to step 2); otherwise proceed.
6. If X*_tot(I, µ) = X*_tot(I, µ̄), set λ = 0; otherwise set λ = (X*_tot(I, µ) − Pcon)/(X*_tot(I, µ) − X*_tot(I, µ̄)).
7. The best x for the given I is x̂_I = λ x*(µ̄) + (1 − λ) x*(µ), and the best power allocation is given by p̂_{n,k,m,I} = x̂_{n,k,m,I}/I_{n,k,m} if I_{n,k,m} ≠ 0, and 0 otherwise, where x̂_{n,k,m,I} and p̂_{n,k,m,I} are the (n, k, m)th elements of x̂_I and p̂_I, respectively.
8. Set µ̂_I = µ̄. The corresponding Lagrangian, L̂_I = L_I(µ̂_I, p̂_I), gives the optimal Lagrangian value.

Proposed DSRA algorithm:
1. Use the algorithmic implementation of the proposed CSRA solution to find I*(µ) and I*(µ̄) such that µ* ∈ [µ, µ̄] and µ̄ − µ < κ.
2. For both I = I*(µ) and I = I*(µ̄) (since they may differ), calculate p̂_I and L̂_I as described for the brute-force algorithm.
3. Choose Î_DSRA = argmin_{I ∈ {I*(µ), I*(µ̄)}} L̂_I as the user-MCS allocation and p̂_DSRA = p̂_{Î_DSRA} as the associated power allocation.

Chapter 4: Joint Scheduling and Resource Allocation in OFDMA Downlink Systems via ACK/NAK Feedback

4.1 Introduction

The OFDMA scheduling-and-resource-allocation problem has been addressed in a number of studies that assume the availability of perfect channel state information (CSI) at the BS (e.g., [34–37, 39, 44, 54]). In practice, however, it is difficult for the BS to maintain perfect CSI for all users and all subchannels (particularly when the number of users is large), since CSI is most easily obtained at the user terminals and the bandwidth available for feedback of CSI to the BS is scarce. Hence, practical resource allocation schemes use some form of limited feedback [55]. In the previous chapter, we studied an instantaneous resource allocation problem under the availability of a generic distribution on the channel-gains of users, and presented results assuming pilot-aided channel knowledge at the base-station. In this chapter, we consider the exclusive use of ACK/NAK feedback, as provided by the automatic repeat request (ARQ) [56] mechanism present in most wireless downlinks. For simplicity and ease of exposition, we consider only standard ARQ,14 where every scheduled user provides the BS with either

14 The approach we develop in this chapter could be easily extended to other forms of link-layer feedback, e.g., Type-I and Type-II Hybrid ARQ.

an acknowledgment (ACK), if the most recent data packet has been correctly decoded, or a negative acknowledgment (NAK), if not. We consider the exclusive use of ACK/NAK feedback provided by the link layer, as opposed to quantized channel-state feedback (e.g., feedback of quantized channel gains), because this allows us to completely avoid any additional feedback. Although ACK/NAKs do not provide direct information about the state of the channel, they do provide relative information about channel quality that can be used for the purpose of transmitter adaptation (e.g., [57, 58]). For example, if an NAK was received for a particular packet, then it is likely that the subchannel's signal-to-noise ratio (SNR) was below that required to support the transmission rate used for that packet.

There are interesting implications to the use of (quantized) error-rate feedback (like ACK/NAK) for transmitter adaptation, as opposed to quantized channel-state feedback. With error-rate feedback, the transmission parameters applied at a given time-slot affect not only the throughput for that slot, but also the corresponding feedback, which will impact the quality of future transmitter-CSI, and thus future throughput. Thus, when using error-rate feedback, the BS must navigate the classic tradeoff between exploitation and exploration [59]. For example, if the transmission parameters are chosen to maximize only the instantaneous throughput, e.g., by scheduling those users that the BS believes are currently best, then little will be learned about the changing states of other user channels, implying that future scheduling decisions may be compromised. On the other hand, if the BS schedules not-recently-scheduled users solely for the purpose of probing their channels, then instantaneous throughput may be compromised.

In this work, the BS aims to maximize an expected long-term, generic utility criterion that is a function of the per-user/channel/rate goodputs. Our use of a generic utility-based criterion allows us to handle, e.g., sum-capacity maximization, throughput maximization under practical modulation-and-coding schemes, and throughput-based pricing (e.g., [26–28]). Our use of ACK/NAK feedback, however, makes our problem considerably more complicated than the one considered in Chapter 3. In particular, as we show in the sequel, the optimal solution to our expected long-term utility-maximization problem is a partially observable Markov decision process (POMDP) that would involve the solution of many mixed-integer optimization problems during each time-slot. Due to the impracticality of the POMDP solution, we instead consider (suboptimal) greedy utility-maximization schemes. As justification for this approach, we first establish that the optimal utility-maximization strategy would itself be greedy if the BS had perfect CSI for all user-subchannel combinations, and we establish that the performance of this perfect-CSI (greedy) scheme upper-bounds the optimal ACK/NAK-feedback-based (POMDP) scheme. We then propose a novel greedy utility-maximization scheme whose performance is shown (via the upper bound) to be close to optimal. To this end, we propose a scheme whereby the BS uses ACK/NAK feedback to maintain a posterior channel distribution for every user and, from these distributions, performs simultaneous user subchannel-scheduling, power-allocation, and rate-selection. In doing so, we exploit the results in Chapter 3, which offer an efficient near-optimal scheme for utility-based OFDMA resource allocation under probabilistic CSI. Finally, due to the computational demands of tracking the posterior channel distribution for every user, we propose a low-complexity implementation based on particle filtering.

The rest of the chapter is organized as follows. In Section 4.2, we discuss some past related works. In Section 4.3, we outline the system model and, in Section 4.4, we investigate the optimal scheduling and resource allocation scheme. Due to the implementation complexity of the optimal scheme, we propose a suboptimal greedy scheme in Section 4.5 that maintains posterior channel distributions inferred from the received ACK/NAK feedback. In Section 4.6, we show how these posteriors can be recursively updated via particle filtering. Numerical results are presented in Section 4.7, and conclusions are stated in Section 4.8.

4.2 Past Work

We now describe the relation of our work to the existing literature [60–62]. In [60], goodput maximization was considered under a target maximum packet-error probability constraint and a sum-power constraint across all time-slots, assuming a discrete channel model. Its solution led to a POMDP which was solved using a dynamic program. While [60] considered a single channel, we consider joint user/rate scheduling and power allocation in a multi-channel OFDMA setting. In [61], a learning-automata-based user/rate scheduling algorithm was proposed to maximize system throughput based on ACK/NAK feedback while satisfying per-user throughput constraints. While the approach in [61] is applicable only to goodput maximization under discrete-state channels, ours is applicable to generic utility maximization problems under continuous-state channels. Furthermore, our approach is based on particle

filtering and lends itself to practical implementation. In [62], the user/rate scheduling and power allocation problem in OFDMA systems with quasi-static channels and ACK/NAK feedback was formulated as a Markov decision process, and an efficient polynomial-complexity joint scheduling and resource allocation scheme with provable performance guarantees was proposed to maximize achievable sum-rate while maintaining a target packet-error-rate and a sum-power constraint over a finite time-horizon. Apart from assuming a discrete-state quasi-static channel model, the scope of this work was limited by two other assumptions: i) in each time-slot, the BS scheduled only one user across all subchannels for data transmission, and ii) all users decoded the broadcasted data-packet and sent ACK/NAK feedback to the BS. In contrast, we consider general utility maximization under continuous-state time-varying channels, and we consider the scenario where multi-user diversity is efficiently exploited by scheduling different users across different subchannels, where only the scheduled users report ACK/NAK feedback.

4.3 System Model

We consider a packetized downlink OFDMA system with a pool of K users. At each time-instant, the BS (i.e., "controller") transmits packets of data, composed of codewords from a generic signaling scheme, through N OFDMA subchannels (with N ≶ K). Each packet propagates through a fading channel on the way to its intended mobile user, where the fading channel is assumed to be time-invariant over the packet duration but is allowed to vary across packets in a Markovian manner. Henceforth, we will use "time" when referring to the packet index. During each time slot, the BS

Since we assume that only one user/rate (k. we denote the scheduling decision t t by In. and how much power to allocate.k. whereas In..k.m (·) is a generic utility function that we assume (for technical reasons) is twice diﬀerentiable. and the error rate of the combination (n.m(·) to transform goodput into other metrics that are more meaningful from the perspective of quality-of-service (QoS). where am and bm are constants [33]. E t n. t.m ) t denotes the goodput contributed by . we have the “subchannel resource” constraint assume a “sum-power constraint” of the form t k.m Our goal in scheduling and resource allocation is to maximize an expected longterm utility criterion that is a function of the per-user/rate/subchannel goodputs. and ǫt n.k.m 1 for all n. or pricing (e. one would simply use Un. .k. where the MCS index m ∈ {1. m) at time t.m (gn. Here. We also Xcon for all t. m) was scheduled on t subchannel n at time t.m (1 − ǫt n.m = 0 indicates otherwise. m) can be scheduled on a given subchannel n at a given time t.k.k.t Un. . which can be expanded as gn. m) represent the combination of user k and t t MCS m over subchannel n. where In.k. gn.k.m(0) < ∞. Un.k. [26–28]). We assume M choices of MCS. to maximize sum-goodput. Additionally. 1}.k.m In.m = t In.e. t t n. For example.k.m .k.must decide—for each subchannel—which user to schedule.m. Meanwhile.m to denote— respectively—the power allocated to.g.k . We use Un. Let (n.k. k.k.k.m(x) = x. . In the sequel. M } corresponds to a transmission rate of rm bits per packet and a packet error rate of the form ǫ = am e−bm P γ under transmit power P and squared subchannel gain (SSG) γ . k.k.m In.k. with Un.m )rm .m ∈ {0.m t user k with MCS m on subchannel n at time t. the SSG experienced by. and concave. which modulation-andcoding scheme (MCS) to use.k. we use Pn.m Pn. γn. To enforce fairness across users. i. . fairness [63]..m = 1 indicates that user/rate (k. strictly-increasing.k.k. one could instead maximize 80 .

weighted sum-goodput via U_{n,k,m}(x) = w_k x, where {w_k} are appropriately chosen user-dependent weights. To maximize sum capacity, one would choose M = a_1 = b_1 = r_1 = 1 and U_{n,k,1}(x) = log(1 − log(1 − x)) for x ∈ [0, 1), so that U_{n,k,1}(g^t_{n,k,1}) = I^t_{n,k,1} log(1 + P^t_{n,k,1} γ^t_{n,k}). To incorporate user-fairness into capacity maximization, one could instead choose U_{n,k,1}(x) = w_k log(1 − log(1 − x)), where again {w_k} are appropriately chosen user-dependent weights [33].

In the sequel, some additional notation will be useful. We write the ACK/NAK feedback about the packet transmitted to user k across subchannel n at time t as f^t_{n,k} ∈ {1, 0, ∅}, where 1 indicates an ACK, 0 indicates a NAK, and ∅ covers the case that user k was not scheduled on subchannel n at time t. To denote the collection of all time-t user-k feedbacks, we use f^t_k ∈ {1, 0, ∅}^N, and to denote the collection of all time-t ACK/NAK feedbacks {f^t_{n,k}}, we use F^t ∈ {1, 0, ∅}^{NK}. To denote the collection of all time-t scheduling variables {I^t_{n,k,m}}, we use I^t ∈ {0, 1}^{NKM}, and to denote the collection of all time-t powers {P^t_{n,k,m}}, we use P^t ∈ [0, ∞)^{NKM}.

4.4 Optimal Scheduling and Resource Allocation

In this section, we describe the optimal solution to the problem of scheduling and resource allocation over the finite time-horizon t ∈ {1, ..., T}. For each time t, the BS performs scheduling and resource allocation based on posterior distributions on the SSGs {γ^t_{n,k}} inferred from previously received ACK/NAK feedback. For this purpose, in the case of an infinite past horizon and a feedback delay of d ≥ 1 packets, the BS would have access to the feedbacks {f^τ_{n,k}}^{t−d}_{τ=−∞} for time-t scheduling.

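The per-packet error model of Section 4.3 is simple to compute with. The following standalone sketch (with illustrative constants for a_m, b_m, and r_m, not values from the chapter) evaluates the error rate ε = a_m e^{−b_m P γ} and the resulting goodput (1 − ε) r_m, and confirms that goodput increases with both transmit power and SSG:

```python
import math

def packet_error_rate(P, gamma, a_m=1.0, b_m=1.0):
    """Packet error rate a_m * exp(-b_m * P * gamma) at power P and SSG gamma."""
    return a_m * math.exp(-b_m * P * gamma)

def goodput(P, gamma, r_m=4.0, a_m=1.0, b_m=1.0):
    """Goodput (1 - error rate) * r_m, in bits per packet."""
    return (1.0 - packet_error_rate(P, gamma, a_m, b_m)) * r_m

# Goodput is increasing in both transmit power and squared subchannel gain.
assert goodput(2.0, 1.0) > goodput(1.0, 1.0)
assert goodput(1.0, 2.0) > goodput(1.0, 1.0)
print(round(goodput(1.0, 1.0), 3))
```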
For time-t scheduling and resource allocation, the controller has access to the previous feedbacks F^{t−d}_{−∞} ≜ {F^{−∞}, ..., F^{t−d}}, scheduling decisions I^{t−d}_{−∞} ≜ {I^{−∞}, ..., I^{t−d}}, and power allocations P^{t−d}_{−∞} ≜ {P^{−∞}, ..., P^{t−d}}. It then uses this knowledge to determine the schedule I^t and power allocation P^t maximizing the expected utility of the current and remaining packets:

(I^{t,opt}, P^{t,opt}) = argmax_{(I^t, P^t) ∈ X} E{ Σ_{n,k,m} U_{n,k,m}( I^t_{n,k,m} (1 − a_m e^{−b_m P^t_{n,k,m} γ^t_{n,k}}) r_m ) + Σ^T_{τ=t+1} Σ_{n,k,m} U_{n,k,m}( I^{τ,opt}_{n,k,m} (1 − a_m e^{−b_m P^{τ,opt}_{n,k,m} γ^τ_{n,k}}) r_m ) | F^{t−d}_{−∞}, I^{t−d}_{−∞}, P^{t−d}_{−∞} },   (4.1)

where the domain of I^t is I ≜ {I ∈ {0,1}^{NKM} : Σ_{k,m} I_{n,k,m} ≤ 1 ∀n}, the domain of P^t is P ≜ [0, ∞)^{NKM}, and X ≜ {(I, P) ∈ I × P : Σ_{n,k,m} I_{n,k,m} P_{n,k,m} ≤ Xcon}. The expectation in (4.1) is jointly over the squared subchannel gains (SSGs) {γ^τ_{n,k} ∀n, k : τ ≥ t − d}. Using the abbreviation Ũ^τ_{n,k,m} ≜ U_{n,k,m}( I^τ_{n,k,m} (1 − a_m e^{−b_m P^τ_{n,k,m} γ^τ_{n,k}}) r_m ), the optimal expected utility over the remaining packets {t, ..., T} can be written (for t ≤ T) as

U^{t,opt}_{tot}(F^{t−d}_{−∞}) ≜ max_{(I^t, P^t) ∈ X} E{ Σ^T_{τ=t} Σ_{n,k,m} Ũ^τ_{n,k,m} | F^{t−d}_{−∞} }.   (4.2)

For a unit-delay15 system (i.e., d = 1), the following Bellman equation [49] specifies the corresponding finite-horizon dynamic program:

U^{t,opt}_{tot}(F^{t−1}_{−∞}) = max_{(I^t, P^t) ∈ X} [ E{ Σ_{n,k,m} Ũ^t_{n,k,m} | F^{t−1}_{−∞}, I^t, P^t } + E{ U^{t+1,opt}_{tot}(F^{t−1}_{−∞} ∪ {F^t}) | F^{t−1}_{−∞}, I^t, P^t } ],   (4.3)

For the d > 1 case, the Bellman equation is more complicated, and so we omit it for brevity.

where the second expectation is over the feedbacks F^t. The problem formulated in (4.1) is typically referred to as a partially observable Markov decision process (POMDP) [59]. Unfortunately, the optimal controller is not practical to implement. To see why, notice that both terms on the right side of (4.3) are dependent on (I^t, P^t), so that the solution of the problem at time t also depends on the solution of the problem at time t + 1, which in turn depends on the solution of the problem at time t + 2, and so on. Moreover, the definition of P implies that the controller has an uncountably infinite number of possible actions. Although this could be circumvented (at the expense of performance) by restricting the powers P^t_{n,k,m} to come from a finite set, the problem would remain very complex due to the continuous-state nature of the SSGs γ^t_{n,k}. While these SSGs could then be quantized (causing additional performance loss), the problem would still remain computationally intensive, since POMDPs (even with finite states and actions) are PSPACE-complete, i.e., they require both complexity and memory that grow exponentially with the horizon T [64]. In conclusion, the optimal controller is not practical to implement, even under power/SSG quantization. Consequently, we will turn our attention to (sub-optimal) greedy strategies, i.e., those that do not consider the effect of current actions on future utilities. To better understand their performance relative to that of the optimal POMDP, we derive an upper bound on POMDP performance.

4.4.1 The "Causal Global Genie" Upper Bound

Our POMDP-performance upper-bound, which we will refer to as the "causal global genie" (CGG), is based on the presumption of perfect error-rate feedback of all previous user/subchannel combinations, i.e., {ε^τ_{n,k,m} ∀n, k, m : τ ≤ t − d}, or, equivalently, on perfect feedback of all previous SSGs {γ^τ_{n,k} ∀n, k : τ ≤ t − d}. To see why, notice that, given knowledge of ε^τ_{n,k,m} and P^τ_{n,k,m} for any rate index m, the SSG γ^τ_{n,k} can be obtained by simply inverting the error-rate expression ε^τ_{n,k,m} = a_m e^{−b_m P^τ_{n,k,m} γ^τ_{n,k}}. We characterize the CGG as "global" since it uses feedback from all user/subchannel combinations, not just the previously scheduled ones. For comparison, the ACK/NAK feedback available to the POMDP is a form of degraded error-rate feedback on previously scheduled user/subchannel combinations. Although a tighter bound might result if the (perfect) error-rate feedback was restricted to only previously scheduled user/subchannel pairs, the bounding solution would remain a POMDP with an uncountable number of state-action pairs, making it impractical to evaluate. Evaluating the performance of the CGG, however, is straightforward since—under CGG feedback—optimal scheduling and resource allocation can be performed greedily. In the sequel, we use γ^t ∈ [0, ∞)^{NK} to denote the collection of all time-t SSGs {γ^t_{n,k} ∀k, n}, and we define γ^{t−d}_{−∞} ≜ {γ^{−∞}, ..., γ^{t−d}}. For any scheduling time t ≥ 0, the CGG scheme allocates resources according to the following mixed-integer optimization problem:

(I^{t,cgg}, P^{t,cgg}) = argmax_{(I^t, P^t) ∈ X} E{ Σ_{n,k,m} Ũ^t_{n,k,m} + Σ^T_{τ=t+1} Σ_{n,k,m} Ũ^{τ,cgg}_{n,k,m} | F^{t−d}_{−∞}, I^{t−d}_{−∞}, P^{t−d}_{−∞}, γ^{t−d}_{−∞} }.   (4.4)

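The inversion underlying the CGG feedback model is a one-liner: given a perfect error-rate report ε and the transmit power P used, the SSG follows from ε = a_m e^{−b_m P γ}. A minimal standalone sketch (with illustrative a_m, b_m constants):

```python
import math

def ssg_from_error_rate(eps, P, a_m=1.0, b_m=1.0):
    """Recover the squared subchannel gain gamma from a perfect error-rate
    report eps = a_m * exp(-b_m * P * gamma) at known transmit power P > 0."""
    return -math.log(eps / a_m) / (b_m * P)

# Round trip: pick gamma, compute the resulting error rate, invert it back.
gamma_true, P = 0.7, 2.0
eps = 1.0 * math.exp(-1.0 * P * gamma_true)
assert abs(ssg_from_error_rate(eps, P) - gamma_true) < 1e-12
```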
Since the choice of (I^{τ,cgg}, P^{τ,cgg}) for τ > t does not depend on the choice of (I^t, P^t), the previous optimization problem simplifies to

(I^{t,cgg}, P^{t,cgg}) = argmax_{(I^t, P^t) ∈ X} E{ Σ_{n,k,m} Ũ^t_{n,k,m} | F^{t−d}_{−∞}, I^{t−d}_{−∞}, P^{t−d}_{−∞}, γ^{t−d}_{−∞} }.   (4.5)

In the following lemma, we formally establish that the utility achieved by the CGG upper-bounds that achieved by the optimal POMDP controller with ACK/NAK feedback.

Lemma 7. Given arbitrary past allocations (I^{t−d}_{−∞}, P^{t−d}_{−∞}), the corresponding SSGs γ^{t−d}_{−∞}, and the corresponding ACK/NAKs F^{t−d}_{−∞}, the expected total utility for optimal resource allocation under ACK/NAK feedback is no higher than the expected total utility under CGG feedback, i.e.,

E{ Σ^T_{τ=t} Σ_{n,k,m} Ũ^{τ,opt}_{n,k,m} | F^{t−d}_{−∞} } ≤ E{ Σ^T_{τ=t} Σ_{n,k,m} Ũ^{τ,cgg}_{n,k,m} | F^{t−d}_{−∞} }.   (4.6)

Proof. For any τ ∈ {t, ..., T}, we can write

E{ Σ_{n,k,m} Ũ^{τ,opt}_{n,k,m} | F^{t−d}_{−∞} }
≤ E{ max_{(I^τ, P^τ) ∈ X} E{ Σ_{n,k,m} Ũ^τ_{n,k,m} | F^{τ−d}_{−∞} } | F^{t−d}_{−∞} }   (4.7)
= E{ max_{(I^τ, P^τ) ∈ X} E{ E{ Σ_{n,k,m} Ũ^τ_{n,k,m} | F^{τ−d}_{−∞}, γ^{τ−d}_{−∞} } | F^{τ−d}_{−∞} } | F^{t−d}_{−∞} }   (4.8)
≤ E{ E{ max_{(I^τ, P^τ) ∈ X} E{ Σ_{n,k,m} Ũ^τ_{n,k,m} | F^{τ−d}_{−∞}, γ^{τ−d}_{−∞} } | F^{τ−d}_{−∞} } | F^{t−d}_{−∞} }   (4.9)
= E{ max_{(I^τ, P^τ) ∈ X} E{ Σ_{n,k,m} Ũ^τ_{n,k,m} | F^{τ−d}_{−∞}, I^{τ−d}_{−∞}, P^{τ−d}_{−∞}, γ^{τ−d}_{−∞} } | F^{t−d}_{−∞} }   (4.10)
= E{ Σ_{n,k,m} Ũ^{τ,cgg}_{n,k,m} | F^{t−d}_{−∞} },   (4.11)

where (4.7) follows since (I^{τ,opt}, P^{τ,opt}) is chosen to maximize the long-term sum-utility—not the instantaneous sum-utility; (4.8) follows by the law of iterated expectations; (4.9) follows since max_{y,z} E{f(y, z)} ≤ E{max_{y,z} f(y, z)} for any real-valued function f(·, ·); (4.10) follows by the definition of degraded feedback; and (4.11) follows by the definition of the causal global genie. Finally, summing both sides of (4.11) over τ ∈ {t, ..., T} yields (4.6).

In the next section, we detail the greedy scheduling and resource allocation problem and propose a near-optimal solution.

4.5 Greedy Scheduling and Resource Allocation

The greedy scheduling and resource allocation (GSRA) problem is defined as follows:

GSRA: max_{I^t ∈ I, P^t ∈ P} Σ^N_{n=1} Σ^K_{k=1} Σ^M_{m=1} E{ U_{n,k,m}( I^t_{n,k,m} (1 − a_m e^{−b_m P^t_{n,k,m} γ^t_{n,k}}) r_m ) | F^{t−d}_{−∞} }
s.t. Σ_{n,k,m} I^t_{n,k,m} P^t_{n,k,m} ≤ Xcon.   (4.12)

Note that, in contrast to the T-horizon objective (4.1), the greedy objective (4.12) does not consider the effect of (I^t, P^t) on future utility. As stated earlier, we allow U_{n,k,m}(·) to be any real-valued function that is twice differentiable, strictly-increasing, and concave, with U_{n,k,m}(0) < ∞; i.e., using ′ to denote the derivative, U′_{n,k,m}(·) > 0 and U″_{n,k,m}(·) ≤ 0. Since it involves both discrete (I^t) and continuous (P^t) optimization variables, the GSRA problem (4.12) is a mixed-integer optimization problem. Such problems are generally NP-hard, meaning that polynomial-complexity solutions do not exist. Therefore, in Section 4.5.2, we propose a near-optimal algorithm for (4.12) with polynomial complexity. To better explain that scheme, we first describe, in Section 4.5.1, a "brute force" optimal solution whose complexity grows exponentially in N, the number of subchannels.

4.5.1 Brute-Force Algorithm

The brute-force approach considers all possibilities of I^t ∈ I, each with the corresponding optimal power allocation. Supposing that I^t = I, the optimal power

allocation can be found by solving the convex optimization problem

max_{P ∈ P} Σ^N_{n=1} Σ^K_{k=1} Σ^M_{m=1} E{ U_{n,k,m}( I_{n,k,m} (1 − a_m e^{−b_m P_{n,k,m} γ^t_{n,k}}) r_m ) | F^{t−d}_{−∞} }
s.t. Σ_{n,k,m} I_{n,k,m} P_{n,k,m} ≤ Xcon.   (4.13)

To proceed, we identify the Lagrangian associated with (4.13) as

L^t_I(µ, P) = Σ_{n,k,m} E{ U_{n,k,m}( I_{n,k,m} (1 − a_m e^{−b_m P_{n,k,m} γ^t_{n,k}}) r_m ) | F^{t−d}_{−∞} } − µ( Σ_{n,k,m} I_{n,k,m} P_{n,k,m} − Xcon ),   (4.14)

which yields the corresponding dual problem

min_{µ ≥ 0} max_{P ∈ P} L^t_I(µ, P) = L^t_I(µ*_I, P*(µ*_I)),   (4.15)

where µ*_I and P*(µ*_I) denote the optimal Lagrange multiplier and power allocation, respectively. A detailed solution to (4.15) is given in Chapter 3, and so we describe only the main points here. First, for a given value of the Lagrange multiplier µ, it has been shown that the optimal powers equal

P*_{n,k,m}(µ) = { P̃_{n,k,m}(µ) if 0 ≤ µ ≤ a_m b_m r_m E{ U′_{n,k,m}((1 − a_m) r_m) γ^t_{n,k} | F^{t−d}_{−∞} }; 0 otherwise },   (4.16)

where P̃_{n,k,m}(µ) is defined as the (unique) solution to

µ = a_m b_m r_m E{ U′_{n,k,m}( (1 − a_m e^{−b_m P̃_{n,k,m}(µ) γ^t_{n,k}}) r_m ) γ^t_{n,k} e^{−b_m P̃_{n,k,m}(µ) γ^t_{n,k}} | F^{t−d}_{−∞} }.   (4.17)

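Since the right side of (4.17) is continuous and strictly decreasing in P̃ (for increasing concave U), the fixed-µ power can be found by a one-dimensional bisection. The sketch below is a standalone illustration for the sum-goodput case U(x) = x with a known SSG (so U′(·) = 1 and the conditional expectation drops out); it is not the exact routine of the chapter:

```python
import math

def p_tilde(mu, gamma, a=1.0, b=1.0, r=2.0, p_max=100.0, tol=1e-10):
    """Solve mu = a*b*r*gamma*exp(-b*P*gamma) for P >= 0 by bisection
    (sum-goodput utility, so the marginal utility has this closed form)."""
    def marginal(P):
        return a * b * r * gamma * math.exp(-b * P * gamma)
    if mu >= marginal(0.0):   # price already exceeds marginal utility at P = 0
        return 0.0
    lo, hi = 0.0, p_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if marginal(mid) > mu:   # marginal utility still above the price mu
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Check against the closed form P = -ln(mu/(a*b*r*gamma))/(b*gamma).
mu, gamma = 0.5, 1.0
closed = -math.log(mu / 2.0) / 1.0
assert abs(p_tilde(mu, gamma) - closed) < 1e-8
```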
k.k.m from the set {0. In doing so.k. 1].19). by adjusting κ. Since |I| = (KM + 1)N values of I must be considered. the optimal value of µ (i. n.m Pn.k Ft −∞ . Based on (4.k )rm γn.k e−bm Xcon γn.16)-(4. κ (4. µ µ ¯ − µ < κ.2 Proposed Algorithm We propose to attack the mixed-integer GSRA problem (4. the corresponding utility I that lies in [µ. 4.m In. µ is guaranteed to be no less than κXcon from the optimal (for the given I).k Ft −∞ . Therefore. one can achieve a performance arbitrarily close to the optimum.k. n. we relax the domain of t the scheduling variables In.∗ Then.12) using the well known Lagrangian relaxation approach [49]. allowing the application of low-complexity dual optimization techniques.m (µI ) = Xcon .18) ′ t −d µmax = max am bm rm Un. Table 4. ∞).m (1 − am )rm E γn.19) and satisﬁes ∗ ∗ n. µ∗ I ) obeys µI ∈ [µmin .. the total complexity of the bruteforce approach—in terms of the number of times (4. µmax ] ⊂ (0.m t t (4. Using an approximation of µ∗ ¯]. for a given I. where ′ −d t µmin = min am bm rm E Un.k.k.e.m (1 − am e−bm Xcon γn.20) which grows exponentially with N . 1} to the interval [0.k.m (4.1 details the brute-force steps for a given I. Although the solution to the relaxed problem does not necessarily coincide with that of the original greedy 88 . for a speciﬁed tolerance κ.k. In the end.5.17) must be solved—can be shown to be µmin log2 ( µmax − ) × (KM + 1)N −1 NKM. these steps ﬁnd µ and µ ¯ such that µ∗ ¯] and I ∈ [µ.

problem (4.12), we establish in the sequel that the corresponding performance loss is very small, and in some cases zero.

Table 4.1: Brute-force steps for a given I
1. Initialize µ = µmin and µ̄ = µmax.
2. Set µ = (µ + µ̄)/2.
3. For each (n, k, m), use (4.16)-(4.17) to obtain P*_{n,k,m}(µ). Calculate X*_tot(I, µ) = Σ_{n,k,m} I_{n,k,m} P*_{n,k,m}(µ).
4. If X*_tot(I, µ) > Xcon, set the lower endpoint of the interval to µ; otherwise set µ̄ = µ.
5. If µ̄ − µ > κ, go to step 2); else proceed to step 6).
6. If X*_tot(I, µ) = X*_tot(I, µ̄), set λ = 0; otherwise set λ = (X*_tot(I, µ) − Xcon)/(X*_tot(I, µ) − X*_tot(I, µ̄)).
7. The best power allocation is given by P̂(I) = λ P*(µ̄) + (1 − λ) P*(µ).
8. Set µ̂_I = µ̄ and L̂_I = L_I(µ̂_I, P̂(I)).

The relaxed version of the greedy problem (4.12) is

rGSRA: max_{I^t ∈ Ic, P^t ∈ P} Σ^N_{n=1} Σ^K_{k=1} Σ^M_{m=1} E{ U_{n,k,m}( I^t_{n,k,m} (1 − a_m e^{−b_m P^t_{n,k,m} γ^t_{n,k}}) r_m ) | F^{t−d}_{−∞} }
s.t. Σ_{n,k,m} I^t_{n,k,m} P^t_{n,k,m} ≤ Xcon,   (4.21)

where Ic ≜ { I ∈ [0, 1]^{NKM} : Σ_{k,m} I_{n,k,m} ≤ 1 ∀n }. Although (4.21) is a non-convex optimization problem due to non-convex constraints, it can be converted into a convex optimization problem by using the new set of variables (I^t, x^t), where x^t_{n,k,m} ≜ I^t_{n,k,m} P^t_{n,k,m}. In this case, the problem becomes

max_{I^t ∈ Ic, x^t ⪰ 0} Σ_{n,k,m} B^t_{n,k,m}(I^t_{n,k,m}, x^t_{n,k,m})   s.t. Σ_{n,k,m} x^t_{n,k,m} ≤ Xcon,   (4.22)
where x^t ∈ R^{NKM} denotes the collection of all time-t variables {x^t_{n,k,m}}, ⪰ denotes element-wise non-negativity, and B^t_{n,k,m}(·, ·) is defined as

B^t_{n,k,m}(y1, y2) ≜ { y1 E{ U_{n,k,m}( (1 − a_m e^{−b_m γ^t_{n,k} y2/y1}) r_m ) | F^{t−d}_{−∞} } if y1 ≠ 0; 0 if y1 = 0.   (4.23)

The modified problem (4.22) is a convex optimization problem and can be solved using a dual optimization approach with zero duality gap. A detailed solution to this problem is given in Chapter 3, and so we describe only the main points here. In particular, the dual problem can be written as

min_{µ ≥ 0} max_{I^t ∈ Ic} max_{x^t ⪰ 0} L(µ, I^t, x^t) = L(µ*, I^{t,*}(µ*), x^{t,*}(µ*, I^{t,*}(µ*))),   (4.24)

where

L(µ, I^t, x^t) ≜ Σ_{n,k,m} B^t_{n,k,m}(I^t_{n,k,m}, x^t_{n,k,m}) − µ( Σ_{n,k,m} x^t_{n,k,m} − Xcon ),   (4.25)

x^{t,*}(µ, I) is the optimal x for a given (µ, I), I^{t,*}(µ) denotes the optimal I ∈ Ic for a given µ, and µ* denotes the optimal µ ≥ 0. For given values of µ and I^t, we have

x^{t,*}_{n,k,m}(µ, I^t) = I^t_{n,k,m} P^{t,*}_{n,k,m}(µ),   (4.26)

where P^{t,*}_{n,k,m}(µ) = P̃^t_{n,k,m}(µ) if 0 ≤ µ ≤ a_m b_m r_m E{ U′_{n,k,m}((1 − a_m) r_m) γ^t_{n,k} | F^{t−d}_{−∞} }, and P^{t,*}_{n,k,m}(µ) = 0 otherwise, and where P̃^t_{n,k,m}(µ) is defined as the (unique) solution to

µ = a_m b_m r_m E{ U′_{n,k,m}( (1 − a_m e^{−b_m P̃^t_{n,k,m}(µ) γ^t_{n,k}}) r_m ) γ^t_{n,k} e^{−b_m P̃^t_{n,k,m}(µ) γ^t_{n,k}} | F^{t−d}_{−∞} }.   (4.27)
To give equations that govern I^{t,*}(µ) for a given µ, we first define

V^t_{n,k,m}(µ) ≜ −E{ U_{n,k,m}( (1 − a_m e^{−b_m P^{t,*}_{n,k,m}(µ) γ^t_{n,k}}) r_m ) | F^{t−d}_{−∞} } + µ P^{t,*}_{n,k,m}(µ)   (4.28)

and

S^t_n(µ) ≜ { (k, m) = argmin_{(k′,m′)} V^t_{n,k′,m′}(µ) : V^t_{n,k,m}(µ) ≤ 0 }.   (4.29)

If S^t_n(µ) is a null or a singleton set, then the optimal schedule on subchannel n is given by

I^{t,*}_{n,k,m}(µ) = { 1 if (k, m) ∈ S^t_n(µ); 0 otherwise }.   (4.30)

However, if S^t_n(µ) has cardinality greater than one, then multiple (k, m) combinations can be scheduled simultaneously while achieving the optimal value of the Lagrangian. In particular, if S^t_n(µ) = {(k_1(n), m_1(n)), ..., (k_{|S^t_n(µ)|}(n), m_{|S^t_n(µ)|}(n))}, then

I^{t,*}_{n,k,m}(µ) = { I^t_{n,k_i(n),m_i(n)} ≥ 0 if (k, m) = (k_i(n), m_i(n)) for some i ∈ {1, ..., |S^t_n(µ)|}; 0 otherwise },   (4.31)

where the vector (I^t_{n,k_1(n),m_1(n)}, ..., I^t_{n,k_{|S^t_n(µ)|}(n),m_{|S^t_n(µ)|}(n)}) lies anywhere in the unit (|S^t_n(µ)| − 1)-simplex, i.e., it lies within the region [0, 1]^{|S^t_n(µ)|} and satisfies

Σ^{|S^t_n(µ)|}_{i=1} I^t_{n,k_i(n),m_i(n)} = 1.   (4.32)

Finally, the optimal Lagrange multiplier µ (i.e., µ*) lies in the set [µmin, µmax] ⊂ (0, ∞), where µmin and µmax were given in (4.18) and (4.19), respectively, and satisfies Σ_{n,k,m} I^{t,*}_{n,k,m}(µ*) P^{t,*}_{n,k,m}(µ*) = Xcon.

For several fixed values of μ, the proposed algorithm minimizes the relaxed Lagrangian (4.25) over (I^t, x^t) (or, equivalently, over (I^t, P^t)) to obtain candidate solutions for the original greedy problem (4.12). If, for a given μ, |S^t_n(μ)| ≤ 1 for all n (i.e., the candidate employs at most one user/MCS per subchannel), then the candidate solution is admissible for the non-relaxed problem, and thus retained by the proposed algorithm. If, on the other hand, |S^t_n(μ)| > 1 for some n (i.e., the candidate employs more than one user/MCS on some subchannels), then the proposed algorithm transforms the candidate into an admissible solution as follows:

  I^{t,pro}_{n,k,m}(μ) = 1 if (k, m) = argmin_{(k′,m′) ∈ S^t_n(μ)} V^t_{n,k′,m′}(μ), and I^{t,pro}_{n,k,m}(μ) = 0 otherwise.  (4.33)

The following lemma then states an important property of these fixed-μ admissible solutions.

Lemma 8. For any given value of μ, let the user-MCS allocation I^{t,pro}(μ) be given by (4.33), let the power allocation P^{t,pro}(μ) be given by (4.26), and let the total power allocation be defined as

  X^{t,pro}_tot(μ) ≜ Σ_{n,k,m} I^{t,pro}_{n,k,m}(μ) P^{t,pro}_{n,k,m}(μ).  (4.34)

Then, X^{t,pro}_tot(μ) is monotonically decreasing in μ.

Lemma 8 (see Chapter 3 for a proof) implies that the optimal value of the Lagrange multiplier μ (i.e., μ*) is the one that achieves the power constraint X^{t,pro}_tot(μ*) = X_con. To find this μ*, the proposed algorithm performs a bisection search over μ ∈ [μ_min, μ_max] that refines the search interval [μ_low, μ̄] until μ̄ − μ_low < κ, where κ is a user-defined tolerance. Then, between the two schedules I ∈ {I^{t,pro}(μ_low), I^{t,pro}(μ̄)}, it chooses the one that maximizes utility. Table 4.2 summarizes the proposed algorithm. The complexity of the proposed algorithm, in terms of the number of times that (4.27) must be solved, is log_2((μ_max − μ_min)/κ) × N(KM + 2), which is significantly less than the brute-force complexity in (4.20).

Table 4.2: Proposed greedy algorithm

1. Initialize μ_low = μ_min and μ̄ = μ_max.
2. Set μ = (μ_low + μ̄)/2.
3. For each subchannel n = 1, ..., N:
   (a) For each (k, m):
       i. Use (4.27) to calculate P̃^t_{n,k,m}(μ).
       ii. Use P̃^t_{n,k,m}(μ) to calculate V^t_{n,k,m}(μ) via (4.28).
   (b) Calculate S^t_n(μ) using (4.29).
4. Find I^{t,pro}(μ) using (4.33).
5. Calculate X^{t,pro}_tot(μ) = Σ_{n,k,m} I^{t,pro}_{n,k,m}(μ) P̃^t_{n,k,m}(μ).
6. If X^{t,pro}_tot(μ) > X_con, set μ_low = μ; otherwise, set μ̄ = μ.
7. If μ̄ − μ_low > κ, go to step 2); else, proceed to step 8).
8. Now we have μ* ∈ [μ_low, μ̄] and μ̄ − μ_low < κ. For both I = I^{t,pro}(μ_low) and I = I^{t,pro}(μ̄) (since they may differ), calculate P(I) and L_I as described for the brute-force algorithm.
9. Choose Î^t = argmin_{I ∈ {I^{t,pro}(μ_low), I^{t,pro}(μ̄)}} L_I as the user-MCS allocation and P̂^t = P(Î^t) as the associated power allocation.

Although the proposed algorithm is sub-optimal, the difference between the optimal GSRA utility U*_GSRA and that attained by the proposed algorithm, Û_GSRA(μ_low, μ̄), as μ_low → μ̄, can be bounded as follows (see Chapter 3 for details):

  U*_GSRA − lim_{μ_low → μ̄} Û_GSRA(μ_low, μ̄) ≤ (μ* − μ_min)(X_con − X^{t,pro}_tot(μ*)) if |S^t_n(μ*)| ≤ 1 ∀n,  (4.35)
  and ≤ (μ_max − μ_min) X_con otherwise.  (4.36)

In Section 4.7, we evaluate (4.35) by simulation, and show that the performance loss is negligible.
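The bisection loop of Table 4.2 relies only on the monotonicity of X^{t,pro}_tot(μ) guaranteed by Lemma 8. The following minimal Python sketch illustrates that loop; the closed-form `x_tot` map below is a hypothetical stand-in for the actual evaluation of steps 3)-5), used only so the example is self-contained.

```python
def bisect_mu(x_tot, x_con, mu_min, mu_max, kappa=1e-6):
    """Bisection search for the multiplier mu* satisfying x_tot(mu*) = x_con,
    assuming x_tot is monotonically decreasing in mu (Lemma 8)."""
    mu_lo, mu_hi = mu_min, mu_max
    while mu_hi - mu_lo > kappa:
        mu = 0.5 * (mu_lo + mu_hi)
        if x_tot(mu) > x_con:
            mu_lo = mu   # allocation uses too much power: raise the price mu
        else:
            mu_hi = mu   # constraint satisfied: lower the price mu
    return mu_lo, mu_hi

# Hypothetical stand-in for the total-power map of Table 4.2 (decreasing in mu).
x_tot = lambda mu: 10.0 / (1.0 + mu)
mu_lo, mu_hi = bisect_mu(x_tot, x_con=2.0, mu_min=0.0, mu_max=100.0)
# Here mu* solves 10/(1+mu) = 2, i.e. mu* = 4, and [mu_lo, mu_hi] brackets it.
```

The returned interval plays the role of [μ_low, μ̄] in step 8, from which the two candidate schedules are compared.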

4.6 Updating the Posterior Distributions from ACK/NAK Feedback

In this section, we propose a recursive procedure to compute the posterior pdfs p(γ^t_{n,k} | F^{t−d}_{−∞}) required by the proposed greedy algorithm in Table 4.2 when the channel is first-order Markov.^16 Let the time-t user-k channel be described by the discrete-time channel impulse response h^t_k ≜ [h^t_{1,k}, ..., h^t_{L,k}]^⊺ ∈ C^L, where (·)^⊺ denotes transpose. The corresponding frequency-domain subchannel gains H^t_k ≜ [H^t_{1,k}, ..., H^t_{N,k}]^⊺ ∈ C^N are then given by

  H^t_k = G h^t_k,  (4.37)

where the OFDMA modulation matrix G ∈ C^{N×L} contains the first L columns of the N-DFT matrix. Assuming additive white Gaussian noise with unit variance, the SSG of subchannel n for user k is given by γ^t_{n,k} = |H^t_{n,k}|², and so we can write

  p(γ^t_{n,k} | F^{t−d}_{−∞}) = ∫_{h^t_k} p(γ^t_{n,k} | h^t_k) p(h^t_k | F^{t−d}_{−∞}),  (4.38)

with p(γ^t_{n,k} | h^t_k) = δ(γ^t_{n,k} − |e_n^⊺ G h^t_k|²), where δ(·) is the Dirac delta and e_n is the nth column of the identity matrix. Using the channel's Markov property and Bayes rule, we find that

  p(h^t_k | F^{t−d}_{−∞}) = ∫_{h^{t−d}_k} p(h^t_k | h^{t−d}_k) p(h^{t−d}_k | F^{t−d}_{−∞})  (4.39)

  p(h^{t−d}_k | F^{t−d}_{−∞}) = p(f^{t−d}_k | h^{t−d}_k, F^{t−d}_{−∞} \ f^{t−d}_k) p(h^{t−d}_k | F^{t−d}_{−∞} \ f^{t−d}_k) / p(f^{t−d}_k | F^{t−d}_{−∞} \ f^{t−d}_k),  (4.40)

where "\" denotes the set-difference operator. Using the fact that p(f^{t−d}_k | h^{t−d}_k, F^{t−d}_{−∞} \ f^{t−d}_k) = p(f^{t−d}_k | h^{t−d}_k, I^{t−d}, P^{t−d}), along with the fact that (I^{t−d}, P^{t−d}) is a deterministic

16 The extension to higher-order Markov channels is straightforward.
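The mapping (4.37)-(4.38) from impulse response to SSGs can be sketched directly in code. The unitary scaling of the DFT below is an assumption of this sketch (the normalization of G is fixed later, in Section 4.7):

```python
import numpy as np

def subchannel_ssgs(h, N):
    """Squared subchannel gains gamma_n = |e_n^T G h|^2, where G holds the
    first L columns of the (here, unitary) N-point DFT matrix, per (4.37)."""
    L = len(h)
    G = np.fft.fft(np.eye(N), axis=0)[:, :L] / np.sqrt(N)
    H = G @ h                      # frequency-domain gains (4.37)
    return np.abs(H) ** 2          # SSGs gamma_{n,k} = |H_{n,k}|^2

h = np.array([1.0 + 0.0j, 0.5j])   # an example L = 2 tap channel
g = subchannel_ssgs(h, N=8)
```

Because the columns of the unitary DFT are orthonormal, the SSGs sum to the impulse-response energy, which is a convenient sanity check.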

function of F^{t−d−1}_{−∞} (and therefore of F^{t−2d}_{−∞}), we then have from (4.38)-(4.40) that

  p(h^{t−d}_k | F^{t−d}_{−∞}) = p(f^{t−d}_k | h^{t−d}_k, I^{t−d}, P^{t−d}) p(h^{t−d}_k | F^{t−d−1}_{−∞}) / p(f^{t−d}_k | F^{t−d−1}_{−∞}).  (4.41)

Using the Markov property again, we get

  p(h^{t−d}_k | F^{t−d−1}_{−∞}) = ∫_{h^{t−d−1}_k} p(h^{t−d}_k | h^{t−d−1}_k) p(h^{t−d−1}_k | F^{t−d−1}_{−∞}).  (4.42)

Recall that f^t_{n,k}, the feedback received about user k on subchannel n at time t, takes values from the set {0, 1, ∅}, where 0 denotes a NAK, 1 denotes an ACK, and ∅ denotes no feedback; f^t_{n,k} is set to ∅ if user k was not scheduled on subchannel n at time t. Assuming that, conditioned on h^t_k, the feedbacks generated by user k are independent across subchannels, we have

  p(f^t_k | h^t_k, I^t, P^t) = ∏_{n=1}^N p(f^t_{n,k} | h^t_k, I^t, P^t)  (4.43)

  p(f^t_{n,k} = f | h^t_k, I^t, P^t) = Σ_m I^t_{n,k,m} a_m e^{−b_m P^t_{n,k,m} γ^t_{n,k}} if f = 0,
   = 1 − Σ_m I^t_{n,k,m} a_m e^{−b_m P^t_{n,k,m} γ^t_{n,k}} if f = 1,
   = 1 − Σ_m I^t_{n,k,m} if f = ∅,  (4.44)

where γ^t_{n,k} = |H^t_{n,k}|² can be determined from h^t_k via (4.37). Together, (4.38)-(4.44) suggest a method of recursively updating the channel distributions, using the new feedback obtained at each time t, which is given in Table 4.3.

We now propose the use of particle filtering [65] to circumvent the evaluation of multidimensional integrals in the recursion of Table 4.3. Particle filtering is a well-known technique that approximates the pdf of a random variable using a suitably chosen probability mass function (pmf). In the sequel, for simplicity of illustration, we assume a Gauss-Markov model of the form

  h^{t+1}_{l,k} = (1 − α) h^t_{l,k} + α w^t_{l,k},  (4.45)
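The per-subchannel likelihood rule (4.44) is the quantity the particle filter evaluates for every particle. A direct single-subchannel transcription in Python (the dictionary-based `a`, `b` tables and the `(power, mcs)` encoding of the schedule are conveniences of this sketch, with ∅ represented by `None`):

```python
import math

def feedback_likelihood(f, gamma, sched, a, b):
    """p(f_{n,k} = f | h, I, P) per (4.44), for one subchannel.
    'sched' is None if the user was not scheduled on this subchannel,
    else a (power, mcs) pair."""
    if sched is None:
        return 1.0 if f is None else 0.0   # only 'no feedback' is possible
    power, m = sched
    p_nak = a[m] * math.exp(-b[m] * power * gamma)  # packet-error probability
    if f == 0:        # NAK
        return p_nak
    if f == 1:        # ACK
        return 1.0 - p_nak
    return 0.0        # a scheduled user always feeds back

a = {0: 1.0}
b = {0: 1.0}
p_nak = feedback_likelihood(0, gamma=2.0, sched=(1.0, 0), a=a, b=b)
p_ack = feedback_likelihood(1, gamma=2.0, sched=(1.0, 0), a=a, b=b)
```

Multiplying such terms across subchannels yields the product form (4.43).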

Table 4.3: Recursive update of channel posteriors

At time t, for each user k ∈ {1, ..., K}, the pdf p(h^{t−d−1}_k | F^{t−d−1}_{−∞}) is available from the previous time-instant. The user-k recursion is then:

1. Observe the new feedbacks f^{t−d}_k ∈ {0, 1, ∅}^N.
2. Compute p(h^{t−d}_k | F^{t−d−1}_{−∞}) using the Markov-prediction step (4.42).
3. Compute p(f^{t−d}_k | h^{t−d}_k, I^{t−d}, P^{t−d}) using the error-rate rule (4.44).
4. Using the distributions obtained in steps 2) and 3), compute p(h^{t−d}_k | F^{t−d}_{−∞}) via the Bayes-rule step in (4.41).
5. Compute p(h^t_k | F^{t−d}_{−∞}) using (4.39).
6. For each n, compute p(γ^t_{n,k} | F^{t−d}_{−∞}) via (4.38).

In (4.45), w^t_{l,k} is assumed to be i.i.d. across l, k, and t, with unit-variance circular Gaussian distribution, and α ∈ (0, 1] is a known constant that determines the fading rate. Here, we use S particles in the approximations

  p(h^t_k | F^{t−d}_{−∞}) ≈ Σ_{i=1}^S ν^{t|t−d}_k[i] δ(h^t_k − h^t_k[i])  and  p(h^{t−d}_k | F^{t−d}_{−∞}) ≈ Σ_{i=1}^S ν^{t−d|t−d}_k[i] δ(h^{t−d}_k − h^{t−d}_k[i]),  (4.46)

where h^t_k[i] ≜ [h^t_{1,k}[i], ..., h^t_{L,k}[i]]^⊺ ∈ C^L denotes the ith (vector) particle, and ν^{t1|t2}_k[i] ∈ R_+ is the probability mass assigned to the particle h^{t1}_k[i] based on the observations received up to time t2. The steps to recursively compute these particles and their corresponding weights are detailed in Table 4.4. Using the approximation in (4.46), the expectation of any function of h^t_k can be found using

  E{A(h^t_k)} ≈ Σ_{i=1}^S ν^{t|t−d}_k[i] A(h^t_k[i]),  (4.47)

where A(·) is an arbitrary function. Recalling that the SSG γ^t_{n,k} is a deterministic function of h^t_k, we note that any function of γ^t_{n,k} is also a function of h^t_k.
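Under the particle approximation (4.46), the Bayes-rule step 4) reduces to a multiplicative reweighting of the particles followed by normalization. A toy sketch with scalar SSG particles and a_m = b_m = 1 on a single subchannel (assumptions made only to keep the example short):

```python
import math

def update_weights(weights, particles, feedback, power):
    """Bayes-rule weight update: multiply each particle's weight by the
    ACK/NAK likelihood of that particle's SSG, then normalize. Assumes one
    subchannel with a_m = b_m = 1, so p(NAK | gamma) = exp(-power * gamma)."""
    new_w = []
    for w, gamma in zip(weights, particles):
        p_nak = math.exp(-power * gamma)
        lik = p_nak if feedback == 0 else 1.0 - p_nak
        new_w.append(w * lik)
    total = sum(new_w)
    return [w / total for w in new_w]

particles = [0.1, 1.0, 4.0]          # candidate SSG values
weights = [1/3, 1/3, 1/3]            # uniform prior mass
w_ack = update_weights(weights, particles, feedback=1, power=1.0)
# An ACK shifts probability mass toward the large-SSG particle.
```

The same multiply-and-normalize pattern appears as steps 2(a)-2(b) in Table 4.4, with the scalar likelihood replaced by the full product (4.43).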

4.7 Numerical Results

In this section, we numerically evaluate the performance of the proposed greedy scheduling and resource allocation from Section 4.5 with the posterior update from Section 4.6. Throughout, we consider an OFDMA system with independent first-order Gauss-Markov channels (4.45), N = 32 OFDMA subchannels, K = 8 available users, channel fading parameter α = 10⁻³, and impulse response length L = 2, and we assumed a feedback delay of d = 1. We used the modulation matrix G = √β F ∈ C^{N×L} (recall (4.37)), where F contains the first L columns of the unitary N-DFT matrix and β = N(2−α)/(Lα) ensures that the variance of H^t_{n,k} is unity for all (n, k). In this case, the mean of the SSG γ^t_{n,k} was also unity for all (n, k).

For illustrative purposes, we assumed uncoded 2^{m+1}-QAM signaling with MCS index m ∈ {1, ..., 15}, one symbol per "codeword," and one codeword per packet. Thus, we have r_m = m + 1 bits per symbol. In the packet error-rate model ε = a_m e^{−b_m Pγ}, we assumed a_m = 1 and b_m = 1.5/(2^{m+1} − 1), because the symbol error-rate of a 2^{m+1}-QAM system is well approximated by exp(−1.5Pγ/(2^{m+1} − 1)) in the high-(Pγ) regime [51] and is ≈ 1 when Pγ = 0. We used the identity utility (i.e., U_{n,k,m}(x) = x for all n, k, m), so that the objective was maximization of sum goodput. If not otherwise stated, we assumed X_{n,k,m} = X_con/N. Since the subchannel-averaged total transmit power equals (1/N) Σ_{n,k,m} I^t_{n,k,m} X_{n,k,m} = X_con/N, and since E{γ^t_{n,k}} = 1, it is readily seen that the average per-subchannel signal-to-noise ratio is SNR ≜ (1/N) E{Σ_{n,k,m} I^t_{n,k,m} X_{n,k,m} γ^t_{n,k}} = X_con/N. For the plots, we averaged 500 realizations, each with 100 time-slots. Of these 100 time-slots, the first 50 were ignored to avoid transient effects.
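Under this error-rate model, the expected goodput of MCS m at SNR Pγ is r_m(1 − ε) = (m + 1)(1 − e^{−1.5Pγ/(2^{m+1}−1)}), and the goodput-maximizing MCS for a known SNR can be found by a direct search over the 15 indices. A small sketch:

```python
import math

def expected_goodput(m, p, gamma):
    """Expected goodput r_m * (1 - eps) under eps = a_m * exp(-b_m * p * gamma),
    with a_m = 1, b_m = 1.5 / (2^(m+1) - 1), and r_m = m + 1 bits/symbol."""
    b_m = 1.5 / (2 ** (m + 1) - 1)
    eps = math.exp(-b_m * p * gamma)
    return (m + 1) * (1.0 - eps)

def best_mcs(p, gamma, mcs_set=range(1, 16)):
    """MCS index maximizing expected goodput for a known SNR p * gamma."""
    return max(mcs_set, key=lambda m: expected_goodput(m, p, gamma))

# Stronger channels favor denser constellations:
m_weak = best_mcs(p=1.0, gamma=1.0)
m_strong = best_mcs(p=1.0, gamma=100.0)
```

This trade-off (a denser constellation carries more bits but fails more often) is exactly what the scheduler balances when only a posterior over γ, rather than its exact value, is available.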

The performance of the proposed greedy algorithm was compared to three reference schemes: fixed-power random user scheduling (FP-RUS), the "causal global genie" (CGG), and the "non-causal global genie" (NCGG). The FP-RUS scheme schedules users uniformly at random, allocates power uniformly across subchannels, and selects the MCS to maximize expected goodput. The FP-RUS, which makes no use of feedback, should perform no better than any feedback-based scheme. The CGG (recall Section 4.4.1) performs optimal scheduling and resource allocation under perfect knowledge of all SSGs at the previous time-instant (since d = 1), i.e., given {γ^{t−1}_{n,k} ∀n, k} at time t. From Lemma 7, we know that the CGG upper-bounds the POMDP. The NCGG is similar to the CGG, but assumes perfect knowledge of all SSGs at all times, i.e., given {γ^τ_{n,k} ∀n, k, τ} at time t. Thus, it provides an upper bound on the CGG that is invariant to the fading rate α. The NCGG has a greedy implementation, like the CGG, but without the conditional expectation.

Figure 4.1 shows a typical realization of instantaneous sum-goodput versus time t when α = 10⁻³. There, one can see a large gap between the FP-RUS and the CGG, and a much smaller gap between the CGG and the NCGG. The proposed scheme starts without CSI, and thus initially performs no better than the FP-RUS. However, it quickly learns the CSI well enough to perform scheduling and resource allocation at a level that yields sum-goodput much closer to the CGG than to the FP-RUS.

Figure 4.2 plots average sum-goodput versus the number of particles S used to update the posterior distributions in the proposed greedy scheme (recall Section 4.6). There we see that the performance of the proposed scheme increases with S, but shows little improvement for S > 30. Thus, S = 30 particles were used to construct the


Figure 4.1: Typical instantaneous sum-goodput versus time t. Here, N = 32, K = 8, SNR = 10dB, α = 10−3 , and S = 30.

other plots. Remarkably, with only S = 5 particles, the proposed algorithm captures a significant portion of the maximum possible goodput gain over the FP-RUS.

Figure 4.3 plots average sum-goodput versus the fading rate α. There we see that, at low fading rates (i.e., small α), the proposed greedy scheme achieves an average sum-goodput that is much higher than the FP-RUS and, in fact, not far from the CGG upper bound. For instance, at α = 10⁻⁴, the sum-goodput attained by the proposed scheme is 92% of the upper bound and 170% of that attained by the FP-RUS. As the fading rate α increases, we see that the sum-goodput attained by the proposed scheme decreases, and eventually converges to that of the FP-RUS. This behavior is due to the fact that, as α increases, it becomes more difficult to predict the SSGs using delayed ACK/NAK feedback, thereby compromising the scheduling-and-resource-allocation decisions that are made based on the predicted SSGs. In fact,


Figure 4.2: Average sum-goodput versus the number of particles used to update the channel posteriors. Here, N = 32, K = 8, SNR = 10dB, and α = 10−3 .

one can even observe a gap between the CGG and NCGG for large α because, even with delayed perfect-SSG feedback, the current SSGs are difficult to predict.

Figure 4.3 reveals a gap between the proposed scheme and the CGG bound that persists as α → 0. This non-vanishing gap can be attributed, at least in part, to greedy scheduling under ACK/NAK feedback. Intuitively, we have the following explanation. Because the inferred SSG-distributions of not-recently-scheduled users quickly revert to their a priori form, the proposed greedy algorithm will continue to schedule users as long as their SSGs remain better than the a priori value. There may exist, however, not-recently-scheduled users with far better SSGs who remain invisible to the proposed scheme, only because they have not recently been scheduled.

Figures 4.4 and 4.5 plot average sum-goodput versus the number of subchannels (i.e., total bandwidth) N. In Fig. 4.4, the total BS power X_con is scaled with N such


Figure 4.3: Average sum-goodput versus fading rate α. Here, N = 32, K = 8, SNR = 10dB, and S = 30.

that the per-subchannel SNR remains fixed at 10dB, whereas, in Fig. 4.5, the total BS power X_con remains invariant to the bandwidth N, and is set such that the per-subchannel SNR = 10dB for N = 32. In both cases, the average sum-goodput increases with bandwidth N, as expected, since the availability of more subchannels increases not only scheduling flexibility, but also the possibility of stronger subchannels, which can be exploited by the BS. In Fig. 4.4, where the per-subchannel SNR is fixed, the sum-goodput increases linearly with bandwidth N, as expected. In all cases, the proposed greedy scheme achieves more than 155% of the sum-goodput achieved by the FP-RUS.

Figure 4.6 plots average sum-goodput versus the number of available users K. It shows that, as K increases, the average sum-goodputs achieved by the NCGG, CGG, and the proposed greedy schemes increase, whereas that achieved by the FP-RUS


Figure 4.4: Average sum-goodput versus number of subchannels N . Here, K = 8, SNR = 10dB, α = 10−3 , and S = 30.

remains constant. This behavior results because, with the former schemes, the availability of more users can be exploited to schedule users with stronger subchannels, whereas with the FP-RUS scheme, this advantage is lost due to the complete lack of information about the users' instantaneous channel conditions. Figure 4.6 also suggests that, as K increases, the sum-goodput of the proposed greedy scheme saturates. This can be attributed to the fact that the proposed greedy algorithm can only track the channels of recently scheduled users, and thus cannot benefit directly from the growing pool of not-recently-scheduled users.

In Figure 4.7, the top subplot shows average sum-goodput versus SNR, while the bottom subplot shows the average value of the bound (4.35) on the optimality gap of our proposed approach to the GSRA problem, also versus SNR. The top plot shows that, as the SNR increases, the proposed greedy scheme continues to perform much

closer to the NCGG/CGG bounds than it does to the FP-RUS scheme. The bottom plot establishes that the sum-goodput loss due to the sub-optimality in the algorithm used to attack the GSRA problem is negligible, e.g., at most 0.0025% over all SNR.

Figure 4.5: Average sum-goodput versus number of subchannels N. Here, K = 8, SNR = 10dB, α = 10⁻³, and S = 30. In this plot, X_con does not scale with N and is chosen such that SNR = 10dB for N = 32.

4.8 Summary

In this chapter, we considered the problem of joint scheduling and resource allocation in the OFDMA downlink under ACK/NAK feedback, with the goal of maximizing an expected long-term goodput-based utility subject to an instantaneous sum-power constraint. First, we established that the optimal solution to the problem is a partially observable Markov decision process (POMDP), which is impractical to implement. Consequently, we proposed a greedy approach to joint scheduling and

Figure 4.6: Average sum-goodput versus the number of users K. Here, N = 32, SNR = 10dB, α = 10⁻³, and S = 30.

resource allocation based on the posterior distributions of the squared subchannel gain (SSG) for every user/subchannel pair. Next, we outlined a recursive method to update the posterior SSG distributions from the ACK/NAK feedbacks received at each time-slot, and proposed an efficient implementation based on particle filtering, which has polynomial complexity. To gauge the performance of our greedy scheme relative to that of the optimal POMDP (which is impossible to implement), we derived a performance upper-bound on the POMDP, for Markov channels, known as the causal global genie (CGG). Numerical experiments suggest that our greedy scheme achieves a significant fraction of the maximum possible performance gain over fixed-power random user scheduling (FP-RUS), despite its low-complexity implementation. For example, a representative simulation using N = 32 OFDMA subchannels, K = 8 available users, and S = 30 particles, shows that the sum-goodput of

the proposed scheme is 92% of the upper bound and 170% of that attained by the FP-RUS (see Fig. 4.3).

Figure 4.7: The top plot shows the average sum-goodput as a function of SNR. The bottom plot shows the average bound on the optimality gap between the proposed and optimal greedy solutions (given in (4.35)), i.e., the average value of (μ* − μ_min)(X_con − X^{t,pro}_tot(μ*)). In this plot, N = 32, K = 8, α = 10⁻³, and S = 30.

Table 4.4: Particle filtering steps

Let the system begin at time-instant t_0. If t ∈ {t_0, ..., t_0 + d − 1}:

1. Initialize {h^t_{l,k}[i] ∀ l, k, i} by drawing i.i.d. samples from CN(0, α/(2 − α)).
2. Set the importance weights ν^{t|t}_k[i] = 1/S for all i, k.

For any other time-instant t (≥ t_0 + d):

1. Using the previous samples {h^{t−d}_{l,k}[i] ∀ l, k, i}, obtain new samples according to the underlying Markov model as follows:

  h^t_{l,k}[i] = (1 − α)^d h^{t−d}_{l,k}[i] + α Σ_{j=0}^{d−1} (1 − α)^j y^{t−j}_{l,k}[i],

where y^{t−j}_{l,k}[i] is drawn i.i.d. from CN(0, 1) for all i, j, l, k.

2. For each user k:
  (a) Using the received feedbacks f^{t−d}_k and the set of importance weights from time (t − d − 1), compute the new set of importance weights at time (t − d) using

    ν^{t−d|t−d}_k[i] = ν^{t−d−1|t−d−1}_k[i] × p(f^{t−d}_k | h^{t−d}_k = h^{t−d}_k[i], I^{t−d}, P^{t−d}),

  where p(f^{t−d}_k | h^{t−d}_k = h^{t−d}_k[i], I^{t−d}, P^{t−d}) is given by (4.43)-(4.44).
  (b) Normalize the weights via ν^{t−d|t−d}_k[i] ← ν^{t−d|t−d}_k[i] / Σ_j ν^{t−d|t−d}_k[j] for all i.
  (c) Compute the weights for the posterior distribution p(h^t_k | F^{t−d}_{−∞}) using

    ν^{t|t−d}_k[i] = Σ_{j=1}^S ν^{t−d|t−d}_k[j] p(h^t_k = h^t_k[i] | h^{t−d}_k = h^{t−d}_k[j]).

  (d) Normalize the weights via ν^{t|t−d}_k[i] ← ν^{t|t−d}_k[i] / Σ_j ν^{t|t−d}_k[j] for all i.
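The propagation step of Table 4.4 is a d-fold application of the Gauss-Markov recursion (4.45) to each particle; iterating (4.45) also shows that the stationary per-tap variance is α/(2 − α), which is where the initialization distribution comes from. A sketch, assuming unit-variance circular Gaussian innovations:

```python
import numpy as np

def propagate_particles(h, alpha, d, rng):
    """d-step Gauss-Markov prediction per (4.45): each tap evolves as
    h <- (1 - alpha) * h + alpha * w with w ~ CN(0, 1), applied d times."""
    h = np.array(h, dtype=complex)
    for _ in range(d):
        w = (rng.standard_normal(h.shape)
             + 1j * rng.standard_normal(h.shape)) / np.sqrt(2)
        h = (1 - alpha) * h + alpha * w
    return h

rng = np.random.default_rng(0)
S, L, alpha = 500, 2, 0.1
# draw initial particles from the stationary law CN(0, alpha / (2 - alpha))
std = np.sqrt(alpha / (2 - alpha) / 2)
h0 = std * (rng.standard_normal((S, L)) + 1j * rng.standard_normal((S, L)))
h1 = propagate_particles(h0, alpha, d=1, rng=rng)
# the per-tap variance is preserved under the stationary law
```

Each propagated particle is then reweighted by the feedback likelihood, as in steps 2(a)-2(d).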

Chapter 5: Large Scale Wireless OFDMA System Design

5.1 Introduction

In Chapters 2-4, we studied some aspects of point-to-point systems and single-transmitter multiple-receiver (OFDMA) systems that are useful in efficient system design. In this chapter, we focus our attention on multiple-transmitter multi-receiver systems.

With the widespread usage of smart phones and an increasing demand for numerous mobile applications, wireless cellular/dense networks have grown significantly in size and complexity. Consequently, the decisions regarding the deployment of transmitters (base-stations, femtocells, picocells, etc.), the number of subscribers, the number of antennas used for physical-layer communication, the amount to be spent on purchasing more bandwidth, and the revenue model to choose have become much more complicated for service providers. Understanding the performance limits of large wireless networks, and the optimal balance between the number of serving transmitters, the maximum number of users (subscribers), and the amount of available bandwidth needed to achieve those limits, are critical components of the decisions made. Given that the most significant fraction of the performance growth of wireless networks in

the last few decades is associated [66] with cell sizes (which affect interference management schemes) and the amount of available bandwidth, the aforementioned issues become more important. To answer some of the above questions, we analyze the expected achievable downlink sum-rate in large OFDMA systems as a function of the number of transmitters B, users K, available resource-blocks N, and/or co-located antennas at each transmitter M. Here, a resource block is a collection of subcarriers such that all disjoint sub-collections have associated independently fading channels. Using our analysis, we make the following contributions:

• For a general spatial geometry of transmitters and end users, in which user nodes have a uniform spatial distribution, we develop novel upper and lower bounds on the average achievable rate as a function of K, B, and N. We also specify the associated scaling laws in all parameters. To evaluate the bounds, we utilize various results from extreme value theory. We evaluate our bounds for Rayleigh, Nakagami-m, Weibull, and LogNormal fading models along with a truncated path-loss model.

• With the developed bounds, we give four design principles for service providers and regulators.

• We consider asymptotic scenarios in two networks: dense and regular-extended. In the first scenario, we consider a dense femtocell network and develop an asymptotic condition on K, B, and N to guarantee a nondiminishing rate for each user. In the second and third scenarios, we consider extended multicell networks and derive bounds for the choice of user-density K/B in order for the service provider to maximize the revenue per transmitter

and, at the same time, keep the per-user rate above a certain limit. We also consider an extended multicell network and develop asymptotic conditions for K, B, and N to guarantee a minimum return on investment for the service provider.

• For dense and regular-extended networks, we find a distributed resource allocation scheme that achieves, for a wide choice of {K, B, N}, a sum-rate scaling equal to that of the upper bound (on achievable sum-rate) that we developed earlier. Using the proposed achievability scheme, we find a deterministic power allocation scheme that governs the proposed distributed achievability scheme. We also give a corresponding result for MISO systems, where there are a fixed number of co-located antennas at each transmitter.

• We show that the achievable sum-rate of peer-to-peer networks increases linearly with the number of coordinating transmit nodes B under fixed power allocation schemes only if B = O(log K / log log K). Our result extends the result in [67], wherein it was stated that if B = Ω(log K), then a linear increase in achievable sum-rate w.r.t. B cannot be achieved.

The rest of the chapter is organized as follows. In Section 5.2, we discuss past work. In Section 5.3, we introduce our system model. In Section 5.4, we give general upper and lower bounds on expected achievable sum-rate, associated sum-rate scaling laws, and four network-design principles. In Section 5.5, we find a distributed resource allocation scheme towards achievability of the expected achievable sum-rate, and obtain a distributed resource allocation problem similar to the one found earlier. In Section 5.6, we provide details of another achievability

scheme, similar to that developed in Section 5.5, for MISO systems, and provide a distributed achievability scheme that is optimal in the scaling sense for a large set of network parameters. Finally, we conclude in Section 5.7.

5.2 Related Work

Calculating the achievable performance of wireless networks has been a challenging, and yet extremely popular, problem in the literature. The performance of large networks has mainly been analyzed in asymptotic regimes, and the results have been in the form of scaling laws [67-76], following the seminal work by Gupta and Kumar [68]. Scaling laws for channel models incorporating both distance-based power-attenuation and fading have been considered in [77-79]. However, these works assume an unbounded path-loss model. Unbounded path-loss models affect the asymptotic behavior of the achievable rates significantly. For instance, the capacity scaling law of Θ(log K) found in [78, 79] arises by exploiting the infinite channel-gain of users close to the transmitter, whereas, without path-loss, the scaling law changes to Θ(log log K). The motivation behind our work is to consider a truncated path-loss model that eliminates the singularity of unbounded path-loss models at zero distance. Unlike these studies, our main bounds are not asymptotic, and we take into account both a distance-based power-attenuation law and fading in our model. Further, our analyses take into account the bandwidth and the number of transmitters (and/or antennas) in large networks. To the best of our knowledge, such a study has not been done earlier.

5.3 System Model

We consider a time-slotted OFDMA-based downlink network of B transmitters (or base-stations or femtocells or geographically distributed antennas) and K active users. The transmitters (TXs) lie in a disc of radius p − R (p > R > 0), and the users are distributed according to some spatial distribution in a concentric disc of radius p, as shown in Fig. 5.1, where O is assumed to be the origin.

Figure 5.1: OFDMA downlink system with K users and B transmitters. The transmitters (TX) lie in a disc of radius p − R (p > R > 0).

This model is quite general and can be applied to several network configurations. For example, it models a dense network when transmitter locations are random and the network radius p is fixed. Similarly, it models a multi-cellular regular extended network when the transmitters (or base-stations) are located on a regular hexagonal grid with a fixed grid-size, i.e., p ∝ √B. In the sequel, we assume, for simplicity, that the transmitter locations are arbitrary and deterministic and the users are uniformly distributed.

n is a complex-valued random variable that is i.Let us denote the coordinates of TX i (1 ≤ i ≤ B ) by (ai . Currently.2) Ri.n general. −α Here.k denotes the path-loss attenuation.n −2α |hi. (xk − ai )2 + (yk − bi )2 } (5.n .k νi. and the coordinates of user k (1 ≤ k ≤ K ) by (xk .k. and assume that it is composed of the following factors: −α hi.k. across which the transmitters (TXs) schedule users for downlink data-transmission. We denote the complex-valued channel gain over resource-block n (1 ≤ n ≤ N ) between user k and TX i by hi.n = βRi. βRi. and (xk .n|2 .yk ) x.n|2 = β 2 Ri. and the fading factor νi. r0 (α > 1. Speciﬁc assumptions on the fading model {νi.k.3) for constants α. y = 1 πp2 0 if x2 + y 2 ≤ p2 . n). yk ).k.i.4) We assume that perfect knowledge of the users’ channel-SNRs (or gains) from all TXs is available at every transmitter.k = max{r0 . Assuming unit-variance AWGN. we keep the distribution of νi. r0 < R). k. bi ) are known for all i.n . bi ).k. (5. the channel Signalto-Noise Ratio (SNR) between user k and TX i across resource-block n can now be deﬁned as γi.k |νi. (ai . Therefore.n } will be made in subsequent sections. This can be achieved via a backhaul network 112 . (5.k. across all (i. yk ) is governed by the following probability density function (pdf): f(xk .d. β.k. Note that r0 is the truncation parameter that eliminates singularity in the path-loss model.1) We now describe the channel model.k.k. We assume that the OFDMA subchannels are grouped into N independently-fading resource blocks [80]. otherwise. (5.

that enables sharing of the users' channel-state information.^17 We assume that, in each time-slot, the total power allocated by each TX is upper-bounded by P_con. We also assume that the transmitters do not coordinate to send data to a particular user. In particular, if a user is being served by more than one transmitter, then while decoding the signal from a given TX, it treats the signals from all other TXs as noise. This assumption is restrictive because one may achieve a higher performance by allowing coordination among TXs to send data to users. However, our results and design principles also hold for a class of networks wherein coordination among TXs is allowed, as will be explained after Theorem 1 in Section 5.4.

The maximum achievable sum-rate of our system can now be written as

  C_{x,y,ν}(U, P) = max_{U ∈ U, P ∈ P} Σ_{i=1}^B Σ_{n=1}^N log(1 + (p_{i,n} γ_{i,u_{i,n},n}) / (1 + Σ_{j≠i} p_{j,n} γ_{j,u_{i,n},n})),  (5.5)

where x := {x_k for all k}, y := {y_k for all k}, ν := {ν_{i,k,n} for all i, k, n}, U := {u_{i,n} for all i, n}, and P := {p_{i,n} for all i, n}. Here, u_{i,n} is the sum-rate maximizing user scheduled by TX i across resource-block n, and p_{i,n} is the corresponding allocated power. Further, U and P are the sets of feasible user allocations and power allocations, respectively:

  U = { {u_{i,n} for all i, n} : 1 ≤ u_{i,n} ≤ K for all i, n },  (5.6)
  P = { {p_{i,n} for all i, n} : p_{i,n} ≥ 0 for all i, n, and Σ_n p_{i,n} ≤ P_con for all i }.  (5.7)

17 Later, we will propose a distributed resource allocation scheme that does not require any sharing of CSI among the transmitters, and whose sum-rate scales at the same rate as that of an upper bound on the optimal centralized resource allocation scheme for a wide choice of network parameters.
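For a given SNR realization and a given allocation (U, P), the objective inside (5.5) evaluates directly, with each TX's signal to its scheduled user degraded by the other TXs' powers on the same resource block. A sketch (the toy gains below are illustrative, not drawn from the model):

```python
import numpy as np

def sum_rate(gamma, users, powers):
    """Objective of (5.5): TX i serves user u_{i,n} on block n with power
    p_{i,n}, treating all other TXs' signals as noise."""
    B, K, N = gamma.shape
    total = 0.0
    for i in range(B):
        for n in range(N):
            k = users[i, n]
            sig = powers[i, n] * gamma[i, k, n]
            intf = sum(powers[j, n] * gamma[j, k, n] for j in range(B) if j != i)
            total += np.log(1.0 + sig / (1.0 + intf))
    return total

# two TXs, two users, one block; equal powers
gamma = np.zeros((2, 2, 1))
gamma[0, 0, 0], gamma[1, 1, 0] = 4.0, 4.0   # strong desired links
gamma[0, 1, 0], gamma[1, 0, 0] = 1.0, 1.0   # weaker cross links
users = np.array([[0], [1]])
powers = np.ones((2, 1))
r = sum_rate(gamma, users, powers)
# each TX sees rate log(1 + 4/(1+1)) = log 3, so r = 2 * log 3
```

Maximizing this quantity over (U, P) under (5.6)-(5.7) is exactly the optimization whose expected value is bounded in the next section.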

In the next section, we derive novel upper and lower bounds on the expected value of C_{x,y,ν}(U, P) that are later used to determine the scaling laws and develop various network-design guidelines.

5.4 Proposed General Bounds on Achievable Sum-Rate

The expected achievable sum-rate of the system can be written, using (5.5), as

  C* = E[ C_{x,y,ν}(U, P) ],  (5.8)

where the expectation is over the SNRs {γ_{i,k,n} ∀ i, k, n}. The following theorem gives upper and lower bounds on (5.8) that depend only on the exogenous channel-SNR process. To state the scaling laws, we use the following notation: for two non-negative functions f(t) and g(t), we write f(t) = O(g(t)) if there exist constants c_1 ∈ R_+ and r_1 ∈ R such that f(t) ≤ c_1 g(t) for all t ≥ r_1. Similarly, we write f(t) = Ω(g(t)) if there exist constants c_2 ∈ R_+ and r_2 ∈ R such that f(t) ≥ c_2 g(t) for all t ≥ r_2; in other words, g(t) = O(f(t)). Finally, we write f(t) = Θ(g(t)) if f(t) = O(g(t)) and f(t) = Ω(g(t)).

Theorem 1 (General bounds). The expected achievable sum-rate of the system, C*, can be bounded as

  Σ_{i,n} E[ max_k log(1 + (P_con γ_{i,k,n}) / (N + P_con Σ_{j≠i} γ_{j,k,n})) ] ≤ C* ≤ min{ N Σ_i E[ log(1 + P_con max_{k,n} γ_{i,k,n}) ], Σ_{i,n} E[ log(1 + P_con max_k γ_{i,k,n}) ] }.  (5.9)

Proof. See Appendix C.1.

The upper bounds in Theorem 1 are obtained by ignoring interference, and the lower bound is obtained by allocating equal powers P_con/N to every resource-block by every TX.

As mentioned earlier, our bounds, which assume an uncoordinated system, also serve as bounds (up to a constant scaling factor) for the expected max-sum-rate of a class of networks wherein the number of transmitters coordinating to send data to any user on any resource block is bounded. This can be explained using the following argument. Let S transmitters coordinate to send data to user k on resource block n, and let {γ_{1,k,n}, ..., γ_{S,k,n}} be the corresponding instantaneous exogenous Signal-to-Noise ratios. Then, an upper bound on the sum-rate of those S transmitters across resource block n is

log( 1 + ( Σ_{s=1}^{S} √(P_{s,n} γ_{s,k,n}) )² ),

where P_{s,n} is the power allocated by transmitter s across resource block n [81]. This term is upper bounded by S Σ_{s=1}^{S} log(1 + P_{s,n} γ_{s,k,n}), which is S times the upper bound on sum-rate obtained by ignoring interference in a completely uncoordinated system (same as that used in Theorem 1). Since S is bounded, the scaling laws for the upper bound and the resulting design principles remain unchanged. The lower bound, on the other hand, assumes no coordination and allocates equal power to every TX and every resource-block; clearly, by coordinating among transmitters, one can achieve better performance. The above arguments, coupled with the fact that Theorem 1 does not assume any specific channel-fading process or any specific distribution on transmitter and user locations, make our bounds valid for a wide variety of coordinated and uncoordinated networks.

In the next subsection, we evaluate the bounds in Theorem 1 for two classes of networks – dense and regular-extended – using extreme-value theory, and then provide interesting design principles based on them.
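The chain of inequalities in this argument (Cauchy-Schwarz, then log(1 + Σz) ≤ Σ log(1 + z) and log(1 + Sz) ≤ S log(1 + z)) can be spot-checked numerically; the S = 3 and the power/SNR ranges below are arbitrary test values.

```python
import numpy as np

rng = np.random.default_rng(5)
S, max_gap = 3, 0.0
for _ in range(1000):
    P = rng.uniform(0.1, 2.0, S)                       # per-TX powers
    g = rng.exponential(1.0, S)                        # exogenous SNRs gamma_{s,k,n}
    coherent = np.log(1 + np.sqrt(P * g).sum() ** 2)   # coordinated upper bound [81]
    split = S * np.log(1 + P * g).sum()                # S x the uncoordinated bound
    max_gap = max(max_gap, coherent - split)
```

A non-positive max_gap over all random draws is consistent with the claim that coordination can at most multiply the uncoordinated upper bound by the bounded factor S.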

5.4.1 Scaling Laws and Their Applications in Network Design

We first present an analysis of dense networks, followed by an analysis of regular-extended networks. In particular, we use extreme-value theory and Theorem 1 to obtain performance bounds and associated scaling laws.

Dense Networks

Dense networks contain a large number of transmitters that are distributed over a fixed area. Typically, such networks occur in dense-urban environments and in dense femtocell deployments. In our system-model, a dense network corresponds to the case in which p is fixed, while K, B, N are allowed to grow. The following results use extreme-value theory and Theorem 1 to give bounds on the achievable sum-rate of the system for various fading channels.

Theorem 2. For dense networks with a large number of users K and Rayleigh fading channels, i.e., ν_{i,k,n} ~ CN(0, 1) for all i, k, n, C* satisfies, for large K,

B N f_lo^{DN}(r, B, N) [ log(1 + Pcon l_K) + O(1) ] ≤ C* ≤ [ log(1 + Pcon l_K) + O(1) ] B N,   (5.10)

where r > 0 is a constant, l_K = β² r_0^{−2α} log(K r_0²/p²), and

f_lo^{DN}(r, B, N) = r² / ( (1 + r²) (N + Pcon β² r_0^{−2α} (1 + r) B) ).

Proof. For proof, see Appendix C.2.

The following scaling laws result from (5.10):

C* = O(B N log log K),  and  C* = Ω(min{B, N} log log K).   (5.11)

Similar results under different fading models are summarized in the following lemma.

Lemma 9. If |ν_{i,k,n}| belongs to either the Nakagami-m, Weibull, or LogNormal family of distributions, then, for large K:

For Nakagami-(m, ω):  C* = O(B N log log K)  and  C* = Ω(min{B, N} log log K);
For Weibull-(λ, t):  C* = O(B N (log log K)^{2/t})  and  C* = Ω(min{B, N} (log log K)^{2/t});
For LogNormal-(a, w):  C* = O(B N √(log K))  and  C* = Ω(min{B, N} √(log K)).

Proof. For proof, see Appendix C.3.

Based on Theorem 2, we now propose a design principle for large dense networks. We use the dense-network abstraction for a dense femtocell deployment [82] where the service provider wants to maintain a minimum throughput per user. In the sequel, we call our system "scalable" under a certain condition if the condition is not violated as the number of users K → ∞. In such cases, a necessary condition that the service provider must satisfy is:

(B N / K) log( 1 + Pcon β² r_0^{−2α} log(K r_0²/p²) ) ≥ s̄   (5.12)

for some s̄ > 0. The above equation implies that, for the system to be scalable, the total number of independent resources B N (the product of the number of transmitters and the number of resource blocks, or bandwidth) must scale no slower than K / log log K. Otherwise, the system is not scalable and a minimum per-user throughput requirement cannot be maintained.

Principle 1. In dense femtocell deployments, for the system to be scalable with a minimum per-user throughput requirement, the total number of independent resources B N must scale as Ω(K / log log K).

Next, we consider another class of networks, namely regular-extended networks, and find performance bounds that motivate the subsequent design guidelines for such networks.
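As a quick numerical illustration of Principle 1, one can compute the smallest B N satisfying a constraint of the form (5.12). In the sketch below, `snr_scale` lumps together Pcon β² r_0^{−2α} and the geometry terms; it is an assumed constant, not a value from the text.

```python
import math

def min_resources(K, s_bar=1.0, snr_scale=1.0):
    """Smallest B*N meeting (B N / K) * log(1 + snr_scale * log K) >= s_bar,
    i.e. the form of constraint (5.12)."""
    per_resource_rate = math.log(1 + snr_scale * math.log(K))
    return math.ceil(s_bar * K / per_resource_rate)
```

The required B N grows essentially like K / log log K: the normalized quantity min_resources(K) · log log K / K stays near a constant as K grows, which is exactly the Ω(K / log log K) scaling of the principle.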

Regular Extended Networks

In extended networks, the area of the network grows with the number of nodes, keeping the number of nodes per unit area fixed. Here we study regular extended networks, in which the TXs lie on a regular hexagonal grid as shown in Fig. 5.2. The distance between two neighbouring transmitters is 2R. Hence, the radius of the network is p = Θ(R √B).

[Figure 5.2: A regular extended network setup. Transmitters lie on a hexagonal grid with inter-transmitter distance 2R.]

The following results use Theorem 1 and extreme-value theory to give performance bounds and associated scaling laws for regular extended networks under various fading channels.

Theorem 3. For regular extended networks with large K and Rayleigh fading channels, i.e., ν_{i,k,n} ~ CN(0, 1), C* satisfies

B N f_lo^{EN}(r, N) [ log(1 + Pcon l_K) + O(1) ] ≤ C* ≤ [ log(1 + Pcon l_K) + O(1) ] B N,   (5.13)

where l_K = β² r_0^{−2α} log( K r_0² / (B R²) ),

f_lo^{EN}(r, N) = r² / ( (1 + r²) (N + (1 + r) c_0 Pcon β² r_0^{2−2α} R^{−2}) ),  and  c_0 = 4 + √3 (2π/(α − 2)).

The associated scaling laws are:

C* = O( B N log log(K/B) ),  and  C* = Ω( B log log(K/B) ).   (5.14)

Proof. For proof, see Appendix C.2.

Lemma 10. If |ν_{i,k,n}| belongs to either the Nakagami-m, Weibull, or LogNormal family of distributions, then, for regular extended networks with large K, the scaling laws are:

For Nakagami-(m, ω):  C* = O(B N log log(K/B))  and  C* = Ω(B log log(K/B));
For Weibull-(λ, t):  C* = O(B N (log log(K/B))^{2/t})  and  C* = Ω(B (log log(K/B))^{2/t});
For LogNormal-(a, w):  C* = O(B N √(log(K/B)))  and  C* = Ω(B √(log(K/B))).

Proof. For proof, see Appendix C.3.

Using the upper bound in Theorem 1 obtained via Jensen's inequality, we have

C* ≤ B N log( 1 + (Pcon/N) l_K ) + O(1) ≈ B N log( (Pcon/N) l_K ),   (5.15)

where l_K = β² r_0^{−2α} log( K N r_0² / (B R²) ). For simplicity of analysis, let Pcon = β = r_0 = R = 1 (in their respective SI units). Using Theorem 3, we now propose three design principles.

Principle 2. In regular extended networks, if a) the users are charged based on the number of bits they download, b) there is a unit cost for each TX installed and a cost c_N per unit resource block incurred by the service provider, and c) the return-on-investment must remain above a certain lower bound, then, for fixed N, the system is scalable only if B = O(K), and for fixed B, the system is scalable only if N = O(log K). In addition, if a minimum per-user throughput requirement is also required to be met, then the system is scalable for fixed N only if B = Θ(K), and not scalable for fixed B.

Consider the case of a regular extended network with large K. If the service provider wants to maintain a minimum level of return-on-investment, then it must satisfy

( B N / (B + c_N N) ) log( (1/N) log(K N / B) ) > s̄   (5.16)

for some s̄ > 0. In addition, if a minimum per-user throughput is also required, then the service provider must satisfy the following equation in addition to (5.16):

( B N / K ) log( (1/N) log(K N / B) ) > ŝ   (5.17)

for some ŝ > 0. Equations (5.16) and (5.17) yield that the system is not scalable under fixed B: the above equations imply N = O(log K) for fixed B, and B = O(K) for fixed N.

Principle 3. In a large extended multi-cellular network, if the users are charged based on the number of bits they download and there is a unit cost for each TX incurred by the service provider, then there is a finite range of values for the user-density K/B that maximizes the return-on-investment of the service provider while maintaining a minimum per-user throughput.

Consider a regular extended network with a fixed number of resource blocks N. For simplicity, let β = r_0 = R = Pcon = 1 (in respective SI units). Assuming a revenue model wherein the service provider charges per bit provided to the users, the total return on investment of the service provider is proportional to the achievable sum-rate per TX; in large scale systems (large K), we have C* = Θ(B log log(K/B)), where the constant c implied by Θ(·) is bounded according to (5.13)-(5.14). Therefore, one must solve:

max_{K,B}  c log( 1 + log(K/B) )
s.t.  (c B / K) log( 1 + log(K/B) ) ≥ s̄   (5.18)

for some s̄ > 0. By variable-transformation, the above problem becomes convex in ρ := K/B. Solving it via the dual method, the Karush-Kuhn-Tucker condition is

ρ = 10 (λ + 1) / (1 + log ρ)^λ,   (5.19)

where λ ≥ 0 is the Lagrange multiplier, and the constraint curve (see the constraint in (5.18)) is given by (c/s̄) log(1 + log ρ). The LHS and RHS of (5.19), along with the constraint curve, are plotted for λ = 0.1, 1, ∞ in Fig. 5.3. The optimal ρ for a given λ (denoted by ρ*(λ)) is the value of ρ at which the LHS and RHS curves intersect for that λ. Note that, according to (5.18), the constraint is satisfied only when the constraint curve (in Fig. 5.3) lies above the LHS curve, i.e., when ρ ∈ [1, 12.7]. We observe from the figure that ρ*(λ) decreases with increasing λ. Since ρ*(λ) = 4.1 when λ = ∞, the optimal ρ lies in the set [4.1, 12.7]; in particular, the optimal ρ is greater than or equal to 4.1.

[Figure 5.3: LHS and RHS of (5.19), together with the constraint curve, as a function of the user density ρ = K/B.]

Figure 5.4 shows the variation of ρ*(λ) as a function of λ. From the plot, we observe that ρ*(λ) exists only for λ > 0; the optimal user-density ρ*(λ) is a strictly-decreasing convex function of λ (the cost associated with violating the per-user throughput constraint) and satisfies 4.1 ≤ ρ*(λ) ≤ 12.7 users/BS.

[Figure 5.4: Optimal user-density, ρ*(λ), as a function of the Lagrange multiplier λ.]

Principle 4. In a large extended multi-cellular network, if the users are charged a fixed amount regardless of the number of bits they download and there is a unit cost for each TX incurred by the service provider, then there is a finite range of values for K/B that maximizes the return-on-investment of the service provider while maintaining a minimum per-user throughput.

Consider a regular extended network with fixed N. Here, we assume a revenue model for the service provider, similar to that assumed in Principle 3, except that the service provider charges each user a fixed amount regardless of the number of bits the user downloads. Then, the return on investment of the service provider is proportional to the user-density ρ = K/B, and the associated optimization problem is:

max_{K,B}  s K/B
s.t.  (c B / K) log( 1 + Pcon β² r_0^{−2α} log(K r_0²/(B R²)) ) ≥ s̄   (5.20)

for some constants c, s̄ > 0. Here, s depends on the amount users are charged by the service provider, and c can be bounded according to (5.13)-(5.14) as clb ≤ c ≤ cub for some positive constants clb and cub. For simplicity of analysis, let β = r_0 = R = Pcon = 1 (in respective SI units). The above problem becomes convex in ρ = K/B; let the optimal solution be denoted by ρ*. The constraint in terms of ρ is

s̄/c ≤ log(1 + log ρ)/ρ.   (5.21)

The LHS and RHS of (5.21) as a function of ρ (for ρ ≥ 1) are plotted in Fig. 5.5. Examining (5.21) and Fig. 5.5, we note that the per-user throughput constraint is satisfied only if s̄/c ∈ [0, 0.26]. Moreover, for a given value of s̄/c, the set of feasible ρ lies in a closed set (for which the RHS curve remains above the LHS curve). The maximum value of ρ in this closed set, i.e., the value of ρ at point B in Fig. 5.5, is the one that maximizes the objective in (5.20); it is the optimal ρ for the given value of s̄/c. Let us denote it by ρ*(s̄/c). If s̄/c is known exactly, then the optimal user-density is ρ* = ρ*(s̄/c). If not, then ρ* ∈ [ρ*(s̄/clb), ρ*(s̄/cub)]. Note that ρ*(s̄/c) ≥ 2.14 for all s̄/c ∈ [0, 0.26], since point B lies to the right of point A in Fig. 5.5; consequently, ρ*(s̄/cub) ≥ ρ*(s̄/clb) ≥ 2.14.

[Figure 5.5: LHS and RHS of (5.21) as a function of ρ, with points A and B marked.]

5.5 Maximum Sum-Rate Achievability Scheme

In the previous section, we derived general performance bounds and proposed design principles based on them for two specific types of networks – dense and regular-extended. In this section, we propose a distributed scheme for achievability of max-sum-rate under the above two types of networks. In particular, we construct a tight approximation of C* and find a distributed resource allocation scheme that achieves the same sum-rate scaling law as that achieved by C* for a large set of network parameters. Let us define an approximation of C* as follows:

C*_LB := max_{P∈P} E[ max_{u∈U} Σ_{i=1}^{B} Σ_{n=1}^{N} log( 1 + P_{i,n} γ_{i,u_{i,n},n} / (1 + Σ_{j≠i} P_{j,n} γ_{j,u_{i,n},n}) ) ].   (5.22)

Note that C*_LB ≤ C*. To analyze C*_LB, we give a novel extreme-value theoretic result in Theorem 4.

.d. i. for large K . Then. 1). .n −2α c2α r0 for all i.27) Theorem 5 can be easily extended for Nakagami-m. . Then.5 and V (x) = log(1 + px) to prove (5. S1 ∈ (0. E 0. let a class of deterministic optimization problems be deﬁned as follows: OP c. The following corollary can be stated as a result of the above theorem. . and LogNormal fading channels. Let {X1 .n xi.i. XT } be i.i.n xi.Theorem 4.25) 1+ Pj. n. for any increasing concave function V (·). Under Rayleigh-fading channels. T 1≤t≤T max Xt . See Appendix C. and V (x) = log(1 + px) in Theorem 4 to prove the ﬁrst equation. Proof.n| ∼ CN (0.23) .t.k. XT } be i. random variables with cdf FX (·).24) 1 log T log 1 + p lT / log log T ≤ E log 1 + p max Xt t ≤ E V S1 . h(K ) B N max P∈P log(1 + Pi. Corollary 1. Theorem 5. Then. T ] and FX lT /S1 = 1 − Proof. 1−e 18 − S1 OP r0 . . ∗ We will now use the above corollary to ﬁnd upper and lower bounds on CLB for dense networks and Rayleigh-fading channels18 via a class of deterministic optimization problems.26) s. random variables with cumulative distribution function (cdf ) FX (·). .24).n ) i=1 n=1 r 2 h(K ) x 0 2 p xi. K l(2p. K/S1 ≤ ∗ CLB ≤ −2α β 2 r0 u 1+ ¯ OP 2p.63 ≤ log 1 + p maxt Xt log 1 + p lT . =e β 2 r −2α 0 j =i where h(·) is an increasing function and c is a positive constant. 125 . .4.n (5. Let {X1 . Weibull. Substitute S1 = 0. (5. . we have 1 − e−S1 V lT /S1 Here. (5.e. |νi. Substitute S1 = log log T . (5. 1− Furthermore. .d. K ) (5.

K ) (5. In particular. we have 0. Proof. h(ρ) . K ]. ≤ CLB ≤ 1 + .27) and use (5. Proof. if l = ˆ l(η1 . K ). h(K ) 2α (5. h(K ) c2 ≤ c1 OP c1 . (5.30).29). ρ) ≤ ∗ CLB ≤ −2α β 2 r0 u √ 1+ ¯ l (R 3 / 2 . OP(·. Note that p = Θ( B ) in this case. ﬁxed p) and Rayleigh-fading channels. 126 . then l l(2p. then √ −2α 1 ρ β 2 r0 u R 3 ∗ √ 1− OP r0 . if ρ K/B users are distributed uniformly in each cell and each TX schedules users only within its cell. u is the Euler-Mascheroni constant. For regular extended networks and Rayleigh-fading channels. The above theorem leads to following two corollaries for dense and regular-extended networks. Therefore we use. Corollary 2. 2 α −2α p Further. Put S1 = 1 in (5.e. K ) ≈ ˆ 0 = e 1 + 2lP for any η1 . ¯ l(2p. Corollary 3.63 OP(r0 . K ) ≤ CLB ≤ K 1 OP r0 . Also note that 2p is √ replaced in Theorem 5 to obtain the above result. instead of h(K ). K ) (5. K ) √ R 3 2 r0 2α OP(r0 .28) for positive constants c1 and c2 (c1 ≤ c2 ). ρ).where S1 ∈ (0.28) to prove (5. where ρ = K B √ by R 2 3 since the maximum distance between a user and its serving TX is R2 3 . K ) = Θ(log K ). ·) satisﬁes η1 r0 1≤ OP c2 . K ) Moreover. η2 ) be the solution to l B −1 2η r0 2 β 2 r −2α con ¯(2p. K ) is a large number that increases with increasing K .63 OP(r0 . For dense networks (i. and ¯ l (2p.. Put S1 = log log K in Theorem 5 to prove (5. See Appendix C. we have 1− and ∗ 0. K ) 2p r0 2α β 2 r −2α u 1+ ¯ 0 OP(r0 .29) l(2p. (5. K ) for large K . l(2p.5.30) Note that for ﬁxed B and large K .32) √ Proof. ρ .31) OP ¯ log ρ log log ρ 2 l (R 3 / 2 . log K log log K ∗ ≤ CLB ≤ β 2 r −2α u 1+ ¯ 0 OP(2p. η2 .
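The quality of the extreme-value approximation behind Corollary 1 is easy to check by simulation for exponential samples (the SNR distribution under Rayleigh fading), for which l_T = F_X^{−1}(1 − 1/T) = log T. The parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
p, T, trials = 1.0, 10_000, 200

# Exp(1) samples stand in for Rayleigh-faded SNRs; l_T = log T for Exp(1)
samples = rng.exponential(1.0, size=(trials, T))
empirical = np.log(1 + p * samples.max(axis=1)).mean()   # E[log(1 + p max_t X_t)]
deterministic = np.log(1 + p * np.log(T))                # log(1 + p l_T)
ratio = empirical / deterministic
```

The ratio sits close to (and slightly above) 1, comfortably inside the 0.63 guarantee of Corollary 1, which is why the deterministic surrogate l_T can replace the random maximum in the OP(·,·) problems.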

The above two corollaries highlight the idea behind the proposed achievability strategy. In particular, we use the lower bounds in (5.30) for dense networks and (5.32) for regular-extended networks to give a distributed resource allocation scheme. The steps of the proposed achievability scheme are summarized below.

1. Find the best power allocation {P_{i,n}} by solving the LHS of (5.30) for dense networks, or the LHS of (5.32) for regular-extended networks. Note that this can be computed offline, thus making the algorithm distributed.

2. For each TX i and resource-block n, schedule the user k(i, n) that satisfies:

k(i, n) = argmax_k  P_{i,n} γ_{i,k,n} / ( 1 + Σ_{j≠i} P_{j,n} γ_{j,k,n} ).   (5.33)

We propose that each user k calculates P_{i,n} γ_{i,k,n} / (1 + Σ_{j≠i} P_{j,n} γ_{j,k,n}) for each (i, n) combination and feeds back the value to TX i.

We will now compare low-powered peer-to-peer networks and high-powered single-TX systems to give a design principle based on the bounds in Corollary 2. In particular, we consider a peer-to-peer network with B nodes randomly distributed in a circular area of fixed radius p.

Principle 5. The sum-rate of a peer-to-peer network with B transmit nodes (geographically distributed antennas), each transmitting at a fixed power P̄ across every resource-block, increases linearly with B only if B = O(log K / log log K). If B = Ω(log K / log log K), then there is no gain with increasing B. Further, the gain obtained by implementing a peer-to-peer network over a high-powered single-TX system (with power B P̄ across each resource-block) is

Θ(B)                    if B = O(log K / log log K),
Θ(log K / log log K)    if B = Ω(log K / log log K) and B = O(log K),
Θ(log K / log B)        if B = Ω(log K).   (5.34)

Assuming fixed power allocation, we note that, in the scenarios of either fixed power-allocation schemes or the channel coherence time being much smaller than the codeword length, C*_LB is the expected maximum achievable sum-rate C*, i.e., C* = Θ(OP(c, h(K))). In this case, we have h(K) = K, and substituting P_{i,n} = P̄ for all i, n into (5.26) gives

x_{i,n} = Θ( min{ β² r_0^{−2α} log(r_0² h(K)/p²), (r_0/c)^{2α} (P̄ B)^{−1} log(r_0² h(K)/p²) } ).   (5.35)

Therefore, using (5.30), the sum-rate capacity under fixed power-allocation scales as

C* = Θ( min{ B N log log h(K), N log h(K) } ),  i.e.,  C* = Θ( min{ B N log log K, N log K } ).   (5.36)

Therefore, if B = O(log K / log log K), then C* = Θ(B N log log K), and if B = Ω(log K / log log K), then C* = Θ(N log K). Note that this is also the scaling of the upper bound on sum-rate capacity given in Theorem 2. In other words, we get a linear scaling in sum-rate capacity w.r.t. B only if B = O(log K / log log K).

One can also view the above scenario as a multi-antenna system with a single base-station in which all B transmitters are treated as co-located antennas (i.e., B = M). In particular, we note that our results extend the results in [67], where it was shown that linear scaling of the sum-rate C* w.r.t. the number of antennas M holds when M = Θ(log K) and does not hold when M = Ω(log K). We establish that even if M scales slower than log K, the achievable sum-rate scaling is not linear in M unless M = O(log K / log log K). Another way to state the above result is that, for a given (large) number of users K, the achievable sum-rate increases with increasing M only until M = O(log K / log log K), beyond which the achievable sum-rate stabilizes. Only in the special case of M = Θ(log K) is C* = Θ(N log K) = Θ(N M).
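Step 2 of the achievability scheme, i.e. the scheduling rule (5.33), can be sketched as follows. The metric is computed centrally here for convenience; in the actual scheme each user computes its own (i, n) entries and feeds them back. Shapes and power value are assumptions of this sketch.

```python
import numpy as np

def distributed_schedule(gamma, P):
    """On each (TX i, block n), pick the user maximizing the metric of (5.33).
    gamma has shape (B, K, N); P is the fixed per-block power of step 1."""
    signal = P * gamma                                   # P_{i,n} gamma_{i,k,n}
    interference = signal.sum(axis=0, keepdims=True) - signal
    metric = signal / (1.0 + interference)               # value each user feeds back
    return metric.argmax(axis=1)                         # shape (B, N): scheduled users

rng = np.random.default_rng(2)
B, K, N = 3, 50, 4
schedule = distributed_schedule(rng.exponential(1.0, size=(B, K, N)), P=0.5)
```

Because each TX only needs the scalar feedback values for its own (i, n) pairs, no channel-state information is exchanged between transmitters.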

Now, comparing the low-powered peer-to-peer network to a high-powered single-TX system with P_{1,n} = B P̄ for all n, we have, for the single-TX system, C* = Θ(N log(B P̄ log K)). Hence, for fair comparison, the gain of peer-to-peer networks over a high-powered single-TX system is given by (5.34).

5.6 A Note on MISO vs SISO Systems

Until now, we discussed systems where either every transmitter had a single antenna or different transmitters were treated as geographically distributed antennas with independent power constraints (i.e., Pcon at each TX). We wrap up our analysis with a discussion on multiple antennas at each TX, followed by conclusions in Section 5.7.

Assume that each TX has M antennas and each user (or, receiver) has a single antenna. We use the opportunistic random scheduling scheme proposed in [67], which achieves the sum-rate capacity in the scaling sense for fixed power-allocation schemes. Every TX constructs M orthonormal random beams φ_m (M × 1) for m ∈ {1, ..., M} using an isotropic distribution [83]. With some abuse of notation, suppose that the signal received by the user scheduled by TX i across resource block n using beam m, denoted by u(i, n, m), is given by

y_{u(i,n,m),n} = H_{i,u(i,n,m),n} ( φ_m x_{i,n,m} + Σ_{m′≠m} φ_{m′} x_{i,n,m′} ) + Σ_{j≠i} Σ_{m̃=1}^{M} H_{j,u(i,n,m),n} φ_{m̃} x_{j,n,m̃} + w_{u(i,n,m),n},   (5.37)

where H_{i,k,n} = β R_{i,k}^{−α} ν_{i,k,n} ∈ C^{1×M} is the channel-gain matrix, ν_{i,k,n} is a 1 × M vector containing i.i.d. complex Gaussian random variables with entries ~ CN(0, 1), and w_{k,n} ~ CN(0, 1) is AWGN that is i.i.d. for all (k, n). Abbreviating E{|x_{i,n,m}|²} by P_{i,n,m}, we can write the SINR corresponding to the combination (i, n, m), for user k, as

SINR_{i,k,n,m} = P_{i,n,m} γ_{i,k,n,m} / ( N0 + Σ_{m′≠m} P_{i,n,m′} γ_{i,k,n,m′} + Σ_{j≠i} Σ_{m̃=1}^{M} P_{j,n,m̃} γ_{j,k,n,m̃} ),   (5.38)

where γ_{i,k,n,m} = |H_{i,k,n} φ_m|² for all (i, k, n, m). Since the H_{i,k,n} φ_m are i.i.d. over m, each user feeds back its SINRs, and each TX schedules, on every beam, the user with the highest SINR [67]. A lower bound on the sum-rate capacity, similar to that in (5.22), under opportunistic random beamforming can be written as:

C*_LB,MISO = max_{ {P_{i,n,m}} } E[ max_{ {u(i,n,m)} } Σ_{i=1}^{B} Σ_{n=1}^{N} Σ_{m=1}^{M} log( 1 + SINR_{i,u(i,n,m),n,m} ) ]   (5.39)
s.t.  Σ_{n,m} P_{i,n,m} ≤ Pcon for all i,  and  P_{i,n,m} ≥ 0 for all i, n, m.   (5.40)

The above optimization problem is similar to that in (5.22) with B M transmitters. Therefore, repeating the analysis in (5.22)-(5.30) under dense networks for the problem in (5.39)-(5.40), we get

(1 − 1/log K) OP_MISO( r_0, K/log log K ) ≤ C*_LB,MISO ≤ ( 1 + β² r_0^{−2α} ū / l̄(2p, K) ) OP_MISO(2p, K),   (5.41)

where OP_MISO(c, h(K)) is defined analogously to (5.25)-(5.26):

OP_MISO(c, h(K)) := max_{ {P_{i,n,m} ≥ 0 : Σ_{n,m} P_{i,n,m} ≤ Pcon for all i} } Σ_{i=1}^{B} Σ_{n=1}^{N} Σ_{m=1}^{M} log(1 + P_{i,n,m} x_{i,n,m})   (5.42)
s.t.  x_{i,n,m} = β² c^{−2α} log( r_0² h(K)/p² ) / ( 1 + Σ_{j≠i} Σ_{m̃=1}^{M} P_{j,n,m̃} β² r_0^{−2α} ).
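The random orthonormal beams of (5.37) and the per-beam SINR of (5.38) can be sketched numerically: a QR decomposition of a complex Gaussian matrix yields an isotropically distributed orthonormal beam set, and each beam's SINR then follows from (5.38). The single-TX case (B = 1), equal powers, and N0 = 1 are simplifying assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
M, P, N0 = 4, 1.0, 1.0

# isotropically distributed orthonormal beams: QR of a complex Gaussian matrix
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
Phi, _ = np.linalg.qr(A)                              # columns are the beams phi_m

h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)  # 1 x M channel
gains = np.abs(h @ Phi) ** 2                          # gamma_m = |H phi_m|^2 per beam
sinr = P * gains / (N0 + P * (gains.sum() - gains))   # (5.38): other beams interfere
```

The denominator shows why adding beams is not free: each extra beam contributes both a new signal dimension and extra intra-cell interference, which is the mechanism behind the saturation discussed below.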

Furthermore,

0.63 ≤ C*_LB,MISO / OP_MISO(r_0, K) ≤ (2p/r_0)^{2α} + O(1/log K).   (5.43)

5.7 Conclusion

In this paper, we developed bounds for the downlink sum-rate capacity in large OFDMA-based networks and derived the associated scaling laws with respect to the number of users K, the number of transmitters B, and the number of resource-blocks N. Our bounds hold for a general spatial distribution of transmitters, a truncated path-loss model, and a general channel-fading model. We evaluated the bounds in dense and extended networks, in which transmitter nodes are distributed uniformly, for Rayleigh, Nakagami-m, Weibull, and LogNormal fading models. Using these results, we developed four design principles for service providers and regulators to achieve QoS provisioning along with system scalability. According to the first principle, in dense-femtocell deployments with a minimum per-user throughput requirement, we showed that the system is scalable only if B N scales as Ω(K / log log K). In the second and third principles, we also considered the cost of bandwidth to the service provider along with the cost of the transmitters and showed that, for fixed B, the system is scalable only if N = O(log K), and for fixed N, the system is scalable only if B = O(K). In the fourth principle, we considered different pricing policies and showed that the user density must be kept within a finite range of values in order to maximize the return-on-investment while maintaining a minimum per-user rate. Thereafter, towards developing an achievability scheme, we proposed a deterministic distributed resource allocation scheme and developed more design principles. In particular, we showed that the sum-rate capacity of a peer-to-peer network with B transmitters (or, a MISO system with B transmitter antennas at a single transmit node) increases with B only when B = O(log K / log log K).
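The saturation just described — sum-rate linear in B up to roughly log K / log log K, flat thereafter — can be tabulated with a one-line helper based on (5.36); constants are suppressed, as in any Θ(·) statement.

```python
import math

def sumrate_order(B, N, K):
    """Order-level sum-rate under fixed power allocation, per (5.36):
    min{ B N loglog K, N log K }, constants suppressed."""
    return min(B * N * math.log(math.log(K)), N * math.log(K))
```

For K = 10^6 the crossover sits near B ≈ log K / log log K ≈ 5: doubling B from 2 to 4 doubles the value, while doubling B from 8 to 16 changes nothing.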

Chapter 6: Conclusions and Future Work

In this dissertation, we studied downlink resource allocation strategies for point-to-point systems, single-cell OFDMA (Orthogonal Frequency Division Multiple Access) systems, and multi-cell OFDMA systems, and proposed guidelines for service providers to design efficient wireless communication systems. First, for point-to-point systems, we proposed greedy rate-adaptation schemes based on ACK/NAK (Acknowledgement/Negative Acknowledgement) feedback for continuous-state channels. We showed that our greedy rate adaptation scheme performs significantly better than the fixed-rate scheme and is close to an upper bound on the optimal POMDP-based rate-adaptation scheme, especially under slow-fading channels. Second, for single-cell OFDMA downlink systems, we proposed simultaneous user-scheduling and resource (power and rate) allocation algorithms under imperfect channel-state information. In cases where subchannel-sharing among users is allowed, we proposed an optimal algorithm, and for cases in which one subchannel is assigned to one user only, we proposed an algorithm that was near-optimal. Further, we gave theoretical performance guarantees as a function of the number of iterations of the algorithm; our algorithm is faster than the traditional subgradient-based/golden-section-based algorithms. Finally, we considered large multi-cellular OFDMA-based networks and proposed performance bounds as a function of the number of users K, the number of base-stations B, and the number of resource-blocks N. In particular, we derived novel upper and lower bounds on the achievable sum-rate for a general spatial geometry of transmitters (or, base-stations), a truncated path-loss model, and a variety of fading models (Rayleigh, Nakagami-m, Weibull, and LogNormal). We also derived the associated scaling laws and developed design principles for service providers, along with some guidelines for the regulators, in order to achieve provisioning of various QoS (Quality of Service) guarantees for the end users and, at the same time, maximize revenue for the service providers. Furthermore, we provided a scheme that achieves the same sum-rate scaling as that of the optimal resource allocation scheme.

Some future work directions in which we wish to continue our work are as follows:

1. Considering imperfect channel-state information (CSI) in multi-transmitter systems and developing distributed scheduling and resource allocation schemes (similar to those proposed in Chapter 5) that achieve the same sum-rate scaling as that of the optimal resource allocation scheme remains a topic of future work. The motivation for this line of work comes from the question: How does the accuracy of CSI affect the sum-rate of a communication system? Another relevant question that needs to be answered is: Under a given pricing scheme, what trade-offs exist between accuracy of CSI and the accumulated revenue?

2. Our analysis in Chapter 5 was based on transmitters that did not cooperate to send data to a particular receiver. Similar analyses can be done for systems that allow transmitter cooperation and beamforming techniques at the transmitters/receivers. This may possibly lead to the answer to questions such as: When does beamforming fail to provide significant gain in the sum-rate of large dense networks? The motivation behind this idea is that, in large dense networks, due to the availability of a large number of transmitters within a fixed area, there will always exist transmitters that are close to a given user and observe strong channel conditions between themselves and that user. Thus, user scheduling alone may perform close to beamforming. However, with a large number of transmitters in a dense network (with fixed network size), the optimal resource allocation scheme might force some transmitters to not transmit at all. This will reduce the interference caused to other transmitters' signals received at user-terminals; however, it will also limit the beamforming gain.

3. Extensions can also be made to OFDMA systems with multiple antennas at the transmitters and receivers, i.e., large MIMO-OFDMA (Multiple Input Multiple Output OFDMA) systems. Related results involving multiple antennas at transmitters and a single antenna at receivers have been discussed in Chapter 5. Thus, a more general analysis for MIMO-OFDMA systems remains a topic of future work.

4. Another potential problem involves resource allocation under a combination of both uplink and downlink communication. Consider the case where only two users (say, A and B) communicate via a communication link. The transmitting user (say, A) sends data to its serving base-station (say, BS-A), which then sends the data to another base-station (say, BS-B) that serves the receiving user B. The amount of data that can be sent on this communication link is limited by the channel conditions in the uplink channel (from A to BS-A), the downlink channel (from BS-B to B), and the rate of incoming data at the transmitting user A.

Thus, allocation of resources needs to be done so that the sum-rate/throughput of this system is maximized, where the sum-rate/throughput should take into account all the aforementioned factors, i.e., the arrival rate of data and the uplink and downlink channel-conditions.

Appendix A: Proofs in Chapter 2

Here, we derive the expression for p(γ_t | γ_{t−nd}) given in (2.44). Let g_{t,R} and g_{t,I} be the real and imaginary parts of the channel gain g_t, and let g_{t−nd} = |g_{t−nd}| e^{jθ} for θ ~ U(0, 2π). Then

p(γ_t | γ_{t−nd}) = ∫_0^{2π} p(γ_t | γ_{t−nd}, θ) p(θ) dθ.   (A.1)

We first find p(|g_t| | γ_{t−nd}, θ) in order to evaluate p(γ_t | γ_{t−nd}). Since

g_t = (1 − α)^{nd} |g_{t−nd}| e^{jθ} + Z,  for  Z = α Σ_{j=0}^{nd−1} (1 − α)^j w_{t−j},   (A.2)

and |g_{t−nd}| = √(γ_{t−nd}/K), the random variables g_{t,R} and g_{t,I}, conditional on the pair (γ_{t−nd}, θ), are both Gaussian with means

E{g_{t,R} | γ_{t−nd}, θ} = (1 − α)^{nd} √(γ_{t−nd}/K) cos θ,   (A.3)
E{g_{t,I} | γ_{t−nd}, θ} = (1 − α)^{nd} √(γ_{t−nd}/K) sin θ,   (A.4)

respectively, and variance σ_Z² = E{Z²}. Thus, conditional on (γ_{t−nd}, θ), the random variable |g_t| = √(g_{t,R}² + g_{t,I}²) is Rician [1, p. 78]:

p(|g_t| | γ_{t−nd}, θ) = (|g_t|/σ_Z²) exp( −(|g_t|² + (1 − α)^{2nd} γ_{t−nd}/K) / (2σ_Z²) ) I_0( |g_t| (1 − α)^{nd} √(γ_{t−nd}/K) / σ_Z² ).   (A.5)

One can see that, given γ_{t−nd}, the random variable |g_t| is independent of θ. Since γ_t = K |g_t|², we have

p(γ_t | γ_{t−nd}, θ) = (1/(2Kσ_Z²)) exp( −(γ_t + (1 − α)^{2nd} γ_{t−nd}) / (2Kσ_Z²) ) I_0( (1 − α)^{nd} √(γ_t γ_{t−nd}) / (Kσ_Z²) ).   (A.6)

Hence, combining (A.1) and (A.6), we get

p(γ_t | γ_{t−nd}) = (1/(2Kσ_Z²)) exp( −(γ_t + (1 − α)^{2nd} γ_{t−nd}) / (2Kσ_Z²) ) I_0( (1 − α)^{nd} √(γ_t γ_{t−nd}) / (Kσ_Z²) ).   (A.7)

Finally, plugging σ_Z² = (α/(2 − α)) (1 − (1 − α)^{2nd}) into (A.7) yields (2.44).
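As a sanity check on (A.5), the conditional density is a proper Rician density and should integrate to one for any parameter choice; the numeric values below are arbitrary test parameters, and the grid upper limit is chosen so the tail is negligible.

```python
import numpy as np

# arbitrary test parameters
alpha, nd, K, gamma_prev = 0.1, 2, 1.0, 1.5
sigma2 = (alpha / (2 - alpha)) * (1 - (1 - alpha) ** (2 * nd))   # sigma_Z^2 as in (A.7)
nu = (1 - alpha) ** nd * np.sqrt(gamma_prev / K)                 # Rician non-centrality

g = np.linspace(0.0, 3.0, 300_001)
pdf = (g / sigma2) * np.exp(-(g**2 + nu**2) / (2 * sigma2)) * np.i0(g * nu / sigma2)
integral = pdf.sum() * (g[1] - g[0])    # Riemann sum over a fine grid
```

np.i0 is NumPy's modified Bessel function of the first kind of order zero, i.e. the I_0(·) appearing in (A.5)-(A.7).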

mFn.k.m γn.m as gradient of −In.k.m (In. In. we have the diﬀerentiable function given by In.t.k.m γn.k.m.m U .k.m ˜′ −am bm rm U n.m (1 − am e−bm xn.1 Proof for convexity of CSRA problem In this proof.k e In. For this.k.m −bm xn.k.4).m E Un.m )rm ˜n.2) where the expectation is over γn.m (In.m ˜n. is a convex function of I (∈ ICSRA ) and x.k. Therefore.m + m m m n.k .k e−bm xn.k. xn.k. (B.k. Then. For this.m γn.m γn.k. m.k.k.r. xn. ﬁrst we consider the case where In.m) = −In.k.m (B.k.k.k /In. We show that the Hessian of this function with respect to In.m as Wn.k.k.m is positive semi-deﬁnite.k.k /In.k.k. the gradient can be 139 . ﬁrst we write the ˜n.1) where Fn. let us denote am bm rm γn.k e a b r x ⊺ .k.k.m w. k. we will show that the primal objective function in the CSRA problem given by In. For simplicity.k.mFn.Appendix B: Proofs in Chapter 3 B.k /In. xn.k. (B.k.m(In.m U −bm xn.m γn.m.m.k.k /In.k.3) where ⊺ denotes the transpose of a matrix and ′ denotes the derivative.k.m γn.k.m and xn.m.k.m) is deﬁned in (3.k.m).k.m and xn. n.m > 0 ∀n.m U ˜′ −U n.k.k.

Using the gradient, the Hessian of (B.1) with respect to (I_{n,k,m}, x_{n,k,m}) can be written as

  ∇²( I_{n,k,m} F_{n,k,m} ) = ( E{ b_m γ_{n,k} Ũ′_{n,k,m} W_{n,k,m} − Ũ″_{n,k,m} W_{n,k,m}² } / I_{n,k,m} ) [ x_{n,k,m}²/I_{n,k,m}²  −x_{n,k,m}/I_{n,k,m} ; −x_{n,k,m}/I_{n,k,m}  1 ].   (B.4)

Since Ũ_{n,k,m}(·) is an increasing concave function, Ũ′_{n,k,m} ≥ 0 and Ũ″_{n,k,m} ≤ 0, so the scalar factor in (B.4) is non-negative. The matrix itself equals v v⊺ for v = (x_{n,k,m}/I_{n,k,m}, −1)⊺ and is therefore positive semi-definite. Hence, the Hessian is positive semi-definite and (B.1) is convex in (I_{n,k,m}, x_{n,k,m}) whenever I_{n,k,m} > 0.
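The positive semi-definiteness above reflects the fact that I F(I, x) has the perspective form I·g(x/I) with g convex. Joint convexity can also be sanity-checked numerically; a minimal sketch, with u = log1p standing in for the (here unspecified) increasing concave utility and illustrative constants:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, r, gamma = 0.5, 1.0, 2.0, 0.8      # illustrative constants, not from the text

def u(y):                                # any increasing concave utility; log(1+y) as a stand-in
    return np.log1p(y)

def f(I, x):                             # one term of the CSRA objective: I * F(I, x)
    return -I * u((1 - a * np.exp(-b * gamma * x / I)) * r)

# Midpoint-convexity check over random positive points (I, x).
for _ in range(1000):
    I1, x1, I2, x2 = rng.uniform(0.1, 5.0, 4)
    mid = f((I1 + I2) / 2, (x1 + x2) / 2)
    avg = (f(I1, x1) + f(I2, x2)) / 2
    assert mid <= avg + 1e-9, "convexity violated"
print("midpoint convexity holds on all sampled pairs")
```

Because g(t) = −u((1 − a e^{−bγt}) r) is convex (concave-increasing u composed with a concave-increasing argument, then negated), its perspective I·g(x/I) is convex on I > 0, so the check passes for any sampled pair.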

Now, let us suppose that, for a particular (n, k, m), I_{n,k,m} = 0. To show convexity in this case, without loss of generality, consider two points in the domain of the CSRA problem, (I⁽¹⁾, x⁽¹⁾) and (I⁽²⁾, x⁽²⁾), with I⁽¹⁾_{n,k,m} = 0, and a given λ ∈ [0, 1]. Convexity of the term (B.1) requires

  ( λI⁽¹⁾_{n,k,m} + (1−λ)I⁽²⁾_{n,k,m} ) F_{n,k,m}( λI⁽¹⁾_{n,k,m} + (1−λ)I⁽²⁾_{n,k,m}, λx⁽¹⁾_{n,k,m} + (1−λ)x⁽²⁾_{n,k,m} )
    ≤ λ I⁽¹⁾_{n,k,m} F_{n,k,m}(I⁽¹⁾_{n,k,m}, x⁽¹⁾_{n,k,m}) + (1−λ) I⁽²⁾_{n,k,m} F_{n,k,m}(I⁽²⁾_{n,k,m}, x⁽²⁾_{n,k,m}).

Since I⁽¹⁾_{n,k,m} = 0, and assuming (1−λ)I⁽²⁾_{n,k,m} > 0 (otherwise both sides are zero), this is equivalent to

  E{ Ũ_{n,k,m}( (1 − a_m e^{−b_m γ_{n,k} (λx⁽¹⁾_{n,k,m} + (1−λ)x⁽²⁾_{n,k,m}) / ((1−λ)I⁽²⁾_{n,k,m})}) r_m ) } ≥ E{ Ũ_{n,k,m}( (1 − a_m e^{−b_m γ_{n,k} x⁽²⁾_{n,k,m} / I⁽²⁾_{n,k,m}}) r_m ) }.

Since x⁽¹⁾_{n,k,m} ≥ 0, the argument of Ũ_{n,k,m}(·) on the left-hand side is greater than, if not equal to, that on the right-hand side; and since Ũ_{n,k,m}(·) is an increasing function, the above inequality is satisfied (with equality if x⁽¹⁾_{n,k,m} = 0). The above proof shows that I_{n,k,m} F_{n,k,m}(I_{n,k,m}, x_{n,k,m}) is convex in I_{n,k,m} and x_{n,k,m}. Since the primal objective function of the CSRA problem is a sum of such convex terms, it is also convex in I and x.

B.2 Proof of Lemma 2

Say that µ1 < µ2, where µ1, µ2 ∈ [µmin, µmax]. Recall from (3.6) that, after fixing µ, the minimization problem reduces to

  L(µ, I*(µ), x*(µ, I*(µ))) = min_{I ∈ I_CSRA} min_{x ≥ 0} L(µ, I, x)
    = min_{I ∈ I_CSRA} min_{x ≥ 0} Σ_{n,k,m} I_{n,k,m} F_{n,k,m}(I_{n,k,m}, x_{n,k,m}) + ( Σ_{n,k,m} x_{n,k,m} − Pcon ) µ.   (B.10)

Suppose that µ = µ1, so that (from (B.10)) the optimal values of I*(µ) and x*(µ, I*(µ)) are I*(µ1) and x*(µ1, I*(µ1)), respectively. Since I*(µ2) and x*(µ2, I*(µ2)) are suboptimal values at µ = µ1, we know

  L(µ1, I*(µ1), x*(µ1, I*(µ1))) ≤ L(µ1, I*(µ2), x*(µ2, I*(µ2))).   (B.11)

Similarly, L(µ2, I*(µ2), x*(µ2, I*(µ2))) ≤ L(µ2, I*(µ1), x*(µ1, I*(µ1))).   (B.12)

Abbreviating Σ_{n,k,m} I*_{n,k,m}(µ_i) F_{n,k,m}(I*_{n,k,m}(µ_i), x*_{n,k,m}(µ_i)) by G*(µ_i), the two inequalities become

  −µ1 Pcon + µ1 Σ_{n,k,m} x*_{n,k,m}(µ1) + G*(µ1) ≤ −µ1 Pcon + µ1 Σ_{n,k,m} x*_{n,k,m}(µ2) + G*(µ2),   (B.13)
  −µ2 Pcon + µ2 Σ_{n,k,m} x*_{n,k,m}(µ2) + G*(µ2) ≤ −µ2 Pcon + µ2 Σ_{n,k,m} x*_{n,k,m}(µ1) + G*(µ1).   (B.14)

Adding (B.13) and (B.14) and canceling common terms, we get

  (µ1 − µ2) ( X*tot(µ1) − X*tot(µ2) ) ≤ 0.   (B.15)

Since µ1 < µ2, we then have X*tot(µ1) ≥ X*tot(µ2).   (B.16)

Therefore, X*tot(µ) is monotonically decreasing in µ.
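The monotonicity of X*tot(µ) is what makes a bisection search over µ for the power-feasible point work. A minimal sketch with a hypothetical waterfilling-style per-subchannel power response (illustrative; not the chapter's exact p*(µ)):

```python
import numpy as np

# Toy per-subchannel power response: waterfilling-like, nonincreasing in mu.
gammas = np.array([0.5, 1.0, 2.0, 4.0])       # illustrative channel SNRs

def x_tot(mu):
    # p_n*(mu) = max(0, 1/mu - 1/gamma_n) is nonincreasing in mu,
    # so the total allocated power is nonincreasing in mu (Lemma 2).
    return np.maximum(0.0, 1.0 / mu - 1.0 / gammas).sum()

P_con = 2.0
lo, hi = 1e-6, 1e6                            # bracket [mu_min, mu_max]
for _ in range(200):                          # bisect on the monotone X_tot
    mid = 0.5 * (lo + hi)
    if x_tot(mid) > P_con:
        lo = mid                              # too much power -> raise mu
    else:
        hi = mid
mu_star = 0.5 * (lo + hi)
print(mu_star, x_tot(mu_star))
```

For these toy numbers the active set is {1.0, 2.0, 4.0} and the bisection converges to µ* = 0.8 with X*tot(µ*) = Pcon.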

B.3 Proof of Lemma 3

To compare the utilities obtained by the proposed CSRA algorithm and the exact CSRA solution, we compare the Lagrangian values achieved by the two solutions. For any µ, notice that

  L(µ, I*(µ), x*(µ, I*(µ))) = −U*(µ) + (X*tot(µ) − Pcon) µ,

where U*(µ) is the total utility achieved under the optimal power allocation at that µ. Recall that µ* ∈ [µ_, µ̄] ⊂ [µmin, µmax] and that, since µ* maximizes the dual function,

  L(µ*, I*(µ*), x*(µ*, I*(µ*))) − L(µ, I*(µ), x*(µ, I*(µ))) ≥ 0 for all µ;   (B.17)

in particular, the inequality holds at µ = µ_ and at µ = µ̄. Since the exact CSRA solution satisfies the sum-power constraint with equality at µ*, we have

  U*_CSRA = −L(µ*, I*(µ*), x*(µ*, I*(µ*))).

The solution of the proposed CSRA algorithm allocates resources such that the sum-power constraint is satisfied while achieving a Lagrangian value of

  L̂_CSRA = λ L(µ̄, I*(µ̄), x*(µ̄, I*(µ̄))) + (1−λ) L(µ_, I*(µ_), x*(µ_, I*(µ_))),   (B.18)

where λ ∈ [0, 1] is chosen such that λ X*tot(µ̄) + (1−λ) X*tot(µ_) = Pcon. The utility it achieves, Û_CSRA(µ_, µ̄), therefore satisfies

  Û_CSRA(µ_, µ̄) = −L̂_CSRA + (X*tot(µ̄) − Pcon)(µ̄ − µ_) λ,   (B.19)

where (B.19) holds since λ X*tot(µ̄) + (1−λ) X*tot(µ_) = Pcon. Consequently,

  0 ≤ U*_CSRA − Û_CSRA(µ_, µ̄) = −L(µ*, I*(µ*), x*(µ*, I*(µ*))) + L̂_CSRA − (X*tot(µ̄) − Pcon)(µ̄ − µ_) λ.   (B.20)

In this case, since (B.17) implies −L(µ*, I*(µ*), x*(µ*, I*(µ*))) + L̂_CSRA ≤ 0, we have from (B.20)

  0 ≤ U*_CSRA − Û_CSRA(µ_, µ̄) ≤ (Pcon − X*tot(µ̄))(µ̄ − µ_) λ ≤ (µ̄ − µ_) Pcon.   (B.21)

B.4 Proof of Lemma 4

Let µ̃ ∈ [µmin, µmax] be any value of the Lagrangian dual variable for the CSRA problem. Then, for any subchannel n, one of the following three cases holds:

1. |Sn(µ̃)| ≤ 1.
2. |Sn(µ̃)| > 1, but no two combinations in Sn(µ̃) have the same allocated power.
3. |Sn(µ̃)| > 1, and at least two combinations in Sn(µ̃) have the same allocated power.

We will now study each case individually.

B.4.1 Case I: |Sn(µ̃)| ≤ 1 ∀n

First, let us suppose that |Sn(µ̃)| = 0 for some n, so that V_{n,k,m}(µ̃, p*_{n,k,m}(µ̃)) > 0 for all (k, m). Since p*_{n,k,m}(µ) is a continuous function of µ, V_{n,k,m}(µ, p*_{n,k,m}(µ)) is a continuous function of µ.¹ Therefore, for each (k, m), we can fix a δ_{n,k,m} (> 0) such that

  | V_{n,k,m}(µ, p*_{n,k,m}(µ)) − V_{n,k,m}(µ̃, p*_{n,k,m}(µ̃)) | < ½ min_{k,m} V_{n,k,m}(µ̃, p*_{n,k,m}(µ̃))   (B.22)

whenever |µ − µ̃| < δ_{n,k,m}. In this case, I*(µ̃) ∈ {0, 1}^{N×K×M} can be found by using (3.16).

¹ By definition [84], if f(·) is a function continuous at x0, then for every ν > 0 we can fix a δ such that |f(x) − f(x0)| < ν whenever |x − x0| < δ.

This implies that V_{n,k,m}(µ, p*_{n,k,m}(µ)) > 0, and hence |Sn(µ)| = |Sn(µ̃)| = 0, whenever |µ − µ̃| < min_{k,m} δ_{n,k,m}.

Next, let us suppose that |Sn(µ̃)| = 1 with Sn(µ̃) = {(k*(n), m*(n))}. For any other combination (k, m) ≠ (k*(n), m*(n)), let us define

  W_{n,k,m}(µ) ≜ V_{n,k,m}(µ, p*_{n,k,m}(µ)) − V_{n,k*(n),m*(n)}(µ, p*_{n,k*(n),m*(n)}(µ)).

Clearly, W_{n,k,m}(µ) is a continuous function of µ and takes on a positive value at µ = µ̃ for all (k, m) ≠ (k*(n), m*(n)). Using arguments from the scenario |Sn(µ̃)| = 0 (see (B.22)), we can fix a δ_{n,k,m} > 0 such that W_{n,k,m}(µ) > 0, and hence Sn(µ) = {(k*(n), m*(n))} = Sn(µ̃), whenever |µ − µ̃| < δ_{n,k,m}.   (B.23)

Selecting δ1 = min_{n,k,m} δ_{n,k,m}, we have |Sn(µ)| ≤ 1 for all n whenever |µ − µ̃| < δ1. Therefore, if µ1, µ2 ∈ (µ̃ − δ1, µ̃ + δ1), we have Sn(µ1) = Sn(µ2) for all n and |Sn(µ1)| ≤ 1. This implies that I*(µ1), I*(µ2) ∈ {0, 1}^{N×K×M} and I*(µ1) = I*(µ2).

B.4.2 Case II: For some n, |Sn(µ̃)| > 1, but no two combinations in Sn(µ̃) have the same allocated power

In this case, let us start by giving the following definitions:

  (k*_max(n), m*_max(n)) ≜ argmax_{(k,m) ∈ Sn(µ̃)} p*_{n,k,m}(µ̃),  and
  (k*_min(n), m*_min(n)) ≜ argmin_{(k,m) ∈ Sn(µ̃)} p*_{n,k,m}(µ̃).

We will now show that we can fix a δ⁽²⁾ (> 0) such that Sn(µ) = {(k*_max(n), m*_max(n))} whenever µ ∈ (µ̃ − δ⁽²⁾, µ̃), and such that, whenever µ ∈ (µ̃, µ̃ + δ⁽²⁾), either Sn(µ) = {(k*_min(n), m*_min(n))} or Sn(µ) = ∅.

Consider the case µ < µ̃, and define

  T_{n,k,m}(µ, µ̃) ≜ [ V_{n,k,m}(µ, p*_{n,k,m}(µ)) − V_{n,k,m}(µ̃, p*_{n,k,m}(µ̃)) ] / (µ − µ̃).

Note that lim_{µ→µ̃} T_{n,k,m}(µ, µ̃) = ∂V_{n,k,m}(µ, p*_{n,k,m}(µ))/∂µ evaluated at µ = µ̃, where

  ∂V_{n,k,m}(µ, p*_{n,k,m}(µ))/∂µ = p*_{n,k,m}(µ) + µ ∂p*_{n,k,m}(µ)/∂µ
    − a_m b_m r_m E{ U′_{n,k,m}( (1 − a_m e^{−b_m p*_{n,k,m}(µ) γ_{n,k}}) r_m ) γ_{n,k} e^{−b_m p*_{n,k,m}(µ) γ_{n,k}} } ∂p*_{n,k,m}(µ)/∂µ.   (B.24)

Now, if (3.11) is satisfied for a non-negative value of p̃_{n,k,m}(µ), i.e., p*_{n,k,m}(µ) = p̃_{n,k,m}(µ), then the last two terms of (B.24) cancel and

  lim_{µ→µ̃} T_{n,k,m}(µ, µ̃) = ∂V_{n,k,m}(µ, p*_{n,k,m}(µ))/∂µ = p*_{n,k,m}(µ).   (B.25)

On the contrary, if (3.11) is not satisfied for a non-negative value of p̃_{n,k,m}(µ), then µ > a_m b_m r_m U′_{n,k,m}((1 − a_m) r_m) E{γ_{n,k}}. Since the RHS of (3.11) is a continuous function of p̃_{n,k,m}(µ), we have p*_{n,k,m}(µ) = 0 in a small neighborhood around µ, giving ∂p*_{n,k,m}(µ)/∂µ = 0. Therefore, (B.25) is true for all possible values of µ.

Then, for any (k, m) ∈ Sn(µ̃) \ {(k*_max(n), m*_max(n))}, we have

  lim_{µ→µ̃} [ V_{n,k*_max(n),m*_max(n)}(µ, p*_{n,k*_max(n),m*_max(n)}(µ)) − V_{n,k,m}(µ, p*_{n,k,m}(µ)) ] / (µ − µ̃)
    = lim_{µ→µ̃} T_{n,k*_max(n),m*_max(n)}(µ, µ̃) − T_{n,k,m}(µ, µ̃)   (B.26)
    = p*_{n,k*_max(n),m*_max(n)}(µ̃) − p*_{n,k,m}(µ̃) > 0,   (B.27)

where (B.27) follows from (B.25). Note that the denominator in (B.26) is less than zero for µ < µ̃. Therefore, we can fix a δ¹_{n,k,m} (> 0) such that

  V_{n,k*_max(n),m*_max(n)}(µ, p*_{n,k*_max(n),m*_max(n)}(µ)) < V_{n,k,m}(µ, p*_{n,k,m}(µ))   (B.28)

whenever 0 < µ̃ − µ < δ¹_{n,k,m}. Since this is true for all (k, m) ∈ Sn(µ̃) \ {(k*_max(n), m*_max(n))}, selecting δ⁽¹⁾ = min{δ1, min_{n,k,m} δ¹_{n,k,m}}, we have Sn(µ) = {(k*_max(n), m*_max(n))} whenever µ ∈ (µ̃ − δ⁽¹⁾, µ̃). Note that V_{n,k,m}(µ, p*_{n,k,m}(µ)) is an increasing function of µ, implying that

  V_{n,k*_max(n),m*_max(n)}(µ, p*_{n,k*_max(n),m*_max(n)}(µ)) ≤ V_{n,k*_max(n),m*_max(n)}(µ̃, p*_{n,k*_max(n),m*_max(n)}(µ̃)) ≤ 0.

Now, consider µ > µ̃. Using the arguments from the above discussion, we obtain that, for any (k, m) ∈ Sn(µ̃) \ {(k*_min(n), m*_min(n))}, there exists a δ²_{n,k,m} such that

  V_{n,k*_min(n),m*_min(n)}(µ, p*_{n,k*_min(n),m*_min(n)}(µ)) < V_{n,k,m}(µ, p*_{n,k,m}(µ))

whenever 0 < µ − µ̃ < δ²_{n,k,m}. Suppose V_{n,k*_min(n),m*_min(n)}(µ̃, p*_{n,k*_min(n),m*_min(n)}(µ̃)) < 0. Then we can fix a δ3 (> 0) such that

  V_{n,k*_min(n),m*_min(n)}(µ, p*_{n,k*_min(n),m*_min(n)}(µ)) < 0

whenever 0 ≤ µ − µ̃ < δ3. Selecting δ⁽²⁾ = min{δ1, min_{n,k,m} δ²_{n,k,m}, δ3}, we then have Sn(µ) = {(k*_min(n), m*_min(n))} whenever µ ∈ (µ̃, µ̃ + δ⁽²⁾). However, if V_{n,k*_min(n),m*_min(n)}(µ̃, p*_{n,k*_min(n),m*_min(n)}(µ̃)) = 0, then selecting δ⁽²⁾ = min{δ1, min_{n,k,m} δ²_{n,k,m}}, we have Sn(µ) = ∅ whenever µ ∈ (µ̃, µ̃ + δ⁽²⁾). In both situations, if µ1, µ2 ∈ (µ̃, µ̃ + δ⁽²⁾), we get Sn(µ1) = Sn(µ2).

Summarizing the discussion for case II: selecting δ2 = min{δ⁽¹⁾, δ⁽²⁾}, we have I*(µ) ∈ {0, 1}^{N×K×M} whenever |µ − µ̃| < δ2, µ ≠ µ̃. Furthermore, if µ1, µ2 ∈ (µ̃ − δ2, µ̃) or if µ1, µ2 ∈ (µ̃, µ̃ + δ2), then Sn(µ1) = Sn(µ2) for all n and I*(µ1) = I*(µ2).

B.4.3 Case III: For some n, |Sn(µ̃)| > 1, and at least two combinations in Sn(µ̃) have the same allocated power

Let us say that (k1, m1) and (k2, m2) lie in the set Sn(µ̃) for some n and p*_{n,k1,m1}(µ̃) = p*_{n,k2,m2}(µ̃). Let us also assume that I*_{n,k1,m1}(µ̃) and I*_{n,k2,m2}(µ̃) are the corresponding optimal allocations. Then, their contribution towards the optimal Ln(µ̃, ·) is

  I*_{n,k1,m1}(µ̃) V_{n,k1,m1}(µ̃, p*_{n,k1,m1}(µ̃)) + I*_{n,k2,m2}(µ̃) V_{n,k2,m2}(µ̃, p*_{n,k2,m2}(µ̃))
    = ( I*_{n,k1,m1}(µ̃) + I*_{n,k2,m2}(µ̃) ) V_{n,k1,m1}(µ̃, p*_{n,k1,m1}(µ̃)).

Moreover, the total power allocated is

  I*_{n,k1,m1}(µ̃) p*_{n,k1,m1}(µ̃) + I*_{n,k2,m2}(µ̃) p*_{n,k2,m2}(µ̃) = ( I*_{n,k1,m1}(µ̃) + I*_{n,k2,m2}(µ̃) ) p*_{n,k1,m1}(µ̃).   (B.29)

Therefore, we can ignore (k2, m2) and give its allocation to (k1, m1) while maintaining optimality. Mathematically, the allocation Ĩ*(µ̃) defined by

  Ĩ*_{n,k,m}(µ̃) = { I*_{n,k,m}(µ̃),  if (n, k, m) ∉ {(n, k1, m1), (n, k2, m2)};
                     I*_{n,k1,m1}(µ̃) + I*_{n,k2,m2}(µ̃),  if (n, k, m) = (n, k1, m1);
                     0,  if (n, k, m) = (n, k2, m2) }   (B.30)

also achieves the optimal Lagrangian value at µ = µ̃ and has the same allocated sum-power. Note that this can be repeated for every subchannel n and, for each n, over all such combinations with equal allocated powers.

Combining cases I, II, and III, we can fix a δ = min{δ1, δ2} for every µ̃ such that I*(µ) ∈ {0, 1}^{N×K×M} whenever |µ − µ̃| < δ, µ ≠ µ̃. Moreover, for all µ1, µ2 ∈ (µ̃ − δ, µ̃), there exist I*(µ1), I*(µ2) ∈ {0, 1}^{N×K×M} such that I*(µ1) = I*(µ2). The same property holds when both µ1, µ2 ∈ (µ̃, µ̃ + δ).
148

B.5

Proof of Lemma 5

**The proof involves the use of generalized Lagrange multiplier method. From (3.6), we have I∗ (µ), x∗ (µ, I∗(µ)) = argmin
**

x 0 n,k,m I∈ICSRA

In,k,m Fn,k,m(In,k,m , xn,k,m)+

n,k,m

xn,k,m −Pcon µ, (B.31)

**where Fn,k,m (·, ·) is deﬁned in (3.4). Since, I∗ (µ) ∈ {0, 1}N ×K ×M by assumption and I∗ (µ) ∈ ICSRA , we have I∗ (µ) ∈ IDSRA ⊂ ICSRA . Therefore, I∗ (µ), x∗ (µ, I∗(µ)) = argmin
**

x 0 n,k,m I∈IDSRA

In,k,m Fn,k,m(In,k,m , xn,k,m)+

n,k,m

xn,k,m −Pcon µ. (B.32)

Reference [85] clearly states that the theory of Lagrangian extends to arbitrary realvalued functions (diﬀerentiable or not) over arbitrary constraint sets. In particular, applying [85, Theorem 1] to (B.32), we deduce that I∗ = I∗ (µ) and X∗ = x∗ (µ, I∗(µ), where (I∗ , X∗ ) solves the following (primal) problem (I∗ , X∗ ) = argmin

{X 0} n,k,m I∈IDSRA

In,k,mFn,k,m(In,k,m, Xn,k,m) Xn,k,m ≤

∗ x∗ n,k,m (µ, I (µ)). n,k,m

s.t.

n,k,m

(B.33)

**Substituting back Xn,k,m = In,k,mPn,k,m in the above equation, we get (I∗ (µ), p∗ (µ, I∗(µ)) = argmin −
**

{X 0} I∈IDSRA

n,k,m

In,k,m E Un,k,m (1 − am e−bm Pn,k,m γn,k )rm

∗ x∗ n,k,m (µ, I (µ)), n,k,m

s.t.

n,k,m

In,k,mPn,k,m ≤

(B.34)

where

∗ p∗ n,k,m (µ, I (µ))

=

∗ x∗ n,k,m (µ,I (µ)) ∗ (µ) In,k,m

∗ if In,k,m (µ ) = 0

0 149

otherwise.

(B.35)

B.6 Proof of Lemma 6

Let U*(I) be the optimal utility achieved for the user-MCS allocation matrix I ∈ I_DSRA, so that Û_DSRA = max{U*(Imin), U*(Imax)}. The left inequality in the lemma is straightforward, since U*_DSRA ≥ Û_DSRA. We recall from Section 3.4.3 that, if |Sn(µ*)| ≤ 1 for all n, then U*_DSRA = U*_CSRA = Û_DSRA, ensuring that the solution obtained via the proposed DSRA algorithm is optimal in the limit µ_, µ̄ → µ*. Now consider the case where |Sn(µ*)| > 1 for some n. In this case, Pcon lies in one of the "gaps" mentioned in Fig. 3.3 and I*_CSRA ∉ I_DSRA. For brevity in this proof, let us denote Imin(µ*) and Imax(µ*) (∈ I_DSRA) by Imin and Imax, respectively. Since, at µ*, the allocation Imin is one of the possibly many values of I minimizing L(µ*, I, x), and since Û_DSRA ≥ U*(Imin), we have

  0 ≤ U*_CSRA − Û_DSRA ≤ U*_CSRA − U*(Imin)
    = −L(µ*, Imin, x*(µ*, Imin)) + L(µ*_Imin, Imin, x*(µ*_Imin, Imin)),   (B.36)

where µ*_Imin is the value of µ at which the allocation Imin satisfies the sum-power constraint with equality; here, we use the equivalence between L(µ, I, x) in (3.5) and L_I(µ, x). Note that µ*_Imin ≤ µ*, since the total optimally allocated power for Imin at µ = µ* is less than or equal to Pcon, and the total optimally allocated power for any given I is a decreasing function of µ.

Therefore, X*tot(Imin, µ*) ≤ Pcon and X*tot(Imin, µ*_Imin) = Pcon. Plugging L(·,·,·) from (3.5) into (B.36), we get

  U*_CSRA − Û_DSRA ≤ µ* ( Pcon − X*tot(Imin, µ*) ) − Σ_{n,k,m} I^min_{n,k,m} [ Ū_{n,k,m}(p*_{n,k,m}(µ*_Imin)) − Ū_{n,k,m}(p*_{n,k,m}(µ*)) ],   (B.37)

where

  Ū_{n,k,m}(x) ≜ E{ U_{n,k,m}( (1 − a_{k,m} e^{−b_{k,m} x γ_{n,k}}) r_{k,m} ) }.   (B.38)

Calculating the first two derivatives of Ū_{n,k,m}(x) with respect to x, we find that it is a strictly-increasing concave function of x. Therefore, if x1 ≤ x2, one can write

  Ū_{n,k,m}(x2) − Ū_{n,k,m}(x1) ≥ (x2 − x1) Ū′_{n,k,m}(x2).   (B.39)

Plugging x1 = p*_{n,k,m}(µ*) and x2 = p*_{n,k,m}(µ*_Imin) into this inequality (x1 ≤ x2 because µ*_Imin ≤ µ* and the optimal power allocation is non-increasing in µ), we get

  U*_CSRA − Û_DSRA ≤ µ* ( Pcon − X*tot(Imin, µ*) ) − Σ_{n,k,m} I^min_{n,k,m} ( p*_{n,k,m}(µ*_Imin) − p*_{n,k,m}(µ*) ) Ū′_{n,k,m}(x) |_{x = p*_{n,k,m}(µ*_Imin)}.   (B.40)

Evaluating Ū′_{n,k,m}(x) at x = p*_{n,k,m}(µ*_Imin), we find

  Ū′_{n,k,m}(x) |_{x = p*_{n,k,m}(µ*_Imin)} = a_{k,m} b_{k,m} r_{k,m} E{ U′_{n,k,m}( (1 − a_{k,m} e^{−b_{k,m} p*_{n,k,m}(µ*_Imin) γ_{n,k}}) r_{k,m} ) γ_{n,k} e^{−b_{k,m} p*_{n,k,m}(µ*_Imin) γ_{n,k}} } ≥ µmin.

Since Σ_{n,k,m} I^min_{n,k,m} ( p*_{n,k,m}(µ*_Imin) − p*_{n,k,m}(µ*) ) = Pcon − X*tot(Imin, µ*) ≥ 0, from (B.40) and the above we finally obtain

  U*_CSRA − Û_DSRA ≤ (µ* − µmin) ( Pcon − X*tot(Imin, µ*) )   (B.41)
    ≤ (µmax − µmin) Pcon.   (B.42)

Appendix C: Proofs in Chapter 5

C.1 Proof of Theorem 1

By ignoring the interference, we have

  C_{x,y,ν}(U, P) ≤ Σ_{i=1}^{B} Σ_{n=1}^{N} log( 1 + P_{i,U_{i,n},n} γ_{i,U_{i,n},n} ).   (C.1)

Taking the expectation with respect to {x, y, ν}, we have

  C* = E{ C_{x,y,ν}(U, P) }
    ≤ E{ Σ_{i=1}^{B} Σ_{n=1}^{N} max_k log(1 + Pcon γ_{i,k,n}) }   (C.2)
    = Σ_{i=1}^{B} Σ_{n=1}^{N} E{ log(1 + Pcon max_k γ_{i,k,n}) },   (C.3)

where (C.2) follows because Σ_n P_{i,k,n} ≤ Pcon and, for any function f(·,·), max_k E{f(k, ·)} ≤ E{max_k f(k, ·)}; and (C.3) follows because log(·) is a non-decreasing function. One can also construct an alternate upper bound by applying Jensen's inequality to the RHS of (C.1) as follows:

  C_{x,y,ν}(U, P) ≤ Σ_{i=1}^{B} N log( 1 + (1/N) Σ_{n=1}^{N} P_{i,U_{i,n},n} γ_{i,U_{i,n},n} )   (C.4)
    ≤ Σ_{i=1}^{B} N log( 1 + (Pcon/N) max_{k,n} γ_{i,k,n} ),   (C.5)

since Σ_n P_{i,U_{i,n},n} ≤ Pcon. Taking the expectation with respect to {x, y, ν},

  C* ≤ N Σ_{i=1}^{B} E{ log( 1 + (Pcon/N) max_{k,n} γ_{i,k,n} ) }.   (C.6)
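The two orderings used above — max_k E{·} ≤ E{max_k ·}, and Jensen's bound on the concave log — can be checked empirically. A minimal sketch, with i.i.d. exponential SNRs as an illustrative stand-in fading model:

```python
import numpy as np

rng = np.random.default_rng(2)
K, trials = 8, 100_000
P = 2.0
# K users' SNRs on one resource block (iid Exp(1) as a stand-in fading model)
g = rng.exponential(1.0, size=(trials, K))

rate = np.log(1 + P * g)
# max_k E{rate_k} <= E{max_k rate_k}  (the step behind (C.2)-(C.3))
assert rate.mean(axis=0).max() <= rate.max(axis=1).mean()

# Jensen: E{log(1 + P max_k g)} <= log(1 + P E{max_k g})
lhs = np.log(1 + P * g.max(axis=1)).mean()
rhs = np.log(1 + P * g.max(axis=1).mean())
assert lhs <= rhs
print(lhs, rhs)
```

Both inequalities hold deterministically for the empirical distribution (the first because the row-wise maximum dominates every column pointwise, the second by concavity of log), so the asserts pass for any sample.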

Combining (C.3) and (C.6), we obtain

  C* ≤ min{ Σ_{i=1}^{B} Σ_{n=1}^{N} E{ log(1 + Pcon max_k γ_{i,k,n}) },  N Σ_{i=1}^{B} E{ log(1 + (Pcon/N) max_{k,n} γ_{i,k,n}) } }.   (C.7)

For the lower bound, let Pcon/N power be allocated to each resource block by every BS. Since this power allocation is, in general, suboptimal, every user-allocation strategy {k_{i,n} : ∀i, n} achieves a utility that is lower than C_{x,y,ν}(U, P); that is,

  C_{x,y,ν}(U, P) ≥ Σ_{i=1}^{B} Σ_{n=1}^{N} log( 1 + Pcon γ_{i,k_{i,n},n} / ( N + Pcon Σ_{j≠i} γ_{j,k_{i,n},n} ) ).   (C.8)

To handle (C.8), we introduce an indicator variable I_{i,k,n}(x, y, ν) which equals 1 if k = k_{i,n} and takes the value 0 otherwise; note that Σ_k I_{i,k,n}(x, y, ν) = 1 for all i, n, since each BS i can schedule at most one user on any resource block n in a given time slot. Then, (C.8) can be re-written as

  C_{x,y,ν}(U, P) ≥ Σ_{i=1}^{B} Σ_{n=1}^{N} Σ_{k=1}^{K} I_{i,k,n}(x, y, ν) log( 1 + Pcon γ_{i,k,n} / ( N + Pcon Σ_{j≠i} γ_{j,k,n} ) ).   (C.9)

Taking the expectation with respect to (x, y, ν), we get

  C* ≥ E{ Σ_{i,n,k} I_{i,k,n}(x, y, ν) log( 1 + Pcon γ_{i,k,n} / ( N + Pcon Σ_{j≠i} γ_{j,k,n} ) ) }.   (C.10)

To obtain the best lower bound, we now select the user k_{i,n} to be the one for which γ_{i,k,n} attains the highest value for every combination (i, n), i.e., I_{i,k,n}(x, y, ν) = 1 if k = argmax_{k′} γ_{i,k′,n}, and 0 otherwise.

This gives

  C* ≥ Σ_{i,n} E{ log( 1 + Pcon max_k γ_{i,k,n} / ( N + Pcon Σ_{j≠i} γ_{j,k_{i,n},n} ) ) },   (C.11)

where k_{i,n} = argmax_k γ_{i,k,n}. Now, for any non-decreasing concave function V(·) (for example, V(x) = log(1 + x)) and for all d1 > 0, d2 ≥ 1, concavity of V gives

  V(d1/d2) ≥ ( V(d1) − V(0) ) / d2 + V(0).   (C.12)

Using (C.12) in (C.11) — with V(x) = log(1 + x), V(0) = 0, d1 = Pcon max_k γ_{i,k,n}, and d2 = N + Pcon Σ_{j≠i} γ_{j,k_{i,n},n} ≥ 1 — we get the lower bound in Theorem 1, where C* is the expected achievable sum-rate of the system.

C.2 Proof of Theorem 2 and Theorem 3

The proof outline is as follows. We first prove three lemmas. The first lemma, Lemma 11, uses a one-sided variant of Chebyshev's inequality (also called Cantelli's inequality) and Theorem 1 to show that C* ≥ f_lo^DN(r, B, N) Σ_{i,n} E{ log(1 + Pcon max_k γ_{i,k,n}) }. The second lemma, Lemma 12, finds the cumulative distribution function (CDF) of the channel SNR γ_{i,k,n} under Rayleigh-distributed |ν_{i,k,n}| and a truncated path-loss model. The third lemma, Lemma 13, uses Lemma 12 and extreme-value theory to show that (max_k γ_{i,k,n} − l_K) converges in distribution to a limiting random variable with a Gumbel-type cdf.
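Cantelli's inequality, which Lemma 11 applies to the aggregate interference term, states that Pr(X ≥ E{X} + rσ) ≤ 1/(1 + r²) for any X with finite variance. A quick empirical check with a sum of exponential |ν|² terms (illustrative values):

```python
import numpy as np

rng = np.random.default_rng(3)
# Cantelli (one-sided Chebyshev): Pr(X >= mu + r*sigma) <= 1/(1+r^2),
# applied in Lemma 11 to the interference term sum_j |nu_j|^2.
B = 10                                    # number of interfering terms (illustrative)
X = rng.exponential(1.0, size=(200_000, B)).sum(axis=1)   # |nu|^2 ~ Exp(1)
mu, sigma = B * 1.0, np.sqrt(B * 1.0)     # mean and std of a sum of B Exp(1)
for r in (0.5, 1.0, 2.0):
    empirical = (X >= mu + r * sigma).mean()
    bound = 1.0 / (1.0 + r ** 2)
    assert empirical <= bound
    print(r, empirical, bound)
```

The bound is distribution-free, so it holds (with slack) even for this right-skewed Gamma-distributed sum.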


Lemma 11. In a dense network, the expected achievable sum-rate is lower bounded as

  C* ≥ f_lo^DN(r, B, N) Σ_{i=1}^{B} Σ_{n=1}^{N} E{ log(1 + Pcon max_k γ_{i,k,n}) },   (C.13)

where

  f_lo^DN(r, B, N) ≜ r² / [ (1 + r²) ( N + Pcon β² r0^{−2α} (µ + rσ) B ) ],   (C.14)

r > 0 is a fixed number, and µ and σ are the mean and standard deviation of |ν_{i,k,n}|².

Proof. The |ν_{j,k,n}|² are i.i.d. across j with mean µ and variance σ². Applying the one-sided variant of Chebyshev's inequality (Cantelli's inequality) to Σ_{j≠i} |ν_{j,k,n}|², we get

  Pr( Σ_{j≠i} |ν_{j,k,n}|² > (B−1)(µ + rσ) ) ≤ 1/(1 + r²)   (C.15)–(C.17)
  ⟹ Pr( Σ_{j≠i} |ν_{j,k,n}|² ≤ (µ + rσ) B ) ≥ r²/(1 + r²).   (C.18)

Since R_{j,k} ≥ r0, we have Σ_{j≠i} γ_{j,k,n} ≤ β² r0^{−2α} Σ_{j≠i} |ν_{j,k,n}|². Now, we break the expectation in the lower bound of Theorem 1 into two parts — one over the event { Σ_{j≠i} |ν_{j,k,n}|² ≤ (µ + rσ) B } and one over its complement — and ignore the second part to obtain another lower bound:

  C* ≥ ( r²/(1 + r²) ) Σ_{i,n} E{ log(1 + Pcon max_k γ_{i,k,n}) } / ( N + (µ + rσ) B Pcon β² r0^{−2α} ),   (C.19)

where (C.19) follows because Σ_{j≠i} |ν_{j,k,n}|² is independent of ν_{i,k,n} (and hence of γ_{i,k,n}). This yields (C.13)–(C.14).   (C.20)

Lemma 12. Under Rayleigh fading, i.e., ν_{i,k,n} ∼ CN(0, 1), the CDF of γ_{i,k,n} is given by

  F_{γ_{i,k,n}}(γ) = 1 − (r0²/p²) e^{−r0^{2α} γ/β²} − (1/(αβ²p²)) ∫_{β²(p−d)^{−2α}}^{β² r0^{−2α}} e^{−γ/g} (g/β²)^{−1−1/α} dg + ∫_{β²(p+d)^{−2α}}^{β²(p−d)^{−2α}} e^{−γ/g} ds(g),   (C.21)

where d = √(a_i² + b_i²) and

  s(g) = (1/(πp²)) [ (g/β²)^{−1/α} cos⁻¹( (d² + (g/β²)^{−1/α} − p²) / (2d (g/β²)^{−1/2α}) )
           + p² cos⁻¹( (d² + p² − (g/β²)^{−1/α}) / (2dp) )
           − ½ √( (p + d − (g/β²)^{−1/2α}) (d + (g/β²)^{−1/2α} − p) (p + (g/β²)^{−1/2α} − d) (d + p + (g/β²)^{−1/2α}) ) ].

Proof. We assume that the users are distributed uniformly in a circular area of radius p and that there are B base stations in that area, as shown in Fig. C.1.

[Figure C.1: OFDMA downlink system with K users and B base-stations.]

The probability density function of the user coordinates (x_k, y_k) is

  f_{(x_k, y_k)}(x, y) = 1/(πp²) if x² + y² ≤ p², and 0 otherwise.   (C.22)

Note that, around any base station, the users are distributed at least within a distance R (R > r0); that is, p − d = p − √(a_i² + b_i²) ≥ R > r0 for all i. Writing γ_{i,k,n} = G_{i,k} |ν_{i,k,n}|², where

  G_{i,k} ≜ β² R_{i,k}^{−2α} = β² ( max{ r0, √((x_k − a_i)² + (y_k − b_i)²) } )^{−2α} = min{ β² r0^{−2α}, β² ((x_k − a_i)² + (y_k − b_i)²)^{−α} },   (C.23)–(C.24)

we now compute the distribution of G_{i,k}. For g < β² r0^{−2α},

  Pr(G_{i,k} > g) = Pr( (x_k − a_i)² + (y_k − b_i)² < (g/β²)^{−1/α} ),   (C.25)

and Pr(G_{i,k} > g) = 0 for g ≥ β² r0^{−2α}, since G_{i,k} ≤ β² r0^{−2α}.

Since the users are uniformly distributed, Pr( (x_k − a_i)² + (y_k − b_i)² < (g/β²)^{−1/α} ) is precisely 1/(πp²) times the area of the intersection of the overall disc (of radius p around O) and a disc of radius (g/β²)^{−1/2α} around BS i; this is the shaded region in Fig. C.2.

[Figure C.2: System layout. BS i is located at a distance d from the center O, with coordinates (a_i, b_i); user k is stationed at (x_k, y_k).]

Therefore,

  Pr(G_{i,k} > g) = { 1,  if (g/β²)^{−1/2α} ∈ (p + d, ∞);
                      s(g),  if (g/β²)^{−1/2α} ∈ (p − d, p + d];
                      (1/p²)(g/β²)^{−1/α},  if (g/β²)^{−1/2α} ∈ (r0, p − d];
                      0,  if (g/β²)^{−1/2α} ∈ [0, r0] },   (C.26)

where s(g), the normalized intersection area, is given in the statement of Lemma 12.   (C.27)

The CDF of G_{i,k} can now be written as

  F_{G_{i,k}}(g) = { 0,  if g ∈ [0, β²(p+d)^{−2α});
                     1 − s(g),  if g ∈ [β²(p+d)^{−2α}, β²(p−d)^{−2α});
                     1 − (1/p²)(g/β²)^{−1/α},  if g ∈ [β²(p−d)^{−2α}, β² r0^{−2α});
                     1,  if g ≥ β² r0^{−2α} }.   (C.28)

A plot of the above CDF is shown in Fig. C.3.

[Figure C.3: Cumulative distribution function of G_{i,k}.]

The probability density function of G_{i,k} has a discontinuity of the first kind at β² r0^{−2α} (where it takes an impulse value) and can be written as

  f_{G_{i,k}}(g) = { −ds(g)/dg,  if g ∈ (β²(p+d)^{−2α}, β²(p−d)^{−2α});
                     (1/(αβ²p²)) (g/β²)^{−1−1/α},  if g ∈ (β²(p−d)^{−2α}, β² r0^{−2α});
                     (r0²/p²) δ(g − β² r0^{−2α});
                     0,  at all other points },   (C.29)

where ds(g)/dg ≤ 0.

Lemma 13. Let γ_{i,k,n} be a random variable with the cdf given in Lemma 12. Then (max_k γ_{i,k,n} − l_K) converges in distribution to a limiting random variable with a Gumbel-type cdf, exp(−e^{−x r0^{2α}/β²}), x ∈ (−∞, ∞), where l_K is such that F_{γ_{i,k,n}}(l_K) = 1 − 1/K; in particular, l_K ≈ β² r0^{−2α} log(K r0²/p²) for large K.

Proof. We have, from Lemma 12 and the pdf f_{G_{i,k}} in (C.29),

  F_{γ_{i,k,n}}(γ) = ∫ Pr( |ν_{i,k,n}|² ≤ γ/g ) f_{G_{i,k}}(g) dg = 1 − ∫ e^{−γ/g} f_{G_{i,k}}(g) dg
    = 1 − (r0²/p²) e^{−r0^{2α} γ/β²} − (1/(αβ²p²)) ∫_{β²(p−d)^{−2α}}^{β² r0^{−2α}} e^{−γ/g} (g/β²)^{−1−1/α} dg + ∫_{β²(p+d)^{−2α}}^{β²(p−d)^{−2α}} e^{−γ/g} ds(g).   (C.30)–(C.35)

Integrating the last term by parts, and using s(β²(p−d)^{−2α}) = (p−d)²/p² and s(β²(p+d)^{−2α}) = 1, we get

  F_{γ_{i,k,n}}(γ) = 1 − (r0²/p²) e^{−r0^{2α} γ/β²} − (1/(αβ²p²)) ∫_{β²(p−d)^{−2α}}^{β² r0^{−2α}} e^{−γ/g} (g/β²)^{−1−1/α} dg
    + ((p−d)²/p²) e^{−(p−d)^{2α} γ/β²} − e^{−(p+d)^{2α} γ/β²} − ∫_{β²(p+d)^{−2α}}^{β²(p−d)^{−2α}} e^{−γ/g} s(g) (γ/g²) dg,   (C.36)

where (p−d)²/p² ≤ s(g) ≤ 1 on the integration interval (see Fig. C.3).

We claim that

  lim_{γ→∞} ( 1 − F_{γ_{i,k,n}}(γ) ) e^{r0^{2α} γ/β²} = r0²/p².   (C.37)

It is clear that the term (r0²/p²) e^{−r0^{2α} γ/β²} in (C.36) contributes everything to the limit in (C.37). We will now consider the rest of the terms and show that they contribute zero towards the limit on the RHS of (C.37). First, since (p−d)^{2α} > r0^{2α} and (p+d)^{2α} > r0^{2α},

  lim_{γ→∞} e^{r0^{2α} γ/β²} [ ((p−d)²/p²) e^{−(p−d)^{2α} γ/β²} + e^{−(p+d)^{2α} γ/β²} ] = 0.   (C.38)

Next, using s(g) ≤ 1, e^{−γ/g} ≤ e^{−(p−d)^{2α} γ/β²}, and 1/g² ≤ (p+d)^{4α}/β⁴ on the integration interval,

  lim_{γ→∞} e^{r0^{2α} γ/β²} ∫_{β²(p+d)^{−2α}}^{β²(p−d)^{−2α}} e^{−γ/g} s(g) (γ/g²) dg
    ≤ lim_{γ→∞} ( β²(p−d)^{−2α} − β²(p+d)^{−2α} ) ( (p+d)^{4α}/β⁴ ) γ e^{−((p−d)^{2α} − r0^{2α}) γ/β²} = 0,   (C.39)–(C.40)

since the decaying exponential dominates the linear factor γ. Finally, consider

  T(γ) ≜ e^{r0^{2α} γ/β²} (1/(αβ²p²)) ∫_{β²(p−d)^{−2α}}^{β² r0^{−2α}} e^{−γ/g} (g/β²)^{−1−1/α} dg.   (C.41)–(C.42)

Substituting γ/g with x, we get

  T(γ) = (1/(αβ²p²)) ∫_{r0^{2α} γ/β²}^{(p−d)^{2α} γ/β²} e^{−x + r0^{2α} γ/β²} ( γ/(xβ²) )^{−1−1/α} (γ/x²) dx.   (C.43)

.k.37)). i. Since T (γ ) is positive.37) and (C.n belongs to a domain of maximal attraction [86. Solving for lK . Now.n (lK ) = 1 − 1/K . (C. αγp2 0 2α γr0 y+ 2 e β 2α −r 2α γ ( p − d ) ( 0 ) −y β2 1 −1+ α dy (C. we have lim fγi. From (C. (C. pp. γ to obtain the probability density function fγi.n (γ ) lim (C.47) is straightforward to verify (similar to the steps taken to prove (C.44) e−y dy (C.1] 2α 2 exp(−e−xr0 /β ).47) We do not prove the above equation here as (C.e. lK is such that Fγi.47).k.k. 296]. Deﬁnition 8. an upper bound is taken by putting y = 0 in the term inside the integral.3.r.n (γ ).48) The above equation implies that γi.n (γ ) w.50) 163 .k.·. 1 − Fγi. the claim is true.46) shows that limγ →∞ T (γ ) = 0.t. x ∈ (−∞. ∞).45) (C.k. we obtain that the growth function converges to a constant.n (γ ) −2α = β 2 r0 . we have 1 γ T (γ ) = 2 αp β 2 ≤ 1 −α γ 0 2α (p−d)2α −r0 β2 1 2α −1+ 1 γ −α γr0 1 α 2 2 2 αp β β 0 2α 2 (p−d)2α −r0 β −γ −2α+2 β2 = 1 − e r .45).k. γ →∞ fγi. Hence. after computing the derivative of Fγi. p2 β 2 r0 (C. the cdf of (maxk γi.2α Again substituting x − γr0 /β 2 by y .n (γ )eγr0 2α /β 2 γ →∞ = 2 r0 −2α .49) Here. we have 1 K r 2 − β 2 rK −2α 0 = 0 e + 2 p + β2 (p−d)2α β2 (p+d)2α l β2 r 2α 0 β2 (p−d)2α lK g e− lK g g 1 2 2 αβ p β 2 1 −1− α dg e− − s′ (g ) dg (C. that is given by [87.46) γr 2α y + β0 2 1 −1+ α where.k. In particular. in (C.n − lK ) converges in distribution to a limiting random variable with an extreme-value cdf.

Substituting l_K/g = x in the first integral appearing in (C.36) at γ = l_K, and upper-bounding the remaining terms (using (p−d)²/p² ≤ s(g) ≤ 1 and (p−d)^{2α} > r0^{2α}), we obtain after some algebra

  1/K ≤ (r0²/p²) e^{−r0^{2α} l_K/β²} ( 1 + O(1/l_K) )   (C.51)–(C.57)
  ⟹ l_K ≤ β² r0^{−2α} log(K r0²/p²) + O(1/log K).

Similarly, lower-bounding 1 − F_{γ_{i,k,n}}(l_K) gives

  1/K ≥ (r0²/p²) e^{−r0^{2α} l_K/β²}   (C.58)
  ⟹ l_K ≥ β² r0^{−2α} log(K r0²/p²).   (C.59)

From (C.57) and (C.59), we have

  β² r0^{−2α} log(K r0²/p²) ≤ l_K ≤ β² r0^{−2α} log(K r0²/p²) + O(1/log K),   (C.60)

so that, for large K, l_K ≈ β² r0^{−2α} log(K r0²/p²). Interestingly, the scaling of max_k γ_{i,k,n} (given by l_K in the large-K regime) is independent of the coordinates (a_i, b_i) of BS i. Now, since the growth function converges to a constant (Lemma 13), we apply [67, Theorem A.2], giving us

  Pr( l_K − log log K ≤ max_k γ_{i,k,n} ≤ l_K + log log K ) ≥ 1 − O(1/log K).   (C.61)

Therefore,

  E{ log(1 + Pcon max_k γ_{i,k,n}) } ≤ log(1 + Pcon l_K + Pcon log log K) + Pr( max_k γ_{i,k,n} > l_K + log log K ) log(1 + Pcon β² r0^{−2α} K)   (C.62)
    ≤ log(1 + Pcon l_K + Pcon log log K) + log(1 + Pcon β² r0^{−2α} K) · O(1/log K)
    = log(1 + Pcon l_K) + O(1),   (C.63)

where we have used the fact that the per-block rate is bounded above by log(1 + Pcon β² r0^{−2α} K), because (with µ = σ = 1 under Rayleigh fading)

  E{ log(1 + Pcon max_k γ_{i,k,n}) } ≤ log( 1 + Pcon β² r0^{−2α} E{ Σ_k |ν_{i,k,n}|² } ) = log(1 + Pcon β² r0^{−2α} K).   (C.64)–(C.65)

Further,

  E{ log(1 + Pcon max_k γ_{i,k,n}) } ≥ log(1 + Pcon l_K − Pcon log log K) ( 1 − O(1/log K) ).   (C.66)

Further, from (C.61), we also have the matching lower bound
$$\mathbb{E}\Big\{\log\big(1 + P_{\mathrm{con}} \max_k \gamma_{i,k,n}\big)\Big\} \;\ge\; \log\big(1 + P_{\mathrm{con}} l_K - P_{\mathrm{con}}\log\log K\big)\Big(1 - O\big(\tfrac{1}{\log K}\big)\Big). \quad (C.66)$$
Combining (C.63) and (C.66), for large $K$ we get (C.67)–(C.68):
$$BN \log\big(1 + P_{\mathrm{con}} l_K - P_{\mathrm{con}}\log\log K\big)\Big(1 - O\big(\tfrac{1}{\log K}\big)\Big) \;\le\; \mathbb{E}\Big\{\sum_{i=1}^{B}\sum_{n=1}^{N} \log\big(1 + P_{\mathrm{con}} \max_k \gamma_{i,k,n}\big)\Big\} \;\le\; BN\big(\log(1 + P_{\mathrm{con}} l_K) + O(1)\big).$$
This results in
$$C^* = O(BN \log\log K), \quad (C.69)$$
and, from Lemma 11 and Theorem 1, $D_N C^* = \Omega\big(BN_{flo}(r, N)\, \log\log K\big)$. Now, to prove Corollary 1, we use the upper bound in Theorem 1 obtained via Jensen's inequality. In particular, we have (C.70)–(C.71):
$$C^* \;\le\; N\, \mathbb{E}\Big\{\sum_{i} \log\Big(1 + \frac{P_{\mathrm{con}}}{N} \max_{n,k} \gamma_{i,k,n}\Big)\Big\} \;\le\; BN \log\Big(1 + \frac{P_{\mathrm{con}}\, l_{KN}}{N}\Big),$$
where the last step follows from (C.61), and $l_{KN} = \beta^2 r_0^{-2\alpha} \log(KN r_0^2/p^2)$ determines the SNR scaling of the maximum over $KN$ i.i.d. random variables. This implies
$$C^* = O\Big(BN \log\frac{\log KN}{N}\Big). \quad (C.72)$$

Note that the above result holds only if $\log(1+x) \approx \log x$, which is valid for large $x$; i.e., we require $P_{\mathrm{con}}\, l_{KN}/N \gg 1$ to make the approximation.

C.3 Proof of Lemma 9 and Lemma 10

We will first find the SNR scaling laws for each of the three families of distributions: Nakagami-m, Weibull, and LogNormal. This involves deriving the domain of attraction of the channel SNR $\gamma_{i,k,n}$ for all three types of distributions. The domains of attraction are of three types: Fréchet, Weibull, and Gumbel. Let the growth function be defined as
$$h(\gamma) \;\triangleq\; \frac{1 - F_{\gamma_{i,k,n}}(\gamma)}{f_{\gamma_{i,k,n}}(\gamma)}.$$
Then $F_{\gamma_{i,k,n}}$ belongs to the Gumbel type if $\lim_{\gamma\to\infty} h'(\gamma) = 0$. It turns out that all three distributions considered belong to this category. After showing this, we find the scaling $l_K$ such that $F_{\gamma_{i,k,n}}(l_K) = 1 - 1/K$. The intuition behind this choice of $l_K$ is that the cdf of $\max_k \gamma_{i,k,n}$ then satisfies $F_\gamma^K(l_K) = (1 - 1/K)^K \to e^{-1}$; the fact that $F_\gamma^K(\gamma)$ converges for a particular choice of $\gamma$ gives information about the asymptotic behavior of $\max_k \gamma_{i,k,n}$.

C.3.1 Nakagami-m

In this case, $|\nu_{i,k,n}|$ is distributed according to a Nakagami-$(m, w)$ distribution, i.e., $|\nu_{i,k,n}|^2$ is distributed according to a Gamma-$(m, w/m)$ distribution. The cumulative distribution function of $\gamma_{i,k,n}$ (for $\gamma \ge 0$) is given by (C.73)–(C.75):
$$F_{\gamma_{i,k,n}}(\gamma) \;=\; \int_{\beta^2(p+d)^{-2\alpha}}^{\beta^2 r_0^{-2\alpha}} \Pr\Big(|\nu_{i,k,n}|^2 \le \frac{\gamma}{g}\Big)\, f_{G_{i,k}}(g)\, dg \;=\; \int_{\beta^2(p+d)^{-2\alpha}}^{\beta^2 r_0^{-2\alpha}} \Big(1 - \frac{\Gamma\big(m, \frac{m\gamma}{wg}\big)}{\Gamma(m)}\Big)\, f_{G_{i,k}}(g)\, dg,$$
where $\Gamma(\cdot,\cdot)$ denotes the upper incomplete gamma function.
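The intuition behind the choice $F(l_K) = 1 - 1/K$ rests on the elementary limit $(1 - 1/K)^K \to e^{-1}$, which can be confirmed directly:

```python
import math

# F^K(l_K) = (1 - 1/K)^K approaches e^{-1} ~ 0.3679, so l_K sits at a
# fixed, non-degenerate quantile of max_k gamma_k for every large K.
for K in (10, 100, 10_000, 1_000_000):
    print(K, (1 - 1 / K) ** K, math.exp(-1))
```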

For large $\gamma$, keeping only the dominant term of the incomplete gamma function, we can approximate (C.75) as
$$F_{\gamma_{i,k,n}}(\gamma) \;\approx\; 1 - \frac{1}{\Gamma(m)} \int_{\beta^2(p+d)^{-2\alpha}}^{\beta^2 r_0^{-2\alpha}} \Big(\frac{m\gamma}{wg}\Big)^{m-1} e^{-\frac{m\gamma}{wg}}\, f_{G_{i,k}}(g)\, dg. \quad (C.76)$$
We claim that
$$\lim_{\gamma\to\infty} \big(1 - F_{\gamma_{i,k,n}}(\gamma)\big)\, \gamma^{1-m}\, e^{\frac{m\gamma}{w\beta^2 r_0^{-2\alpha}}} \;=\; \frac{r_0^2\, m^{m-1}}{p^2\, \Gamma(m)\, \big(w\beta^2 r_0^{-2\alpha}\big)^{m-1}}. \quad (C.79)$$
Substituting $f_{G_{i,k}}$ from (C.29) into (C.76), $1 - F_{\gamma_{i,k,n}}(\gamma)$ splits into two boundary terms plus two integral terms over $g \in [\beta^2(p+d)^{-2\alpha}, \beta^2(p-d)^{-2\alpha}]$ (C.77)–(C.78): one against the continuous part $\frac{1}{\alpha\beta^2 p^2}\big(\frac{\beta^2}{g}\big)^{-1-\frac{1}{\alpha}}$ of $f_{G_{i,k}}$, denoted $T_1(\gamma)$, and a Stieltjes integral against $ds(g)$, denoted $T_2(\gamma)$, so that
$$1 - F_{\gamma_{i,k,n}}(\gamma) \;=\; \frac{r_0^2}{p^2\,\Gamma(m)}\Big(\frac{m\gamma}{w\beta^2 r_0^{-2\alpha}}\Big)^{m-1} e^{-\frac{m\gamma}{w\beta^2 r_0^{-2\alpha}}}\big(1 + o(1)\big) + T_1(\gamma) + T_2(\gamma). \quad (C.80)$$
Note that the first term on the RHS contributes everything towards the limit in (C.79); we will show that $T_1(\gamma)$ and $T_2(\gamma)$ contribute zero.

Now, we bound $T_1(\gamma)$ and $T_2(\gamma)$. Substituting $1/g = x$ and upper-bounding the resulting integrand by its maximum over the (compact) integration interval, both terms satisfy bounds of the form (C.81)–(C.87):
$$|T_1(\gamma)|,\ |T_2(\gamma)| \;=\; O\Big(\gamma^{m-1}\, e^{-\frac{m\gamma}{w}\,\beta^{-2}(p-d)^{2\alpha}}\Big),$$
so that, normalizing by the leading term,
$$\gamma^{1-m}\, e^{\frac{m\gamma}{w\beta^2 r_0^{-2\alpha}}}\, |T_i(\gamma)| \;=\; O\Big(e^{-\frac{m\gamma}{w}\,\beta^{-2}\big((p-d)^{2\alpha} - r_0^{2\alpha}\big)}\Big) \;\to\; 0 \quad \text{as } \gamma\to\infty,$$
for $i = 1, 2$, since $(p-d)^{2\alpha} > r_0^{2\alpha}$. Therefore, $T_1(\gamma)$ and $T_2(\gamma)$ have zero contribution to the RHS of (C.79), and our claim is true. Further, by the same argument applied to the density, it is easy to verify that
$$\lim_{\gamma\to\infty} f_{\gamma_{i,k,n}}(\gamma)\, \gamma^{1-m}\, e^{\frac{m\gamma}{w\beta^2 r_0^{-2\alpha}}} \;=\; \frac{r_0^2\, m^{m}}{p^2\, \Gamma(m)\, \big(w\beta^2 r_0^{-2\alpha}\big)^{m}}. \quad (C.88)$$

From (C.79) and (C.88), the growth function converges to a constant:
$$\lim_{\gamma\to\infty} \frac{1 - F_{\gamma_{i,k,n}}(\gamma)}{f_{\gamma_{i,k,n}}(\gamma)} \;=\; \frac{w\beta^2 r_0^{-2\alpha}}{m}, \quad (C.89)$$
and hence $\lim_{\gamma\to\infty} h'(\gamma) = 0$. Thus, $\gamma_{i,k,n}$ belongs to the Gumbel type [87, Definition 8.3.1], and $\max_k \gamma_{i,k,n} - l_K$ converges in distribution to a limiting random variable with a Gumbel-type cdf $\exp\{-e^{-x/h(\infty)}\}$, $x \in (-\infty, \infty)$, with scale $h(\infty) = w\beta^2 r_0^{-2\alpha}/m$, where $l_K$ solves $1 - F_{\gamma_{i,k,n}}(l_K) = 1/K$. Using (C.79), for large $K$ we have
$$l_K \;\approx\; \frac{w\beta^2 r_0^{-2\alpha}}{m}\, \log\!\Big(\frac{K r_0^2\, m^{m-1}}{p^2\, \Gamma(m)\, (w\beta^2 r_0^{-2\alpha})^{m-1}}\Big), \quad (C.90)$$
so that $l_K = \Theta(\log K)$. Hence, we can use [67, Theorem A.1.1] to obtain
$$\Pr\Big( l_K - \log\log K \le \max_k \gamma_{i,k,n} \le l_K + \log\log K \Big) \;\ge\; 1 - O\Big(\frac{1}{\log K}\Big), \quad (C.91)$$
which is the same as (C.61). Now, since the growth function converges to a constant and $l_K = \Theta(\log K)$, following the same analysis as in (C.62)–(C.72) we get
$$C^* = O\Big(BN \log\log\frac{K r_0^2}{p^2}\Big), \qquad D_N C^* = BN_{flo}(r, N)\,\Omega\Big(\log\log\frac{K N r_0^2}{p^2}\Big). \quad \text{(C.92)–(C.93)}$$
Further, if $\frac{\log KN}{N} \gg 1$, then $C^* = O\big(BN \log\frac{\log KN}{N}\big)$.
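For integer $m$, the convergence of the growth function in (C.89) can be checked in closed form. The sketch below uses a plain Gamma-$(m, \theta)$ variable, with a hypothetical scale $\theta$ standing in for $w\beta^2 r_0^{-2\alpha}/m$ (the geometry factors are dropped for illustration):

```python
import math

# Growth function h = (1 - F)/f for a Gamma(m, theta) r.v. with integer m.
# Cancelling the common factor e^{-z} keeps the ratio numerically stable:
#   h(gamma) = theta * (m-1)! * sum_{j<m} z^j/j! / z^{m-1},  z = gamma/theta,
# which tends to theta as gamma -> infinity (cf. (C.89)).
def growth_gamma(gamma, m, theta):
    z = gamma / theta
    s = sum(z**j / math.factorial(j) for j in range(m))
    return theta * s * math.factorial(m - 1) / z**(m - 1)

m, theta = 4, 2.5
for gamma in (10, 100, 1000):
    print(gamma, growth_gamma(gamma, m, theta))  # approaches theta = 2.5
```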

C.3.2 Weibull

In this case, $|\nu_{i,k,n}|$ is distributed according to a Weibull-$(\lambda, t)$ distribution, i.e., $|\nu_{i,k,n}|^2$ is distributed according to a Weibull-$(\lambda^2, t/2)$ distribution. We start by finding the cumulative distribution function of $\gamma_{i,k,n}$. For $\gamma \ge 0$, (C.94)–(C.95) give
$$F_{\gamma_{i,k,n}}(\gamma) \;=\; \int_{\beta^2(p+d)^{-2\alpha}}^{\beta^2 r_0^{-2\alpha}} \Pr\Big(|\nu_{i,k,n}|^2 \le \frac{\gamma}{g}\Big)\, f_{G_{i,k}}(g)\, dg \;=\; \int_{\beta^2(p+d)^{-2\alpha}}^{\beta^2 r_0^{-2\alpha}} \Big(1 - e^{-\big(\frac{\gamma}{g\lambda^2}\big)^{t/2}}\Big)\, f_{G_{i,k}}(g)\, dg. \quad (C.96)$$
This case is similar to the Rayleigh scenario in (C.33). Following the same steps, it is easy to verify that (C.97)–(C.98)
$$\lim_{\gamma\to\infty} \big(1 - F_{\gamma_{i,k,n}}(\gamma)\big)\, e^{\big(\frac{\gamma}{\beta^2 r_0^{-2\alpha}\lambda^2}\big)^{t/2}} = \frac{r_0^2}{p^2}, \qquad \lim_{\gamma\to\infty} f_{\gamma_{i,k,n}}(\gamma)\, \gamma^{1-t/2}\, e^{\big(\frac{\gamma}{\beta^2 r_0^{-2\alpha}\lambda^2}\big)^{t/2}} = \frac{t\, r_0^2}{2 p^2\, \big(\beta^2 r_0^{-2\alpha}\lambda^2\big)^{t/2}}.$$
Hence, the growth function can be approximated for large $\gamma$ as
$$h(\gamma) \;\approx\; \frac{2}{t}\, \big(\beta^2 r_0^{-2\alpha}\lambda^2\big)^{t/2}\, \gamma^{1-t/2}. \quad (C.99)$$
Since $\lim_{\gamma\to\infty} h'(\gamma) = 0$, the limiting distribution of $\max_k \gamma_{i,k,n}$ is of the Gumbel type. Note that this is true even when $t < 1$, which corresponds to heavy-tailed distributions. Solving $1 - F_{\gamma_{i,k,n}}(l_K) = \frac{1}{K}$, we get
$$l_K \;=\; \beta^2 r_0^{-2\alpha}\, \lambda^2\, \log^{2/t}\!\Big(\frac{K r_0^2}{p^2}\Big). \quad (C.100)$$

Now, we apply the following theorem by Uzgoren.

Theorem 6 (Uzgoren). Let $x_1, x_2, \ldots, x_K$ be a sequence of i.i.d. positive random variables with continuous and strictly positive pdf $f_X(x)$ for $x > 0$ and cdf $F_X(x)$, and let $h_X(x)$ be the growth function. Then, if $\lim_{x\to\infty} h_X'(x) = 0$, we have
$$\log\Big(\!-\log F_X^K\big(l_K + u\, h_X(l_K)\big)\Big) \;=\; -u + \frac{u^2}{2!}\, h_X'(l_K) + \frac{u^3}{3!}\Big(h_X(l_K)\, h_X''(l_K) - 2\, h_X'^{\,2}(l_K)\Big) + \cdots + O\Big(\frac{e^{-u + O(u^2 h_X'(l_K))}}{K}\Big).$$

Proof. See [88, Equation 19].

The above theorem gives a Taylor-series expansion of the limiting distribution for Gumbel-type distributions. In particular, setting $l_K = \beta^2 r_0^{-2\alpha}\lambda^2 \log^{2/t}(K r_0^2/p^2)$ and $u = \pm\log\log K$, and noting that here $h(l_K) = O\big(\log^{\frac{2}{t}-1} K\big)$, $h'(l_K) = O\big(\frac{1}{\log K}\big)$, $h''(l_K) = O\big(\frac{1}{\log^2 K}\big)$, and so on, we get (C.101)–(C.104):
$$\Pr\Big(\max_k \gamma_{i,k,n} \le l_K + h(l_K)\log\log K\Big) = e^{-e^{-\log\log K + O\big(\frac{\log^2\log K}{\log K}\big)}} = 1 - O\Big(\frac{1}{\log K}\Big),$$
where we have used the fact that $e^x = 1 + O(x)$ for small $x$, and similarly
$$\Pr\Big(\max_k \gamma_{i,k,n} \le l_K - h(l_K)\log\log K\Big) = e^{-e^{\log\log K + O\big(\frac{\log^2\log K}{\log K}\big)}} = O\Big(\frac{1}{\log K}\Big). \quad (C.105)$$
Subtracting (C.105) from (C.102), we get
$$\Pr\Big(1 - O\Big(\tfrac{\log\log K}{\log K}\Big) \,<\, \frac{\max_k \gamma_{i,k,n}}{l_K} \,\le\, 1 + O\Big(\tfrac{\log\log K}{\log K}\Big)\Big) \;\ge\; 1 - O\Big(\frac{1}{\log K}\Big), \quad (C.106)$$
which plays the role of (C.61). Therefore, following (C.62)–(C.72), we get (C.107)–(C.108):
$$C^* = BN\, O\Big(\log\log^{2/t}\frac{K r_0^2}{p^2}\Big), \qquad D_N C^* = BN_{flo}(r, N)\,\Omega\Big(\log\log^{2/t}\frac{K N r_0^2}{p^2}\Big).$$
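The Weibull scaling (C.100) can be sanity-checked by simulation. The sketch below uses illustrative parameters (the factor $r_0^2/p^2$ is normalized to 1, so $l_K = \lambda^2 \log^{2/t} K$); it samples the maximum of $K$ Weibull variables exactly via the inverse cdf, including a heavy-tailed case $t < 1$:

```python
import math
import random

random.seed(3)

# Weibull tail: 1 - F(x) = exp(-(x/lam2)^(t/2)); solving F(l_K) = 1 - 1/K
# gives l_K = lam2 * (log K)^(2/t), cf. (C.100) with r0^2/p^2 = 1.
def sample_max_weibull(K, lam2, t):
    # Exact max of K i.i.d. samples: invert F^K at a single uniform draw.
    u = random.random()
    return lam2 * (-math.log(1.0 - u ** (1.0 / K))) ** (2.0 / t)

for t in (0.8, 2.0, 4.0):        # t < 1 is the heavy-tailed case
    lam2, K = 1.5, 10**6
    l_K = lam2 * math.log(K) ** (2.0 / t)
    est = sum(sample_max_weibull(K, lam2, t) for _ in range(2000)) / 2000
    print(t, l_K, est)  # est/l_K -> 1 with an O(1/log K) relative gap
```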

Further, if $\frac{\log^{2/t} KN}{N} \gg 1$, then
$$C^* = O\Big(BN\, \log\frac{\log^{2/t}\frac{KN r_0^2}{p^2}}{N}\Big). \quad (C.109)$$

C.3.3 LogNormal

In this case, $|\nu_{i,k,n}|$ is distributed according to a LogNormal-$(a, w)$ distribution, i.e., $|\nu_{i,k,n}|^2$ is distributed according to a LogNormal-$(2a, 4w)$ distribution. The cumulative distribution function of $\gamma_{i,k,n}$ (for $\gamma \ge 0$) is
$$F_{\gamma_{i,k,n}}(\gamma) \;=\; \int_{\beta^2(p+d)^{-2\alpha}}^{\beta^2 r_0^{-2\alpha}} \Big(1 - \frac{1}{2}\,\mathrm{erfc}\Big[\frac{\log\frac{\gamma}{g} - 2a}{\sqrt{8w}}\Big]\Big)\, f_{G_{i,k}}(g)\, dg, \quad \text{(C.110)–(C.111)}$$
where $\mathrm{erfc}[\cdot]$ is the complementary error function. Using the asymptotic expansion of $\mathrm{erfc}[\cdot]$ [89, Eq. 7.1.23] in the large-$\gamma$ regime,
$$\mathrm{erfc}(x) \;\approx\; \frac{e^{-x^2}}{x\sqrt{\pi}}\, \sum_{m=0}^{\infty} (-1)^m\, \frac{(2m-1)!!}{(2x^2)^m},$$
where $(2m-1)!! = 1 \times 3 \times 5 \times \cdots \times (2m-1)$, and keeping only the dominant $m = 0$ term for large $\gamma$ (we ignore the terms $m = 1, 2, \ldots$), we can approximate (C.112)–(C.113):
$$F_{\gamma_{i,k,n}}(\gamma) \;\approx\; 1 - \sqrt{\frac{2w}{\pi}}\, \int_{\beta^2(p+d)^{-2\alpha}}^{\beta^2 r_0^{-2\alpha}} \frac{e^{-\Big(\frac{\log(\gamma/g) - 2a}{\sqrt{8w}}\Big)^2}}{\log\frac{\gamma}{g} - 2a}\, f_{G_{i,k}}(g)\, dg.$$

119). we put g = β 2 r0 in the exponent of −2α g ≤ β 2 r0 .114) π This is because the contribution of the two integrals in (C. The contribution of ﬁrst integral. as γ → ∞.114). is log log γ −2α β 2 r0 − 2a e γ −2α −2a β 2 r0 √ 8w 2 β2 2α r0 β2 (p−d)2α g 1 2 2 αβ p β 2 log γ −2a β 2 r −2α √0 8w 1 −1− α e − γ log g −2a √ 8w 2 log γ − 2a g 2 dg 2 ≤ ≤ ≤ = log −2α−2 γ r0 − 2 a −2α αβ 2 p2 β 2 r0 β2 r 2α 0 β2 (p−d)2α β2 r 2α 0 log β2 2α r0 β2 (p−d)2α 2 e − γ log g −2a √ 8w −2α−2 r0 αβ 2 p2 −2α−2 r0 αβ 2 p2 −2α−2 r0 αβ 2 p2 −2α−2 r0 αβ 2 p2 −2α−2 r0 αβ 2 p2 e 1 8w γ −2a β 2 r −2α √0 8w log γ − 2a g γ log g −2a √ 8w 2 dg (C. making its contribution zero. we take an upper bound by taking the term g −1−1/α β2 g −2α β 2 r0 out of since −2α the integral.k. we claim that log γ →∞ lim 1 − Fγi.113) contribute to the RHS in (C.118) ≤ = 1 8w log γ2 β 4 r −4α 0 −4a β2 (p−d)2α dg 2α 8w (C.115). The second integral has an exponent term that goes to zero faster than log e − γ −2α −2a β 2 r0 √ 8w 2 → 0.120) (C.(C. Similar to the above analysis.n (γ ) −2α log γ − log(β 2 r0 ) − 2a e γ −2a β 2 r −2α √0 8w 2 = 2 r0 p2 2w .113) towards the RHS of (C.Now.116) β2 (p−d)2α β2 r 2α 0 e log γ2 gβ 2 r −2α 0 −4a log g β 2 r −2α 0 dg (C.115) − dg (C.119) log γ2 β 4 r −4α 0 1 8w → 0.117) β2 (p−d)2α β2 r 2α 0 g −2α 2 β r0 g −2α 2 β r0 1 2 1 8w log γ2 gβ 2 r −2α 0 −4a dg (C. it 174 . Note that only the ﬁrst two term in (C.114) is zero. and in (C.121) where in (C. when γ is large. γ log β 4 r −4α − 4a 0 r0 1− p−d −4a −2α (C.

Similarly. Therefore.125) . p2 8wπ (C.124) lim h′ (γ ) = 0.k.129) (C. h′′ (lK ) = O Theorem 6 for u = log log K . we get Pr lK − c e √ 8w log K 1 . we have h(γ ) = γ →∞ 1 − Fγi.n ≤ lK + c ≥ 175 e √ 8w log K log K 1 1−O . the limiting distribution of maxk γi. and so on. Using h(lK ) = O lK log lK .n ≤ lK + h(lK ) log log K k = e −e − log log K +O 2 log √ log K log K (C.128) (C.is easy to show that log γ →∞ lim fγi.n (γ ) γe γ −2a β 2 r −2α √0 8w 2 = r2 √0 .131) .127) and (C. log K where we have used the fact that ex = 1 + O (x) for small x. we have −2α lK = β 2 r 0 e 8w log Kr 2 0 +Θ(log log K ) p2 .n (γ ) log γ (C.n (γ ) 4wγ ≈ for large γ.k. K log K log log K < maxk γi.130) − 1+O log √ log K log K log K = O Combining (C.122) Using the above equation and (C.k.k. we have Pr max γi.127) = 1−O 1 .k.130).114). and fγi. h′ (lK ) = O 1 log lK .k.k. and 1 lK log lK (C.123) (C.126) (C.n belongs to the Gumbel-type. log K log log K (C.n ≤ lK − h(lK ) log log K k = e = e −e log log K +O 2 log √ log K log K (C. Pr max γi. Solving for lK .

where $c$ is a constant. Therefore, following the same analysis as in (C.62)–(C.72), we get (C.132)–(C.133):
$$C^* = O\Big(BN\, \log e^{\sqrt{8w\log\frac{K r_0^2}{p^2}}}\Big), \qquad D_N C^* = \Omega\Big(BN_{flo}(r, N)\, \log e^{\sqrt{8w\log\frac{K N r_0^2}{p^2}}}\Big).$$
Further, if $\frac{e^{\sqrt{\log KN}}}{N} \gg 1$, then
$$C^* = O\Big(BN\, \log\frac{e^{\sqrt{8w\log\frac{KN r_0^2}{p^2}}}}{N}\Big). \quad (C.134)$$

C.4 Proof of Theorem 4

For any increasing concave function $V(\cdot)$, we have (C.135)–(C.137):
$$\mathbb{E}\Big\{V\big(\max_t X_t\big)\Big\} \;\ge\; \Pr\Big(\max_t X_t \ge l_{T/S_1}\Big)\, V\big(l_{T/S_1}\big) \;=\; \Big(1 - \big(1 - \tfrac{S_1}{T}\big)^T\Big)\, V\big(l_{T/S_1}\big) \;\ge\; \big(1 - e^{-S_1}\big)\, V\big(l_{T/S_1}\big),$$
where $S_1 \in (0, T]$ and $l_{T/S_1}$ is defined through $F_X(l_{T/S_1}) = 1 - \frac{S_1}{T}$, so that $F_{\max_t X_t}(l_{T/S_1}) = \big(1 - \frac{S_1}{T}\big)^T$. Setting $S_1 = \log K$ and $V(x) = \log(1 + P_{\mathrm{con}}\, x)$, we get
$$\Big(1 - \frac{1}{K}\Big)\, \log\big(1 + P_{\mathrm{con}}\, l_{K/\log K}\big) \;\le\; \mathbb{E}\Big\{\log\big(1 + P_{\mathrm{con}} \max_k \gamma_{i,k,n}\big)\Big\}, \quad (C.138)$$
where $F_{\gamma_{i,k,n}}(l_{K/\log K}) = 1 - \frac{\log K}{K}$. Therefore, $\max_k \gamma_{i,k,n} = \Theta(l_K)$ w.h.p.
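The chain of inequalities in the proof of Theorem 4 can be exercised on a toy case (a sketch assuming Exp(1) variables and $V(x) = \log(1 + x)$, i.e., $P_{\mathrm{con}} = 1$, all illustrative choices): the expected utility of the maximum should dominate $(1 - e^{-S_1})\, V(l_{T/S_1})$:

```python
import math
import random

random.seed(4)

# Check E{V(max_t X_t)} >= (1 - e^{-S1}) * V(l_{T/S1}) for V(x) = log(1+x),
# X_t i.i.d. Exp(1); F(l_{T/S1}) = 1 - S1/T gives l_{T/S1} = log(T/S1).
T = 10**5

def sample_max_exp(T):
    # Exact max of T i.i.d. Exp(1): invert F^T(x) = (1-e^{-x})^T.
    u = random.random()
    return -math.log(1.0 - u ** (1.0 / T))

lhs = sum(math.log1p(sample_max_exp(T)) for _ in range(4000)) / 4000
for S1 in (1.0, math.log(T)):
    l = math.log(T / S1)
    rhs = (1 - math.exp(-S1)) * math.log1p(l)
    print(S1, lhs, rhs, lhs >= rhs)
```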

C.5 Proof of Theorem 5

The maximum distance between a TX and a user is $2p$. Therefore, due to the truncated path-loss model, we have
$$C_{LB}^* \;\ge\; \max_{P\in\mathcal{P}} \sum_{i=1}^{B}\sum_{n=1}^{N} \mathbb{E}\Big\{\log\Big(1 + \max_k \frac{P_{i,n}\, \beta^2 R_{i,k}^{-2\alpha}\, |\nu_{i,k,n}|^2}{1 + \beta^2 r_0^{-2\alpha} \sum_{j\ne i} P_{j,n}\, |\nu_{j,k,n}|^2}\Big)\Big\} \quad (C.139)$$
and
$$C_{LB}^* \;\le\; \max_{P\in\mathcal{P}} \sum_{i=1}^{B}\sum_{n=1}^{N} \mathbb{E}\Big\{\log\Big(1 + \max_k \frac{P_{i,n}\, \beta^2 R_{i,k}^{-2\alpha}\, |\nu_{i,k,n}|^2}{1 + \beta^2 (2p)^{-2\alpha} \sum_{j\ne i} P_{j,n}\, |\nu_{j,k,n}|^2}\Big)\Big\}. \quad (C.140)$$
Note that the only difference between the bounds in (C.139) and (C.140) is the multiplicative factor ($\beta^2 r_0^{-2\alpha}$ versus $\beta^2 (2p)^{-2\alpha}$) in the denominator of the SINR term. Defining
$$X_{i,k,n}(c) \;\triangleq\; \frac{\beta^2 R_{i,k}^{-2\alpha}\, |\nu_{i,k,n}|^2}{1 + \beta^2 c^{-2\alpha} \sum_{j\ne i} P_{j,n}\, |\nu_{j,k,n}|^2}, \quad (C.141)$$
where $r_0 \le c \le 2p$ is a constant, both bounds can be represented by the common expression (C.142)–(C.143):
$$\max_{P\in\mathcal{P}} \sum_{i=1}^{B}\sum_{n=1}^{N} \mathbb{E}\Big\{\log\big(1 + P_{i,n}\, \max_k X_{i,k,n}(c)\big)\Big\},$$
with $c = r_0$ giving the lower bound and $c = 2p$ the upper bound.

Let us denote $Y(c) \triangleq \beta^2 c^{-2\alpha} \sum_{j\ne i} P_{j,n}\, |\nu_{j,k,n}|^2$. Then $Y(c)$ has the MGF
$$M_{Y(c)}(t) \;=\; \prod_{j\ne i} \frac{1}{1 - \beta^2 c^{-2\alpha} P_{j,n}\, t}. \quad (C.144)$$
Now, conditioning on $R_{i,k} = r_{i,k}$, i.e., $G_{i,k} = \beta^2 r_{i,k}^{-2\alpha} = g$, we have (C.145)–(C.146):
$$F_{X_{i,k,n}(c)\mid G_{i,k}=g}(x) \;=\; \int_{y=0}^{\infty} \Pr\Big(|\nu_{i,k,n}|^2 \le \frac{x(1+y)}{g}\Big)\, f_{Y(c)}(y)\, dy \;=\; \int_{y=0}^{\infty} \Big(1 - e^{-\frac{x(1+y)}{g}}\Big)\, f_{Y(c)}(y)\, dy \;=\; 1 - e^{-\frac{x}{g}}\, M_{Y(c)}\Big(\!-\frac{x}{g}\Big) \;=\; 1 - e^{-\frac{x}{g}} \prod_{j\ne i} \frac{1}{1 + \beta^2 c^{-2\alpha} P_{j,n}\,\frac{x}{g}},$$
where $F_W(x)$ denotes the value taken by the cdf of the random variable $W$ at $x$. Averaging over $G_{i,k}$ using (C.29) then gives (C.147)–(C.149):
$$F_{X_{i,k,n}(c)}(x) \;=\; 1 - \frac{r_0^2}{p^2}\, e^{-\frac{x}{\beta^2 r_0^{-2\alpha}}} \prod_{j\ne i} \frac{1}{1 + \beta^2 c^{-2\alpha} P_{j,n}\,\frac{x}{\beta^2 r_0^{-2\alpha}}} \;-\; \int_{\beta^2(p+d)^{-2\alpha}}^{\beta^2(p-d)^{-2\alpha}} e^{-\frac{x}{g}} \prod_{j\ne i} \frac{1}{1 + \beta^2 c^{-2\alpha} P_{j,n}\,\frac{x}{g}}\; \frac{1}{\alpha\beta^2 p^2}\Big(\frac{\beta^2}{g}\Big)^{-1-\frac{1}{\alpha}} dg \;+\; \int_{\beta^2(p+d)^{-2\alpha}}^{\beta^2(p-d)^{-2\alpha}} e^{-\frac{x}{g}} \prod_{j\ne i} \frac{1}{1 + \beta^2 c^{-2\alpha} P_{j,n}\,\frac{x}{g}}\, ds(g).$$
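The MGF step, averaging $e^{-xY/g}$ over the weighted-exponential interference $Y(c)$, can be verified by Monte Carlo under hypothetical weights (the constants below stand in for $\beta^2 c^{-2\alpha} P_{j,n}$ and are not taken from the system model):

```python
import math
import random

random.seed(5)

# Conditional on G = g, the tail of X = g*|nu|^2/(1+Y), with Y = sum_j c_j E_j
# and E_j, |nu|^2 i.i.d. Exp(1), is  e^{-x/g} * prod_j 1/(1 + c_j x / g):
# the exponential tail times the MGF of Y evaluated at -x/g, cf. (C.144)-(C.146).
g, x = 2.0, 3.0
weights = [0.5, 1.2, 0.3]          # illustrative stand-ins for beta^2 c^{-2a} P_{j,n}

closed = math.exp(-x / g) * math.prod(1.0 / (1.0 + cj * x / g) for cj in weights)

N = 200_000
hits = 0
for _ in range(N):
    Y = sum(cj * random.expovariate(1.0) for cj in weights)
    if g * random.expovariate(1.0) / (1.0 + Y) > x:
        hits += 1
print(closed, hits / N)  # closed-form tail vs. Monte Carlo estimate
```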

At large values of $x$, the last two terms in (C.149) can be ignored: it is straightforward to show that they are negligible compared to the second term, since their exponential factors $e^{-x/g}$ with $g \le \beta^2 (p-d)^{-2\alpha} < \beta^2 r_0^{-2\alpha}$ decay more quickly to zero. Hence, one can approximate (C.150)–(C.151):
$$1 - F_{X_{i,k,n}(c)}(x) \;\approx\; \frac{r_0^2}{p^2}\, e^{-\frac{x}{\beta^2 r_0^{-2\alpha}}} \prod_{j\ne i} \frac{1}{1 + \frac{P_{j,n}\, x}{c^{2\alpha} r_0^{-2\alpha}}}, \qquad f_{X_{i,k,n}(c)}(x) \;\approx\; \frac{r_0^2}{p^2\,\beta^2 r_0^{-2\alpha}}\, e^{-\frac{x}{\beta^2 r_0^{-2\alpha}}} \prod_{j\ne i} \frac{1}{1 + \frac{P_{j,n}\, x}{c^{2\alpha} r_0^{-2\alpha}}}.$$
Note that $X_{i,k,n}(c)$ belongs to a (Gumbel-type) domain of attraction, since
$$\lim_{x\to\infty} \frac{1 - F_{X_{i,k,n}(c)}(x)}{f_{X_{i,k,n}(c)}(x)} \;=\; \beta^2 r_0^{-2\alpha}.$$
Following the analysis in (C.41)–(C.46), the distribution of $\max_k X_{i,k,n}(c)$ can therefore be approximated by a Gumbel distribution when $K$ is large. In particular,
$$\max_{k\in\{1,\ldots,K\}} X_{i,k,n}(c) - l_K(c, i, n) \;\overset{d}{\longrightarrow}\; Q, \quad (C.152)$$
where $\overset{d}{\longrightarrow}$ denotes convergence in distribution, $Q$ has the Gumbel-type cdf $\exp\{-e^{-x\, r_0^{2\alpha}/\beta^2}\}$, $x \in (-\infty, \infty)$, and $l_K(c, i, n)$ satisfies
$$F_{X_{i,k,n}(c)}\big(l_K(c, i, n)\big) \;=\; 1 - \frac{1}{K}. \quad (C.153)$$
We will now bound $C_{LB}^*$ via the upper and lower bounds represented by the common expression in (C.143). First, we consider the upper bound. From (C.143), we have (C.154)–(C.155):
$$C_{LB}^* \;\le\; \max_{P\in\mathcal{P}} \sum_{i=1}^{B}\sum_{n=1}^{N} \mathbb{E}\Big\{\log\big(1 + P_{i,n}\, \max_k X_{i,k,n}(2p)\big)\Big\} \;\le\; \max_{P\in\mathcal{P}} \sum_{i=1}^{B}\sum_{n=1}^{N} \log\Big(1 + P_{i,n}\, \mathbb{E}\big\{\max_k X_{i,k,n}(2p)\big\}\Big)$$
as $K$ tends to infinity, where the last step follows from Jensen's inequality.

Since $\max_k X_{i,k,n}(2p)$ is a non-decreasing sequence in $K$ for every realization, we can apply the monotone convergence theorem:
$$\mathbb{E}\big\{\max_k X_{i,k,n}(2p)\big\} \;\nearrow\; l_K(2p, i, n) + \mathbb{E}\{Q\}, \quad (C.156)$$
where $\nearrow$ denotes convergence of an increasing sequence; in particular, we have $L^1$ convergence. Noticing that $\mathbb{E}\{Q\} = \beta^2 r_0^{-2\alpha}\, u$, where $u$ is the Euler–Mascheroni constant ($u \approx 0.5772$), and applying the above to (C.154), we have (C.157)–(C.159):
$$C_{LB}^* \;\le\; \max_{P\in\mathcal{P}} \sum_{i=1}^{B}\sum_{n=1}^{N} \log\Big(1 + P_{i,n}\, l_K(2p, i, n) + P_{i,n}\, \beta^2 r_0^{-2\alpha}\, u\Big) \;\le\; \max_{P\in\mathcal{P}} \sum_{i=1}^{B}\sum_{n=1}^{N} \Big(1 + \frac{\beta^2 r_0^{-2\alpha}\, u}{l_K(2p, i, n)}\Big)\, \log\big(1 + P_{i,n}\, l_K(2p, i, n)\big),$$
where the last step follows because $\log(1 + ax) \le a\log(1 + x)$ for all $x \ge 0$ and $a \ge 1$. Now, from (C.150), $l = l_K(2p, i, n)$ satisfies
$$\frac{K r_0^2}{p^2}\, e^{-\frac{l}{\beta^2 r_0^{-2\alpha}}} \;=\; \prod_{j\ne i} \Big(1 + \frac{P_{j,n}\, l}{(2p)^{2\alpha} r_0^{-2\alpha}}\Big). \quad (C.160)$$
Note that the value of $l$ satisfying (C.160) decreases as the powers $\{P_{j,n},\ j\ne i\}$ increase. Therefore, $l_K(2p, i, n) \ge \bar{l}(2p, K)$ for all $(i, n)$, where $\bar{l}(2p, K)$ is computed by setting $P_{j,n} = P_{\mathrm{con}}$ for all $(j, n)$ in (C.160), i.e., $\bar{l}(2p, K)$ satisfies
$$\frac{K r_0^2}{p^2}\, e^{-\frac{\bar{l}(2p,K)}{\beta^2 r_0^{-2\alpha}}} \;=\; \Big(1 + \frac{\bar{l}(2p, K)\, P_{\mathrm{con}}}{(2p)^{2\alpha} r_0^{-2\alpha}}\Big)^{B-1}. \quad (C.161)$$
Therefore,
$$C_{LB}^* \;\le\; \Big(1 + \frac{\beta^2 r_0^{-2\alpha}\, u}{\bar{l}(2p, K)}\Big)\, \max_{P\in\mathcal{P}} \sum_{i=1}^{B}\sum_{n=1}^{N} \log\big(1 + P_{i,n}\, l_K(2p, i, n)\big). \quad (C.162)$$
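The constant $\mathbb{E}\{Q\} = \beta^2 r_0^{-2\alpha} u$ is the mean of the Gumbel limit; equivalently, for Exp($\theta$) variables, $\mathbb{E}\{\max \text{ of } K\} = \theta H_K$ exactly, and $H_K - \log K \to u$. A quick deterministic check of the Euler–Mascheroni constant:

```python
import math

# E{max of K i.i.d. Exp(theta)} = theta * H_K exactly (H_K = harmonic number),
# and H_K - log K -> u (Euler-Mascheroni, ~0.5772), the mean of the Gumbel limit.
def harmonic(K):
    return sum(1.0 / j for j in range(1, K + 1))

for K in (10, 1000, 100_000):
    print(K, harmonic(K) - math.log(K))  # approaches 0.5772...
```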

We will now consider the lower bound, i.e., (C.143) with $c = r_0$. It follows from Theorem 4: using $V(X_{i,k,n}(c)) = \log(1 + P_{i,n}\, X_{i,k,n}(c))$ in Theorem 4 and taking the summation over all $(i, n)$, we have (C.163)–(C.164):
$$C_{LB}^* \;\ge\; \big(1 - e^{-S_1}\big)\, \max_{P\in\mathcal{P}} \sum_{i=1}^{B}\sum_{n=1}^{N} \log\Big(1 + P_{i,n}\, l_{K/S_1}(r_0, i, n)\Big),$$
where $S_1 \in (0, K]$ and $F_{X_{i,k,n}(r_0)}\big(l_{K/S_1}(r_0, i, n)\big) = 1 - \frac{S_1}{K}$; in particular, putting $S_1 = 1$ gives $C_{LB}^* \ge (1 - e^{-1})\,\mathrm{OP}(r_0, K)$ in the notation below. To capture (C.162) and (C.164) in one mathematical form, define the class of optimization problems
$$\mathrm{OP}\big(c, h(K)\big): \qquad \max_{P\in\mathcal{P}} \sum_{i=1}^{B}\sum_{n=1}^{N} \log\big(1 + P_{i,n}\, x_{i,n}\big) \quad (C.165)$$
subject to (C.166)–(C.170):
$$\frac{r_0^2\, h(K)}{p^2}\, e^{-\frac{x_{i,n}}{\beta^2 r_0^{-2\alpha}}} \;=\; \prod_{j\ne i} \Big(1 + \frac{c^{-2\alpha} P_{j,n}\, x_{i,n}}{r_0^{-2\alpha}}\Big)\ \ \forall\, i, n, \qquad P_{i,n} \ge 0\ \ \forall\, i, n, \qquad \sum_{n} P_{i,n} \le P_{\mathrm{con}}\ \ \forall\, i.$$
Combining (C.162) and (C.164), we then have
$$\big(1 - e^{-S_1}\big)\, \mathrm{OP}\big(r_0, K/S_1\big) \;\le\; C_{LB}^* \;\le\; \Big(1 + \frac{\beta^2 r_0^{-2\alpha}\, u}{\bar{l}(2p, K)}\Big)\, \mathrm{OP}\big(2p, K\big),$$
where $u$ is the Euler–Mascheroni constant and $\bar{l}(2p, K)$ satisfies (C.161) (re-written below for brevity):
$$\frac{K r_0^2}{p^2}\, e^{-\frac{\bar{l}(2p,K)}{\beta^2 r_0^{-2\alpha}}} \;=\; \Big(1 + \frac{\bar{l}(2p, K)\, P_{\mathrm{con}}}{(2p)^{2\alpha} r_0^{-2\alpha}}\Big)^{B-1}.$$

C.5.1 Proof of a Property of $\mathrm{OP}\big(c, h(K)\big)$

We will now show that, for positive constants $c_1, c_2$ with $0 < c_1 \le c_2$ and any increasing function $h(\cdot)$,
$$1 \;\le\; \frac{\mathrm{OP}\big(c_2, h(K)\big)}{\mathrm{OP}\big(c_1, h(K)\big)} \;\le\; \Big(\frac{c_2}{c_1}\Big)^{2\alpha}. \quad (C.171)$$
For any given set of powers $\{P_{i,n}\ \forall\, i, n\}$, let $\{x_{i,n}(c_1)\ \forall\, i, n\}$ be the solution to the constraint (C.172) of $\mathrm{OP}(c_1, h(K))$ (re-written below for brevity):
$$\frac{r_0^2\, h(K)}{p^2}\, e^{-\frac{x_{i,n}}{\beta^2 r_0^{-2\alpha}}} \;=\; \prod_{j\ne i} \Big(1 + \frac{c_1^{-2\alpha} P_{j,n}\, x_{i,n}}{r_0^{-2\alpha}}\Big), \quad (C.172)$$
and similarly let $\{x_{i,n}(c_2)\ \forall\, i, n\}$ be the solution for $\mathrm{OP}(c_2, h(K))$. We claim that, for all $(i, n)$,
$$x_{i,n}(c_2) \;\ge\; x_{i,n}(c_1) \qquad \text{and} \qquad \Big(\frac{c_2}{c_1}\Big)^{2\alpha} x_{i,n}(c_1) \;\ge\; x_{i,n}(c_2). \quad \text{(C.173)–(C.175)}$$
The first inequality holds because the LHS of (C.172) is decreasing in $x_{i,n}$ while the RHS is increasing in $x_{i,n}$: replacing $c_1$ by $c_2 \ge c_1$ decreases the RHS (as $c^{-2\alpha}$ is decreasing in $c$), so substituting $x_{i,n}(c_1)$ into the $c_2$-constraint would leave the LHS larger than the RHS; the solution must therefore be larger, i.e., $x_{i,n}(c_2) \ge x_{i,n}(c_1)$. For the second inequality, substitute $\big(\frac{c_2}{c_1}\big)^{2\alpha} x_{i,n}(c_1)$ in place of $x_{i,n}(c_2)$ in the $c_2$-constraint (C.174): its RHS equals the RHS of (C.172) at $x_{i,n}(c_1)$ (since $c_2^{-2\alpha}\big(\frac{c_2}{c_1}\big)^{2\alpha} = c_1^{-2\alpha}$), while its LHS is smaller than the LHS of (C.172) at $x_{i,n}(c_1)$; hence the RHS of (C.174) exceeds its LHS at this point, which implies $x_{i,n}(c_2) \le \big(\frac{c_2}{c_1}\big)^{2\alpha} x_{i,n}(c_1)$.
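The ordering claimed in (C.173) can be checked on a toy instance of the constraint (a sketch: the prefactor $r_0^2 h(K)/p^2$ and the powers are replaced by arbitrary illustrative constants). Since the LHS decreases and the RHS increases in $x$, bisection finds the unique solution for each $c$:

```python
import math

# Toy instance of (C.172):  A * exp(-x) = prod_j (1 + c^{-2a} * P_j * x).
# The LHS decreases and the RHS increases in x, so the crossing is unique;
# solve by bisection and verify x(c1) <= x(c2) <= (c2/c1)^{2a} * x(c1).
def solve_x(c, A=1e6, P=(1.0, 2.0, 0.5), alpha=2.0):
    def gap(x):
        return A * math.exp(-x) - math.prod(1 + c**(-2 * alpha) * pj * x
                                            for pj in P)
    lo, hi = 0.0, 100.0                 # gap(0) > 0, gap(100) < 0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if gap(mid) > 0:
            lo = mid
        else:
            hi = mid
    return lo

c1, c2, alpha = 1.0, 1.5, 2.0
x1, x2 = solve_x(c1), solve_x(c2)
print(x1, x2, (c2 / c1) ** (2 * alpha) * x1)
```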

Using this relation and the fact that $x_{i,n}(c_2) \ge x_{i,n}(c_1)$ for all $(i, n)$, we have, for every $(i, n)$, (C.176)–(C.181):
$$1 \;\le\; \frac{\log\big(1 + P_{i,n}\, x_{i,n}(c_2)\big)}{\log\big(1 + P_{i,n}\, x_{i,n}(c_1)\big)} \;\le\; \frac{\log\Big(1 + \big(\frac{c_2}{c_1}\big)^{2\alpha} P_{i,n}\, x_{i,n}(c_1)\Big)}{\log\big(1 + P_{i,n}\, x_{i,n}(c_1)\big)} \;\le\; \Big(\frac{c_2}{c_1}\Big)^{2\alpha},$$
where the last step uses $\log(1 + ax) \le a\log(1 + x)$ for all $x \ge 0$ and $a \ge 1$. Putting the above in $\mathrm{OP}\big(c, h(K)\big)$, i.e., summing over $(i, n)$ and maximizing over $P \in \mathcal{P}$, we get
$$1 \;\le\; \frac{\mathrm{OP}\big(c_2, h(K)\big)}{\mathrm{OP}\big(c_1, h(K)\big)} \;\le\; \Big(\frac{c_2}{c_1}\Big)^{2\alpha}. \quad (C.182)$$
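The inequality $\log(1 + ax) \le a\log(1 + x)$ for $a \ge 1$, used in the last step, follows from Bernoulli's inequality $(1 + x)^a \ge 1 + ax$; a quick grid check:

```python
import math

# Verify log(1 + a*x) <= a * log(1 + x) for a >= 1, x >= 0 on a grid
# (equivalent to (1 + x)^a >= 1 + a*x, i.e. Bernoulli's inequality).
worst = 0.0
for a in (1.0, 1.5, 3.0, 10.0):
    for k in range(1, 2001):
        x = k / 100.0
        worst = max(worst, math.log1p(a * x) - a * math.log1p(x))
print(worst)  # never exceeds 0
```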

Bibliography

[1] A. Goldsmith and S. Chua, “Variable rate variable power M-QAM for fading channels,” IEEE Trans. Commun., vol. 45, pp. 1218–1230, Oct. 1997.

[2] A. Goldsmith, Wireless Communications. New York: Cambridge University Press, 2005.

[3] D. L. Goeckel, “Adaptive coding for time-varying channels using outdated fading estimates,” IEEE Trans. Commun., vol. 47, pp. 844–855, June 1999.

[4] K. Balachandran, S. Kadaba, and S. Nanda, “Channel quality estimation and rate adaptation for cellular mobile radio,” IEEE J. Select. Areas In Commun., vol. 17, pp. 1244–1256, July 1999.

[5] G. Holland, N. Vaidya, and P. Bahl, “A rate adaptive MAC protocol for multihop wireless networks,” in Proc. ACM Internat. Conf. on Mobile Computing and Networking, 2001.

[6] B. Sadegi, V. Kanodia, A. Sabharwal, and E. Knightly, “Opportunistic media access for multirate ad hoc networks,” in Proc. ACM Internat. Conf. on Mobile Computing and Networking (Atlanta, GA), 2001.

[7] J. Bicket, “Bit-rate selection in wireless networks,” Master's thesis, Massachusetts Institute of Technology, Feb. 2005.

[8] S. Wong, H. Yang, S. Lu, and V. Bharghavan, “Robust rate adaptation for 802.11 wireless networks,” in Proc. ACM Internat. Conf. on Mobile Computing and Networking, 2006.

[9] M. Rice and S. Wicker, “Adaptive error control for slowly varying channels,” IEEE Trans. Commun., vol. 42, pp. 917–925, Feb./Mar./Apr. 1994.

[10] Y.-D. Yao, “An effective go-back-N ARQ scheme for variable-error-rate channels,” IEEE Trans. Commun., vol. 43, pp. 20–23, Jan. 1995.

[11] S. Choi and K. Shin, “A class of hybrid ARQ schemes for wireless links,” IEEE Trans. Veh. Tech., vol. 50, pp. 777–790, May 2001.

[12] S. S. Chakraborty, E. Yli-Juuti, and M. Liinabarja, “An adaptive ARQ scheme with packet combining for time varying channels,” IEEE Commun. Letters, vol. 3, pp. 52–54, Feb. 1999.

[13] H. Minn, M. Zeng, and V. K. Bhargava, “On ARQ scheme with adaptive error control,” IEEE Trans. Veh. Tech., vol. 50, pp. 1426–1436, Nov. 2001.

[14] A. K. Karmokar, D. V. Djonin, and V. K. Bhargava, “POMDP-based coding rate adaptation for Type-I hybrid ARQ systems over fading channels with memory,” IEEE Trans. Wireless Commun., vol. 5, pp. 3512–3523, Dec. 2006.

[15] D. V. Djonin, A. K. Karmokar, and V. K. Bhargava, “Joint rate and power adaptation for type-I hybrid ARQ systems over correlated fading channels under different buffer-cost constraints,” IEEE Trans. Veh. Tech., vol. 57, pp. 421–435, Jan. 2008.

[16] G. E. Monahan, “A survey of partially observable Markov decision processes: Theory, models, and algorithms,” Management Science, vol. 28, pp. 1–16, Jan. 1982.

[17] C. H. Papadimitriou and J. N. Tsitsiklis, “The complexity of Markov decision processes,” Mathematics of Operations Research, vol. 12, no. 3, pp. 441–450, 1987.

[18] J. G. Proakis, Digital Communications. New York: McGraw-Hill, 3rd ed., 1995.

[19] D. P. Bertsekas, Dynamic Programming and Optimal Control. Athena Scientific, 2nd ed., 2000.

[20] C. Tan and N. Beaulieu, “On first-order Markov modeling for the Rayleigh fading channel,” IEEE Trans. Commun., vol. 48, pp. 2032–2040, Dec. 2000.

[21] H. V. Poor, An Introduction to Signal Detection and Estimation. New York: Springer, 2nd ed., 1994.

[22] I. Abou-Fayal, M. Medard, and U. Madhow, “Binary adaptive coded pilot symbol assisted modulation over Rayleigh fading channels without feedback,” IEEE Trans. Commun., vol. 53, pp. 1036–1046, June 2005.

[23] A. Gersho and R. M. Gray, Vector Quantization and Signal Compression. Boston: Kluwer Academic Publishers, 1992.

[24] P. Sadeghi, R. Kennedy, P. Rapajic, and R. Shams, “Finite-state Markov modeling of fading channels - a survey of principles and applications,” IEEE Signal Processing Mag., vol. 25, pp. 57–80, Sept. 2008.

[25] G. Song and Y. Li, “Utility-based resource allocation and scheduling in OFDM-based wireless broadband networks,” IEEE Commun. Mag., vol. 43, pp. 127–134, Dec. 2005.

[26] M. Falkner, M. Devetsikiotis, and I. Lambadaris, “An overview of pricing concepts for broadband IP networks,” IEEE Commun. Surveys Tutorials, vol. 3, pp. 2–13, second quarter 2000.

[27] L. A. DaSilva, “Pricing for QoS-enabled networks: A survey,” IEEE Commun. Surveys Tutorials, vol. 3, pp. 2–8, second quarter 2000.

[28] S. Shenker, “Fundamental design issues for the future internet,” IEEE J. Select. Areas In Commun., vol. 13, pp. 1176–1188, Sept. 1995.

[29] “IEEE 802.16-2005 and IEEE Std 802.16-2004/Cor1-2005,” http://www.ieee802.org/16.

[30] “Overview of 3GPP Release 8 V0.2.2 (2011-01),” http://www.3gpp.org/ftp/Information/WORK PLAN/Description Releases/.

[31] J. Huang, V. Subramanian, R. Agrawal, and R. Berry, “Downlink scheduling and resource allocation for OFDM systems,” IEEE Trans. Wireless Commun., vol. 8, pp. 288–296, Jan. 2009.

[32] “3GPP Technical Specifications (Release 10) 3GPP TS 25.101 V10.1.0,” http://www.3gpp.org/article/umts.

[33] I. C. Wong and B. L. Evans, “Optimal resource allocation in the OFDMA downlink with imperfect channel knowledge,” IEEE Trans. Commun., vol. 57, pp. 232–241, Jan. 2009.

[34] G. Song and Y. Li, “Cross-layer optimization for OFDM wireless networks—Parts I and II,” IEEE Trans. Wireless Commun., vol. 4, pp. 614–634, Mar. 2005.

[35] C. Y. Wong, R. S. Cheng, K. B. Letaief, and R. D. Murch, “Multiuser OFDM with adaptive subcarrier, bit and power allocation,” IEEE J. Select. Areas In Commun., vol. 17, pp. 1747–1758, Oct. 1999.

[36] T. J. Willink and P. H. Wittke, “Optimization and performance evaluation of multicarrier transmission,” IEEE Trans. Inform. Theory, vol. 43, pp. 426–440, Mar. 1997.

[37] L. M. C. Hoo, B. Halder, J. Tellado, and J. M. Cioffi, “Multiuser transmit optimization for multicarrier broadcast channels: Asymptotic FDMA capacity region and algorithms,” IEEE Trans. Commun., vol. 52, pp. 922–930, Jun. 2004.

[38] I. Wong and B. Evans, “Optimal downlink OFDMA resource allocation with linear complexity to maximize ergodic rates,” IEEE Trans. Wireless Commun., vol. 7, pp. 962–971, Mar. 2008.

[39] K. Seong, M. Mohseni, and J. M. Cioffi, “Optimal resource allocation for OFDMA downlink systems,” Proc. IEEE Int. Symposium Inform. Theory, pp. 1394–1398, Jul. 2006.

[40] A. Ahmad and M. Assaad, “Margin adaptive resource allocation in downlink OFDMA system with outdated channel state information,” Proc. IEEE Int. Symposium Personal Indoor Mobile Radio Commun., pp. 1868–1872, 2009.

[41] D. Hui and V. Lau, “Design and analysis of delay-sensitive cross-layer OFDMA systems with outdated CSIT,” IEEE Trans. Wireless Commun., vol. 8, pp. 3484–3491, July 2009.

[42] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge University Press, 2004.

[43] J. Mitchell and E. K. Lee, “Branch-and-bound methods for integer programming,” in Encyclopedia of Optimization (A. Floudas and P. M. Pardalos, eds.), vol. 2, ch. 4, pp. 509–519, The Netherlands: Kluwer Academic Publishers, 2001.

[44] J. Jang and K. B. Lee, “Transmit power adaptation for multiuser OFDM systems,” IEEE J. Select. Areas In Commun., vol. 21, pp. 171–178, Feb. 2003.

[45] M. Bénichou, J. M. Gauthier, P. Girodet, G. Hentges, G. Ribière, and O. Vincent, “Experiments in mixed-integer linear programming,” Mathematical Programming, vol. 1, pp. 76–94, 1971.

[46] A. Lodi, “Mixed integer programming computation,” in 50 Years of Integer Programming: 1958-2008 (M. Jünger et al., ed.), pp. 619–645, Berlin-Heidelberg: Springer-Verlag, 2010.

[47] G. Cornuejols, M. Conforti, and G. Zambelli, “Polyhedral approaches to mixed integer programming,” in 50 Years of Integer Programming: 1958-2008 (M. Jünger et al., ed.), pp. 343–385, Berlin-Heidelberg: Springer-Verlag, 2010.

[48] R. C. Jeroslow, “There cannot be any algorithm for integer programming with quadratic constraints,” Operations Research, vol. 21, no. 1, pp. 221–224, 1973.


[49] D. P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods. Academic Press, 1982. [50] “UTRA-UTRAN Long Term Evolution (LTE) and 3GPP System Architecture Evolution (SAE) 3GPP LTA Paper,” 3GPP. ftp://ftp.3gpp.org/Inbox/2008 web ﬁles/LTA Paper.pdf. [51] J. G. Proakis, Digital Communications. New York: McGraw-Hill, 5th ed., 2008. [52] J.-J. van de Beek, O. Edfors, M. Sandell, S. Wilson, and P. Borjesson, “On channel estimation in OFDM systems,” in Proc. IEEE Veh. Tech. Conf., vol. 2, pp. 815–819, 1995. [53] H. V. Poor, An Introduction to Signal Detection and Estimation. New York: Springer-Verlag, 2nd ed., 1994. [54] I. C. Wong and B. L. Evans, “Optimal OFDMA resource allocation with linear complexity to maximize ergodic weighted sum capacity,” Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, vol. 3, pp. III–601 –III–604, 15-20 Apr. 2007. [55] D. J. Love, R. W. Heath, V. K. N. Lau, D. Gesbert, B. D. Rao, and M. Andrews, “An overview of limited feedback in wireless communications systems,” IEEE J. Select. Areas In Commun., vol. 26, pp. 1341–1365, Oct. 2008. [56] D. P. Bertsekas and R. G. Gallager, Data Networks. Prentice Hall, 2nd ed., 1992. [57] A. K. Karmokar, D. V. Djonin, and V. K. Bhargava, “POMDP-based coding rate adaptation for type-I hybrid ARQ systems over fading channels with memory,” IEEE Trans. Wireless Commun., vol. 5, pp. 3512–3523, Dec. 2006. [58] R. Aggarwal, P. Schniter, and C. E. Koksal, “Rate adaptation via link-layer feedback for goodput maximization over a time-varying channel,” IEEE Trans. Wireless Commun., vol. 8, pp. 4276–4285, Aug. 2009. [59] G. E. Monahan, “A survey of partially observable Markov decision processes: Theory, models, and algorithms,” Management Science, vol. 28, pp. 1–16, Jan. 1982. [60] M. A. Haleem and R. Chandramouli, “Adaptive downlink scheduling and rate selection: a cross-layer design,” IEEE J. Select. Areas In Commun., vol. 23, pp. 1287–1297, Jun. 2005. [61] R. Wang and V. K. N. 
Lau, “Robust optimal cross-layer designs for TDD-OFDMA systems with imperfect CSIT and unknown interference – state-space approach based on 1-bit ACK/NAK feedbacks,” IEEE Trans. Commun., vol. 56, pp. 754–761, May 2008.

[62] Z. K. M. Ho, V. K. N. Lau, and R. S.-K. Cheng, “Cross-layer design of FDD-OFDM systems based on ACK/NAK feedbacks,” IEEE Trans. Inform. Theory, vol. 55, pp. 4568–4584, Oct. 2009.
[63] Y. J. Zhang and S. C. Liew, “Proportional fairness in multi-channel multi-rate wireless networks – Part II: The case of time-varying channels with application to OFDM systems,” IEEE Trans. Wireless Commun., vol. 7, pp. 3457–3467, Sept. 2008.
[64] C. H. Papadimitriou and J. N. Tsitsiklis, “The complexity of Markov decision processes,” Math. Operations Res., vol. 12, pp. 441–450, Aug. 1987.
[65] M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp, “A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking,” IEEE Trans. Signal Processing, vol. 50, pp. 174–188, Feb. 2002.
[66] V. Chandrasekhar, J. G. Andrews, and A. Gatherer, “Femtocell networks: a survey,” IEEE Commun. Mag., vol. 46, pp. 59–67, Sept. 2008.
[67] M. Sharif and B. Hassibi, “On the capacity of MIMO broadcast channels with partial side information,” IEEE Trans. Inform. Theory, vol. 51, pp. 506–522, Feb. 2005.
[68] P. Gupta and P. Kumar, “The capacity of wireless networks,” IEEE Trans. Inform. Theory, vol. 46, pp. 388–404, Mar. 2000.
[69] L.-L. Xie and P. Kumar, “A network information theory for wireless communication: scaling laws and optimal operation,” IEEE Trans. Inform. Theory, vol. 50, pp. 748–767, May 2004.
[70] O. Leveque and I. Telatar, “Information-theoretic upper bounds on the capacity of large extended ad hoc wireless networks,” IEEE Trans. Inform. Theory, vol. 51, pp. 858–865, Mar. 2005.
[71] M. Franceschetti, O. Dousse, D. N. C. Tse, and P. Thiran, “Closing the gap in the capacity of wireless networks via percolation theory,” IEEE Trans. Inform. Theory, vol. 53, pp. 1009–1018, Mar. 2007.
[72] M. Grossglauser and D. Tse, “Mobility increases the capacity of ad-hoc wireless networks,” in Proc. IEEE INFOCOM, vol. 3, pp. 1360–1369, 2001.
[73] S. Kulkarni and P. Viswanath, “A deterministic approach to throughput scaling in wireless networks,” IEEE Trans. Inform. Theory, vol. 50, pp. 1041–1049, Jun. 2004.


[74] M. Kountouris and J. Andrews, “Throughput scaling laws for wireless ad hoc networks with relay selection,” in Proc. IEEE Veh. Tech. Conf., pp. 1–5, Apr. 2009.
[75] P. Sripathi and J. Lehnert, “A throughput scaling law for a class of wireless relay networks,” in Proc. Asilomar Conf. Signals, Systems and Computers, vol. 2, pp. 1333–1337, Nov. 2004.
[76] W. Choi and J. Andrews, “The capacity gain from intercell scheduling in multi-antenna systems,” IEEE Trans. Wireless Commun., vol. 7, pp. 714–725, Feb. 2008.
[77] A. Feki and V. Capdevielle, “Autonomous spectrum sharing for mixed LTE femto and macro cells deployments,” in Proc. IEEE INFOCOM, pp. 1–5, Apr. 2011.
[78] D. Gesbert and M. Kountouris, “Rate scaling laws in multicell networks under distributed power control and user scheduling,” IEEE Trans. Inform. Theory, vol. 57, pp. 234–244, Jan. 2011.
[79] I. Sohn, J. Andrews, and K. Lee, “Capacity scaling of MIMO broadcast channels with random user distribution,” in Proc. IEEE Int. Symposium Inform. Theory, pp. 2133–2137, Jun. 2010.
[80] J. A. Fridy, Introductory Analysis: The Theory of Calculus. Gulf Professional Publishing, 2000.
[81] M. Ebrahimi, M. Maddah-Ali, and A. Khandani, “Throughput scaling laws for wireless networks with fading channels,” IEEE Trans. Inform. Theory, vol. 53, pp. 4250–4254, Nov. 2007.
[82] M. Vu, “MISO capacity with per-antenna power constraint,” IEEE Trans. Commun., vol. 59, pp. 1268–1274, May 2011.
[83] B. Hassibi and T. Marzetta, “Multiple-antennas and isotropically random unitary inputs: The received signal density in closed form,” IEEE Trans. Inform. Theory, vol. 48, pp. 1473–1484, Jun. 2002.
[84] J. Leinonen, J. Hamalainen, and M. Juntti, “Performance analysis of downlink OFDMA resource allocation with limited feedback,” IEEE Trans. Wireless Commun., vol. 8, pp. 2927–2937, Jun. 2009.
[85] H. Everett, “Generalized Lagrange multiplier method for solving problems of optimum allocation of resources,” Operations Research, vol. 11, pp. 399–417, May-June 1963.

[86] H. A. David and H. N. Nagaraja, Order Statistics. New York: John Wiley and Sons, 3rd ed., 2003.
[87] B. C. Arnold, N. Balakrishnan, and H. N. Nagaraja, A First Course in Order Statistics. SIAM-Society for Industrial and Applied Mathematics, 2008.
[88] N. T. Uzgoren, “The asymptotic development of the distribution of the extreme values of a sample,” in Studies in Mathematics and Mechanics Presented to Richard von Mises. New York: Academic, 1954.
[89] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions. Dover Publications, 9th printing, 1970.
