ABSTRACT

Title of dissertation:

RELIABLE COMMUNICATION OVER OPTICAL FADING CHANNELS

Kaushik Chakraborty, Doctor of Philosophy, 2005

Dissertation directed by:

Professor Prakash Narayan
Department of Electrical and Computer Engineering
and Institute for Systems Research

In free space optical communication links, atmospheric turbulence causes random fluctuations in the refractive index of air at optical wavelengths, which in turn cause random fluctuations in the intensity and phase of a propagating optical signal. These intensity fluctuations, termed “fading,” can lead to an increase in link error probability, thereby degrading communication performance. Two techniques are suggested to combat the detrimental effects of fading, viz., (a) estimation of channel fade and use of these estimates at the transmitter or receiver; and (b) use of multiple transmitter and receiver elements. In this thesis, we consider several key issues concerning reliable transmission over multiple input multiple output (MIMO) optical fading channels. These include the formulation of a block fading channel model that takes into account the slowly varying nature of optical fade; the determination of channel capacity, viz., the maximum achievable rate of reliable communication, when the receiver has perfect fade information while the transmitter is provided with varying degrees of fade information; characterization of good transmitter power control strategies that achieve capacity; and the capacity in the low and high signal-to-noise ratio (SNR) regimes.

We consider a shot-noise limited, intensity modulated direct detection optical fading channel model in which the transmitted signals are subject to peak and average power constraints. The fading occurs in blocks of duration Tc (seconds), during each of which the channel fade (or channel state) remains constant; the fade changes across successive such intervals in an independent and identically distributed (i.i.d.) manner. A single-letter characterization of the capacity of this channel is obtained when the receiver is provided with perfect channel state information (CSI) while the transmitter CSI can be imperfect. A two-level signaling scheme (“ON-OFF keying”) with arbitrarily fast intertransition times through each of the transmit apertures is shown to achieve channel capacity. Several interesting properties of the optimum transmission strategies for the transmit apertures are discussed. For the special case of a single input single output (SISO) optical fading channel, the behavior of channel capacity in the high and low SNR regimes is explicitly characterized, and the effects of transmitter CSI on capacity are studied.

RELIABLE COMMUNICATION OVER OPTICAL FADING CHANNELS

by Kaushik Chakraborty

Dissertation submitted to the Faculty of the Graduate School of the University of Maryland, College Park, in partial fulfillment of the requirements for the degree of Doctor of Philosophy, 2005

Advisory Committee:
Professor Prakash Narayan, Chair/Advisor
Professor Alexander Barg
Professor Benjamin Kedem
Professor P. S. Krishnaprasad
Professor Armand Makowski
Professor Adrian Papamarcou
Professor Sennur Ulukus

© Copyright by Kaushik Chakraborty 2005

Dedication

To my son Sattwik.

ACKNOWLEDGMENTS

I express my profound gratitude to my advisor, Professor Prakash Narayan, for his wholehearted support, timely encouragement, and invaluable guidance in every stage of this work. It has been a privilege to be able to work with him. I shall endeavor to follow his exemplary professional and ethical standards. I thank Professor Adrian Papamarcou for his help and guidance from my earliest days at the University of Maryland. I am grateful to Professors Alexander Barg, Benjamin Kedem, P. S. Krishnaprasad, Armand Makowski, Adrian Papamarcou and Sennur Ulukus for serving on my dissertation committee. I am deeply indebted to my colleagues and friends. I thank Dr. Damianos Karakos, Dr. Onur Kaya, Dr. Chunxuan Ye, Dr. Anubhav Datta, Dr. Amit K. Roy Chowdhury and Mr. Mainak Sen for numerous stimulating discussions. I thank Indrajit Bhattacharya, Souvik Mitra, Kaushik Ghosh, Amit Kale, Ayan Roy Chowdhury and Ayush Gupta for their companionship and support. I thank the Institute for Systems Research and the Department of Electrical and Computer Engineering at the University of Maryland for providing the necessary financial support during my graduate studies. I gratefully acknowledge that this research was supported by the Army Research Office under ODDR&E MURI01 Program Grant No. DAAD19-01-1-0465 to the Center for Communicating Networked Control Systems (through Boston University).

I want to take this opportunity to express my gratitude and affection to my parents and brother for their continued encouragement and support. I thank my father-in-law, mother-in-law and sisters-in-law for their affection and faith in me. I thank my dear wife Sharmistha for standing by me through thick and thin; her unconditional love and unwavering faith form the backbone of my life. Finally, I thank my son Sattwik for allowing me to spend much of his share of time towards research. To him, I owe the successful completion of this thesis.


TABLE OF CONTENTS

List of Figures  vii

1 Introduction  1
  1.1 Motivation  1
  1.2 Background  3
    1.2.1 Historical perspective  3
    1.2.2 Atmospheric optical propagation  4
    1.2.3 Brief description of the optical communication system  6
    1.2.4 Previous work  9
  1.3 Overview of the dissertation  10
  1.4 Contributions of this dissertation  12

2 SISO Poisson fading channel  16
  2.1 Introduction  16
  2.2 Problem formulation  17
  2.3 Statement of results  21
  2.4 Proofs  27
  2.5 Numerical example  44
  2.6 Discussion  48

3 MIMO Poisson channel with constant channel fade  50
  3.1 Introduction  50
  3.2 Problem formulation  51
  3.3 Statement of results  55
    3.3.1 Channel capacity  55
    3.3.2 Optimum transmission strategy  57
      3.3.2.1 Individual average power constraints  58
      3.3.2.2 Average sum power constraint  62
  3.4 Proofs  64
  3.5 Numerical examples  86
  3.6 Discussion  95

4 MIMO Poisson channel with random channel fade  98
  4.1 Introduction  98
  4.2 Problem formulation  99
  4.3 Statement of results  103
    4.3.1 Channel capacity  103
    4.3.2 Optimum power control strategy  105
    4.3.3 Symmetric MIMO channel with isotropic fade  106
  4.4 Proofs  111
  4.5 Numerical examples  136
  4.6 Discussion  141

5 Conclusions  143
  5.1 Directions for future research  143
  5.2 RF/optical wireless sum channel  146
    5.2.1 Motivation  146
    5.2.2 Problem formulation  148
    5.2.3 Channel capacity  153

A  156
  A.1 Proof of (2.37)  156
  A.2 Proof of (2.42)  158
  A.3 Proof of (2.74)  159

B  160
  B.1 Proof of (3.34)  160
  B.2 Proof of (3.43)  161
  B.3 Proof of (3.52)  163
  B.4 Proof of (3.75)  165

C  167
  C.1 Proof of (4.37)  167
  C.2 Proof of (4.60)  168
  C.3 Proof of (4.85)  169

D Proof of Theorem 10  170

Bibliography  174

LIST OF FIGURES

1.1 An intensity modulation direct detection (IM/DD) optical communication system.  7
2.1 Poisson fading channel.  17
2.2 Block fading channel.  19
2.3 Subintervals of [0, T].  36
2.4 Discrete channel approximation.  37
2.5 Optimal power control law with perfect CSI at transmitter and receiver.  45
2.6 Comparison of capacity versus σ for various assumptions on transmitter CSI.  46
2.7 Comparison of capacity versus SNR for various assumptions on transmitter CSI.  47
3.1 N × M MIMO Poisson channel with deterministic channel fade.  51
3.2 The structure of the optimal solution for N = 2.  60
3.3 The possible variations of I0(x) versus x: (a) I0′(0) < 0; (b) I0′(x0) = 0 for some 0 ≤ x0 < σ; (c) I0′(σ−) > 0, I0′(σ+) < 0; (d) I0′(x0) = 0 for some σ < x0 ≤ (1 + 1/a)σ; (e) I0′((1 + 1/a)σ) > 0.  84
3.4 Decision region of optimal duty cycles for individual average power constraints σ1, σ2 for Example 3.1. Region Rk corresponds to condition (k) in Theorem 5, k = 1, ..., 6.  87
3.5 Optimal duty cycles for an average sum power constraint σ for Example 3.1.  88
3.6 Plot of capacity (Cind) versus σ1, σ2 for Example 3.1.  89
3.7 Plot of capacity (Csum) versus σ for Example 3.1.  90
3.8 Decision region of optimal duty cycles for individual average power constraints σ1, σ2 for Example 3.2.  91
3.9 Optimal duty cycles for an average sum power constraint σ for Example 3.2.  92
3.10 Plot of capacity (Cind) versus σ1, σ2 for Example 3.2.  93
3.11 Plot of capacity (Csum) versus σ for Example 3.2.  94
4.1 N × M MIMO Poisson channel with random channel fade.  99
4.2 Block fading channel.  100
4.3 Mirror states of a 2 × 2 MIMO Poisson channel.  107
4.4 State set diagram for Example 4.1.  137
4.5 Lower bounds on capacity for Example 4.1.  138
4.6 State set diagram for Example 4.2.  139
4.7 Channel capacity for Example 4.2.  140
5.1 Block schematic of the DMC sum channel.  146
5.2 Block schematic of the combined RF/optical sum channel.  148
5.3 RF fading channel model.  148
5.4 Optical channel model.  149

Chapter 1

Introduction

1.1 Motivation

Free space optics (FSO) is emerging as an attractive technology for several applications, e.g., metro network extensions, last mile connectivity, fiber backup, RF-wireless backhaul, seamless wireless extension of the optical fiber backbone, and enterprise connectivity [49], to name a few. There are many benefits of wireless optical systems, viz., high security, immunity to RF interference, rapid deployment time, inexpensive components, and lack of licensing regulations. Consequently, free space optical communication through the turbulent atmospheric channel has received much attention in recent years [20, 21, 23, 34, 48].

In free space optical communication links, atmospheric turbulence can cause random variations in the refractive index of air at optical wavelengths, which, in turn, result in random fluctuations in both the intensity and phase of a propagating optical signal [24, 51, 53, 54]. Such fluctuations can lead to an increase in the link error probability, thereby degrading communication performance [54]. The fluctuations in the intensity of the transmitted optical signal, termed “fading,” can be modeled in terms of an ergodic lognormal process with a correlation time of the order of 10⁻³ to 10⁻² second [44]. In practice, these fades can routinely exceed 10 dB. In the systems under consideration, data rates can typically be of the order of gigabits per second. Therefore, the free space optical channel is a slowly varying fading channel with occasional deep fades that can affect millions of consecutive bits [21].

A technique that is often used to achieve higher rates of reliable communication over fading channels is to use estimates of the channel fade (also referred to as path gain or channel state) at the transmitter and the receiver. In optical fading channels, at typical speeds, nearly 10⁶ bits are transmitted during each coherence period, a small fraction of which can be used by the receiver to form good estimates of the channel fade. Depending on the availability of a feedback link and the amount of acceptable delay, the transmitter can be provided with complete or partial knowledge of the channel state, which can be used for adaptive power control, thereby achieving higher throughputs (cf. e.g., [6, 34, 40]). See [40] for a comprehensive review of relevant research in radio frequency (RF) communication.

Another popular technique to combat the detrimental effects of fading is the use of spatial diversity in the form of multiple transmit and receive elements. In RF communication, the use of multiple transmit and receive antennae, together with instantaneous realizations of the channel state estimated at the receiver, has been shown to significantly improve the communication throughput in the presence of channel fade. For a survey of recent results, see [16]. In a recent experimental study [29], multiple laser beams were employed to improve communication performance in a turbulent atmospheric channel. Some attempts have since then been made to characterize analytically the benefits of MIMO communication in optical fading channels [20, 21, 51]. These assumptions lead to a variety of interesting problems which are the subject of current investigation.
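As a rough numerical illustration of the time scales discussed above, the following sketch computes the number of bits sent per coherence period and draws i.i.d. lognormal block fades. All values are representative figures taken from the text (gigabit-per-second rates, millisecond-scale correlation times); the lognormal spread is an assumed parameter, chosen only so that the mean fade is normalized to one.

```python
import numpy as np

# Representative time scales (not measurements from a specific system):
data_rate_bps = 1e9        # ~1 Gbit/s data rate
coherence_time_s = 1e-3    # fade correlation time ~10^-3 s

# Bits sent during one coherence period: ~10^6, so a single deep fade
# can affect millions of consecutive bits.
bits_per_coherence = data_rate_bps * coherence_time_s

# Block fading sketch: the fade S is constant within a coherence interval
# and i.i.d. across intervals, with a lognormal marginal (ergodic lognormal
# model). sigma_x is an assumed log-amplitude spread; the mean offset
# -sigma_x^2/2 normalizes the fade so that E[S] = 1.
rng = np.random.default_rng(0)
sigma_x = 0.5
num_blocks = 100_000
S = rng.lognormal(mean=-sigma_x**2 / 2, sigma=sigma_x, size=num_blocks)

print(f"bits per coherence period: {bits_per_coherence:.0e}")
print(f"empirical mean fade E[S]: {S.mean():.3f}")
```

With this normalization the average received intensity equals the average transmitted intensity, which is the convention used for the fade coefficient later in the chapter.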

In this dissertation, we consider several important issues concerning reliable communication over MIMO optical fading channels. These include the formulation of a block fading channel model that takes into account the slowly varying nature of optical fade; the determination of channel capacity, viz., the maximum achievable rate of reliable communication, when the receiver has very good estimates of channel state information (CSI) while the transmitter is provided with varying degrees of CSI; characterization of good transmitter power control strategies that achieve capacity; and the limiting behavior of channel capacity in low and high signal-to-noise ratio (SNR) regimes.

1.2 Background

1.2.1 Historical perspective

Historically, conveying information optically through the atmospheric channel is one of the most primitive forms of communication known to mankind. Ever since the discovery of fire, man has used fire beacons and smoke signals for establishing and maintaining contact with other human beings. Classical works, e.g., Euclid’s Optica and Hero’s Catoptrica, discuss how the ancient Greeks used the sun’s reflection from metal disks to signal messages over vast distances. Early naval communication relied on signaling flags and shuttered lamps [17]. Claude Chappe invented the “optical telegraph” in the 1790s, a series of semaphores mounted on towers, where human operators relayed messages from one tower to the next [22]. In 1880, Alexander Graham Bell used a “photophone” to transmit telephone signals via an intensity-modulated optical beam 200 m through air to a distant receiver [28]. However, Bell’s other technology, which evolved to telephony and wireless telegraphy, became the preferred mode of communication.

The resurrection of modern optical communication can be attributed to the invention of lasers in the late 1950s. Laser radiation is monochromatic, coherent, and intense, which makes it an attractive carrier for high data rate communication. Most of the early development of unguided laser communication was directed towards satellite and remote military applications. In the commercial sector, as the popularity of optical fibers soared in the 1970s and 1980s, the interest in wireless optical communication dwindled. However, since the mid-1990s, an increasing demand for broadband communication, fuelled by the explosive growth of the internet, has opened new horizons for optical wireless communication, especially in urban and metropolitan environments. Free space optics was voted as one of the ten “hottest” technologies in 2001 [36].

1.2.2 Atmospheric optical propagation

An optical signal propagating through the free space atmospheric channel undergoes degradations due to many factors. Molecular and aerosol absorbers¹ in the atmosphere, and thermal inhomogeneities in the troposphere² cause absorption and scattering of the propagating optical field.

¹ Water vapor, oxygen, nitrogen, carbon dioxide, ozone, dust, ice, organic molecules, etc., contribute to absorption and scattering at optical wavelengths.
² The troposphere is the lowest layer of the atmosphere, extending from the earth surface up to an elevation of about 15 km (9 miles).

Absorption and scattering can have several detrimental effects on the propagating optical wavefront, viz., attenuation, beam spread, angular spread, multipath spread, Doppler spread and depolarization [45]. These effects can be substantially reduced by choosing the operating carrier frequency judiciously. For details, see [27, 50]. We shall focus our attention on atmospheric optical communication under clear weather conditions. The effects of atmospheric absorption are negligible for short to medium range communication links, and will be ignored henceforth. Under clear weather conditions, multipath spread is of the order of 10⁻¹² second for line-of-sight communication [44], and will be ignored. We shall assume that typical transmitter beam divergence and receiver field-of-view values are much larger than the beam spread and angular spread induced by turbulence [45]. We shall also assume that the fade is frequency nonselective (which is justified by the negligible effect of multipath spread), so that the intensities of the optical signals at the transmit and receive apertures for each transmit-receive aperture pair are related via a multiplicative fade coefficient [18].

Heated air rising from the earth and man-made devices such as heating ducts create thermal inhomogeneities in the troposphere. This leads to random variations in the refractive index of air at optical wavelengths, a phenomenon referred to as “atmospheric turbulence” or “refractive turbulence,” which, in turn, results in random fluctuations in both the intensity and phase of a propagating optical signal [24, 48]. Atmospheric turbulence manifests itself in several familiar atmospheric conditions, e.g., the twinkling of stars at night and the shimmering of the horizon on a hot day [28]. The intensity fluctuations of the propagating optical signal, termed “fading,” can be modeled in terms of an ergodic lognormal process with a correlation time of the order of 1–10 milliseconds [44].

1.2.3 Brief description of the optical communication system

In conventional optical communication systems, the transmitter comprises a photoemitter, e.g., a laser diode (LD) or a light-emitting diode (LED), while the receiver comprises a photodetector, e.g., a PIN diode, a semiconductor photodiode, or an avalanche photodiode (APD).³ The simplest and most popular technique for modulating the information onto the emitted light signal is intensity modulation [13, 26]. The intensity, and hence the instantaneous power, of the transmitted optical signal is proportional to the modulating current at the photoemitter.⁴ At the direct detection photodetector, an electrical current is generated by photoabsorption at a rate which is proportional to the instantaneous optical power incident on the active detector surface. The intrinsically discrete nature of the current gives rise to the shot noise observed in low light level detection. At typical operating frequencies, additional shot noise results from external sources of radiation (“background noise”), as well as from spontaneously generated charge particles (“dark current”) [13, 26].

³ Conventional diodes used for wireless optical communication emit light in the infrared spectrum, at wavelengths in the range of 850–950 nanometers [26, 44].
⁴ Herein lies an important difference between optical and RF communication: in RF communication, typically the amplitude of the propagating RF signal is proportional to the modulating electrical current, whence the instantaneous power of the transmitted signal is proportional to the square of the input current.

In this dissertation, we restrict our attention to an idealized direct detection optical receiver, in which the photocurrent generated at the detector is modeled as a doubly stochastic Poisson counting process (PCP) whose rate is proportional to the incident optical power at the detector plus a constant (dark current rate) [13, 21]. This simple receiver model is valid in the shot noise limited regime, i.e., when the effect of background radiation is minimal compared to the incident optical signal at the detector. A block schematic diagram of a shot noise limited free space optical communication system is given in Figure 1.1.

[Figure 1.1: An intensity modulation direct detection (IM/DD) optical communication system.]

The IR-valued amplitude of the transmitted optical waveform is given by

    xO(t, r) = x(t) x̃(r) cos(2π f0 t),  t ≥ 0,  r ∈ IR³,    (1.1)

where {x(t), t ≥ 0} is the IR₀⁺-valued intensity of the transmitted signal (which is proportional to the modulating electrical current at the photoemitter), {x̃(r), r ∈ IR³} is the IR-valued directional (or spatial) component of the amplitude of the transmitted waveform, and f0 is the optical carrier frequency. In practice, the light emitting devices are limited in transmission power, which is captured in terms of the following peak and average transmitter power constraints:

    0 ≤ x(t) ≤ A,  0 ≤ t ≤ T,
    (1/T) ∫₀ᵀ x(τ) dτ ≤ σA,    (1.2)

where A > 0 and 0 ≤ σ ≤ 1 are fixed.

The optical waveform, propagating through the turbulent atmospheric channel, undergoes frequency nonselective time-varying fading, so that the amplitude of the received optical waveform (in the absence of dark current) is given by

    RO(t, r) = S(t) xO(t, r),  t ≥ 0,  r ∈ IR³,    (1.3)

where {S(t), t ≥ 0} is the IR₀⁺-valued multiplicative channel fade (or channel state). The electrical current generated at the (idealized) direct detection receiver is a Poisson counting process (PCP) {Y(t), t ≥ 0} with rate (or intensity)

    Λ(t) = R(t) + λ0,  t ≥ 0,    (1.4)

where {R(t) ∝ S(t) x(t), t ≥ 0} is the intensity of the received optical signal, and λ0 ≥ 0 is the background noise (dark current) rate, which is assumed to be constant. The receiver is assumed to possess perfect channel state information (CSI) {S(t), t ≥ 0}, while the transmitter CSI is given by {U(t) = h(S(t)), t ≥ 0}, with a given mapping h : IR₀⁺ → U, where U is arbitrary. Henceforth we refer to this channel model as the Poisson fading channel.

Other models have been proposed in the literature that describe the background noise as additive white Gaussian (cf. e.g., [20, 23, 51, 53, 54] and the references therein). We also assume that the effects of infrared and visible background radiation can be effectively captured by a constant rate at the photodetector. It should be noted that our model ignores the bandwidth limitations of the transmitter and receiver devices currently used in practice. These assumptions lead to simple channel models amenable to an exact analysis. In this dissertation, we provide a systematic treatment of the problem of the determination of the capacity of the multiple input multiple output (MIMO) Poisson fading channel, and analyze how the knowledge of varying levels of CSI at the transmitter with perfect CSI at the receiver can be favorably used to enhance capacity.

1.2.4 Previous work

The direct detection photon channel without any fading has been studied extensively in the literature. Most of these studies focus on the Poisson channel with transmitter power constraints. Kabanov [25] used martingale techniques⁵ to determine the capacity of the Poisson channel subject to a peak transmitter signal power constraint. Davis [11] considered the same channel subject to both peak and average transmitter power constraints. In [52], Wyner employed more direct methods to derive not only the capacity, but also an exact expression for the error exponent of the Poisson channel subject to peak and average power constraints. A strong converse for the coding theorem using a sphere packing bound was derived in [4]. The capacity of the Poisson channel with random noise intensity subject to time-varying peak and average power constraints was derived in [12]. The capacity region of the Poisson multiple-access channel was obtained in [32], and the error exponents for this channel were determined in [2]. Capacity and decoding rules for the Poisson arbitrarily varying channel were discussed in [3]. The Poisson broadcast channel has been addressed in [33].

⁵ See [1] for a comprehensive treatment of the application of martingale techniques to the theory of point processes. In this dissertation, we do not employ this powerful analytical tool to prove our results, but rely instead on more elementary methods.
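The doubly stochastic Poisson model of Section 1.2.3, with rate Λ(t) = R(t) + λ0 and R(t) ∝ S(t)x(t), can be sketched numerically. The following minimal simulation uses an ON-OFF intensity waveform and a normalized lognormal fade; all numeric values (peak level, duty cycle, dark current rate, time step) are arbitrary illustrative choices, not parameters from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumed, not from the text):
A = 1.0         # peak power constraint: 0 <= x(t) <= A
sigma = 0.4     # average power constraint: (1/T) * integral of x(t) <= sigma*A
lambda0 = 0.1   # constant dark current (background noise) rate
dt = 1e-3       # time discretization (seconds)
T = 100.0       # observation window (seconds)
n = int(T / dt)

# ON-OFF keying: x(t) is either 0 or the peak A; a duty cycle of sigma
# meets the average power constraint in expectation.
x = A * (rng.random(n) < sigma)

# Lognormal fade samples, normalized so that E[S] = 1 (assumed spread 0.5).
S = rng.lognormal(mean=-0.125, sigma=0.5, size=n)

# Photocurrent counts per slot: Poisson with mean (S(t)x(t) + lambda0)*dt,
# taking the proportionality constant in R(t) = S(t)x(t) to be 1.
counts = rng.poisson((S * x + lambda0) * dt)
Y = int(counts.sum())

# Since E[S] = 1, the expected total count is (sigma*A + lambda0)*T = 50.
print(f"total counts over [0, T]: {Y} (expected about 50)")
```

The simulation illustrates the doubly stochastic character of the model: the Poisson rate is itself random, through both the message waveform x(t) and the fade S(t).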

In practice, the transmitting devices are limited in bandwidth. This important issue was considered in [41, 42], where the author discussed the Poisson channel with constraints on the intertransition times of the transmitted signal. More general spectral constraints on the transmitted signals were considered in [43]. In [34], bounds on channel capacity were obtained for a discrete-time Poisson channel.

In the last decade, several researchers have investigated the use of multiple transmit and receive apertures in optical channels. In [31], the authors have demonstrated that the use of multiple transmit and receive apertures can lead to a reduction in bit error rate. Of direct relevance to our work are the recent results of Haas and Shapiro [19, 21]. Upper and lower bounds on the capacity of the MIMO Poisson channel with deterministic channel fade are derived in [19], whereas ergodic and outage capacity issues for the MIMO Poisson channel with random channel fade are addressed in [21].

1.3 Overview of the dissertation

This dissertation is organized as follows. We begin with the single input single output (SISO) Poisson fading channel in Chapter 2. We propose a block fading channel model that takes into account the slowly varying nature of optical fade. The channel coherence time Tc is used as a measure of the intermittent coherence of the time-varying fade. The channel fade is assumed to remain fixed over time intervals of width Tc, and to change in an independent and identically distributed (i.i.d.) manner across successive such intervals. The receiver is assumed to possess perfect CSI, while the transmitter CSI may be imperfect. Under these assumptions, we characterize the capacity of the Poisson fading channel when the transmitted signals are subject to peak and average power constraints. Two extreme cases of this general formulation are of special interest, viz., (a) perfect CSI at transmitter, and (b) no CSI at transmitter. We also study the behavior of channel capacity in the high and low signal-to-noise regimes, analyze properties of optimal transmission strategies that achieve channel capacity, and identify situations when the availability of good estimates of CSI at the transmitter can lead to increased throughput.

In Chapter 3, we consider a MIMO Poisson channel with constant channel fade, i.e., the channel fade is assumed to remain constant throughout the duration of transmission. Though this assumption of deterministic channel fade does not capture the practical channel conditions effectively, this model is rich enough to reveal some of the complexities of communicating over the MIMO Poisson channel with random fade (which is discussed in Chapter 4), and to provide useful insights into the structure of efficient transmission strategies. In this setting, we allow for two kinds of transmitter power constraints: (a) peak and average transmitter power constraints on individual transmit apertures; and (b) individual peak power constraints and an average power constraint on the sum of the transmitted signals from all transmit apertures. We characterize the properties of optimal transmission strategies that achieve capacity, and discuss their implications on communication system design. The average sum power constraint, although apparently artificially introduced, finds an application in Chapter 4, where a simplified expression for channel capacity is derived for the symmetric MIMO Poisson channel with isotropic channel fade.
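The two families of transmitter power constraints for multiple transmit apertures can be made concrete with a small sketch. The aperture count, peak levels, duty cycles, and the sum-power limit below are arbitrary illustrative choices (the particular normalization of the sum constraint is an assumption for the example, not the thesis's exact formulation).

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed example values: N apertures, per-aperture peak limits and
# individual average duty cycles.
N, n_slots = 3, 10_000
A = np.array([1.0, 1.0, 1.0])          # peak limits, one per aperture
sigma_ind = np.array([0.3, 0.5, 0.2])  # individual average duty cycles

# ON-OFF keying per aperture: each aperture is either OFF (0) or at its
# peak, with its own duty cycle.
x = (rng.random((N, n_slots)) < sigma_ind[:, None]) * A[:, None]

# (a) individual peak and average power constraints:
peak_ok = bool(np.all(x <= A[:, None]))
avg = x.mean(axis=1)                   # should be close to sigma_ind * A

# (b) individual peak constraints plus an average constraint on the *sum*
# of the transmitted signals (sigma_sum is an assumed example limit).
sigma_sum = 1.1
sum_avg = float(x.sum(axis=0).mean())

print("peak constraints satisfied:", peak_ok)
print("per-aperture average powers:", np.round(avg, 3))
print("average of summed signals:", round(sum_avg, 3), "<=", sigma_sum * A[0])
```

The distinction matters because under (b) the apertures can trade average power against one another, which is what makes the sum constraint useful later for the symmetric channel.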

The MIMO Poisson channel with random channel fade is addressed in Chapter 4. A block fading channel model is considered, in which the channel fade matrix is assumed to remain unvarying on intervals of duration Tc, and to vary in an i.i.d. manner across successive such intervals. The transmit apertures are subject to individual peak and average power constraints. The receiver is assumed to possess perfect CSI, while the transmitter CSI can be imperfect. In this setting, we investigate the general capacity problem for the MIMO Poisson fading channel. Several important properties of optimal transmission strategies are identified for the symmetric MIMO channel when the channel fade is isotropically distributed and the transmitter has perfect CSI.

Finally, in the concluding section in Chapter 5, a new RF/optical sum channel model is introduced, in which information is transmitted using one of two available wireless channels, viz., an RF fading channel and an optical channel (without fading), depending on the instantaneous RF channel conditions. By randomly switching between the two channels, additional information can be conveyed. We outline some of the information theoretic issues which might arise in communication over such a hybrid RF/optical wireless channel.

1.4 Contributions of this dissertation

We conclude this chapter by outlining the main contributions of this dissertation.

(i) A block fading channel model has been introduced for the slowly varying free space optical fading channel. The channel fade is assumed to remain unvarying on intervals of duration Tc, where Tc is the channel coherence time, and is assumed to vary in an i.i.d. manner across successive such intervals. This simplistic abstraction of a complicated physical phenomenon allows us to derive exact expressions for channel capacity and to identify properties of optimal transmission strategies for reliable communication over a shot-noise limited Poisson fading channel.

(ii) We have analyzed two techniques to combat fading in optical channels, viz., (a) estimation of channel state, and use of channel state information (CSI) at the transmitter and the receiver; and (b) use of multiple apertures at the transmitter and receiver. In this setting, we have obtained a single-letter characterization of the capacity of a block fading MIMO Poisson channel subject to peak and average transmitter power constraints, when the receiver possesses perfect CSI while the transmitter CSI can be imperfect, and discussed several properties of optimal transmission and reception strategies.

(iii) We have demonstrated that a two-level signaling scheme ("ON-OFF keying"), in which each transmit aperture either remains silent ("OFF") or transmits at its peak power level ("ON"), with arbitrarily fast intertransition times, can achieve channel capacity. Furthermore, the capacity of the block fading Poisson channel with perfect receiver CSI does not depend on the coherence time Tc.

(iv) For the SISO Poisson fading channel, it has been established that a knowledge of CSI at the transmitter leads to higher channel capacity in the high signal-to-noise ratio (SNR) regime when the average power constraint is stringent, while in the low SNR regime, the transmitter CSI does not increase capacity. This is in contrast to the results cited in [21], where the authors have claimed that in the high SNR regime, a knowledge of CSI at the transmitter does not improve capacity.

(v) For the MIMO Poisson channel with constant channel fade, a key property of the correlation structure of the transmitted signals from all the transmit apertures has been identified: it has been established that whenever a transmit aperture remains ON, all the transmit apertures with higher allowable average power levels must also remain ON. Under a symmetric average power constraint, it follows that the "simultaneous ON-OFF keying" strategy, which dictates all the transmit apertures to simultaneously remain either ON or OFF, can achieve channel capacity, thereby establishing the tightness of the lower bound proposed in [19].

(vi) For the symmetric MIMO Poisson channel with isotropically distributed random fade, the notion of "mirror states" has been introduced to obtain a simplified expression for channel capacity. We have identified a partitioning of the channel state set, which separates the channel states according to the relative ordering of the transmit apertures' optimal average conditional duty cycles. The simultaneous ON-OFF keying lower bound is shown to be strictly suboptimal for the MIMO Poisson channel with isotropic fade, even when the power constraints are symmetric. An improved lower bound is proposed based on our results for the MIMO Poisson channel with deterministic fade and a sum average power constraint.

(vii) We have studied a combined RF/optical sum channel with perfect RF fade information at the receiver and partial RF fade information at the transmitter. We have determined an expression for the capacity of this channel in terms of the capacities of the individual RF and optical channels, and a gain in the code rate due to random switching. A complete characterization of the optimal power control and switching strategies, which are functions of the transmitter RF fade information, remains unresolved, primarily due to the lack of an exact expression for the capacity of the discrete-time Poisson channel.

Chapter 2

SISO Poisson fading channel

2.1 Introduction

In this chapter, we consider a single-user single input single output (SISO) Poisson fading channel limited by shot noise. Information is transmitted over this channel by modulating the intensity of an optical signal, and the receiver performs direct detection, which, in effect, counts individual photon arrivals at the photodetector. The nonnegative transmitted signal is constrained in its peak and average power. A block fading channel model is introduced that accounts for the slowly varying nature of optical fade: the channel fade remains constant for a coherence interval of a fixed duration Tc (seconds), and changes across successive such intervals in an independent and identically distributed (i.i.d.) fashion. We then consider situations in which varying levels of information regarding the channel fade, i.e., channel state information (CSI), can be provided to the transmitter, while the receiver has perfect CSI. We provide a systematic treatment of the problem of the determination of the capacity of a Poisson fading channel, and analyze how the knowledge of varying levels of CSI at the transmitter, with perfect CSI at the receiver, can favorably be used to enhance capacity. Our model is similar to the one studied in [21], but in this chapter is limited to the case of a SISO single-user channel.

Here, we consider a more general class of problems than those considered in [21]. From our new capacity results, conclusions are drawn that differ considerably from those reported in [21].

The remainder of this chapter is organized as follows. Section 2.2 deals with the problem formulation. Our results are stated in Section 2.3 and proved in Section 2.4. An illustrative example of a channel with a lognormal fade is considered in Section 2.5. Finally, some concluding remarks are provided in Section 2.6.

2.2 Problem formulation

The following notation will be used throughout this dissertation. Random variables (rvs) are denoted by upper-case letters and random vectors by bold upper-case letters. We use the notation X_i^j to denote a sequence of rvs {X_i, X_{i+1}, ..., X_j}; when i = 1, we use X^j = {X_1, ..., X_j}. A continuous time random process {X(t), a <= t <= b} is denoted in shorthand notation by X_a^b; when a = 0, we use X^b = {X(t), 0 <= t <= b}. Realizations of rvs and random processes, which are denoted in lowercase letters, follow the same convention.

[Figure 2.1: Poisson fading channel. The transmitted signal x(t) is scaled by the fade S(t) and added to the dark current rate λ0 to form the rate Λ(t) of the Poisson counting process (PCP) output Y(t).]

A block schematic diagram of the channel model is given in Figure 2.1. The input to the channel is an IR+-valued (see footnote 1) signal x^T = {x(t), 0 <= t <= T}, which is proportional to the transmitted optical power, and which satisfies peak and average power constraints of the form:

0 <= x(t) <= A, 0 <= t <= T,
(1/T) ∫_0^T x(t) dt <= σA,   (2.1)

where the peak power A > 0 and the ratio of average-to-peak power σ, 0 <= σ <= 1, are fixed. Let Σ(T) denote the space of nondecreasing, left-continuous, piecewise-constant, Z0+-valued functions {g(t), 0 <= t <= T} with g(0) = 0. For a given IR+-valued transmitted signal {x(t), t >= 0}, the received signal Y^∞ = {Y(t), t >= 0} is a Z0+-valued nondecreasing (left-continuous) Poisson counting process (PCP) with rate (or intensity) equal to

Λ(t) = S(t)x(t) + λ0, t >= 0,

where {S(t), t >= 0} is the IR+-valued random fade, and λ0 >= 0 is the background noise (dark current) rate, which is assumed to be constant. Note that Y^∞ is an independent increments process with Y(0) = 0, such that for 0 <= τ, t < ∞,

Pr{Y(t + τ) − Y(t) = j | Λ_t^{t+τ} = λ_t^{t+τ}} = (1/j!) e^{−Γ(λ_t^{t+τ})} Γ^j(λ_t^{t+τ}), j = 0, 1, ...,

where Γ(λ_t^{t+τ}) = ∫_t^{t+τ} λ(u) du. Physically, the jumps in Y^∞ correspond to the arrival of photons in the receiver.

Footnote 1: The set of nonnegative real numbers is denoted by IR+, the set of positive integers by Z+, and the set of nonnegative integers by Z0+.

The channel fade, in other words the path gain, is modeled by an IR+-valued random process {S(t), t >= 0}. We assume that the channel fade remains fixed over time intervals of width Tc, and changes in an i.i.d. manner across successive such intervals; the channel coherence time Tc is a measure of the intermittent coherence of the time-varying channel fade. For k = 1, 2, ..., let the channel fade on [(k − 1)Tc, kTc) be denoted by the rv S[k] (see Figure 2.2), i.e.,

S(t) = S[k], t ∈ [(k − 1)Tc, kTc), k = 1, 2, ....

The channel fade is then described by the random sequence S^∞ = {S[1], S[2], ...} of i.i.d. repetitions of a rv S with known distribution. Our general results hold for a broad class of distributions for S which satisfy the following technical conditions: Pr{S > 0} = 1, E[S] < ∞ and E[S log S] < ∞ (see footnote 2). In an illustrative example discussed in Section 2.5 below, we shall use the lognormal distribution with normalized channel fade, as advocated in [21]. Note that the rate of the received signal Y^∞ is now given by (see footnote 3)

Λ(t) = S[⌈t/Tc⌉] x(t) + λ0, t >= 0.

[Figure 2.2: Block fading channel. The transmission interval [0, T], T = KTc, is divided into coherence intervals [(k − 1)Tc, kTc) with fade values S[1], S[2], ..., S[K].]

Footnote 2: Unless mentioned otherwise, all logarithms are natural logarithms.
Footnote 3: The expression ⌈x⌉ denotes the smallest integer greater than or equal to x.
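The block fading channel just described is easy to simulate. The sketch below is illustrative only: the lognormal fade parameters, the ON-OFF input, and all numerical values are assumptions, not values from the text. It draws i.i.d. fades S[k], holds each constant over a coherence interval of duration Tc, and generates slot-by-slot photon counts of a Poisson process with rate Λ(t) = S[⌈t/Tc⌉]x(t) + λ0.

```python
import numpy as np

rng = np.random.default_rng(0)

A, lam0, Tc, K = 10.0, 1.0, 1e-3, 5    # peak power, dark current rate, coherence time, # coherence intervals
L = 100                                 # time slots per coherence interval
dt = Tc / L                             # slot duration

# i.i.d. lognormal fade S[1], ..., S[K], normalized so that E[S] = 1 (illustrative choice)
sig_f = 0.5
S = rng.lognormal(mean=-sig_f**2 / 2, sigma=sig_f, size=K)

# ON-OFF input: each slot transmits 0 or A, with duty cycle sigma (illustrative choice)
sigma = 0.3
x = A * (rng.random(K * L) < sigma)

S_slot = np.repeat(S, L)                # fade value in force during each slot

# photon counts per slot: Poisson with mean (S x + lam0) * dt
counts = rng.poisson((S_slot * x + lam0) * dt)
Y_T = int(counts.sum())                 # total count Y(T) over [0, T], T = K * Tc

print(Y_T)
```

Because the fade is piecewise constant, the count in each slot is an ordinary Poisson variable; concatenating slots reproduces the independent-increments property of the PCP.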

Note that the transmitter output. T = KTc . w ∈ W. For each uK ∈ U K . For the channel under consideration. i. receiver) is provided with CSI u[k] = h(s[k]) (resp. 0 ≤ t ≤ T }. u⌈t/Tc ⌉ dt ≤ σA. w ∈ W. u⌈t/Tc ⌉ ).1): 0 ≤ xw (t. that the message transmission duration T is an integer multiple of the channel coherence time Tc . k = 1. We assume. satisfying peak and average power constraints which follow from (2. s[k]) on [(k − 1)Tc . 4 Typically Tc is of the order of 1 − 10 ms.e.. · · ·} denote the CSI at the transmitter. W }. Let {U[k] = h(S[k]). In general. a (W. we can model the CSI available at the transmitter in terms of a given mapping h : IR+ → U. K ∈ Z+ . the transmitter is provided no CSI. We shall be particularly interested in two special cases of this general framework. i. · · ·. 2. k = 1. 2. uK ) = {xw (t. where U is an arbitrary subset of IR+ . T )–code (f. For 0 0 S∞ = s∞ . h is the identity mapping.2) xw t. i.Various degrees of CSI can be made available to the transmitter and the receiver.e. 1. the codebook comprises a set of W waveforms f (w. h is the trivial (constant) mapping. i. 1 T T 0 0 ≤ t ≤ T. hereafter referred to as the transmitter CSI h.e. In the second case. without loss of generality. u⌈t/Tc ⌉ ) ≤ A.. A fraction of the bits transmitted and received during a coherence interval can be used by the receiver to estimate the prevailing fade value. kTc ). the transmitter has perfect CSI. In the first case. (2. w ∈ W = {1. We shall assume throughout that the receiver has perfect CSI4.e. the signal xw (t) at time t is allowed to be a function of u⌈t/Tc ⌉ . φ) is defined as follows. during which 1 − 10 Mb can be transmitted at a rate of 1 Gbps. 20 . the transmitter (resp. · · · . not necessarily finite...

sK ). The decoder is a mapping φ : Σ(T ) ×(IR+ )K → W. The rate of this (W. σ. there exist (W. Given 0 < ǫ < 1. T )–codes (f.3 Statement of results Our first and main result provides a single–letter characterization of the capacity of the Poisson fading channel model described above. Definition 1 Let A. φ) < ǫ. the transmitter sends a waveform xw (t. and its average probability of decoding error is given by 1 Pe (f.. 0 For each message w ∈ W and transmitter CSI uK ∈ U K corresponding to the fade vector sK ∈ (IR+ )K . where we have used the shorthand notation xT UK = xw t. φ) such that 1 T log W > R − δ and Pe (f. The two special cases of perfect and no CSI at the transmitter are considered next. over 0 the channel. R is an achievable rate if it is ǫ-achievable for all 0 < ǫ < 1. 0 ≤ t ≤ T . w w ∈ W. Finally. produces an output w = φ(y T . The supremum of all achievable rates is the capacity of the channel. The receiver. we analyze the limiting 21 . Recall our persistent assumption that the receiver has perfect CSI. Tc be fixed. and will be denoted by C. SK E w w=1 . T )–code is given by R = ˆ nats/sec. 2. u⌈t/Tc ⌉ ). φ) = W W 1 T log W I Pr φ(Y T . λ0 . U⌈t/Tc ⌉ .2. a number R > 0 is an ǫ-achievable rate if for every δ > 0 and for all T sufficiently large. SK ) = w xT (UK ). upon observing y T and being provided with sK . 0 ≤ t ≤ T .

y) is strictly convex on [0. and α(x) = ∆ ∆ ∆ x ≥ 0.7) max I [µ(U)ζ(SA. it is convenient to set ζ(x. ∞) for every y ≥ 0. λ0 ) − ζ(µ(S)SA. Theorem 1 Let A. λ0 . (2. respectively. E 22 (2. x ≥ 0. Tc be fixed. λ0 )] . λ0 ) − ζ(µ(U)SA.4) whence it can be verified that ζ(x. Corollary 1 The capacity for perfect CSI at the transmitter and the receiver is given by CP = max I [µ(S)ζ(SA. For stating our results.6) µ:U →[0. y≥0 (2.5) Note that ζ(·. E (2. The capacity for transmitter CSI h is given by C = where U = h(S). The capacities for the special cases of perfect and no CSI at the transmitter follow directly from Theorem 1. σ. λ0 )] . y) − x(1 + log y) = x log(1 + α(x/y)x/y).3) 1 −1 e (1 + x)(1+1/x) − 1 . y) = (x + y) log(x + y) − y log y. viz.behavior of channel capacity in the high and low signal-to-shot-noise ratio (SNR) regimes. (with 0 log 0 = 0). y ≥ 0. x x ≥ 0. 1] R0 I [µ(S)]≤σ E . (2. in the limits as λ0 → 0 and λ0 → ∞. 1] I [µ(U )]≤σ E µ:I + →[0.
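The quantities ζ and α in (2.3)-(2.4) are straightforward to compute, and the identity (2.5) can be checked numerically. A minimal sketch (the test values x = 3, y = 2 are arbitrary):

```python
import math

def zeta(x, y):
    """zeta(x, y) = (x + y) log(x + y) - y log y, with the convention 0 log 0 = 0."""
    t1 = (x + y) * math.log(x + y) if x + y > 0 else 0.0
    t2 = y * math.log(y) if y > 0 else 0.0
    return t1 - t2

def alpha(x):
    """alpha(x) = (1/x) [ e^{-1} (1 + x)^{1 + 1/x} - 1 ]; the limit at x = 0 is 1/2."""
    if x == 0:
        return 0.5
    return (math.exp(-1.0) * (1.0 + x) ** (1.0 + 1.0 / x) - 1.0) / x

# identity (2.5): zeta(x, y) - x (1 + log y) = x log(1 + alpha(x/y) x / y)
x, y = 3.0, 2.0
lhs = zeta(x, y) - x * (1.0 + math.log(y))
rhs = x * math.log(1.0 + alpha(x / y) * x / y)
print(lhs, rhs)
```

Expanding both sides confirms the identity algebraically: each equals (x + y) log(x + y) − (x + y) log y − x.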

E (2. This is in concordance with previous results (cf. [37] for such a result on block interference channels). (conditioned on the current transmitter CSI) with arbitrarily fast intertransition times. the optimality of i.i. in a single coherence interval. where the optimality of binary signaling for Poisson channels has been established. e.6) (as also in (2. we see that the channel capacity does not depend on the coherence time Tc . the received signal {Yt .d.d. hence it suffices to look at a single coherence interval in the mutual information computations. Conditioned on the transmitted signal {xt . Furthermore. (ii) From Theorem 1. conditioned on perfect receiver CSI.g.The capacity for no CSI at the transmitter and perfect CSI at the receiver is given by CN = max I [µζ(SA. λ0 )] . which are i. A}-valued transmitted signals. (iv) The optimizing “power control law” µ in (2.i. (iii) Our proof of the achievability part of Theorem 1 shows that {0. transmitted signals leads to a lack of dependence of capacity on Tc .. e. λ0 ) − ζ(µSA.7).g. and perfect receiver CSI sK ..8) depicts the probability with which the transmitter picks the signal level A depending on 23 .6). (2.. The fact that the channel capacity of a block fading channel with perfect receiver CSI does not depend on the block size has been reported in the literature in various settings in other contexts (cf.. so that the maximum clearly exists. can achieve channel capacity. [52]).7) and (2. 0 ≤ t ≤ T }.8) 0≤µ≤σ Remarks: (i) The optimization in (2. (2.8)) is that of a concave functional over a convex compact set. 0 ≤ t ≤ T } is independent across coherence intervals.

and if σ0 > σ. let 0 λ0 µρ(s) = sA ∆ ∆ e ρ −(1+ sA ) λ 1+ 0 sA ( sA ) 1+ −1 . let ρ = ρ∗ > 0 be the solution of the equation E[µ I [[µρ (S)]+ ] = σ. it can be interpreted as the optimal average “conditional duty cycle” of the transmitted signal as a function of transmitter CSI. 1] that achieves the maximum in (2. (2.  0 σ0 ≤ σ. 0 (2.7) is given as follows. u ∈ U. the optimal power control law µ∗ : IR+ → [0. 24 . σ > σ.12) Let σ0 = I 0 (S)]. (2.  ρ 0 in the special cases of perfect and no CSI at the transmitter.6) is given as follows. Thus. 0} by [x]+ .10) Then the optimal power control law µ∗ in (2.the available transmitter CSI.  ρ 0 0 5 We denote max{x. u ∈ U. 1] that achieves the maximum in (2. For ρ ≥ 0.  0 σ0 ≤ σ.9) If σ0 = I 0 (U)] > σ. Then the optimal power control law µ∗ in (2. λ0 s ∈ IR+ . s ∈ IR+ . ∗ µ (u) =   [µ ∗ (u)]+ . (2. σ > σ.6) is given by    µ (u).7) is given by E    µ (s).13) µ (s) =   [µ ∗ (s)]+ . Theorem 2 The optimal power control law µ∗ : U → [0. let µ = µρ (u) be the solution of the equation I SA log E ∆ (1 + α(SA/λ0 )SA/λ0 ) U =u (1 + µSA/λ0 ) = ρ.11) The following corollary particularizes the previous optimal power control law Corollary 2 For perfect CSI at the transmitter. For ρ ≥ 0. ∗ (2. let ρ = ρ∗ > 0 be the solution of the equation5 E[µ I [µρ (U)]+ E = σ.
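Corollary 2 reduces the computation of the capacity-achieving power control law under perfect transmitter CSI to one-dimensional root finding: evaluate μρ(s) from (2.12) and, when σ0 = E[μ0(S)] exceeds σ, bisect on ρ until E[[μρ(S)]+] = σ. A sketch under illustrative assumptions (lognormal fade, Monte Carlo average in place of the expectation, arbitrary numerical values):

```python
import numpy as np

rng = np.random.default_rng(1)
A, lam0, sigma = 10.0, 1.0, 0.1        # peak power, dark current, average-to-peak ratio
S = rng.lognormal(mean=-0.125, sigma=0.5, size=200_000)   # fade samples (illustrative)

def mu_rho(s, rho):
    # eq. (2.12): mu_rho(s) = (lam0/(sA)) [ e^{-(1 + rho/(sA))} (1 + sA/lam0)^{1 + lam0/(sA)} - 1 ]
    sA = s * A
    return (lam0 / sA) * (np.exp(-(1.0 + rho / sA)) * (1.0 + sA / lam0) ** (1.0 + lam0 / sA) - 1.0)

def avg_duty(rho):
    return float(np.mean(np.maximum(mu_rho(S, rho), 0.0)))   # E[[mu_rho(S)]^+]

sigma0 = avg_duty(0.0)                  # = E[mu_0(S)]; note mu_0(s) = alpha(sA/lam0) in [1/e, 1/2]
if sigma0 <= sigma:
    mu_star = np.maximum(mu_rho(S, 0.0), 0.0)
else:
    lo, hi = 0.0, 1.0
    while avg_duty(hi) > sigma:         # bracket rho*
        hi *= 2.0
    for _ in range(60):                 # bisection: avg_duty decreases monotonically in rho
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if avg_duty(mid) > sigma else (lo, mid)
    mu_star = np.maximum(mu_rho(S, hi), 0.0)

print(sigma0, mu_star.mean())
```

When the average-power constraint is active (σ0 > σ), the resulting law satisfies E[μ*(S)] = σ; strong fades with μρ(s) < 0 are clipped to zero, i.e., those channel states are never used.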

For no CSI at the transmitter, the maximum in (2.8) is achieved by μ* = min{σ, μ0}, with μ0 being the solution of the equation

E[ SA log( (1 + α(SA/λ0) SA/λ0) / (1 + μ0 SA/λ0) ) ] = 0.   (2.14)

Remarks:

(i) It can be verified that the left-side of (2.10) decreases monotonically from σ0 to 0 as ρ increases from 0 to ∞, so that for each 0 <= σ < σ0, there exists a unique ρ = ρ* > 0 that solves (2.10); in particular, the power control law in (2.11) is well-defined.

(ii) For the case of perfect transmitter CSI, it can be shown that for each u ∈ U, 0 <= μ*(u) <= 1/2. Furthermore, the optimal power control law in (2.13) differs from, and yields a higher capacity value than, the claimed optimal power control law in ([21], eq. (4)). An example in Section 2.5 shows the difference in the values of channel capacity when computed using the two power control laws for a range of values of A/λ0 and σ.

We characterize next the capacity in the low and high shot-noise regimes (equivalently, the high and low SNR regimes) when the peak signal power A is fixed. The peak signal-to-noise ratio, denoted SNR, is defined as SNR = A/λ0.

Theorem 3: In the high SNR regime, in the limit as λ0 → 0, the capacity for transmitter CSI h is

C^H = E[ μH(U) ζ(SA, 0) − ζ(μH(U) SA, 0) ],   (2.15)

with U = h(S), where the optimal power control law μH : U → [0, 1] is given as follows: if σ < 1/e, let ρ = ρ* > 0 be the solution of the equation E[ e^{−ρ/(A E[S|U])} ] = eσ; then μH is given by

μH(u) = e^{−1 − ρ*/(A E[S|U=u])} if σ < 1/e, and μH(u) = e^{−1} if σ >= 1/e, u ∈ U.   (2.16)

In the low SNR regime, for λ0 >> 1, the capacity is (see footnote 6)

C^L = μL(1 − μL) E[S^2] A^2/(2λ0) + O(λ0^{−2}),   (2.17)

where the optimal conditional duty cycle μL = min{1/2, σ} does not depend on the transmitter CSI.

Footnote 6: By the standard notation f(x) = O(g(x)), we mean that there exists a number 0 <= A < ∞, not depending on x, and x0 = x0(A) < ∞, such that f(x) <= A g(x) for all x >= x0.

Remarks: In the high SNR regime as λ0 → 0, the optimal power control law μH (and hence the capacity in (2.15)) depends on the transmitter CSI. This is in contrast with the results in [21], where the authors have argued that at high SNR, a knowledge of the fade at the transmitter does not improve capacity. However, in the low SNR regime for λ0 >> 1, we see from (2.17) that transmitter CSI does not improve capacity: the optimal conditional duty cycle μL = min{1/2, σ} does not depend on the transmitter CSI, and the capacity increases with SNR (approximately) linearly with a slope proportional to μL(1 − μL).

In the next two corollaries of Theorem 3, we characterize the capacity in the high and low SNR regimes in the special cases when the transmitter is provided with perfect and no CSI, respectively.

Corollary 3: For perfect CSI at the transmitter, in the high SNR regime in the limit as λ0 → 0, the capacity is

C_P^H = E[ μH(S) ζ(SA, 0) − ζ(μH(S) SA, 0) ],   (2.18)

where the optimal power control law μH : IR+ → [0, 1] is given as follows: if σ < 1/e, let ρ = ρ* > 0 be the solution of E[ e^{−ρ/(SA)} ] = eσ; then μH is given by

μH(s) = e^{−1 − ρ*/(sA)} if σ < 1/e, and μH(s) = e^{−1} if σ >= 1/e, s ∈ IR+.   (2.19)

In the low SNR regime for λ0 >> 1, the capacity is the same as in (2.17), i.e.,

C_P^L = μL(1 − μL) E[S^2] A^2/(2λ0) + O(λ0^{−2}),   (2.20)

where μL = min{σ, 1/2}.

Corollary 4: For no CSI at the transmitter, in the high SNR regime in the limit as λ0 → 0, the capacity is

C_N^H = E[ μH ζ(SA, 0) − ζ(μH SA, 0) ],   (2.21)

where μH = min{σ, 1/e}. In the low SNR regime for λ0 >> 1, the capacity is the same as in (2.17), i.e.,

C_N^L = μL(1 − μL) E[S^2] A^2/(2λ0) + O(λ0^{−2}),   (2.22)

where μL = min{σ, 1/2}.

2.4 Proofs

We begin this section with some additional definitions that will be needed in our proofs. First, observe that for any τ > 0, the number of photon arrivals N_τ on [0, τ], together with the corresponding (ordered) arrival times T^{N_τ} = (T_1, ..., T_{N_τ}), are sufficient statistics for Y^τ.

The random vector (N_T, T^{N_T}) is thus a complete description of the random process Y^T. The channel is characterized as follows. For an input signal x^T satisfying (2.1) and a fade vector S^K = s^K, the channel output (N_T, T^{N_T}) has the "conditional sample function density" (cf., e.g., [47])

f_{N_T, T^{N_T} | X^T, S^K}(n_T, t^{n_T} | x^T, s^K) = exp( −∫_0^T λ(τ) dτ ) · Π_{i=1}^{n_T} λ(t_i),   (2.23)

where

λ(τ) = s[⌈τ/Tc⌉] x(τ) + λ0, 0 <= τ <= T.   (2.24)

In order to write the channel output sample function density conditioned only on the fade, for a given joint distribution of (X^T, S^K), consider the conditional mean of X^T (conditioned causally on the channel output and the fade):

X̂(τ) = E[ X(τ) | N_τ, T^{N_τ}, S^{⌈τ/Tc⌉} ], 0 <= τ <= T,   (2.25)

where we have suppressed the dependence of X̂(τ) on (N_τ, T^{N_τ}, S^{⌈τ/Tc⌉}) for notational convenience. Define Λ(τ) = S[⌈τ/Tc⌉] X(τ) + λ0, 0 <= τ <= T, and

Λ̂(τ) = E[ Λ(τ) | N_τ, T^{N_τ}, S^{⌈τ/Tc⌉} ] = S[⌈τ/Tc⌉] X̂(τ) + λ0, 0 <= τ <= T.   (2.26)

From ([47], Theorem 7.2.1), it follows that conditioned on S^K, the process (N_T, T^{N_T}) is a self-exciting PCP with rate process Λ̂^T, and the (conditional) sample function density is given by

f_{N_T, T^{N_T} | S^K}(n_T, t^{n_T} | s^K) = exp( −∫_0^T λ̂(τ) dτ ) · Π_{i=1}^{n_T} λ̂(t_i),   (2.27)

where λ̂(τ) = s[⌈τ/Tc⌉] x̂(τ) + λ0, with x̂(τ) = E[ X(τ) | N_τ = n_τ, T^{N_τ} = t^{n_τ}, S^{⌈τ/Tc⌉} = s^{⌈τ/Tc⌉} ], 0 <= τ <= T.

Proof of Theorem 1:

Converse part: Let the rv W be uniformly distributed on (the message set) W = {1, ..., W}, and independent of S^K. With T = KTc, K ∈ Z+, consider a (W, T)-code (f, φ) of rate R = (1/T) log W, and with Pe(f, φ) <= ε, where 0 < ε < 1 is given. Denote X(t) = x_W(t, U^{⌈t/Tc⌉}), 0 <= t <= T. Note that (2.2) then implies that

0 <= X(t) <= A, 0 <= t <= T,
(1/T) ∫_0^T E[X(τ)] dτ <= σA.   (2.28)

Let (N_T, T^{N_T}) be the channel output when X^T is transmitted and the channel fade is S^K. Clearly, the following Markov condition holds:

W −◦− (X^t, S^{⌈t/Tc⌉}) −◦− (N_t, T^{N_t}), 0 <= t <= T.

By a standard argument,

R = (1/T) H(W) = (1/T) [ I(W ∧ φ(N_T, T^{N_T}, S^K)) + H(W | φ(N_T, T^{N_T}, S^K)) ],   (2.29)

which, upon using Fano's inequality

H(W | φ(N_T, T^{N_T}, S^K)) <= ε log W + hb(ε) = εTR + hb(ε),   (2.30)

where hb denotes binary entropy, yields the standard converse result that the rate R of the (W, T)-code (f, φ) with Pe(f, φ) <= ε must satisfy

R <= (1/(1 − ε)) [ (1/T) I(W ∧ φ(N_T, T^{N_T}, S^K)) + (1/T) hb(ε) ].   (2.31)

Proceeding further with the right side of (2.31), by the data processing result for mixed rvs (see footnote 7) and the independence of W and S^K,

I(W ∧ φ(N_T, T^{N_T}, S^K)) <= I(W ∧ (N_T, T^{N_T}, S^K)) = I(W ∧ (N_T, T^{N_T}) | S^K)   (2.32)
= h(N_T, T^{N_T} | S^K) − h(N_T, T^{N_T} | W, S^K)   (2.33)
<= h(N_T, T^{N_T} | S^K) − h(N_T, T^{N_T} | X^T, S^K),   (2.34)

where (2.33) and (2.34) follow, respectively, from Kolmogorov's formula (3.6.3) on p. 37 of [39], and the Markov condition above.

Footnote 7: This result can be deduced, for instance, from [39].
Footnote 8: Our definition of the conditional entropy of mixed rvs is consistent with the general formulation developed, for instance, in [39]; see the last paragraph of Section 3.6, p. 36.

The difference between the conditional entropies of the mixed rvs (see footnote 8) on the right side of (2.34) is

h(N_T, T^{N_T} | S^K) − h(N_T, T^{N_T} | X^T, S^K)
= E[ −log f_{N_T, T^{N_T} | S^K}(N_T, T^{N_T} | S^K) ] − E[ −log f_{N_T, T^{N_T} | X^T, S^K}(N_T, T^{N_T} | X^T, S^K) ]
= E[ log( exp(−∫_0^T Λ(τ)dτ) Π_{i=1}^{N_T} Λ(T_i) / ( exp(−∫_0^T Λ̂(τ)dτ) Π_{i=1}^{N_T} Λ̂(T_i) ) ) ]   (2.35)
= E[ ∫_0^T (Λ̂(τ) − Λ(τ)) dτ ] + E[ Σ_{i=1}^{N_T} ( log Λ(T_i) − log Λ̂(T_i) ) ]   (2.36)
= ∫_0^T ( E[ζ(S[⌈τ/Tc⌉] X(τ), λ0)] − E[ζ(S[⌈τ/Tc⌉] X̂(τ), λ0)] ) dτ,   (2.37)

where (2.35) is by (2.23), (2.26) and (2.27); the first term in (2.36) vanishes by an interchange of operations (see footnote 9), giving E[∫_0^T (Λ̂(τ) − Λ(τ))dτ] = ∫_0^T E[Λ̂(τ) − Λ(τ)]dτ, followed by noting that E[Λ̂(τ)] = E[Λ(τ)], 0 <= τ <= T; and (2.37) is proved in Appendix A.1.

Footnote 9: The interchange is permissible as the assumed condition E[S] < ∞ implies the integrability of {Λ(τ), 0 <= τ <= T} and {Λ̂(τ), 0 <= τ <= T}.

Next, for the second term in the integrand in (2.37),

E[ζ(S[⌈τ/Tc⌉] X̂(τ), λ0)] = E[ E[ ζ(S[⌈τ/Tc⌉] X̂(τ), λ0) | S[⌈τ/Tc⌉] ] ]
>= E[ ζ( S[⌈τ/Tc⌉] E[X̂(τ) | S[⌈τ/Tc⌉]], λ0 ) ]   (2.38)
= E[ ζ( S[⌈τ/Tc⌉] E[X(τ) | S[⌈τ/Tc⌉]], λ0 ) ]   (2.39)
= E[ ζ( S[⌈τ/Tc⌉] E[X(τ) | U[⌈τ/Tc⌉]], λ0 ) ],   (2.40)

where (2.38) is by Jensen's inequality applied to the convex function ζ(·, λ0); (2.39) is from

E[X̂(τ) | S[⌈τ/Tc⌉]] = E[ E[X(τ) | N_τ, T^{N_τ}, S^{⌈τ/Tc⌉}] | S[⌈τ/Tc⌉] ] = E[X(τ) | S[⌈τ/Tc⌉]];

and (2.40) holds as

E[X(τ) | S[⌈τ/Tc⌉]] = E[X(τ) | S[⌈τ/Tc⌉], U[⌈τ/Tc⌉]] = E[X(τ) | U[⌈τ/Tc⌉]]   (2.41)

by virtue of the Markov condition

X(τ) −◦− U[⌈τ/Tc⌉] −◦− S[⌈τ/Tc⌉], 0 <= t <= T,   (2.42)

which is established in Appendix A.2. Summarizing collectively (2.34) and (2.37)-(2.40), we get that

I(W ∧ φ(N_T, T^{N_T}, S^K)) <= ∫_0^T { E[ζ(S[⌈τ/Tc⌉] X(τ), λ0)] − E[ζ(S[⌈τ/Tc⌉] E[X(τ)|U[⌈τ/Tc⌉]], λ0)] } dτ.   (2.43)

Considering the integrand in (2.43), fix 0 <= τ <= T and condition on S[⌈τ/Tc⌉] = s, s ∈ IR+. Then

E[ζ(S[⌈τ/Tc⌉] X(τ), λ0) | S[⌈τ/Tc⌉] = s] − ζ(s E[X(τ)|U[⌈τ/Tc⌉] = h(s)], λ0)
= E[ζ(s X(τ), λ0) | U[⌈τ/Tc⌉] = h(s)] − ζ(s πτ(h(s)), λ0),   (2.44)

where we write πτ(h(s)) = E[X(τ) | U[⌈τ/Tc⌉] = h(s)]. The right side of (2.44) is further bounded above by a suitable modification of the argument of Davis [11]. Consider maximizing the right side of (2.44) over all conditional distributions of X(τ) conditioned on U[⌈τ/Tc⌉] = h(s), with a fixed conditional mean E[X(τ) | U[⌈τ/Tc⌉] = h(s)] = πτ(h(s)), and subject to the first constraint (alone) in (2.28). Using the strict convexity of ζ(·, λ0), this term is largest (see [11], proof of Theorem 1; or [43], Lemma 1; see also footnote 10) iff X(τ) is a {0, A}-valued rv with

Pr(X(τ) = A | U[⌈τ/Tc⌉] = h(s)) = 1 − Pr(X(τ) = 0 | U[⌈τ/Tc⌉] = h(s)) = πτ(h(s))/A,   (2.45)

and the corresponding largest value of (2.44) is

πτ(h(s)) ζ(sA, λ0)/A − ζ(s πτ(h(s)), λ0).   (2.46)

Footnote 10: An alternative proof can be gleaned from the following simple observation. Let g : [0, A] → IR+, A > 0, be a strictly convex mapping with g(0) = 0, and let X be a [0, A]-valued rv of arbitrary distribution but with fixed mean, say, μ = E[X]. Then, g(X) <= (g(A)/A) X, whence E[g(X)] <= g(A) μ/A. It is readily seen that this upper bound on E[g(X)] is achieved if X ∈ {0, A} with Pr{X = 0} = 1 − Pr{X = A} = 1 − μ/A. The necessity of this choice of (optimal) X follows from the strict convexity of g(·) on [0, A].
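The observation in footnote 10, that among [0, A]-valued rvs with a fixed mean, the {0, A}-valued (ON-OFF) law maximizes E[g(X)] for strictly convex g with g(0) = 0, can be checked numerically with g(x) = ζ(sx, λ0). The competing input distributions below are illustrative choices, as are all numerical values:

```python
import numpy as np

rng = np.random.default_rng(3)
A, s, lam0 = 10.0, 1.0, 1.0
mean = 3.0                          # fixed mean E[X] = 3, i.e., duty cycle mean/A = 0.3

def zeta(x, y):
    return (x + y) * np.log(x + y) - y * np.log(y)

def Eg(samples):
    """Monte Carlo estimate of E[zeta(s X, lam0)] for an input law given by samples of X."""
    return float(np.mean(zeta(s * samples, lam0)))

n = 500_000
two_point = A * (rng.random(n) < mean / A)        # {0, A}-valued, mean 3 (the ON-OFF law)
uniform = rng.uniform(0.0, 2.0 * mean, size=n)    # uniform on [0, 6], mean 3
beta_shaped = A * rng.beta(0.3, 0.7, size=n)      # Beta-shaped on [0, A], mean 3

print(Eg(two_point), Eg(uniform), Eg(beta_shaped))
```

The two-point law attains the chord of the convex map x ↦ ζ(sx, λ0) between 0 and A, so its value dominates any distribution with interior mass and the same mean.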

Noting that (2.28) implies that

0 <= πτ(h(s)) <= A, s ∈ IR+, 0 <= τ <= T, and (1/T) ∫_0^T E[πτ(h(S[⌈τ/Tc⌉]))] dτ <= σA,   (2.47)

we thus see from (2.43)-(2.46) that

(1/T) I(W ∧ φ(N_T, T^{N_T}, S^K))
<= max_{πτ : U → [0, A], 0 <= τ <= T : (1/T)∫_0^T E[πτ(U[⌈τ/Tc⌉])]dτ <= σA} (1/T) ∫_0^T { E[πτ(U[⌈τ/Tc⌉]) ζ(S[⌈τ/Tc⌉]A, λ0)/A] − E[ζ(S[⌈τ/Tc⌉] πτ(U[⌈τ/Tc⌉]), λ0)] } dτ.   (2.48)

In order to simplify the right side of (2.48), define, for every u ∈ U,

νk(u) = (1/Tc) ∫_{(k−1)Tc}^{kTc} E[πτ(U[k]) | U[k] = u] dτ, k = 1, ..., K,   (2.49)

and

μ(u) = (1/(AK)) Σ_{k=1}^{K} νk(u),   (2.50)

so that 0 <= μ(u) <= 1. From (2.47), (2.49) and (2.50),

σ >= (1/(AT)) ∫_0^T E[πτ(h(S[⌈τ/Tc⌉]))] dτ
= (1/(AK)) Σ_{k=1}^{K} (1/Tc) ∫_{(k−1)Tc}^{kTc} E[πτ(U[k])] dτ   (2.51)
= (1/(AK)) Σ_{k=1}^{K} E[νk(U[k])]   (2.52)
= (1/(AK)) Σ_{k=1}^{K} E[νk(U)]   (2.53)
= E[μ(U)],   (2.54)

where (2.52) is by (2.49), (2.53) holds by the i.i.d. nature of the channel fade sequence S^∞, and (2.54) is by (2.50). The time-averaged integral on the right side of (2.48) can be written as

(1/T) ∫_0^T { E[πτ(U[⌈τ/Tc⌉]) ζ(S[⌈τ/Tc⌉]A, λ0)/A] − E[ζ(S[⌈τ/Tc⌉] πτ(U[⌈τ/Tc⌉]), λ0)] } dτ
= (1/K) Σ_{k=1}^{K} (1/Tc) ∫_{(k−1)Tc}^{kTc} E[πτ(U[k]) ζ(S[k]A, λ0)/A − ζ(S[k] πτ(U[k]), λ0)] dτ   (2.55)
<= (1/K) Σ_{k=1}^{K} E[νk(U[k]) ζ(S[k]A, λ0)/A − ζ(S[k] νk(U[k]), λ0)]   (2.56)
= (1/K) Σ_{k=1}^{K} E[νk(U) ζ(SA, λ0)/A − ζ(S νk(U), λ0)]   (2.57)
<= E[μ(U) ζ(SA, λ0) − ζ(S μ(U) A, λ0)],   (2.58)

where (2.56) is by Jensen's inequality applied to the convex function ζ(·, λ0), (2.57) holds by the i.i.d. nature of the channel fade sequence S^∞, and (2.58) is by Jensen's inequality and (2.50). Summarizing collectively (2.31), (2.48), (2.54) and (2.58), and noting that hb(ε)/T → 0 as T → ∞ while 0 < ε < 1 was arbitrary, we get that

R <= max_{μ : U → [0, 1], E[μ(U)] <= σ} E[μ(U) ζ(SA, λ0) − ζ(μ(U)SA, λ0)].   (2.59)

This concludes the proof of the converse part of Theorem 1.

Achievability part: We closely follow Wyner's approach [52]. Fix L ∈ Z+ and set ∆ = Tc/L. Divide the time interval [0, T], where T = KTc with K ∈ Z+, into KL equal subintervals, each of duration ∆; see Figure 2.3. Then, in the channel fade sequence S^∞ = {S[k]}_{k=1}^∞, each S[k] remains unvarying for a block of L consecutive ∆-duration subintervals within [(k − 1)Tc, kTc], and {S[k]}_{k=1}^∞ varies across such blocks in an i.i.d. manner.

62) (with Y (0) = 0). KL. n∆).61) Next. T )-codes as above – and. hence. ˜ w ∈ W. 0 ≤ t ≤ T } is restricted to be {0. · · · . uK ) = {xw (t. · · · . and {S[k]}∞ k=1 varies across such blocks in an i. comprising ˜ Yn = 1(Y (n∆) − Y ((n − 1)∆) = 1). block of L consecutive ∆-duration subintervals within [(k−1)Tc . if xw (t. A}valued and piecewise constant in each of the KL time slots of duration ∆.3: Subintervals of [0.   w    ∆ ⌈t/Tc ⌉ (2. n = 1. u⌈t/Tc ⌉ ). u⌈t/Tc ⌉ ) = 0. kTc ]. Note that the condition (2.i. T ]. consider the situation in which for each message w ∈ W. the capacity C for transmitter CSI h – is clearly no smaller 36 .d. and SK . the channel input waveform f (w. n = 1. u⌈n/L⌉ ) ≤ σ. (2. manner. The largest achievable rate of restricted (W. 1}KL× (IR+ )K → W based on restricted 0 observations over the KL time slots. u       t ∈ [(n − 1)∆. KL.2) requires that 1 KL KL n=1 xn (w.60) xn (w. uK ∈ U K . Define    0. (2. u⌈n/L⌉ ) = ˜ ) = A. consider a decoder φ : {0. if x (t.  1.Transmission interval S[1] 0 ∆ S[2] Tc 2Tc (k − 1)Tc S[k] kTc S[K] (K − 1)Tc T (k − 1)L∆ + ∆ (k − 1)L∆ kL∆ − ∆ k th coherence interval kL∆ Figure 2. Now.

4) 0 WY |X. s ∈ ˜ ˜ 0 IR+ . ∆ (2. From ([9].65) can be seen to hold also when S is IR+ -valued. (2. and under constraint (2. 0 37 . state alphabet IR+ . 0 (2. output alphabet ˜ Y L = {0. S (0|1. s) = y x l=1 WY |X. y x ˜ ˜ ˜ xL ∈ {0.˜ X 0 1 − w10(L) ˜ Y 0 1 − w11(s. Remark A2 following Proposition 1).61).65) In [9]. S.64) with transmitter CSI h (and perfect receiver CSI). s). ∆ = w10 (L).4: Discrete channel approximation. Tc where C(L) is the capacity of a (L-) block discrete memoryless channel ˜ (in nats per block channel use) with input alphabet X L = {0. s) = 1 − WY |X. 1}L . respectively.and Y-valued rvs and WY |X. 1}L . s ∈ IR+ . S (˜l |˜l . is given by (see Figure 2. S (1|1. however the result (2. s) ˜ ˜ ˜ ˜ = (λ0 + sA) Tc exp −(λ0 + sA) Tc L L = w11 (s. ˜ yL ∈ {0. L) 1 Figure 2. S (·|·. than C(L) . IR+ . Y . s) ˜ ˜ ˜ ˜ = λ0 Tc exp −λ0 Tc L L WY |X.63) ˜ ˜ ˜ ˜ where X. s). S (0|0. L) w10(L) 1 w11(s. L). 1}L . s) = 1 − WY |X. it is readily obtained11 that C(L) = PXL |U : ˜ 11 max L l=1 I [Xl ]≤Lσ E ˜ ˜ ˜ I(XL ∧ Y L |S). are X -. transition probability mass function (pmf) 0 L W (L) (˜ L |˜ L . S (1|0. 1}L . S is taken to be finite-valued.

with U = h(S), where the joint conditional pmf P_{X̃^L, Ỹ^L | S} is given by

P_{X̃^L, Ỹ^L | S}(x̃^L, ỹ^L | s) = P_{X̃^L|U}(x̃^L | h(s)) W^(L)(ỹ^L | x̃^L, s), x̃^L ∈ {0, 1}^L, ỹ^L ∈ {0, 1}^L, s ∈ IR+.   (2.66)

By a standard argument which uses (2.63), the maximum in (2.65) is achieved by

P_{X̃^L|U}(x̃^L | h(s)) = Π_{l=1}^{L} P_{X̃_l|U}(x̃_l | h(s)), x̃^L ∈ {0, 1}^L, s ∈ IR+,   (2.67)

so that from (2.65),

C(L) = L max_{P_{X̃|U} : E[X̃] <= σ} I(X̃ ∧ Ỹ | S),   (2.68)

where the joint conditional pmf P_{X̃, Ỹ | S} is given by

P_{X̃, Ỹ | S}(x̃, ỹ | s) = P_{X̃|U}(x̃ | h(s)) W_{Ỹ|X̃,S}(ỹ | x̃, s), x̃ ∈ {0, 1}, ỹ ∈ {0, 1}, s ∈ IR+.   (2.69)

Setting

μ(u) = Pr{X̃ = 1 | U = u} = E[X̃ | U = u], u ∈ U,   (2.70)

and

βL(s) = I(X̃ ∧ Ỹ | S = s) = H(Ỹ | S = s) − H(Ỹ | X̃, S = s)
= hb( μ(h(s)) w11(s, L) + (1 − μ(h(s))) w10(L) ) − ( μ(h(s)) hb(w11(s, L)) + (1 − μ(h(s))) hb(w10(L)) ), s ∈ IR+,   (2.71)

where hb(·) is the binary entropy function, we can express (2.68) as

C(L) = max_{μ : U → [0, 1], E[μ(U)] <= σ} E[βL(S)].   (2.72)
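The single-letter quantities in (2.63)-(2.72) are elementary to evaluate. The sketch below computes w10(L), w11(s, L) and βL(s), and illustrates the limit used in Appendix A.3, namely that βL(s)/(Tc/L) approaches μ ζ(sA, λ0) − ζ(μ sA, λ0) as L grows; the single fade value s and duty cycle μ, like all numerical values, are illustrative assumptions:

```python
import math

def hb(p):
    """binary entropy in nats"""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

def zeta(x, y):
    t1 = (x + y) * math.log(x + y) if x + y > 0 else 0.0
    t2 = y * math.log(y) if y > 0 else 0.0
    return t1 - t2

A, lam0, Tc = 10.0, 1.0, 1e-3
s, mu = 1.3, 0.3                       # one fade state, one duty cycle (illustrative)

def beta_L(s, mu, L):
    d = Tc / L
    w10 = lam0 * d * math.exp(-lam0 * d)                    # eq. (2.64)
    w11 = (lam0 + s * A) * d * math.exp(-(lam0 + s * A) * d)
    pY1 = mu * w11 + (1 - mu) * w10                          # Pr{Ytilde = 1 | S = s}
    return hb(pY1) - (mu * hb(w11) + (1 - mu) * hb(w10))     # eq. (2.71)

target = mu * zeta(s * A, lam0) - zeta(mu * s * A, lam0)
for L in (10, 100, 1000, 10_000):
    print(L, beta_L(s, mu, L) / (Tc / L), target)
```

As the slot duration Tc/L shrinks, the per-slot mutual information scales linearly with the slot length, and the normalized values converge to the integrand of the single-letter capacity expression.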

Since L ∈ Z+ was arbitrary, we have

C ≥ lim_{L→∞} C(L)/Tc = max_{µ: U → [0, 1], E[µ(U)] ≤ σ} lim_{L→∞} E[β_L(S)]/(Tc/L).    (2.73)

Next, it is shown in Appendix A.3 that

lim_{L→∞} E[β_L(S)]/(Tc/L) = E[µ(U)ζ(SA, λ0) − ζ(Sµ(U)A, λ0)],    (2.74)

whence

C ≥ max_{µ: U → [0, 1], E[µ(U)] ≤ σ} E[µ(U)ζ(SA, λ0) − ζ(Sµ(U)A, λ0)],    (2.75)

thereby completing the proof of the achievability part.

Remarks: (i) Remark (iii) following Theorem 1 in Section 2.3 constitutes an interpretation of (2.73) in terms of (2.72).

(ii) In the proof above of the achievability part of Theorem 1, we could also have considered a restricted decoder φ̃ with Ỹn in (2.62) replaced by

Ỹn = 1 − 1(Y(n∆) − Y((n − 1)∆) = 0),    n = 1, ···, KL.

Proof of Theorem 2: From Theorem 1, we have

C = max_{µ: U → [0, 1], E[µ(U)] ≤ σ} E[ψ(U, µ(U))],    (2.76)

where

ψ(u, µ(u)) ≜ E[µ(u)ζ(SA, λ0) − ζ(Sµ(u)A, λ0) | U = u],    u ∈ U.    (2.77)

Note that

∂ψ/∂µ = E[ζ(SA, λ0) − SA(1 + log(Sµ(u)A + λ0)) | U = u]
      = E[ SA log( (1 + α(SA/λ0)SA/λ0) / (1 + µ(u)SA/λ0) ) | U = u ],    u ∈ U,    (2.78)

and

∂²ψ/∂µ² = −E[ S²A² / (Sµ(u)A + λ0) | U = u ] < 0,    u ∈ U,

i.e., ψ(·) is a strictly concave function of µ. In order to determine the optimal power control law µ*: U → [0, 1], we use variational calculus. First consider the "unconstrained optimization" problem, i.e., without the constraints µ: U → [0, 1] and E[µ(U)] ≤ σ:

max_{µ: U → IR} E[ψ(U, µ(U))],    (2.79)

and let µ0: U → IR denote the maximizer. Then µ0 must satisfy the (necessary) Euler-Lagrange condition (cf., e.g., [15]) ∂ψ/∂µ = 0, which, by (2.78), is

E[ SA log( (1 + α(SA/λ0)SA/λ0) / (1 + µ0(u)SA/λ0) ) | U = u ] = 0,    u ∈ U.    (2.80)

Since ψ is strictly concave in µ, (2.80) also constitutes a sufficient condition. It can be verified (cf. [11], Figure 2) that α(·) is monotone decreasing on [0, ∞) with α(0) = 1/2 and α(∞) = 1/e. Therefore,

1/e ≤ µ0(u) ≤ 1/2,    u ∈ U.    (2.81)

With σ0 ≜ E[µ0(U)], it follows from (2.81) that 1/e ≤ σ0 ≤ 1/2.

Consider next the constrained optimization problem on the right side of (2.76), now with the inclusion of the constraints µ: U → [0, 1] and E[µ(U)] ≤ σ. If σ ≥
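The closed-form derivative in (2.78) can be checked numerically against a finite difference of ψ, assuming the Chapter 2 definition ζ(x, y) = (x + y)log(x + y) − y log y (an assumption here, but the only one consistent with the derivative displayed above). A sketch for a deterministic fade S = s:

```python
import math

def zeta(x, y):
    # Assumed Chapter-2 definition of zeta (y > 0).
    return (x + y) * math.log(x + y) - y * math.log(y)

def psi(mu, sA, lam0):
    """psi(u, mu) for a deterministic conditional fade, E[S|U=u]A = sA."""
    return mu * zeta(sA, lam0) - zeta(mu * sA, lam0)

def dpsi(mu, sA, lam0):
    # Closed form of the derivative, as in (2.78).
    return zeta(sA, lam0) - sA * (1.0 + math.log(mu * sA + lam0))

sA, lam0, mu = 3.0, 0.5, 0.4
h = 1e-6
numeric = (psi(mu + h, sA, lam0) - psi(mu - h, sA, lam0)) / (2 * h)
print(numeric, dpsi(mu, sA, lam0))
```

The derivative is strictly decreasing in µ, which is the strict concavity of ψ used above to make (2.80) sufficient as well as necessary.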

σ0, then the "unconstrained" maximizer µ0 satisfies the previous two constraints, and hence is the solution of the constrained problem in (2.76) as well.

Suppose next that σ < σ0. Given ρ > 0, define the Lagrangian functional

L(µ) = E[ξ(U, µ(U))],    (2.82)

where

ξ(u, µ(u)) ≜ ψ(u, µ(u)) − ρµ(u),    u ∈ U,    (2.83)

and ρ ≥ 0 is a Lagrange multiplier. First ignore the (local) constraint 0 ≤ µ(u) ≤ 1, u ∈ U. Using the strict concavity of L(·), we conclude that a necessary and sufficient condition for µρ: U → IR to be the maximizer is given by the Euler-Lagrange equation ∂ξ/∂µ = 0, u ∈ U. By (2.78), we then see that µρ satisfies

E[ SA log( (1 + α(SA/λ0)SA/λ0) / (1 + µρ(u)SA/λ0) ) | U = u ] = ρ,    u ∈ U.    (2.84)

Note that µρ(u) ≤ µ0(u) ≤ 1/2, u ∈ U, for all ρ ≥ 0. However, µρ(u) can be < 0 for some u ∈ U. We now impose the constraint 0 ≤ µ(u) ≤ 1, u ∈ U. By the strict concavity of ξ(u, ·), it follows that if µρ(u) < 0 for some u ∈ U, then ξ(u, ω) is a strictly decreasing function of ω for ω ≥ 0, so that the constraint µ(u) ≥ 0 dictates the maximizing solution to be µ*(u) = 0; otherwise, the maximizing solution is given by µ*(u) = µρ(u). Summarizing the previous observations, we get that

µ*(u) = [µρ(u)]+,    u ∈ U.    (2.85)

Finally, the optimal

Lagrange multiplier ρ* is chosen to satisfy the power constraint

E[[µρ*(U)]+] = σ.

This concludes the proof of Theorem 2.

Proof of Theorem 3: First consider the limit as λ0 → 0. Let C^H = lim_{λ0→0} C, µρ^H(u) = lim_{λ0→0} µρ(u), and µ^H(u) = lim_{λ0→0} µ*(u), u ∈ U, ρ ≥ 0, where C, µρ(·) and µ*(·) are as defined in (2.9) and (2.11), respectively. Since

lim_{λ0→0} E[ SA log( (1 + α(SA/λ0)SA/λ0) / (1 + µSA/λ0) ) | U = u ] = E[ SA log(α(∞)/µ) | U = u ],    0 ≤ µ ≤ 1,

with α(∞) = 1/e, it follows that

µρ^H(u) = e^{−1 − ρ/(A E[S|U=u])},    u ∈ U, ρ ≥ 0,

so that 0 ≤ µρ^H(u) ≤ e^{−1}, and σ0 = e^{−1}. By (2.11), we get

µ^H(u) = e^{−1} if σ ≥ e^{−1}, and µ^H(u) = e^{−1 − ρ*/(A E[S|U=u])} if σ < e^{−1},    u ∈ U,

where for σ < e^{−1}, ρ* ≥ 0 satisfies E[e^{−1 − ρ*/(A E[S|U])}] = σ. Finally, by (2.9),

C^H = E[ µ^H(U)ζ(SA, 0) − ζ(µ^H(U)SA, 0) ].

This concludes the proof of the first part of Theorem 3.

Next consider the case λ0 ≫ 1. Let µρ^L(u) = lim_{λ0→∞} µρ(u) and µ^L(u) = lim_{λ0→∞} µ*(u), u ∈ U, ρ ≥ 0. Given s ≥ 0 and a ≥ 0, by (2.6),

ζ(sA, λ0) − sA(1 + log(λ0 + asA)) = sA log λ0 + (sA + λ0) log(1 + sA/λ0) − sA(1 + log λ0 + log(1 + asA/λ0)),
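In the low-noise limit, the multiplier ρ* is pinned down by the scalar equation E[e^{−1 − ρ*/(A E[S|U])}] = σ, whose left side is continuous and strictly decreasing in ρ*, so it can be solved by bisection. A sketch for a finite-valued U (the two conditional means below are hypothetical):

```python
import math

def avg_duty(rho, A, cond_means, probs):
    """E[mu_rho(U)] for the low-noise law mu_rho(u) = exp(-1 - rho/(A*E[S|U=u]))."""
    return sum(p * math.exp(-1.0 - rho / (A * m))
               for p, m in zip(probs, cond_means))

def solve_rho(sigma, A, cond_means, probs, hi=1e6):
    """Bisect for rho* >= 0 with E[mu_rho*(U)] = sigma (requires sigma < 1/e)."""
    lo, up = 0.0, hi
    for _ in range(200):
        mid = 0.5 * (lo + up)
        if avg_duty(mid, A, cond_means, probs) > sigma:
            lo = mid        # average duty still too large: increase rho
        else:
            up = mid
    return 0.5 * (lo + up)

A, cond_means, probs = 1.0, [0.5, 2.0], [0.5, 0.5]   # hypothetical example
sigma = 0.2                                           # sigma < 1/e, so rho* > 0
rho_star = solve_rho(sigma, A, cond_means, probs)
print(rho_star, avg_duty(rho_star, A, cond_means, probs))
```

At ρ = 0 the average duty cycle equals 1/e, so the bisection bracket [0, hi] always contains the root when σ < 1/e.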

which, with the approximation log(1 + x) = x − x²/2 + O(x³) for x ≪ 1, equals

(sA + λ0)(sA/λ0 − (sA/λ0)²/2 + O(λ0^{−3})) − sA(1 + asA/λ0 − (asA/λ0)²/2 + O(λ0^{−3})) = (1 − 2a)s²A²/2λ0 + O(λ0^{−2}) → 0    (2.86)

as λ0 → ∞. Hence, for λ0 ≫ 1, it follows from (2.78) and (2.86) that

ρ = E[ζ(SA, λ0) − SA(1 + log(λ0 + µρ^L(u)SA)) | U = u] = (1 − 2µρ^L(u)) E[S² | U = u] A²/2λ0 + O(λ0^{−2}),

so that as λ0 → ∞, ρ → 0 and µρ^L(u) = 1/2 + O(λ0^{−1}) → 1/2, u ∈ U. In this case σ0 = 1/2, and therefore µ^L(u) = min{σ, 1/2}, u ∈ U, a constant which we denote by µ^L.

Finally, we compute C^L, the behavior of C for λ0 ≫ 1. Given s ≥ 0, again using the approximation log(1 + x) = x − x²/2 + O(x³) for x ≪ 1, we have, for λ0 ≫ 1,

µ^L ζ(sA, λ0) − ζ(µ^L sA, λ0) = µ^L(sA + λ0) log(1 + sA/λ0) − (µ^L sA + λ0) log(1 + µ^L sA/λ0)
= µ^L(sA + λ0)(sA/λ0 − (sA/λ0)²/2 + O(λ0^{−3})) − (µ^L sA + λ0)(µ^L sA/λ0 − (µ^L sA/λ0)²/2 + O(λ0^{−3}))
= µ^L(1 − µ^L)s²A²/2λ0 + O(λ0^{−2}).    (2.87)

By (2.87), we then obtain

C^L = µ^L(1 − µ^L) E[S²] A²/2λ0 + O(λ0^{−2}).
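The leading term in (2.87) can be verified numerically: under the assumed definition ζ(x, y) = (x + y)log(x + y) − y log y, the gap between µζ(sA, λ0) − ζ(µsA, λ0) and µ(1 − µ)s²A²/(2λ0) shrinks like λ0^{−2} as λ0 grows. A sketch:

```python
import math

def zeta(x, y):
    # Assumed Chapter-2 definition of zeta (y > 0).
    return (x + y) * math.log(x + y) - y * math.log(y)

def exact(mu, sA, lam0):
    """mu*zeta(sA, lam0) - zeta(mu*sA, lam0)."""
    return mu * zeta(sA, lam0) - zeta(mu * sA, lam0)

def approx(mu, sA, lam0):
    """High-noise asymptote mu*(1-mu)*(sA)^2 / (2*lam0), as in (2.87)."""
    return mu * (1 - mu) * sA * sA / (2 * lam0)

mu, sA = 0.5, 1.0
for lam0 in (10.0, 100.0, 1000.0):
    print(lam0, exact(mu, sA, lam0), approx(mu, sA, lam0))
```

The absolute error decreases with λ0, and the relative error behaves like O(λ0^{−1}), consistent with the O(λ0^{−2}) remainder in (2.87).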

This completes the proof of Theorem 3.

2.5 Numerical example

In this example, we consider the fade to have a lognormal distribution, i.e., the channel fade is an i.i.d. lognormal process with Sk ~ S = exp(2G), k = 1, 2, ···, where G is Gaussian with mean µG and variance σG². The log-amplitude variance σG² can vary from 0 (negligible fading) to 0.5 (severe turbulence) [21]. We pick σG² = 0.1, which corresponds to a moderately turbulent Poisson channel. We choose µG = −σG² so that the fade is normalized, i.e., E[S] = 1. This implies that the atmosphere, on an average, does not attenuate or amplify the transmitted signal.

We fix A = 1 and vary the parameters σ and λ0 to study the effect of the average power constraint and SNR on the channel capacity. Denote the (peak) signal-to-noise ratio by SNR = 10 log10(A/λ0) (in dB). We determine the capacity for the two special cases discussed in Section 2.3, and compare the values with the capacity as determined by the results of [21], as well as with that of the channel without fading.

Figure 2.5 shows the behavior of the optimal power control law µ*(·) (see (2.13)) with the channel fade for different values of σ for SNR = 0 dB. The power law in ([21], eq. 4) is also plotted for comparison. Note that the average power constraint becomes ineffective if σ > σ0. If σ < σ0, the optimal power control law dictates that the transmitter should not transmit when the channel fade is very bad (i.e., for small values of s). This behavior is similar to the waterfilling power law in
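The normalization µG = −σG² makes E[S] = 1 exactly, since for G ~ N(µG, σG²) the lognormal mean is E[exp(2G)] = exp(2µG + 2σG²). A quick check, in closed form and by Monte Carlo:

```python
import math
import random

sigma_G2 = 0.1              # log-amplitude variance (moderate turbulence)
mu_G = -sigma_G2            # normalization so that E[S] = 1

# Closed form for the mean of S = exp(2G), G ~ N(mu_G, sigma_G2).
mean_S = math.exp(2 * mu_G + 2 * sigma_G2)
print(mean_S)               # exactly 1

# Monte-Carlo cross-check.
random.seed(0)
n = 200000
acc = 0.0
for _ in range(n):
    g = random.gauss(mu_G, math.sqrt(sigma_G2))
    acc += math.exp(2 * g)
print(acc / n)              # close to 1
```

The same identity shows why the fade is "normalized": on average the atmosphere neither attenuates nor amplifies the transmitted signal.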

Figure 2.5: Optimal power control law µ(s) versus fade level s with perfect CSI at transmitter and receiver (SNR = 0 dB), for σ = 0.01, 0.05, 0.1, 0.15 and σ ≥ σ0 = 0.4736; solid curves: optimal power law, dash-dot curve: Haas-Shapiro power law.

RF Gaussian fading channels (cf., e.g., [52], [40]).

Figure 2.6: Comparison of capacity versus σ for various assumptions on transmitter CSI (capacity in nats/sec plotted against the average-to-peak power ratio σ for SNR = −10, 0, and 10 dB).

In Figures 2.6 and 2.7, we compare the capacity values obtained under different assumptions on transmitter CSI for various values of σ and SNR. In particular, we plot the capacity for perfect transmitter and receiver CSI (CP), the capacity for no transmitter CSI (CR), the capacity obtained from [21] (CHS), and the capacity of the Poisson channel without fading (C0). On the other hand, the power control law due to ([21], eq. 4) is constant over a wide range of values of σ, and does not properly exploit the channel fade. From the figures, we conclude

Figure 2.7: Comparison of capacity versus SNR for various assumptions on transmitter CSI (capacity in nats/sec plotted against SNR in dB for σ = 0.01, 0.1, 0.15 and σ ≥ 0.5; curves: CP, CR, CHS, C0).

that knowledge of CSI at the transmitter can increase the channel capacity. The improvement is greater for small σ values, i.e., when the average power constraint is severe. It is clear that CHS, the capacity computed from [21], is very close to CR, the capacity when no CSI is available at the transmitter. This can be understood from the power law in Figure 2.5, which does not depend on the fade for a large range of values of σ. Furthermore, knowledge of CSI at the transmitter can greatly improve capacity at high SNR when σ is small.

2.6 Discussion

We have studied the capacity problem for a shot-noise limited direct detection block-fading Poisson channel. A novel channel model for the free-space Poisson fading channel has been proposed in which the channel fade remains unvarying in intervals of duration Tc, and varies across successive such intervals in an i.i.d. fashion. Under the assumptions of perfect CSI at the receiver while the transmitter CSI can be imperfect, a single-letter characterization of the capacity has been obtained when the transmitted signal is constrained in its peak and average power levels. Binary signaling with arbitrarily fast intertransition times is shown to be optimal for this channel. The two signaling levels correspond to no transmission ("OFF" state) and transmission at the peak power level ("ON" state). Notably, the channel capacity does not depend on the channel coherence time Tc. The i.i.d. nature of the channel fade allows us to focus on a single coherence interval, while inside a single interval, the optimality of i.i.d. input

signaling (as a function of current transmitter CSI) leads to a lack of dependence of the capacity formula on Tc. An exact characterization of the "optimal power control law," which represents the conditional probability of the transmit aperture remaining in the ON state as a function of current transmitter CSI, has been obtained; it can be viewed as the optimal average conditional duty cycle of the transmitted signal. We have analyzed the effects of varying degrees of CSI at the transmitter on channel capacity as a function of the peak signal-to-noise power ratio (SNR). In the high SNR regime, a knowledge of varying degrees of CSI at the transmitter can lead to a significant gain in capacity when the average power constraint is stringent. On the other hand, in the low SNR regime, a knowledge of CSI at the transmitter does not provide any additional advantage.

Chapter 3

MIMO Poisson channel with constant channel fade

3.1 Introduction

In this chapter, we focus on the shot-noise limited, single-user MIMO Poisson channel with N transmit and M receive apertures, with the fade coefficients being known to the transmitter and the receiver. We refer to this channel as the N × M MIMO Poisson channel. We assume, for the time being, that the channel fade remains unvarying for the duration of transmission and reception. In this setting, we provide a "single-letter characterization" of the channel capacity and outline key properties of optimal transmission strategies. We consider two variants of transmitter power constraints: (a) peak and average power constraints on individual transmit apertures; and (b) peak power constraints on individual transmit apertures and a constraint on the sum of the average powers from all transmit apertures. Several properties of optimal transmission strategies are derived for the MIMO Poisson channel with constant fade, many of which are relevant even when the channel fade is random. The results of this chapter provide useful insights into the capacity problem of the MIMO Poisson fading channel with a random fade, which is addressed in the next chapter. For the MIMO channel with constant fade and a sum average power constraint, a key property of the optimum transmission strategy is identified, which helps us to derive a simplified expression for the channel capacity of a

symmetric MIMO Poisson channel with isotropically distributed channel fade.

The remainder of the chapter is organized as follows. In Section 3.2, we introduce the channel model and state the problem formulation. In Section 3.3, we state our results, which are proved in Section 3.4. A few illustrative examples are discussed in Section 3.5. We close the chapter with some concluding remarks in Section 3.6.

3.2 Problem formulation

A block schematic diagram of the channel model is given in Figure 3.1.

Figure 3.1: N × M MIMO Poisson channel with deterministic channel fade. (The transmitted signals x1(t), ···, xN(t) pass through the channel fade matrix {sn,m}; the mth receive aperture observes a PCP Ym(t) with intensity λm(t), which includes the dark current rate λ0,m.)

For a given set of N IR+-valued transmitted signals {xn(t), t ≥ 0}, n = 1, ···, N, the received signal {Ym(t), t ≥ 0} at the mth receive aperture is a Z+-valued, nondecreasing, (left-continuous) Poisson counting process (PCP) with rate (or intensity) equal to

λm(t) = Σ_{n=1}^{N} sn,m xn(t) + λ0,m,    t ≥ 0,    (3.1)

where sn,m ≥ 0 is the (deterministic) fade (or path gain) from the nth transmit aperture to the mth receive aperture, and λ0,m ≥ 0 is the background noise (dark current) rate at the mth receive aperture. We assume that the channel fade coefficients {sn,m} and the dark current rates {λ0,m} remain unvarying throughout the entire duration of transmission and reception, and are known to both the transmitter and the receiver. We also assume that the receive apertures are sufficiently separated in space, so that given the knowledge of x∞ = {xn(t), t ≥ 0}, n = 1, ···, N, the processes Y1∞, ···, YM∞ are conditionally mutually independent [47, 19].

The input to the channel is a set of N IR+-valued transmitted signals, collectively denoted by xT = {xn(t), 0 ≤ t ≤ T}, n = 1, ···, N, one corresponding to each transmit aperture, each of which is proportional to the transmitted optical power from the respective transmit aperture, and which satisfy peak and average power constraints of the form:

0 ≤ xn(t) ≤ An,    0 ≤ t ≤ T,
(1/T) ∫0^T xn(t) dt ≤ σn An,    n = 1, ···, N,    (3.2)

where the peak powers An > 0 and the average-to-peak power ratios σn, 0 ≤ σn ≤ 1, n = 1, ···, N, are fixed.

Definition 2 We say that the transmitted signals are subject to a symmetric average (resp. peak) power constraint σ (resp. A) if σn = σ (resp. An = A) for all n = 1, ···, N; else the average (resp. peak) power constraints are asymmetric.

In practice, the transmit apertures may belong to a laser array with similar specifications, or they may be a collection of a diverse group of devices (lasers or LEDs) with
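For a sampled waveform, the peak and average power constraints (3.2) are straightforward to verify numerically; the sketch below uses hypothetical parameter values and a simple ON-OFF waveform of the kind shown later to be capacity-achieving:

```python
def satisfies_constraints(samples, dt, T, A, sigma):
    """Discrete-time check of (3.2): 0 <= x(t) <= A pointwise, and
    (1/T) * integral of x(t) dt <= sigma * A."""
    peak_ok = all(0.0 <= x <= A for x in samples)
    avg_ok = (sum(samples) * dt) / T <= sigma * A
    return peak_ok and avg_ok

T, dt, A, sigma = 1.0, 0.001, 2.0, 0.25
n = int(T / dt)

# ON-OFF waveform at peak power with duty cycle 0.2 <= sigma: feasible.
x = [A if (i % 10) < 2 else 0.0 for i in range(n)]
ok = satisfies_constraints(x, dt, T, A, sigma)

# Constant transmission at peak power: violates the average constraint.
x_bad = [A] * n
bad = satisfies_constraints(x_bad, dt, T, A, sigma)
print(ok, bad)
```

Note that the average constraint, not the peak constraint, is what limits the duty cycle of ON-OFF signaling.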

significantly different operating characteristics. A symmetric power constraint is a reasonable assumption for the former case, while asymmetric constraints are more appropriate for the latter. In [19], the authors addressed the special case of the MIMO Poisson channel with a symmetric average power constraint. Here we also allow for asymmetric average power constraints.

We consider another variant of the average transmit power constraint, viz., a constraint on the sum of the average powers of the transmitted signals from all the transmit apertures, with significantly different consequences. We say that the transmitted signals are subject to peak power constraints An, n = 1, ···, N, and an average sum power constraint σ if

0 ≤ xn(t) ≤ An,    0 ≤ t ≤ T,    n = 1, ···, N,
(1/T) ∫0^T Σ_{n=1}^{N} xn(t) dt ≤ σ Σ_{n=1}^{N} An,    (3.3)

where 0 ≤ σ ≤ 1 is fixed. We shall see later in Chapter 4 that the average sum power constraint plays an important role in the characterization of the channel capacity of the MIMO Poisson fading channel with random channel fade.

For the channel under consideration, a (W, T)-code (f, φ) is defined as follows. The codebook comprises a set of W waveform vectors f(w) = {xn(w, t), 0 ≤ t ≤ T}, n = 1, ···, N, w ∈ W = {1, ···, W}, satisfying the following peak and average power constraints, which follow from (3.2):

0 ≤ xn(w, t) ≤ An,    0 ≤ t ≤ T,    n = 1, ···, N,    w ∈ W,
(1/T) ∫0^T xn(w, t) dt ≤ σn An,    n = 1, ···, N,    w ∈ W.    (3.4)

If the transmitted signals are subject to individual peak power constraints An, n = 1, ···, N, and an average sum power constraint σ, the waveform

vectors f(w), w ∈ W, satisfy the following peak and average power constraints, which follow from (3.3):

0 ≤ xn(w, t) ≤ An,    0 ≤ t ≤ T,    n = 1, ···, N,    w ∈ W,
(1/T) ∫0^T Σ_{n=1}^{N} xn(w, t) dt ≤ σ Σ_{n=1}^{N} An,    w ∈ W.    (3.5)

For each message w ∈ W, the transmitter sends N waveforms, one from each transmit aperture; the waveform xnT(w) = {xn(w, t), 0 ≤ t ≤ T} is sent from the nth transmit aperture. The decoder is a mapping φ: (Σ(T))^M → W, where Σ(T) has been defined earlier in Chapter 2. The receiver, upon observing ymT from the mth receive aperture, m = 1, ···, M, produces an output ŵ = φ(y1T, ···, yMT). The rate of this (W, T)-code is given by R = (1/T) log W nats/sec, and its average probability of decoding error is given by

Pe(f, φ) = (1/W) Σ_{w=1}^{W} Pr{ φ(Y1T, ···, YMT) ≠ w | xT(w) }.

In this chapter, we provide a "single-letter characterization" of the channel capacity C (see Definition 1) in terms of signal and channel parameters, and examine some properties of the optimum transmission strategies. We use the notation Cind and Csum to denote the capacity with individual and sum average power constraints, respectively.

3.3 Statement of results

The following inequality, which can be verified by simple algebra, will be used in our results:

ζ( Σ_{n=1}^{N} xn, y ) = Σ_{n=1}^{N} ζ( xn, y + Σ_{k=1}^{n−1} xk ) ≥ Σ_{n=1}^{N} ζ( xn, y ),    xn ≥ 0, y ≥ 0,    (3.6)

with x0 = 0.

3.3.1 Channel capacity

Our main result characterizes the capacity of the N × M MIMO Poisson channel.

Theorem 4 The capacity of the N × M MIMO Poisson channel with peak and average power constraints An and σn, n = 1, ···, N, respectively, is given by

Cind = max_{0 ≤ µn ≤ σn, n = 1, ···, N} I(µN, s),    (3.7)

where

I(µN, s) ≜ Σ_{m=1}^{M} [ Σ_{n=1}^{N} νn ζ( Σ_{k=Π(1)}^{Π(n)} sk,m Ak, λ0,m ) − ζ( Σ_{n=1}^{N} νn Σ_{k=Π(1)}^{Π(n)} sk,m Ak, λ0,m ) ],    (3.8)

with Π: {1, ···, N} → {1, ···, N} being a permutation of {1, ···, N} such that

µΠ(n) ≥ µΠ(n+1),    n = 1, ···, N − 1,    (3.9)
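Both relations in (3.6) can be checked numerically. Assuming the Chapter 2 definition ζ(x, y) = (x + y)log(x + y) − y log y (an assumption consistent with the derivative formulas of Chapter 2), the first relation is a telescoping identity, and the inequality follows because ζ(x, ·) is nondecreasing in y:

```python
import math
import random

def zeta(x, y):
    # Assumed Chapter-2 definition of zeta (y > 0).
    return (x + y) * math.log(x + y) - y * math.log(y)

random.seed(1)
xs = [random.uniform(0.1, 3.0) for _ in range(5)]
y = 0.7

lhs = zeta(sum(xs), y)

# Telescoping middle expression of (3.6): partial sums shift the 'y' argument.
acc, run = 0.0, 0.0
for x in xs:
    acc += zeta(x, y + run)
    run += x

# Right side of (3.6): no shift of the 'y' argument.
lower = sum(zeta(x, y) for x in xs)
print(lhs, acc, lower)
```

The identity is what allows the multi-aperture objective (3.8) to be decomposed along the permutation Π.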

and

νn ≜ µΠ(n) − µΠ(n+1),    n = 1, ···, N − 1,    νN ≜ µΠ(N).    (3.10)

Corollary 5 The capacity with peak power constraints An, n = 1, ···, N, and an average sum power constraint σ is given by

Csum = max_{0 ≤ µn ≤ 1, n = 1, ···, N, Σ_{n=1}^{N} µn An ≤ σ Σ_{n=1}^{N} An} I(µN, s),    (3.11)

where I(·, s) is as defined in (3.8).

Remarks: (i) The optimization in (3.7) (as well as in (3.11)) is of a concave functional over a convex compact set, so that the maximum clearly exists. The parameter µn in (3.7) (as well as in (3.11)) is the probability of the transmitted signal through transmit aperture n remaining at the level An (ON state), and can be interpreted as the average duty cycle of the nth aperture's transmitted signal.

(ii) Our proof of the achievability part of Theorem 4 (as well as Corollary 5) shows that binary signaling from each transmit aperture, with arbitrarily fast intertransition times, can achieve channel capacity. The two signaling levels at each transmit aperture correspond to no transmission ("OFF" state) and transmission at the peak power level ("ON" state).

(iii) In general, the capacity-achieving transmitted signals through the N transmit apertures are correlated across apertures but i.i.d. in time. A transmission event is an assignment of ON and OFF states to the N transmit apertures at each time instant. We show that the optimum set of transmission events can take at most N + 1 values (out of a total of 2^N possible values), which correspond to k transmit
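Since the objective is concave over a box, a coarse grid search already gives a usable numerical approximation of Cind for small N. The sketch below evaluates the reconstructed objective (3.8) (sorting duty cycles to obtain the permutation Π) under the assumed ζ(x, y) = (x + y)log(x + y) − y log y; all channel parameters are hypothetical:

```python
import math

def zeta(x, y):
    # Assumed Chapter-2 definition of zeta (y > 0).
    return (x + y) * math.log(x + y) - y * math.log(y)

def I_mu(mu, s, A, lam0):
    """Objective (3.8). mu[n]: duty cycles; s[n][m]: fades;
    A[n]: peak powers; lam0[m]: dark-current rates."""
    N, M = len(mu), len(lam0)
    order = sorted(range(N), key=lambda n: -mu[n])          # permutation Pi
    nu = [(mu[order[n]] - mu[order[n + 1]]) if n < N - 1 else mu[order[N - 1]]
          for n in range(N)]
    total = 0.0
    for m in range(M):
        partial, first, mix = 0.0, 0.0, 0.0
        for n in range(N):
            partial += s[order[n]][m] * A[order[n]]         # sum over k = Pi(1)..Pi(n)
            first += nu[n] * zeta(partial, lam0[m])
            mix += nu[n] * partial
        total += first - zeta(mix, lam0[m])
    return total

# Grid search for C_ind with N = 2 transmit and M = 1 receive apertures.
s, A, lam0, sig = [[1.0], [0.6]], [1.0, 1.0], [0.5], [0.4, 0.3]
best = max(I_mu([i / 100 * sig[0], j / 100 * sig[1]], s, A, lam0)
           for i in range(101) for j in range(101))
print(best)
```

By convexity of ζ in its first argument (with ζ(0, y) = 0), the objective is nonnegative, which the grid search confirms.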

apertures being ON, k = 0, 1, ···, N. Whenever a transmit aperture is ON, all the transmit apertures with higher average duty cycles (in the sense of the permutation Π) must also remain ON.

3.3.2 Optimum transmission strategy

The optimum transmission strategy can be examined upon solving the optimization problems in (3.7) and (3.11), which involve the computation of the maximum of a concave function of N variables over linear constraint sets. Optimal solutions of both problems can be computed using Kuhn-Tucker conditions (cf., e.g., [35], p. 233). The optimization problem in (3.11) differs from the problem in (3.7) only in the constraint set, with the individual constraints on the average duty cycles {µn, n = 1, ···, N}, viz., µn ≤ σn, n = 1, ···, N, being replaced by a constraint on their sum, viz., Σ_{n=1}^{N} µn An ≤ σ Σ_{n=1}^{N} An. We present below the structure of the optimal solutions of (3.7) and (3.11) for the special case of N = 2 transmit apertures and M receive apertures. The optimal solutions for general N ∈ Z+ can be obtained along similar lines, but are omitted due to notational complications.

3.3.2.1 Individual average power constraints

First consider the MIMO Poisson channel with peak and average power constraints on individual transmit apertures. We begin with some notation. Let

bn,m ≜ sn,m An / λ0,m,    n = 1, 2,    Bm ≜ Σ_{n=1}^{2} bn,m,    m = 1, ···, M.    (3.12)

Let ρ = ρn be the solution of

Σ_{m=1}^{M} λ0,m bn,m log( (1 + α(bn,m) bn,m) / (1 + ρ bn,m) ) = 0,    n = 1, 2,    (3.13)

where α(·) is as defined in (2.4), and let ρ = ρ̄ be the solution of

Σ_{m=1}^{M} λ0,m Bm log( (1 + α(Bm) Bm) / (1 + ρ Bm) ) = 0.    (3.14)

It can be verified¹ that ρ1 ≥ 0, ρ2 ≥ 0, and ρ̄ ≥ max{ρ1, ρ2}. For 0 ≤ x ≤ ρ1, let β1 = β1(x) solve

Σ_{m=1}^{M} λ0,m b1,m log( (1 + α(b1,m) b1,m) / (1 + b1,m β1 + b2,m x) ) = 0,    (3.15)

and for 0 ≤ x ≤ ρ2, let β2 = β2(x) solve

Σ_{m=1}^{M} λ0,m b2,m log( (1 + α(b2,m) b2,m) / (1 + b1,m x + b2,m β2) ) = 0.    (3.16)

It can be verified that β1(·) and β2(·) are monotone decreasing, with βn(0) = ρn and 1/e ≤ βn(0) ≤ 1/2, n = 1, 2. We are now ready to characterize the optimum transmission strategy for the 2 × M MIMO Poisson channel with peak and average power constraints.

¹ The following fact is useful for the purpose of the said verification: α(·) is monotone decreasing on [0, ∞) with α(0) = 1/2 and α(∞) = 1/e.
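The threshold equations (3.13)-(3.14) are scalar root-finding problems: their left sides are strictly decreasing in ρ, so bisection applies. The sketch below assumes the familiar closed form of α(·) from the single-aperture Poisson channel literature, α(z) = ((1+z)^{(1+z)/z}/e − 1)/z; this closed form is an assumption here (the text only uses its monotonicity and endpoint values 1/2 and 1/e):

```python
import math

def alpha(z):
    """Assumed closed form of the optimal SISO duty cycle alpha(z);
    monotone decreasing from 1/2 (z -> 0) to 1/e (z -> infinity)."""
    return ((1.0 + z) ** ((1.0 + z) / z) / math.e - 1.0) / z

def lhs(rho, lam0, B):
    """Left side of (3.14) for dark currents lam0[m] and coefficients B[m]."""
    return sum(l * b * math.log((1.0 + alpha(b) * b) / (1.0 + rho * b))
               for l, b in zip(lam0, B))

def solve_rho_bar(lam0, B):
    """Bisect (3.14) for rho-bar in [0, 1]; lhs is decreasing in rho."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if lhs(mid, lam0, B) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# With a single receive aperture the root is alpha(B) itself.
rho_bar = solve_rho_bar([1.0], [2.0])
print(rho_bar, alpha(2.0))
```

With M = 1 receive aperture the weighted-log equation collapses and ρ̄ reduces to α(B1), which gives a convenient sanity check for the solver.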

Theorem 5 The pair (µ1*, µ2*) that maximizes the optimization problem in (3.7) for the special case of N = 2 is given as follows.

1. If min{σ1, σ2} ≥ ρ̄, then µ1* = µ2* = ρ̄.

2. If σ1 ≤ ρ2 and σ2 ≥ β2(σ1), then µ1* = σ1, µ2* = β2(σ1).

3. If σ2 ≤ ρ1 and σ1 ≥ β1(σ2), then µ1* = β1(σ2), µ2* = σ2.

4. If σ1 ≤ σ2 and ρ2 ≤ σ1 ≤ ρ̄, then µ1* = µ2* = σ1.

5. If σ1 ≥ σ2 and ρ1 ≤ σ2 ≤ ρ̄, then µ1* = µ2* = σ2.

6. For all other (σ1, σ2) pairs, µ1* = σ1, µ2* = σ2.

Remarks: (i) In Figure 3.2, we partition the unit square into regions corresponding to the conditions (1)-(6) of Theorem 5; specifically, condition (k) of Theorem 5 corresponds to (σ1, σ2) pairs in region Rk in Figure 3.2, k = 1, ···, 6. Neither average power constraint is active at optimality (we say that the power constraint µi ≤ σi is active at optimality if µi* = σi, i = 1, 2; see [35], p. 220, for a formal definition of active constraints) for points in R1, while the average power constraints involving both σ1 and σ2 are active at optimality for points in R6. For points in R2 and R4, only the constraint involving σ1 is active at optimality, while for points in R3 and R5, only the constraint involving σ2 is active at optimality. The optimum pair (µ1*, µ2*) lies either inside or on the boundary of R6, which is characterized by one of the lines specified by the equations (i) µ1* = µ2* = ρ, min{ρ1, ρ2} ≤ ρ ≤ ρ̄; (ii) µ1* = β1(µ2*), 0 ≤ µ2* ≤ ρ1; (iii) µ2* = β2(µ1*), 0 ≤ µ1* ≤ ρ2. This is a generalization of the SISO Poisson channel capacity result [11, 52], in which the average power


the optimum transmission strategy assigns nonzero probability to only 2 transmission events. wherein the lower bound derived is always tight. under a symmetric average power constraint. (iii) For the 2 × M MIMO Poisson channel with asymmetric average power constraints. ρ}. R3 in Figure 3. (b’) aperture 1 1 in the OFF state and aperture 2 in the ON state (with probability µ∗ − µ∗ ).2). the optimum set of transmission events has at most 3 values. This simultaneous ON-OFF keying strategy can achieve capacity ¯ for an arbitrary number of transmit apertures. for a 2 range of values of σ1 and σ2 (corresponding to regions R1 . 1 2 This implies that for the 2 × M MIMO Poisson channel with a symmetric average power constraint. If σ1 < σ2 . if σ1 > σ2 . This special case corresponds to the problem addressed in [19]. ρ}). and (b) both apertures in the OFF state (with probability ¯ 1 − min{σ. the corresponding events are: 1 (a’) both transmit apertures in the ON state (with probability µ∗ ). and (c) both apertures in 1 2 the OFF state (with probability 1 − µ∗ ). ρ}). viz. In particular. (b) aperture 1 in the ON state and 2 aperture 2 in the OFF state (with probability µ∗ − µ∗ ).. ¯ (ii) It follows from Theorem 5 that if σ1 = σ2 = σ. Theorem 1) which is determined from the system parameters. the following events can have nonzero probability: (a) both transmit apertures in the ON state (with probability µ∗ ). and 2 1 (c’) both apertures in the OFF state (with probability 1 − µ∗ ). 61 . the optimum number of transmission events is still 2. then µ∗ = µ∗ = min{σ. R2 . (a) both transmit apertures in the ON state (with probability min{σ. However.constraint is active at optimality only below a threshold (km in [11]. and the simultaneous ON-OFF keying strategy is optimal.

2 Average sum power constraint Consider next the MIMO Poisson channel with peak average power constraints on individual transmit apertures and a constraint on the sum of the average powers from all the transmit apertures.3. 1. m y). ¯ ¯ 1 2 62 .11) 1 2 for the special case of N = 2 is given as follows. m ((1 + a)b1. ¯ λ0. d(x. m (b1. it is not difficult to verify that K1 ≥ K2 . ¯ (3. If σ ≥ ρ. We are now ready to characterize the optimum transmission strategy for the 2 × M MIMO Poisson channel with peak power constraints and an average sum power constraint. m ) log(1 + (b1. Theorem 6 The pair (µ∗ . m − ab2. For x. Let x = γn (y) be the solution of the equation d(x. m log(1 + ρ2 Bm )) .2. n = 1. m (Bm log (1 + ρBm ) − (1 + a)b2. y) = m=1 ∆ λ0. m log(1 + ρ1 Bm ) − aBm log (1 + ρBm )) . µ∗ ) that maximizes the optimization problem in (3. m − ab2. A2 ∆ ∆ M m=1 M m=1 λ0. (3. then µ∗ = µ∗ = ρ. We begin with some notation. 2.19) whence it can be verified that γ1 (y) ≥ γ2 (y).3.6). y ∈ IR. y ∈ IR.18) Note that d(·. Define K1 = K2 = where a = define M A1 . m )x + (1 + a)b2.(3.17) Using (3. y) is nondecreasing on IR for every y ∈ IR. y) = Kn .

A key property of the optimum pair (µ∗ , µ∗ ) is discussed in the following corollary. 1 2 We show that the relative ordering of µ∗ and µ∗ , i.e., whether µ∗ ≥ µ∗ or µ∗ ≤ µ∗ , 2 1 2 1 2 1 does not depend on σ. An application of this property is shown in Chapter 4 where we consider the MIMO Poisson channel with isotropically distributed random channel fade. Corollary 6 For all σ ∈ [0, 1], 1. if K1 ≥ 0 and K2 ≤ 0, then µ∗ = µ∗ ; 1 2 2. if K2 > 0, then µ∗ ≥ µ∗ ; 1 2 3. if K1 < 0, then µ∗ ≤ µ∗ . 2 1 Remarks: (i) From Corollary 6, it follows that regardless of the value of 0 ≤ σ ≤ 1, µ∗ ≥ µ∗ iff K1 + K2 ≥ 0. Thus, the relative ordering of µ∗ and µ∗ does not 2 1 2 1 depend on σ. This implies that given a set of fade coefficients {sn, m }N, M m=1 , peak n=1, power constraints {An }N , and background noise rates {λ0, m }M , one transmit n=1 m=1 aperture experiences better channel conditions (which is reflected by the condition 1(K1 + K2 ≥ 0)), and hence is assigned a higher optimal average duty cycle, than the other transmit aperture, for all 0 ≤ σ ≤ 1. (ii) The remark above can be generalized to the case of an arbitrary number of transmit apertures. The relative ordering of {µ∗ }N does not depend on σ. n n=1 63

2. If σ < ρ, then µ∗ = (1 + a)σ − aµ∗ , where ¯ 2 1    max{0, γ (σ)},   1    µ∗ = 1  min{γ2 (σ), (1 + 1/a)σ},      σ, 

if if

K1 ≤ d(σ, σ), K2 ≥ d(σ, σ), (3.20)

otherwise.

3.4 Proofs
We begin this section with some additional definitions that will be needed in our proofs. For any τ > 0, the number of photon arrivals Nm (τ ) on [0, τ ] together with the corresponding (ordered) arrival times Tmm
N (τ )

= {Tm, 1 , · · · , Tm, Nm (τ ) } are

τ sufficient statistics for Ym , m = 1, · · · , M. Therefore, the random vector Yss (τ ) =

{(Nm (τ ), Tmm

N (τ )

)}M is a complete description of the random processes Y τ = m=1

τ {Y1τ , · · · , YM }, τ ≥ 0.

The channel is characterized as follows. For an input signal xT = {xn(t), 0 ≤ t ≤ T}, n = 1, ···, N, satisfying (3.2), the channel output (Nm(T), Tm^{Nm(T)}) at the mth receive aperture has the "conditional sample function density" (cf., e.g., [47])

f_{Nm(T), Tm^{Nm(T)} | XT}(nm, t^{nm} | xT) = exp( −∫0^T λm(τ) dτ ) · ∏_{i=1}^{nm} λm(tm,i),    (3.21)

where λm(·) is as defined in (3.1). Recall that the processes (Nm(T), Tm^{Nm(T)}), m = 1, ···, M, are conditionally mutually independent given the knowledge of xT. Therefore, the conditional sample function density of Yss(T) given xT is given by

f_{Yss(T) | XT}(yss | xT) = ∏_{m=1}^{M} f_{Nm(T), Tm^{Nm(T)} | XT}(nm, t^{nm} | xT)
                        = ∏_{m=1}^{M} exp( −∫0^T λm(τ) dτ ) · ∏_{i=1}^{nm} λm(tm,i),    (3.22)

where yss = {(nm, t^{nm})}, m = 1, ···, M. In order to write the channel output sample function density for a given distribution of XT, consider the conditional mean of XnT (conditioned causally on the channel output), given by

X̂n(τ) ≜ E[Xn(τ) | Yss(τ)],    0 ≤ τ ≤ T,    n = 1, ···, N,    (3.23)

ˆ where we have suppressed the dependence of Xn (τ ), n = 1, · · · , N on Yss (τ ) for notational convenience; and define
N

Λm (τ ) =
n=1

sn, m Xn (τ ) + λ0, m ,

0 ≤ τ ≤ T,

m = 1, · · · , M,

(3.24)

and
∆ ˆ Λm (τ ) = I m (τ )|Yss (τ )] E[Λ N

=
n=1

ˆ sn, m Xn (τ ) + λ0, m ,

0 ≤ τ ≤ T,

m = 1, · · · , M.
N (T )

(3.25)

From ([47], pp. 425-427), it follows that the process (Nm (T ), Tmm

) is a self-

ˆ exciting PCP with rate process ΛT , m = 1, · · · , M, and the output sample function m density is given by
M T nm

fYss (T ) (yss ) =
m=1

exp −

ˆ λm (τ )dτ

0

·

ˆ λm (tm, i ),
i=1

(3.26)

where yss = {(nm , tnm )}M , and m m=1
N

ˆ λm (τ ) =
n=1

sn, m xn (τ ) + λ0, m , ˆ

m = 1, · · · , M,

with xn (τ ) = I ˆ E[Xn (τ )|Yss (τ ) = yss ], Proof of Theorem 4: Converse part: Let the rv W be uniformly distributed on (the message set) W = {1, · · · , W }. Consider a (W, T )–code (f, φ) of rate R =
∆ 1 T

0 ≤ τ ≤ T,

n = 1, · · · , N.

log W , and with 0 ≤ t ≤ T,

Pe (f, φ) ≤ ǫ, where 0 ≤ ǫ ≤ 1 is given. Denote Xn (t) = xn (W, t), n = 1, · · · , N. Note that (3.4) then implies that 0 ≤ Xn (t)
1 T T 0

≤ An ,

0 ≤ t ≤ T,

n = 1, · · · , N,

(3.27)

I E[Xn (τ )]dτ ≤ σn An , 65

n = 1, · · · , N.

32) This result can be deduced. n = 1.28). φ) with Pe (f. (3. The difference between the entropies of mixed rvs4 on the right side of (3.31) (3. the rate R of the (W.30) where (3. and (3.4.31) is h(Yss (T )) − h(Yss (T )|X T ) = I − log fYss (T ) (Yss (T )) − I − log fYss (T )|XT (Yss (T )|XT ) E E   T M Nm (T ) Λm (Tm. M. m = 1.6. the following Markov condition holds: W −◦− Xτ −◦− Yss (τ ). from [39]. Clearly. the last paragraph of Section 3. p.3) on p. · · · .29). for instance. T (3.30) is by the data processing result for mixed rvs3 .31) is by (3. 4 Our definition of the conditional entropy of mixed rvs is consistent with the general formulation developed. and Kolmogorov’s formula (3. 36. N. 37. Tmm N (T ) T ) be the channel output at the mth receive aperture when Xn is transmitted from the nth transmit aperture. for instance. · · · .29) Proceeding further with the right side of (3.28) By a standard argument. i ) m=1 exp − 0 Λm (τ )dτ i=1  = I log E T ˆ M Nm (T ) ˆ Λm (Tm. T )–code (f.Let (Nm (T ). I (W ∧ φ(Yss(T ))) ≤ I (W ∧ Yss (T )) = h(Yss (T )) − h(Yss (T )|W ) ≤ h(Yss (T )) − h(Yss (T )|X T ) (3. in [39]. Chapter 3. 0 ≤ τ ≤ T. 66 . i ) m=1 exp − 0 Λm (τ )dτ i=1 3 (3. φ) ≤ ǫ must satisfy R 1 I (W ∧ φ(Yss(T ))) .

Proceeding further with the right side of (3.31),

h(Y_ss(T)) − h(Y_ss(T) | X^T)
  = Σ_{m=1}^{M} E[ ∫_0^T ( Λ̂_m(τ) − Λ_m(τ) ) dτ ] + E[ Σ_{m=1}^{M} Σ_{i=1}^{N_m(T)} ( log Λ_m(T_{m,i}) − log Λ̂_m(T_{m,i}) ) ]   (3.32)
  = ∫_0^T Σ_{m=1}^{M} E[ ζ( Σ_{n=1}^{N} s_{n,m} X_n(τ), λ_{0,m} ) − ζ( Σ_{n=1}^{N} s_{n,m} X̂_n(τ), λ_{0,m} ) ] dτ,   (3.33), (3.34)

where X̂_n(τ) = E[X_n(τ) | Y_ss(τ)], 0 ≤ τ ≤ T, n = 1, …, N. Here (3.33) holds by an interchange of operations,

E[ ∫_0^T ( Λ̂_m(τ) − Λ_m(τ) ) dτ ] = ∫_0^T E[ Λ̂_m(τ) − Λ_m(τ) ] dτ = 0,

followed by noting that E[Λ̂_m(τ)] = E[Λ_m(τ)] (the interchange is permissible by the integrability of {Λ̂_m(τ), 0 ≤ τ ≤ T} and {Λ_m(τ), 0 ≤ τ ≤ T}), and (3.34) is proved in Appendix B.1. Next, by Jensen's inequality applied to the convex function ζ(·, λ_{0,m}),

E[ ζ( Σ_{n=1}^{N} s_{n,m} X̂_n(τ), λ_{0,m} ) ] ≥ ζ( Σ_{n=1}^{N} s_{n,m} E[X̂_n(τ)], λ_{0,m} ) = ζ( Σ_{n=1}^{N} s_{n,m} E[X_n(τ)], λ_{0,m} ),   (3.35), (3.36)

where (3.36) holds as E[X̂_n(τ)] = E[E[X_n(τ) | Y_ss(τ)]] = E[X_n(τ)]. Summarizing collectively (3.29)–(3.36), we get that

I(W ∧ φ(Y_ss(T))) ≤ ∫_0^T Σ_{m=1}^{M} { E[ ζ( Σ_{n=1}^{N} s_{n,m} X_n(τ), λ_{0,m} ) ] − ζ( Σ_{n=1}^{N} s_{n,m} E[X_n(τ)], λ_{0,m} ) } dτ.   (3.37)

We now establish the fact that the optimal marginal distributions of {X_n(τ)}_{n=1}^{N} that maximize the right side of (3.37) are binary {0, A_n}-valued. To this end, fixing 0 ≤ τ ≤ T, consider maximizing the integrand on the right side of (3.37) over all joint distributions of {X_n(τ), n = 1, …, N} with fixed means E[X_n(τ)] = χ_n(τ) A_n, n = 1, …, N, and subject to the first constraint (alone) in (3.27). Consider the optimization problem

h(τ, χ^N(τ)) = max_{0 ≤ X_n(τ) ≤ A_n, E[X_n(τ)] = χ_n(τ)A_n} Σ_{m=1}^{M} { E[ ζ( Σ_{n=1}^{N} s_{n,m} X_n(τ), λ_{0,m} ) ] − ζ( Σ_{n=1}^{N} s_{n,m} χ_n(τ) A_n, λ_{0,m} ) }.   (3.38)

Consider the following alternate formulation of (3.38):

h(τ, χ^N(τ)) = max_{0 ≤ X_n(τ) ≤ A_n, n = 2, …, N; E[η_1(X_2^N(τ))] = χ_1(τ)A_1}  max_{0 ≤ X_1(τ) ≤ A_1; E[X_1(τ)|X_2^N(τ)] = η_1(X_2^N(τ))} Σ_{m=1}^{M} { E[ ζ( Σ_{n=1}^{N} s_{n,m} X_n(τ), λ_{0,m} ) ] − ζ( Σ_{n=1}^{N} s_{n,m} χ_n(τ) A_n, λ_{0,m} ) },

where the inner optimization is performed over the class of conditional distributions of X_1(τ) (conditioned on X_2^N(τ)) with finite support [0, A_1] and fixed conditional mean E[X_1(τ) | X_2^N(τ) = x_2^N] = η_1(x_2^N), x_2^N ∈ ∏_{n=2}^{N} [0, A_n]. The second term above does not depend on the inner maximization, which therefore affects the first term alone. Using the strict convexity of ζ(·, y), the inner maximum is achieved (see footnote 10, Chapter 2) iff X_1(τ)

is a {0, A_1}–valued rv with

Pr{X_1(τ) = A_1 | X_2^N(τ) = x_2^N} = 1 − Pr{X_1(τ) = 0 | X_2^N(τ) = x_2^N} = η_1(x_2^N)/A_1,   x_2^N ∈ ∏_{n=2}^{N} [0, A_n].

Returning to (3.38), we see that the optimal marginal pmf of X_1(τ) that maximizes the right side of (3.38) is {0, A_1}–valued with

Pr{X_1(τ) = A_1} = E[η_1(X_2^N(τ))]/A_1 = χ_1(τ).   (3.39)

By symmetry, it follows that, at time instant τ, the optimal marginal pmf of X_n(τ) that maximizes the right side of (3.38) is {0, A_n}–valued with

Pr{X_n(τ) = A_n} = 1 − Pr{X_n(τ) = 0} = χ_n(τ),   n = 1, …, N.   (3.40)

It suffices, therefore, in order to compute the right side of (3.38), to focus on the joint pmf of binary {0, A_n}–valued rvs X_n(τ), n = 1, …, N. To this end, we introduce the following notation. Let b_n(i) ∈ {0, 1} denote the nth bit in the binary representation of i = Σ_{n=1}^{N} b_n(i) 2^{n−1}, i = 0, …, 2^N − 1, and let I_i = {n ∈ {1, …, N} : b_n(i) = 1} denote the set of nonzero bit positions in the binary representation of i. Then

q_i(τ) ≜ Pr{X_n(τ) = b_n(i) A_n, n = 1, …, N},   i = 0, …, 2^N − 1,

denotes the (joint) pmf of the nth transmit aperture remaining in state b_n(i), n = 1, …, N, at time instant τ; in other words, the joint pmf

vector q(τ) = {q_i(τ)}_{i=0}^{2^N−1} satisfies

q_i(τ) ≥ 0,   i = 0, …, 2^N − 1,
Σ_{i=0}^{2^N−1} q_i(τ) = 1,
Σ_{i: b_n(i)=1} q_i(τ) = χ_n(τ),   n = 1, …, N.   (3.41)

By (3.39), (3.40), we then get

h(τ, χ^N(τ)) = max_{q(τ)} Σ_{i=0}^{2^N−1} q_i(τ) Σ_{m=1}^{M} ζ( Σ_{n=1}^{N} s_{n,m} b_n(i) A_n, λ_{0,m} ) − Σ_{m=1}^{M} ζ( Σ_{n=1}^{N} s_{n,m} χ_n(τ) A_n, λ_{0,m} ),   (3.42)

where q(τ) = {q_i(τ)}_{i=0}^{2^N−1} satisfies (3.41). In Appendix B.2, we show that the maximum in the right side of (3.42) is achieved by

q_i*(τ) = χ_{P_τ(k)}(τ) − χ_{P_τ(k+1)}(τ),   if i = Σ_{n=1}^{k} 2^{P_τ(n)−1}, k = 1, …, N − 1,
        = χ_{P_τ(N)}(τ),   if i = 2^N − 1,
        = 1 − χ_{P_τ(1)}(τ),   if i = 0,
        = 0,   otherwise,   (3.43)

where P_τ : {1, …, N} → {1, …, N} is a permutation of {1, …, N} such that χ_{P_τ(n)}(τ) ≥ χ_{P_τ(n+1)}(τ), n = 1, …, N − 1. Summarizing collectively (3.38), (3.42) and (3.43), we then get

h(τ, χ^N(τ)) = Σ_{n=1}^{N−1} ( χ_{P_τ(n)}(τ) − χ_{P_τ(n+1)}(τ) ) Σ_{m=1}^{M} ζ( Σ_{k=1}^{n} s_{P_τ(k),m} A_{P_τ(k)}, λ_{0,m} ) + χ_{P_τ(N)}(τ) Σ_{m=1}^{M} ζ( Σ_{k=1}^{N} s_{P_τ(k),m} A_{P_τ(k)}, λ_{0,m} )   (3.44)
  = Σ_{n=1}^{N} ψ_{n,P_τ} χ_{P_τ(n)}(τ),   (3.45)

where we have defined (taking Σ_{k=1}^{0} x_k = 0)

ψ_{n,P} ≜ Σ_{m=1}^{M} { ζ( Σ_{k=1}^{n} s_{P(k),m} A_{P(k)}, λ_{0,m} ) − ζ( Σ_{k=1}^{n−1} s_{P(k),m} A_{P(k)}, λ_{0,m} ) },   n = 1, …, N,

for any permutation P of {1, …,

N}. Noting that (3.27) implies that

0 ≤ χ_n(τ) ≤ 1,   0 ≤ τ ≤ T,   n = 1, …, N,   (3.46)

and that, by (3.28),

(1/T) ∫_0^T χ_n(τ) dτ ≤ σ_n,   n = 1, …, N,   (3.47)

we thus see from (3.37), (3.38) and (3.45) that

(1/T) I(W ∧ φ(Y_ss(T))) ≤ max_{0 ≤ χ_n(τ) ≤ 1, (1/T)∫_0^T χ_n(τ)dτ ≤ σ_n} (1/T) ∫_0^T g(τ, χ^N(τ)) dτ,   (3.48)

where we have defined

g(τ, χ^N(τ)) ≜ Σ_{n=1}^{N} ψ_{n,P_τ} χ_{P_τ(n)}(τ) − Σ_{m=1}^{M} ζ( Σ_{n=1}^{N} s_{n,m} χ_n(τ) A_n, λ_{0,m} ).   (3.49)

Setting

µ_n ≜ (1/T) ∫_0^T χ_n(τ) dτ,   n = 1, …, N,   (3.50)

we then get

(1/T) ∫_0^T g(τ, χ^N(τ)) dτ = (1/T) ∫_0^T { Σ_{n=1}^{N} ψ_{n,P_τ} χ_{P_τ(n)}(τ) − Σ_{m=1}^{M} ζ( Σ_{n=1}^{N} s_{n,m} χ_n(τ) A_n, λ_{0,m} ) } dτ   (3.51)
  ≤ Σ_{n=1}^{N} ψ_{n,Π} µ_{Π(n)} − Σ_{m=1}^{M} ζ( Σ_{n=1}^{N} s_{n,m} µ_n A_n, λ_{0,m} ),   (3.52), (3.53)

where (3.51) is by Jensen's inequality applied to the convex function ζ(·, y) for all y ≥ 0, (3.52) (with Π as defined in (3.9)) is proved in Appendix B.3, and (3.53) is obtained by a rearrangement of terms.
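The construction (3.43) admits a simple algorithmic statement: sort the duty cycles in non-increasing order and place mass only on the N + 1 "staircase" ON-sets in which the k most active apertures transmit together. The following is a minimal sketch of that construction (function and variable names are illustrative, not from the text):

```python
def optimal_joint_pmf(chi):
    """Given per-aperture ON-probabilities (duty cycles) chi_1..chi_N,
    return the maximizing joint pmf q* of (3.43) as a dict mapping each
    ON-set (a frozenset of aperture indices) to its probability."""
    N = len(chi)
    # P: permutation sorting the duty cycles in non-increasing order
    P = sorted(range(N), key=lambda n: -chi[n])
    q = {}
    # i = 0: all apertures OFF
    q[frozenset()] = 1.0 - chi[P[0]]
    # k = 1, ..., N-1: the k largest-duty-cycle apertures are ON
    for k in range(1, N):
        q[frozenset(P[:k])] = chi[P[k - 1]] - chi[P[k]]
    # i = 2^N - 1: all apertures ON
    q[frozenset(range(N))] = chi[P[N - 1]]
    return q
```

The marginal Pr{aperture n ON} = Σ_{S ∋ n} q[S] recovers χ_n, and at most N + 1 of the 2^N on-off patterns carry positive mass, matching the support of (3.43).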

Thus, by (3.29), (3.48) and (3.53),

R ≤ max_{0 ≤ µ_n ≤ σ_n, n = 1, …, N} { Σ_{n=1}^{N} µ_{Π(n)} ψ_{n,Π} − Σ_{m=1}^{M} ζ( Σ_{n=1}^{N} s_{n,m} µ_n A_n, λ_{0,m} ) }.

Next, we use the following identity to simplify (3.53). For sequences of real numbers {x_n}_{n=1}^{N} and {y_n}_{n=1}^{N},

Σ_{n=1}^{N} x_n y_n = Σ_{n=1}^{N−1} (x_n − x_{n+1}) Σ_{k=1}^{n} y_k + x_N Σ_{k=1}^{N} y_k.   (3.54)

Setting x_n = µ_{Π(n)} and y_n = ψ_{n,Π}, n = 1, …, N, we get

Σ_{n=1}^{N} µ_{Π(n)} ψ_{n,Π} = Σ_{n=1}^{N−1} (µ_{Π(n)} − µ_{Π(n+1)}) Σ_{k=1}^{n} ψ_{k,Π} + µ_{Π(N)} Σ_{k=1}^{N} ψ_{k,Π} = Σ_{n=1}^{N} ν_n Σ_{m=1}^{M} ζ( Σ_{k=1}^{n} s_{Π(k),m} A_{Π(k)}, λ_{0,m} ),   (3.55)

with ν_n as defined in (3.10). Similarly, setting x_n = µ_{Π(n)} and y_n = s_{Π(n),m} A_{Π(n)}, n = 1, …, N, we get

Σ_{n=1}^{N} µ_{Π(n)} s_{Π(n),m} A_{Π(n)} = Σ_{n=1}^{N} ν_n Σ_{k=1}^{n} s_{Π(k),m} A_{Π(k)}.   (3.56)

Summarizing collectively (3.53), (3.55) and (3.56),

R ≤ max_{0 ≤ µ_n ≤ σ_n, n = 1, …, N} { Σ_{n=1}^{N} ν_n Σ_{m=1}^{M} ζ( Σ_{k=1}^{n} s_{Π(k),m} A_{Π(k)}, λ_{0,m} ) − Σ_{m=1}^{M} ζ( Σ_{n=1}^{N} ν_n Σ_{k=1}^{n} s_{Π(k),m} A_{Π(k)}, λ_{0,m} ) }.   (3.57)

This concludes the proof of the converse part of Theorem 4.

Achievability part: Fix L ∈ Z+ and set ∆ = T/L. Divide the time interval [0, T] into L equal subintervals, each of duration ∆. Now, consider the situation in which for each message w ∈ W, the channel input waveform at the nth transmit aperture, {x_n(w, t), 0 ≤ t ≤ T}, is restricted to be {0, A_n}–valued and piecewise

constant in each of the L time slots of duration ∆. Define

x̃_n(w, l) = 0,   if x_n(w, t) = 0, t ∈ [(l − 1)∆, l∆),
           = 1,   if x_n(w, t) = A_n, t ∈ [(l − 1)∆, l∆),
   w ∈ W,   l = 1, …, L,   n = 1, …, N.   (3.58)

Note that the condition (3.2) then requires that

(1/L) Σ_{l=1}^{L} x̃_n(w, l) ≤ σ_n,   w ∈ W,   n = 1, …, N.   (3.59)

Next, consider a decoder φ̃ : {0, 1}^{ML} → W based on restricted observations over the L time slots, comprising

Ỹ_m(l) = 1( Y_m(l∆) − Y_m((l − 1)∆) = 1 ),   l = 1, …, L,   (3.60)

(with Y_m(0) = 0) at the mth receive aperture, m = 1, …, M. The largest achievable rate of restricted (W, T)–codes as above — and, hence, the capacity C_ind — is clearly no smaller than C^(L)/T, where C^(L) is the capacity of an (L–)block discrete memoryless channel (in nats per block channel use) with input alphabet X̃^{NL} = {0, 1}^{NL}, output alphabet Ỹ^{ML} = {0, 1}^{ML}, and transition probability mass function (pmf)

W^(L)(ỹ^{ML} | x̃^{NL}) = ∏_{l=1}^{L} ∏_{m=1}^{M} W^{(m)}_{Ỹ_m|X̃^N}( ỹ_m(l) | x̃^N(l) ),   x̃^{NL} ∈ {0,1}^{NL},   ỹ^{ML} ∈ {0,1}^{ML},   (3.61)

where X̃_n, n = 1, …, N, and Ỹ_m, m = 1, …, M, are X̃– and Ỹ–valued rvs respectively, and the channel transition pmf W^{(m)}_{Ỹ_m|X̃^N}(·|·) (corresponding to the mth receive aperture) is given by

W^{(m)}_{Ỹ_m|X̃^N}(1 | x̃^N) = w_m(x̃^N) (T/L) exp( −w_m(x̃^N) (T/L) ) ≜ ω_m(x̃^N, L),
W^{(m)}_{Ỹ_m|X̃^N}(0 | x̃^N) = 1 − ω_m(x̃^N, L),   x̃^N ∈ {0, 1}^N,   (3.62)

with

w_m(x̃^N) ≜ Σ_{n=1}^{N} s_{n,m} x̃_n A_n + λ_{0,m},   x̃^N ∈ {0, 1}^N,   m = 1, …, M.   (3.63)

From [14], it is readily obtained that

C^(L) = max_{P_{X̃^{NL}}: Σ_{l=1}^{L} E[X̃_n(l)] ≤ Lσ_n, n = 1, …, N} I( X̃^{NL} ∧ Ỹ^{ML} ),   (3.64)

where the joint pmf of X̃^{NL}, Ỹ^{ML} is given by

P_{X̃^{NL}, Ỹ^{ML}}(x̃^{NL}, ỹ^{ML}) = P_{X̃^{NL}}(x̃^{NL}) W^(L)(ỹ^{ML} | x̃^{NL}),   x̃^{NL} ∈ {0,1}^{NL},   ỹ^{ML} ∈ {0,1}^{ML}.   (3.65)

By a standard argument which uses (3.61), the maximum in (3.64) is achieved by a product input pmf

P_{X̃^{NL}}(x̃^{NL}) = ∏_{l=1}^{L} P_{X̃^N(l)}(x̃^N(l)),   x̃^{NL} ∈ {0,1}^{NL},   (3.66)

so that from (3.64) and under the constraint (3.59),

C^(L) = L · max_{P_{X̃^N}: E[X̃_n] ≤ σ_n, n = 1, …, N} Σ_{m=1}^{M} I( X̃^N ∧ Ỹ_m ),   (3.67)

where the joint pmf of X̃^N, Ỹ_m is given by

P_{X̃^N, Ỹ_m}(x̃^N, ỹ) = P_{X̃^N}(x̃^N) W^{(m)}_{Ỹ_m|X̃^N}(ỹ | x̃^N),   x̃^N ∈ {0,1}^N,   ỹ ∈ {0,1}.   (3.68)

Let the joint pmf of X̃^N be represented by the vector p = {p_i}_{i=0}^{2^N−1}, where

p_i = Pr{ X̃_n = b_n(i), n = 1, …, N },   i = 0, …, 2^N − 1.   (3.69)

Clearly, p = {p_i}_{i=0}^{2^N−1} satisfies the constraints

p_i ≥ 0,   i = 0, …, 2^N − 1,
Σ_{i=0}^{2^N−1} p_i = 1,
Σ_{i: b_n(i)=1} p_i ≤ σ_n,   n = 1, …, N,   (3.70)

where the last inequality follows from the constraints E[X̃_n] ≤ σ_n, n = 1, …, N. From (3.62) and (3.68), it follows that for m = 1, …, M,

H(Ỹ_m) = h_b( Σ_{i=0}^{2^N−1} p_i ω_m(b^N(i), L) ),
H(Ỹ_m | X̃^N) = Σ_{i=0}^{2^N−1} p_i h_b( ω_m(b^N(i), L) ).   (3.71)

Therefore, by (3.67) and (3.71),

C^(L) = max_p L β_L(p),   (3.72)

with

β_L(p) ≜ Σ_{m=1}^{M} { h_b( Σ_{i=0}^{2^N−1} p_i ω_m(b^N(i), L) ) − Σ_{i=0}^{2^N−1} p_i h_b( ω_m(b^N(i), L) ) },   (3.73)

where p = {p_i}_{i=0}^{2^N−1} satisfies (3.70). Since L ∈ Z+ was arbitrary, writing ζ_m(x) for ζ(x, λ_{0,m}), we get

C_ind ≥ lim_{L→∞} C^(L)/T ≥ max_p lim_{L→∞} (L/T) β_L(p) = max_p Σ_{m=1}^{M} { Σ_{i=0}^{2^N−1} p_i ζ_m( Σ_{n=1}^{N} s_{n,m} b_n(i) A_n ) − ζ_m( Σ_{n=1}^{N} s_{n,m} A_n Σ_{i=0}^{2^N−1} p_i b_n(i) ) },   (3.74), (3.75)

where (3.74) is by (3.72), and (3.75) is established in Appendix B.4.
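The passage from the binary-output entropies (3.73) to the ζ-form limit (3.75) can be checked numerically. The sketch below assumes the standard Poisson-channel form ζ(x, λ0) = (x + λ0)·ln(x + λ0) − λ0·ln(λ0) (the precise definition appears earlier in the chapter); the single-aperture-pair parameters are illustrative:

```python
import math

def zeta(x, lam0):
    # assumed Poisson-channel form of the zeta function
    return (x + lam0) * math.log(x + lam0) - lam0 * math.log(lam0)

def hb(p):
    # binary entropy in nats
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

def beta_L(p, w, T, L):
    """beta_L of (3.73) for one receive aperture: p[i] is the pmf over the
    on-off patterns and w[i] the corresponding total arrival rate."""
    d = T / L
    omega = [wi * d * math.exp(-wi * d) for wi in w]      # eq. (3.62)
    wbar = sum(pi * oi for pi, oi in zip(p, omega))
    return hb(wbar) - sum(pi * hb(oi) for pi, oi in zip(p, omega))

# two-pattern example: background alone (OFF) vs background + signal (ON)
lam0, sA, T = 1.0, 2.0, 1.0
p = [0.6, 0.4]
w = [lam0, lam0 + sA]
limit = sum(pi * zeta(wi - lam0, lam0) for pi, wi in zip(p, w)) \
        - zeta(sum(pi * (wi - lam0) for pi, wi in zip(p, w)), lam0)
approx = (1000 / T) * beta_L(p, w, T, 1000)
```

As L grows, (L/T)·β_L(p) approaches `limit`, illustrating why the binary quantization (3.60) loses nothing in the limit of vanishing slot width.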

Setting

µ_n = Σ_{i: b_n(i)=1} p_i = Σ_{i=0}^{2^N−1} p_i b_n(i),   n = 1, …, N,   (3.76)

it follows by (3.70) and (3.75) that

C_ind ≥ max_{0 ≤ µ_n ≤ σ_n, n = 1, …, N} { h(µ^N) − Σ_{m=1}^{M} ζ_m( Σ_{n=1}^{N} s_{n,m} µ_n A_n ) },   (3.77)

where

h(µ^N) = max_p Σ_{i=0}^{2^N−1} p_i Σ_{m=1}^{M} ζ_m( Σ_{n=1}^{N} s_{n,m} b_n(i) A_n ),   (3.78)

with p = {p_i}_{i=0}^{2^N−1} satisfying the constraints

p_i ≥ 0,   i = 0, …, 2^N − 1,
Σ_{i=0}^{2^N−1} p_i = 1,
Σ_{i: b_n(i)=1} p_i = µ_n,   n = 1, …, N.   (3.79)

Comparing (3.42) and (3.78), we immediately see that the optimum p* = {p_i*}_{i=0}^{2^N−1} that achieves the maximum in (3.78) is given by

p_i* = µ_{Π(k)} − µ_{Π(k+1)},   if i = Σ_{n=1}^{k} 2^{Π(n)−1}, k = 1, …, N − 1,
     = µ_{Π(N)},   if i = 2^N − 1,
     = 1 − µ_{Π(1)},   if i = 0,
     = 0,   otherwise,   (3.80)

where Π(·) is as defined in (3.9), and the corresponding largest value is

h(µ^N) = Σ_{n=1}^{N} µ_{Π(n)} ψ_{n,Π},   (3.81)

with ψ_{n,Π} as defined above. Summarizing collectively (3.55), (3.56), (3.77) and (3.81), we get

C_ind ≥ max_{0 ≤ µ_n ≤ σ_n, n = 1, …, N} { Σ_{n=1}^{N} µ_{Π(n)} ψ_{n,Π} − Σ_{m=1}^{M} ζ( Σ_{n=1}^{N} s_{n,m} µ_n A_n, λ_{0,m} ) }
  = max_{0 ≤ µ_n ≤ σ_n, n = 1, …, N} { Σ_{n=1}^{N} ν_n Σ_{m=1}^{M} ζ( Σ_{k=1}^{n} s_{Π(k),m} A_{Π(k)}, λ_{0,m} ) − Σ_{m=1}^{M} ζ( Σ_{n=1}^{N} ν_n Σ_{k=1}^{n} s_{Π(k),m} A_{Π(k)}, λ_{0,m} ) }.   (3.82)

This concludes the proof of the achievability part.

Proof of Corollary 5: The proof follows the same arguments as the proof of Theorem 4, taking into account the difference between individual average power constraints at each transmit aperture (see (3.4)) and an average sum power constraint (see (3.5)). In the following, we identify only the differences.

Consider first the proof of the converse part. Note that instead of (3.27), it now follows from (3.5) that

0 ≤ X_n(t) ≤ A_n,   0 ≤ t ≤ T,   n = 1, …, N,
Σ_{n=1}^{N} (1/T) ∫_0^T E[X_n(τ)] dτ ≤ σ Σ_{n=1}^{N} A_n,   (3.83)

whence instead of (3.46), (3.47), we now get

0 ≤ χ_n(τ) ≤ 1,   0 ≤ τ ≤ T,   n = 1, …, N,
Σ_{n=1}^{N} (1/T) ∫_0^T χ_n(τ) A_n dτ ≤ σ Σ_{n=1}^{N} A_n,   (3.84)

and, from (3.50),

Σ_{n=1}^{N} µ_n A_n ≤ σ Σ_{n=1}^{N} A_n.   (3.85)

By (3.55), (3.56) and (3.85), we now get

C_sum ≤ max_{0 ≤ µ_n ≤ 1, Σ_{n=1}^{N} µ_n A_n ≤ σ Σ_{n=1}^{N} A_n} { Σ_{n=1}^{N} ν_n Σ_{m=1}^{M} ζ( Σ_{k=1}^{n} s_{Π(k),m} A_{Π(k)}, λ_{0,m} ) − Σ_{m=1}^{M} ζ( Σ_{n=1}^{N} ν_n Σ_{k=1}^{n} s_{Π(k),m} A_{Π(k)}, λ_{0,m} ) },   (3.86)

which concludes the proof of the converse part.

Consider next the achievability part. Instead of (3.59), we now have

(1/L) Σ_{l=1}^{L} Σ_{n=1}^{N} x̃_n(w, l) A_n ≤ σ Σ_{n=1}^{N} A_n,   w ∈ W,   (3.87)

and instead of (3.64),

C^(L) = max_{P_{X̃^{NL}}: Σ_{n=1}^{N} E[X̃_n^L] A_n ≤ Lσ Σ_{n=1}^{N} A_n} I( X̃^{NL} ∧ Ỹ^{ML} ),   (3.88)

which leads to, instead of (3.67),

C^(L) = L · max_{P_{X̃^N}: Σ_{n=1}^{N} E[X̃_n] A_n ≤ σ Σ_{n=1}^{N} A_n} Σ_{m=1}^{M} I( X̃^N ∧ Ỹ_m ).   (3.89)

With p = {p_i}_{i=0}^{2^N−1} as defined in (3.69), the last inequality in (3.70) is now replaced by

Σ_{n=1}^{N} A_n Σ_{i: b_n(i)=1} p_i ≤ σ Σ_{n=1}^{N} A_n,   (3.90)

which implies the following constraints on {µ_n}_{n=1}^{N}:

0 ≤ µ_n ≤ 1,   n = 1, …, N,
Σ_{n=1}^{N} µ_n A_n ≤ σ Σ_{n=1}^{N} A_n.   (3.91)

The proof of the achievability part is completed by replacing the constraints on {µ_n}_{n=1}^{N} in (3.82) by (3.91).

Proof of Theorem 5: The proof is an exercise in convex optimization. Without loss of generality, assume that σ1 ≥ σ2; the case of σ1 ≤ σ2 follows by symmetry. For the special case of N = 2 transmit apertures, by (2.12), we have

C_ind = max_{0 ≤ µ1 ≤ σ1, 0 ≤ µ2 ≤ σ2} I(µ1, µ2, s),   (3.92)

where for x1 ≥ 0, x2 ≥ 0, s ∈ (IR+)^{2×M},

I(x1, x2, s) = I_{1>2}(x1 − x2, x2, s),   if µ1 ≥ µ2,
             = I_{1<2}(x2 − x1, x1, s),   otherwise,   (3.93)

and where we have defined

I_{1>2}(x1, x2, s) = Σ_{m=1}^{M} { x1 ζ(s_{1,m} A_1, λ_{0,m}) + x2 ζ( Σ_{n=1}^{2} s_{n,m} A_n, λ_{0,m} ) − ζ( x1 s_{1,m} A_1 + x2 Σ_{n=1}^{2} s_{n,m} A_n, λ_{0,m} ) },   (3.94)

I_{1<2}(x1, x2, s) = Σ_{m=1}^{M} { x1 ζ(s_{2,m} A_2, λ_{0,m}) + x2 ζ( Σ_{n=1}^{2} s_{n,m} A_n, λ_{0,m} ) − ζ( x1 s_{2,m} A_2 + x2 Σ_{n=1}^{2} s_{n,m} A_n, λ_{0,m} ) }.   (3.95)

By (2.7), it follows that

∂I_{1>2}/∂x1 = Σ_{m=1}^{M} λ_{0,m} b_{1,m} log( (1 + α(b_{1,m}) b_{1,m}) / (1 + x1 b_{1,m} + x2 B_m) ),
∂I_{1<2}/∂x1 = Σ_{m=1}^{M} λ_{0,m} b_{2,m} log( (1 + α(b_{2,m}) b_{2,m}) / (1 + x1 b_{2,m} + x2 B_m) ),
∂I_{1>2}/∂x2 = Σ_{m=1}^{M} λ_{0,m} B_m log( (1 + α(B_m) B_m) / (1 + x1 b_{1,m} + x2 B_m) ),
∂I_{1<2}/∂x2 = Σ_{m=1}^{M} λ_{0,m} B_m log( (1 + α(B_m) B_m) / (1 + x1 b_{2,m} + x2 B_m) ).   (3.96)

Observe that I(x1, x2, s) is nondifferentiable for any s ∈ (IR+)^{2×M} if x1 = x2. In order to determine the optimal pair (µ1*, µ2*) that achieves the maximum in (3.92), we divide the constraint set {0 ≤ µ1 ≤ σ1, 0 ≤ µ2 ≤ σ2} into two half sets as follows. Setting

F_{1>2} = {(x1, x2) : 0 ≤ x1 ≤ σ1, 0 ≤ x2 ≤ σ2, x1 ≥ x2},
F_{1<2} = {(x1, x2) : 0 ≤ x1 ≤ σ1, 0 ≤ x2 ≤ σ2, x1 ≤ x2},

by (3.92), we get

C_ind = max { C_{1>2}, C_{1<2} },   (3.97)

where

C_{1>2} = max_{(µ1, µ2) ∈ F_{1>2}} I_{1>2}(µ1 − µ2, µ2, s),   (3.98)
C_{1<2} = max_{(µ1, µ2) ∈ F_{1<2}} I_{1<2}(µ2 − µ1, µ1, s).   (3.99)

We apply the transformation ν1 = µ1 − µ2 (resp. ν1 = µ2 − µ1) and ν2 = µ2 (resp. ν2 = µ1) in order to compute C_{1>2} (resp. C_{1<2}):

C_{1>2} = max_{(ν1, ν2) ∈ G_{1>2}} I_{1>2}(ν1, ν2, s),   (3.100)
C_{1<2} = max_{(ν1, ν2) ∈ G_{1<2}} I_{1<2}(ν1, ν2, s),   (3.101)

where

G_{1>2} = {(x1, x2) : x1 ≥ 0, 0 ≤ x2 ≤ σ2, x1 + x2 ≤ σ1},   (3.102)
G_{1<2} = {(x1, x2) : x1 ≥ 0, x2 ≥ 0, x1 + x2 ≤ σ2}.   (3.103)

We first compute C_{1<2} using the Lagrangian

J_{1<2}(ν1, ν2) = I_{1<2}(ν1, ν2, s) + η1 ν1 + η2 ν2 + η3 (σ2 − ν1 − ν2),

where η_k ≥ 0, k = 1, 2, 3 are Lagrange multipliers to be determined. The optimal pair (ν1*, ν2*) that achieves the maximum in (3.101) satisfies the following Kuhn–

Tucker conditions (cf., e.g., [35], p. 233):

∂I_{1<2}/∂ν1 (ν1*, ν2*) + η1 − η3 = 0,
∂I_{1<2}/∂ν2 (ν1*, ν2*) + η2 − η3 = 0,
η1 ν1* = 0,   η2 ν2* = 0,   η3 (σ2 − ν1* − ν2*) = 0,   (3.104)

whose solution yields

ν1* = 0,   ν2* = min{σ2, ρ̄},   (3.105)

where ρ̄ is as defined in (3.14), so that by (3.101),

C_{1<2} = I_{1<2}(0, min{σ2, ρ̄}, s).

We now compute C_{1>2} using the Lagrangian

J_{1>2}(κ1, κ2) = I_{1>2}(κ1, κ2, s) + η1 κ1 + η2 κ2 + η3 (σ2 − κ2) + η4 (σ1 − κ1 − κ2),   (3.106)

where η_k ≥ 0, k = 1, …, 4 are Lagrange multipliers to be determined. The optimal pair (κ1*, κ2*) that achieves the maximum in (3.100) satisfies the following Kuhn–

Tucker conditions:

∂I_{1>2}/∂κ1 (κ1*, κ2*) + η1 − η4 = 0,
∂I_{1>2}/∂κ2 (κ1*, κ2*) + η2 − η3 − η4 = 0,
η1 κ1* = 0,   η2 κ2* = 0,   η3 (σ2 − κ2*) = 0,   η4 (σ1 − κ1* − κ2*) = 0,   (3.108)

whose solution yields

κ1* = 0,   if σ2 ≥ ρ1,
    = min{σ1, β1(σ2)} − σ2,   otherwise,
κ2* = min{σ2, ρ̄},   if σ2 ≥ ρ1,
    = σ2,   otherwise,   (3.109)

where ρ1, ρ̄ and β1(·) are as defined in (3.13), (3.14) and (3.15) respectively, so that by (3.100),

C_{1>2} = I_{1>2}(0, min{σ2, ρ̄}, s),   if σ2 ≥ ρ1,
        = I_{1>2}(min{σ1, β1(σ2)} − σ2, σ2, s),   otherwise.   (3.110)

Comparing (3.105) and (3.110), we see that if σ1 ≥ σ2, then C_{1>2} ≥ C_{1<2}. Therefore, C_ind = C_{1>2}, and by the inverse transformation, the optimal pair µ1* = κ1* + κ2* and µ2* = κ2* satisfies one of the conditions (1), (2), (3) of Theorem 5. Conditions (4), (5), (6) corresponding to the case σ1 ≤ σ2 can be verified using similar arguments. This concludes the proof of Theorem 5.

Proof of Theorem 6: First, note that standard Lagrangian techniques (as used in the proof of Theorem 5) can be used to show that at optimality,

µ1* A1 + µ2* A2 = σ (A1 + A2)   (3.111)

holds iff σ ≤ ρ̄, where ρ̄ is as defined in (3.14). If σ > ρ̄, then the optimal pair (µ1*, µ2*) is given by the "unconstrained" optimum pair µ1* = µ2* = ρ̄. Next, we consider the case σ < ρ̄, when (3.111) holds at optimality. Substituting µ1 = x and µ2 = (1 + a)σ − ax, with a = A1/A2, from (3.5) and (3.11) we get

C_sum = max_{0 ≤ x ≤ (1 + 1/a)σ} I_0(x),   (3.112)

where

I_0(x) = Σ_{m=1}^{M} { (1 + a)(σ − x) ζ(s_{2,m} A_2, λ_{0,m}) + x ζ( Σ_{n=1}^{2} s_{n,m} A_n, λ_{0,m} ) − ζ( x s_{1,m} A_1 + ((1 + a)σ − ax) s_{2,m} A_2, λ_{0,m} ) },   if x ≤ σ,
       = Σ_{m=1}^{M} { (1 + a)(x − σ) ζ(s_{1,m} A_1, λ_{0,m}) + ((1 + a)σ − ax) ζ( Σ_{n=1}^{2} s_{n,m} A_n, λ_{0,m} ) − ζ( x s_{1,m} A_1 + ((1 + a)σ − ax) s_{2,m} A_2, λ_{0,m} ) },   if x > σ.   (3.113)

Observe that I_0(·) is a continuous, concave, piecewise differentiable function on [0, (1 + 1/a)σ], with a nondifferentiable point at σ. Using (2.12) and (3.113), it can be verified that

I_0′(x) = K_2 − d(x, σ),   if x < σ,
        = K_1 − d(x, σ),   if x > σ,   (3.114)

where K_1, K_2 and d(·, ·) are as defined in (3.17) and (3.18) respectively. Keeping in mind that K_1 ≥ K_2 and that d(·, y) is non-decreasing for any y ∈ IR, we conclude that (see Figure 3.3):

1. if I_0′(0) < 0, i.e., if γ1(σ) < 0, then µ1* = 0;
2. if I_0′(x_0) = 0 for some 0 ≤ x_0 < σ, i.e., if 0 ≤ γ1(σ) < σ, then µ1* = x_0 = γ1(σ);
3. if I_0′(σ−) > 0 and I_0′(σ+) < 0, then µ1* = σ;
4. if I_0′(x_0) = 0 for some σ < x_0 ≤ (1 + 1/a)σ, i.e., if σ < γ2(σ) ≤ (1 + 1/a)σ, then µ1* = x_0 = γ2(σ);
5. if I_0′((1 + 1/a)σ) > 0, i.e., if γ2(σ) > (1 + 1/a)σ, then µ1* = (1 + 1/a)σ.

[Figure 3.3: The possible variations of I_0′(x) versus x. (a) I_0′(0) < 0; (b) I_0′(x_0) = 0 for some 0 ≤ x_0 < σ; (c) I_0′(σ−) > 0, I_0′(σ+) < 0; (d) I_0′(x_0) = 0 for some σ < x_0 ≤ (1 + 1/a)σ; (e) I_0′((1 + 1/a)σ) > 0.]

The statement of Theorem 6 is a rearrangement of these observations.

Proof of Corollary 6: For 0 ≤ σ ≤ ρ̄, define D_1(σ) ≜ lim_{x→σ−} I_0′(x) and D_2(σ) ≜ lim_{x→σ+} I_0′(x). By (3.114), (3.17) and (3.18),

D_1(σ) = K_1 − d(σ, σ),   (3.115)
D_2(σ) = K_2 − d(σ, σ).   (3.116)

Note that D_1(0) = K_1, D_2(0) = K_2, and D_1(σ) − D_2(σ) = K_1 − K_2 for all 0 ≤ σ ≤ 1. Also, by the fact that ρ̄ ≥ max{ρ1, ρ2}, it follows that D_1(ρ̄) ≥ 0 and D_2(ρ̄) ≤ 0.

Case 1: K_1 < 0: By the continuity of D_1(·) on [0, ρ̄], and the facts that D_1(0) = K_1 < 0 and D_1(ρ̄) ≥ 0, we conclude that there exists 0 ≤ σ_c ≤ ρ̄ such that D_1(σ_c) = 0. If σ ≥ σ_c, then D_1(σ) > 0 and D_2(σ) < 0, so that by Theorem 6, µ1* = µ2* = σ. If σ < σ_c, it follows that µ1* = x_0 ≤ σ, and µ2* = (1 + a)σ − a µ1* ≥ σ. In summary, µ1* ≤ σ ≤ µ2* for all 0 ≤ σ ≤ 1.

Case 2: K_2 > 0: By the continuity of D_2(·) on [0, ρ̄], and the facts that D_2(0) = K_2 > 0 and D_2(ρ̄) ≤ 0, we conclude that there exists 0 ≤ σ_c ≤ ρ̄ such that D_2(σ_c) = 0. If σ ≥ σ_c, then D_1(σ) > 0 and D_2(σ) < 0, so that by Theorem 6, µ1* = µ2* = σ. If σ < σ_c, it follows that µ1* = x_0 ≥ σ, and µ2* = (1 + a)σ − a µ1* ≤ σ. In summary, µ1* ≥ σ ≥ µ2* for all 0 ≤ σ ≤ 1.

Case 3: K_1 ≥ 0 ≥ K_2: For all 0 ≤ σ ≤ ρ̄, D_1(σ) ≥ 0 and D_2(σ) ≤ 0, so that by Theorem 6, µ1* = µ2* = σ.

3.5 Numerical examples

In this section, we provide some illustrative examples to supplement the results described in this chapter. We determine the capacity of a 2 × 2 MIMO Poisson channel with constant channel fade. We consider symmetric and asymmetric peak and average power constraints, as well as an average sum power constraint on the transmitted signals.
On the other hand. We determine the capacity of a 2 × 2 MIMO Poisson channel with constant channel fade. then I0 (x0 ) = 0 for some σ ≤ x0 ≤ (1 + 1/a)σ. and the facts that ¯ D2 (0) = K2 > 0. In summary. (3.5 Numerical examples In this section. it follows that µ∗ = x0 ≤ σ. 1 2 1 2 Case 2: K2 > 0: By the continuity of D2 (·) on [0. we conclude that there exists 0 ≤ σc ≤ ρ such that ρ ¯ D1 (σc ) = 0. it follows that µ∗ = µ∗ = σ. ρ].116) are by (3. On the other hand. then I0 (x0 ) = 0 for some 0 ≤ x0 ≤ σ. If σ ≥ σc . If σ ≤ σc .115). D2 (0) = K2 . By Theorem 6. it follows that µ∗ = µ∗ = σ. we provide some illustrative examples to supplement the results described in this chapter. Note that D1 (0) = K1 . so that ¯ µ∗ = µ∗ = σ. so that by Theorem 6. ρ].115). so that by Theorem 6. and µ∗ = (1 + a)σ − aµ∗ ≤ σ.18). 1 2 1 2 Case 3: K1 ≥ 0 ≥ K2 : For all 0 ≤ σ ≤ ρ. We consider symmetric and asymmetric peak 86 . By (3. D2 (¯) ≤ 0.where (3.116).17). and the fact that ρ ≥ max{ρ1 . ¯ ρ ρ Case 1: K1 < 0: By the continuity of D1 (·) on [0. it follows that D1 (¯) ≥ 0 and D2 (¯) ≤ 0. In summary. D1 (σ) ≥ 0 and D2 (σ) ≤ 0. then D1 (σ) > 0 and D2 (σ) < 0. µ∗ ≤ σ ≤ µ∗ for all 0 ≤ σ ≤ 1. 1 2 1 if σ < σc . and D1 (σ) − D2 (σ) = K1 − K2 for all 0 ≤ σ ≤ 1. it follows that µ∗ = x0 ≥ σ. µ∗ ≥ σ ≥ µ∗ for all 0 ≤ σ ≤ 1. (3. then D1 (σ) > 0 and D2 (σ) < 0. 1 2 ′ ′ 3. By Theorem 6. we conclude that there exists 0 ≤ σc ≤ ρ such that ρ ¯ D2 (σc ) = 0. ρ2 }. 1 2 1 if σ > σc . D1 (¯) ≥ 0. and µ∗ = (1 + a)σ − aµ∗ ≥ σ. (3. and the facts that ¯ D1 (0) = K1 < 0.

K1 = 0.0.1]. The channel conditions experienced by ¯ the two transmit apertures are identical. λ0.3 0. K2 = −0. therefore.9 1 Figure 3.8 0.3 R2 σ2 = ρ1 0.4 0.1. as well as average sum power constraint on the transmitted signals. 87 .1.1 0. 1.4).0.7 R1 0.457.8 R5 R3 0. and average power constraints.2 R6 σ1 = β1(σ2) 0.1 σ1 = ρ2 _ σ1 = ρ 0. 1 = λ0.5 σ1 −−> 0.1 R4 0 0 0.6 σ2 −−> _ σ2 = ρ σ2 = β2(σ1) σ1 = σ2 0.7 0. σ2 for Example 3.0 0. Example 3. s = [1. It can be verified that ρ1 = ρ2 = 0. Furthermore.4 0.6 0.236.5 0.0 0. ρ = 0.5324. it is not surprising that the decision region of the optimal duty cycles is symmetric when the transmit apertures are subject to individual average power constraints (see Figure 3. 2 = 1.5324.9 0.2 0.4: Decision region of optimal duty cycles for individual average power constraints σ1 .1: Consider a 2 × 2 MIMO Poisson channel with A1 = A2 = 1.

6 µ .9 1 Figure 3.4 0.9 0.8 0.3 µ1= µ2= σ 0. µ −−> _ * * µ =µ =ρ 1 2 * 1 * 2 0.6 0.1 0.5: Optimal duty cycles for an average sum power constraint σ for Example 3.8 0.7 0.2 * * 0.4 0.1 0 0 0.5 σ −−> 0.1.1 0.5 0. 88 .3 0.2 0.7 0.

2 ind 0. σ2 for Example 3.35 0.1 0.3 ) −−> Capacity (C 0.05 0 1 0.1.0.4 0.2 σ1 −−> 0.4 0.6: Plot of capacity (Cind ) versus σ1 . 89 .6 0.15 0.8 1 Figure 3.6 0.25 0.8 0.2 σ2 −−> 0 0 0.

3 0.4 0.9 1 Figure 3.1 0.5 σ −−> 0.3 0.1.1 0.8 0.7: Plot of capacity (Csum ) versus σ for Example 3.7 0.25 Capacity Csum −−> 0. 90 .0.2 0.35 0.6 0.05 0 0 0.15 0.2 0.

2.471. In Figures 3.3 0.1. In this example.1 0. s = [1.1 σ1= ρ2 _ σ1= ρ 0.425.8: Decision region of optimal duty cycles for individual average power constraints σ1 .9 1 Figure 3.4 σ2= ρ1 0. ρ = 0.2 R6 σ1= β1(σ2) R4 0. 0.1]. it can be seen ¯ 91 .1 0. K1 = 0.4575. K2 = 0. ρ2 = 0.6 0. 1 = λ0.5).5 σ1 −−> 0.8 R5 0. we plot respectively the capacities for individual and sum average power constraints versus the average power constraint parameters.4 0. λ0. the optimal average duty cycles of both transmit apertures are equal for all 0 ≤ σ ≤ 1 (see Figure 3.1 0 0 0. It can be verified that ρ1 = 0.0.6 and 3.3 σ1= σ2 0.078.7.6 σ2 −−> _ σ2= ρ R σ2= β2(σ1) 2 0. 2 = 1. σ2 for Example 3. when the transmit apertures are subject to an average sum power constraint σ.2 0. Example 3.2: Consider a 2 × 2 MIMO Poisson channel with A1 = A2 = 1.7 R3 R1 0.8 0.9 0.3054.0 0.0.7 0.5 0.

2 0.6 0. 92 .3 µ = 2σ * 1 0.9 µ 0.5 σ −−> 0...3 0.9 1 * 2 Figure 3.7 0.1 µ =0 0 0 0...2 µ = 2σ− γ (σ) 2 * 2 0..2. 0.4 0.9: Optimal duty cycles for an average sum power constraint σ for Example 3.4 µ =µ =σ * 1 * 2 0.8 * 1 * µ 2 0.1 0.6 µ .7 0.5 µ = γ (σ) 2 * 1 0. µ −−> _ * * µ =µ =ρ 1 2 * 1 * 2 0.1 ___ .8 0.

8 0.2.02 0 1 0.0.6 0.12 0.08 0.06 0.4 0.8 1 ind Figure 3. σ2 for Example 3.6 0.2 σ1 −−> 0.4 0.2 <−− σ2 0 0 0.04 0.1 −−> Capacity C 0. 93 .10: Plot of capacity (Cind ) versus σ1 .

08 Capacity (Csum) −−> 0.6 0. 94 .04 0.1 0.12 0.7 0.3 0.2.4 0.8 0.9 1 Figure 3.11: Plot of capacity (Csum ) versus σ for Example 3.1 0.0.02 0 0 0.06 0.2 0.5 σ −−> 0.

8) is clearly asymmetric. and one of two types of average power constraints: (a) individual average power constraints. σ) is plotted.11). At each receive aperture.10 (resp. the variation of capacity with average power constraint parameters σ1 and σ2 (resp.9) for a range of values of σ. The decision region for optimal duty cycles when the transmit apertures are subject to individual average power constraints (see Figure 3. 3. which is captured in the condition K1 > K2 > 0. or (b) a constraint on the sum of the average powers from all transmit apertures. The transmit apertures are subject to individual peak power constraints. we see that (see Figure 3. the optimal average duty cycle of transmit aperture 1 is strictly larger than that of transmit aperture 2. Figure 3. When the transmit apertures are subject to a sum average power constraint σ. The capacity of this channel is explicitly characterized and optimum 95 . In Figure 3.that transmit aperture 1 experiences significantly better channel conditions than transmit aperture 2.6 Discussion We have studied the shot-noise limited N × M MIMO Poisson channel with constant channel fade and background noise rates. scaled by the respective channel fade. with a bias towards transmit aperture 1. the asymmetric nature of channel conditions with respect to the two transmit apertures is reflected in the asymmetry of these plots. the optical fields received from different transmit apertures are assumed to be sufficiently separated in frequency or angle of arrival such that the received total power is the sum of powers from individual transmit apertures.

There is a specific ordering of the transmit apertures (denoted by Π) which dictates that whenever a transmit aperture is ON. in which all the transmit apertures are simultaneously ON or OFF. as was shown in Theorem 5 for N = 2 transmit apertures. It has been established that a two-level signaling strategy through each transmit aperture with arbitrarily fast intertransition times is capacity-achieving. For the special case of a symmetric average power constraint. However. The transmitted signals through the N apertures are in general correlated across apertures and i. the optimum set of transmission events has at most N + 1 values (out of 2N possible values).i. the transmit apertures with higher average power levels may remain ON even when the ones with lower average power levels are OFF. N. a fact not mentioned therein.strategies for transmission are determined. all the apertures with stronger average power constraints (in the sense of Π) must also remain ON. where k = 0. 1. in time. for a range of values of average power levels. the optimum transmission strategy corresponds to the simultaneous ON-OFF keying events. This implies that the lower bound derived in [19] for the MIMO Poisson channel with a symmetric average power constraint is always tight. · · · . the optimum strategy assigns nonzero mass to only the simultaneous ON-OFF keying 96 . in the case of asymmetric average power constraints.d. Each of these values correspond to events in which exactly k transmit apertures are ON and the remaining N − k are OFF. Furthermore. Specifically. the optimum set of transmission events may have more than 2 values. On the other hand. The two levels correspond to no transmission (OFF state) and transmission at the peak power level (ON state).

In the next chapter. Several properties of optimal transmission strategies. It has been demonstrated that the relative ordering of the optimal average duty cycles across the transmit apertures does not depend on the average sum power constraint σ.. and a relative ordering of transmit apertures according to optimal average duty cycles will be shown to extend to the random fading channel as well.i.transmission events. Finally.d. 97 . binary signaling from each transmit aperture with arbitrarily fast intertransition times. the result of Corollary 6 will be used to obtain a simplified expression for channel capacity of the symmetric MIMO channel with isotropically distributed channel fade.g. We have identified a key property of the optimal transmission strategy for communication over a MIMO Poisson channel subject to individual peak power constraints and a constraint on the sum of the average power constraints. optimality of i. e. these results will be used to characterize the capacity of a MIMO Poisson channel with random channel fade.

we describe the problem formulation. While an explicit characterization of the capacity-achieving optimal power control law in its full generality is yet to be determined. while the transmitter CSI can be imperfect. the main contributions are summarized in Section 4. and changes across successive intervals in an i. we consider the N × M MIMO Poisson channel with random channel fade.6. The transmitted signals from each transmit aperture are subject to peak and average power constraints.1 Introduction In this chapter. and the proofs are outlined in Section 4. 98 . We provide a single-letter characterization of the channel capacity in terms of the channel and signal parameters. we are able to identify interesting properties of the optimal transmission strategy for the special case of a symmetric MIMO Poisson channel with isotropically distributed channel fade when the transmitter has perfect CSI. manner.Chapter 4 MIMO Poisson channel with random channel fade 4.d. Finally. The results are discussed in Section 4. In Section 4. The receiver is assumed to possess perfect CSI.3. An illustrative example is discussed in Section 4.i.4.5.2. The remainder of this chapter is organized as follows. We assume that the channel fade matrix remains unvarying over intervals of duration Tc .

We also assume that the receive apertures are sufficiently separated in space. t ≥ 0. m = 1.2 Problem formulation A block schematic diagram of the channel model is given in Figure 4. m ≥ 0 is the (constant) background noise rate at the mth receive aperture. · · · . m = 1. m (t)xn (t) + λ0. the processes Ym = {Ym (t).M (t) Channel fade matrix Λ1 (t) PCP Y1 (t) Y2 (t) Λ2 (t) PCP λ0.1 x1 (t) x2 (t) S1. (4. and λ0.2 (t) S1.1) where {Sn. m . M. 0 ≤ t ≤ T } is the IR+ -valued random fade from the nth transmit 0 aperture to the mth receive aperture.1 (t) S1. · · · . · · · . M. are taken to be mutually independent [47. 99 .1: N × M MIMO Poisson channel with random channel fade. m (t). · · · . t ≥ 0}N . For a given set of N IR+ –valued transmitted signals {xn (t).λ0. N.2 ΛM (t) PCP λ0. the received 0 n=1 signal {Ym (t).1. so that conditioned on the knowledge of transmitted signals and the instantaneous ∞ realization of channel fade at the receiver. t ≥ 0} at the mth receive aperture is a Z+ -valued nondecreasing 0 (left-continuous) PCP with rate equal to N Λm (t) = n=1 Sn.M YM (t) Figure 4.M (t) xN (t) SN. M. m = 1. 4. 19]. t ≥ 0}. n = 1.

1 T T 0 0 ≤ t ≤ T.. 2. m (t). kTc ). · · ·. · · · . · · · . We assume that the channel fade remain fixed over time intervals of duration Tc . 0 ≤ t ≤ T }N . are fixed. collectively denoted by xT = {xn (t). M. m (t). (4. · · · .2) xn (t)dt ≤ σn An . symmetric and asymmetric peak and average power constraints can be defined in a manner similar to Definition 2. m [k].i. t ∈ [(k − 1)Tc . and which satisfy peak and average power constraints of the form 0 ≤ xn (t) ≤ An . kTc ) be denoted by the rv Sn. · · · . 100 k = 1. i. one 0 corresponding to each transmit aperture. Sn. 0 ≤ σn ≤ 1. N. Also. where the peak powers An > 0 and the ratios of average-to-peak power σn . and change in an i. . For k = 1. 2.2: Block fading channel. let the channel fade from the nth transmit aperture to the mth receive aperture on [(k − 1)Tc . manner across successive such intervals.d. in other words.e. The channel fade. The input to the channel is a set of N IR+ –valued transmitted signals. N. t ≥ 0}. n = 1.S[1] 0 S[2] Tc 2Tc (k − 1)Tc S[k] kTc S[K] (K − 1)Tc T = KTc Transmission interval Figure 4. m [k] = Sn. The channel 0 coherence time Tc is a measure of the intermittent coherence of the time-varying channel fade. from the nth transmit aperture to the mth receive aperture is a IR+ -valued random process {Sn. for n = 1. m = 1. path gain. each of which is proportional to the transmitted optical power from the n=1 respective transmit aperture.

The channel fade matrix on $[(k-1)T_c,\ kT_c)$ is denoted by $S[k] = \{S_{n,m}[k],\ n = 1, \cdots, N,\ m = 1, \cdots, M\}$, $k \in \mathbb{Z}^+$. We assume, without loss of generality, that the message transmission duration $T$ is an integer multiple of the channel coherence time, i.e., $T = KT_c$, $K \in \mathbb{Z}^+$ (see Figure 4.2). The channel fade is then described by the random sequence $\{S[k],\ k \in \mathbb{Z}^+\}$ of i.i.d. repetitions of a random matrix $S$ with known distribution $F_S$. Our general results below hold for a broad class of distributions for $S$ which satisfy the following technical conditions: (a) $\Pr\{S_{n,m} > 0\} = 1$; (b) $\mathbb{E}[S_{n,m}] < \infty$; and (c) $\mathbb{E}[S_{n,m} \log S_{n,m}] < \infty$. Note that the rate of $\{Y_m(t),\ t \ge 0\}$ is thus given by
$$\Lambda_m(t) = \sum_{n=1}^N S_{n,m}[\lceil t/T_c \rceil]\, x_n(t) + \lambda_{0,m}, \qquad t \ge 0.$$

Various degrees of channel state information (CSI) can be made available to the transmitter and the receiver. As before, we shall assume that the receiver has perfect CSI. In general, we can model the CSI available at the transmitter in terms of a given mapping $h : (\mathbb{R}_0^+)^{N\times M} \to \mathcal{U}$, where $\mathcal{U}$ is an arbitrary subset of $(\mathbb{R}_0^+)^{N\times M}$, not necessarily finite, hereafter referred to as the transmitter CSI $h$. Let $\{U[k] = h(S[k]),\ k \in \mathbb{Z}^+\}$ denote the CSI at the transmitter. For $S[k] = s[k]$, $s[k] \in (\mathbb{R}_0^+)^{N\times M}$, the transmitter (resp. receiver) is provided with CSI $u[k] = h(s[k])$ (resp. $s[k]$) on $[(k-1)T_c,\ kT_c)$. We shall be particularly interested in two special cases of this general framework. In the first case, the transmitter has perfect CSI, i.e., $h$ is the identity mapping. In the second case, the transmitter has no CSI, i.e., $h$ is the trivial (constant) mapping.
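The block fading model and the transmitter CSI mapping $h$ can be mimicked numerically. In the following sketch the lognormal law is only a stand-in choice satisfying conditions (a)–(c), and all parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def block_fade_sequence(K, N, M):
    """K i.i.d. N x M fade matrices S[1..K], one per coherence interval of
    length Tc.  Lognormal fades satisfy the technical conditions:
    Pr{S > 0} = 1, E[S] finite, and E[S log S] finite."""
    return rng.lognormal(mean=0.0, sigma=0.5, size=(K, N, M))

# The two extreme transmitter-CSI mappings h discussed in the text:
h_perfect = lambda s: s                  # identity mapping: perfect CSI
h_none = lambda s: np.zeros_like(s)      # constant mapping: no CSI

S = block_fade_sequence(K=5, N=2, M=2)
U = np.stack([h_perfect(S[k]) for k in range(len(S))])   # U[k] = h(S[k])
```

Any intermediate quantizer of the fade matrix would serve as a partial-CSI mapping $h$ in the same way.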

Then the receiver CSI is given by the collection of matrices $S^K = \{S[1], \cdots, S[K]\}$, and the transmitter CSI is given by $U^K$. For the channel under consideration, a $(W, T)$-code $(f, \phi)$ is defined as follows. The codebook comprises a set of $W$ waveform vectors $f(w, u^K) = \{x_n(w, t, u^{\lceil t/T_c \rceil}),\ 0 \le t \le T\}_{n=1}^N$, $w \in \mathcal{W} = \{1, \cdots, W\}$, $u^K \in \mathcal{U}^K$, satisfying peak and average power constraints which follow from (4.2):
$$0 \le x_n(w, t, u^{\lceil t/T_c \rceil}) \le A_n, \qquad 0 \le t \le T, \qquad \frac{1}{T}\int_0^T x_n(w, t, u^{\lceil t/T_c \rceil})\, dt \le \sigma_n A_n, \tag{4.3}$$
$w \in \mathcal{W}$, $n = 1, \cdots, N$. Note that the transmitted signals $\{x_n(w, t)\}_{n=1}^N$ at time $t$ are allowed to depend on $u^{\lceil t/T_c \rceil}$, the causal transmitter CSI. For each message $w \in \mathcal{W}$ and transmitter CSI $u^K \in \mathcal{U}^K$ corresponding to the fade vector $s^K \in ((\mathbb{R}_0^+)^{N\times M})^K$, the transmitter sends the waveform $\{x_n(w, t, u^{\lceil t/T_c \rceil}),\ 0 \le t \le T\}$ from the $n$th transmit aperture. The decoder is a mapping $\phi : (\Sigma(T))^M \times ((\mathbb{R}_0^+)^{N\times M})^K \to \mathcal{W}$. The receiver, upon observing $y_m^T$ from the $m$th receive aperture, $m = 1, \cdots, M$, and being provided with $s^K$, produces an output $\hat{w} = \phi(y_1^T, \cdots, y_M^T, s^K)$. The rate of this $(W, T)$-code is given by $R = \frac{1}{T}\log W$ nats/sec, and the average probability of decoding error is given by
$$P_e(f, \phi) = \frac{1}{W}\sum_{w=1}^W \Pr\left\{\phi(Y_1^T, \cdots, Y_M^T, S^K) \ne w \,\middle|\, x^T(w, U^K)\right\},$$

where we have used the shorthand notation $x^T(w, U^K) = \{x_n(w, t, U^{\lceil t/T_c \rceil}),\ 0 \le t \le T\}_{n=1}^N$, $w \in \mathcal{W}$.

4.3 Statement of results

4.3.1 Channel capacity

We begin with a single-letter characterization of the capacity of the MIMO Poisson channel with block fading: in the subsequent sections, we provide a "single-letter characterization" of the channel capacity $C$ (see Definition 1) in terms of signal and channel parameters, and examine some properties of the associated optimal transmission strategies.

Theorem 7 The capacity of the MIMO Poisson fading channel for transmitter CSI $h$ is given by
$$C = \max_{\substack{\mu_n : \mathcal{U} \to [0,1],\ \mathbb{E}[\mu_n(U)] \le \sigma_n \\ n = 1, \cdots, N}} \mathbb{E}\bigl[I(\mu^N(U), S)\bigr], \tag{4.4}$$
where $U = h(S)$, and $I(\cdot, \cdot)$ is as defined in (3.8).

The capacities for the special cases of perfect and no CSI at the transmitter follow directly from Theorem 7.

Corollary 7 The capacity for perfect CSI at the transmitter is given by
$$C_P = \max_{\substack{\mu_n : (\mathbb{R}_0^+)^{N\times M} \to [0,1],\ \mathbb{E}[\mu_n(S)] \le \sigma_n \\ n = 1, \cdots, N}} \mathbb{E}\bigl[I(\mu^N(S), S)\bigr]. \tag{4.5}$$

The capacity for no CSI at the transmitter is given by
$$C_N = \max_{\substack{0 \le \mu_n \le \sigma_n \\ n = 1, \cdots, N}} \mathbb{E}\bigl[I(\mu^N, S)\bigr]. \tag{4.6}$$

Remarks: (i) The optimization in (4.4) (as well as in (4.5) and (4.6)) is that of a concave functional over a convex compact set, so that the maximum clearly exists.

(ii) From Theorem 7, we see that the capacity of the MIMO Poisson fading channel with perfect receiver CSI does not depend on the coherence time $T_c$. Conditioned on the transmitted signals $x^T$ and perfect receiver CSI $s^K$, the received signals $\{Y_m^T\}_{m=1}^M$ are independent across coherence intervals; hence it suffices to look at a single coherence interval in the mutual information computations. Furthermore, within a coherence interval, the optimality of i.i.d. (in time) transmitted signals leads to a lack of dependence of capacity on $T_c$.

(iii) Our proof of the achievability part shows that $\{0, A_n\}$-valued transmitted signals (corresponding to ON and OFF signal levels) from the $n$th transmit aperture, $n = 1, \cdots, N$, with arbitrarily fast intertransition times, can achieve channel capacity.

(iv) The optimizing "power control law" $\mu_n$ in (4.4) specifies the conditional probability of the $n$th transmit aperture assuming the level $A_n$ (ON state) depending on the available transmitter CSI; it can be interpreted as the average "conditional duty cycle" of the $n$th aperture's transmitted signal as a function of the transmitter CSI. The signal characteristics during each coherence interval depend only on the current transmitter CSI and not on the past transmitter CSI.
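Remarks (iii) and (iv) suggest a simple numerical picture of the capacity-achieving signaling: within a coherence interval, each aperture transmits a two-level waveform whose ON probability equals its duty cycle $\mu_n$. A minimal sketch — the slot discretization and parameter values are assumptions, not from the text:

```python
import numpy as np

rng = np.random.default_rng(2)

def on_off_waveform(mu, A, n_slots):
    """{0, A}-valued signal: each of n_slots fine slots is ON independently
    with probability mu, the 'conditional duty cycle' of Remark (iv).
    Large n_slots corresponds to arbitrarily fast intertransition times."""
    return A * (rng.random(n_slots) < mu)

x = on_off_waveform(mu=0.3, A=2.0, n_slots=100_000)
# time-averaged power ~ mu * A, so choosing mu <= sigma meets an
# average-power constraint sigma * A while the peak constraint holds by design
avg_power = x.mean()
```

The empirical average power concentrates around $\mu A$ as the number of slots grows, which is why the average-power constraint in (4.4) appears as the constraint $\mathbb{E}[\mu_n(U)] \le \sigma_n$ on the duty cycle.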

(v) In general, the optimal transmission strategy will comprise at most $N + 1$ transmission events, where each event corresponds to a set of $k$ transmit apertures remaining in the ON state and the remaining $N - k$ transmit apertures simultaneously remaining in the OFF state. The active set of transmit apertures at any given time instant will depend on the current transmitter CSI.

(vi) Theorem 1 (resp. Corollary 1) is obviously a special case of Theorem 7 (resp. Corollary 7), with $N = M = 1$.

4.3.2 Optimum power control strategy

It remains to determine the optimal power control law parameters $\{\mu_n^*(\cdot)\}_{n=1}^N$ that achieve the maximum in (4.4). One difficulty in obtaining a closed form analytical solution for $\{\mu_n^*(\cdot)\}_{n=1}^N$ is the nondifferentiability of $I(\cdot, s)$, so that standard variational techniques for differentiable functions (cf., e.g., [15]) cannot be applied here. Nonsmooth optimization techniques (cf., e.g., [7]) can be used to determine the solution computationally. While the analytical structure of the optimal solution remains to be characterized in full generality, we provide solutions in some special cases below. For the sake of notational simplicity, we focus on $N = 2$ transmit apertures.

We begin with some notation. For $s \in (\mathbb{R}_0^+)^{2\times M}$ and $n = 1, 2$, let $\rho = \rho_n(s)$ be the solution of
$$\sum_{m=1}^M \lambda_{0,m}\, b_{n,m} \log\left(\frac{1 + \alpha(b_{n,m})\, b_{n,m}}{1 + \rho B_m}\right) = 0, \tag{4.7}$$
where $\{b_{n,m}\}_{n=1,\,m=1}^{2,\,M}$ and $\{B_m\}_{m=1}^M$ are as defined in (3.12), and let $\rho = \bar{\rho}(s)$ be the

solution of
$$\sum_{m=1}^M \lambda_{0,m}\, B_m \log\left(\frac{1}{e}\cdot\frac{1 + \alpha(B_m)\, B_m}{1 + \rho B_m}\right) = 0. \tag{4.8}$$
From Section 3.3, recall that $\rho_1(s) \ge 0$, $\rho_2(s) \ge 0$, and
$$\min\{\rho_1(s), \rho_2(s)\} \le \bar{\rho}(s) \le \max\{\rho_1(s), \rho_2(s)\} \qquad \text{for all } s \in (\mathbb{R}_0^+)^{2\times M}.$$
Let
$$K_1(s) = \sum_{m=1}^M \lambda_{0,m}\left[(1+a)\, b_{1,m} \log\bigl(1 + \rho_1(s)\, b_{1,m}\bigr) - a B_m \log\bigl(1 + \bar{\rho}(s)\, B_m\bigr)\right], \tag{4.9}$$
$$K_2(s) = \sum_{m=1}^M \lambda_{0,m}\left[B_m \log\bigl(1 + \bar{\rho}(s)\, B_m\bigr) - (1+a)\, b_{2,m} \log\bigl(1 + \rho_2(s)\, b_{2,m}\bigr)\right],$$
with $a = A_1/A_2$. From Section 3.3, recall that $K_1(s) \ge K_2(s)$, $s \in (\mathbb{R}_0^+)^{2\times M}$. For $0 \le x,\, y \le 1$, define
$$d(x, y, s) = \sum_{m=1}^M \lambda_{0,m}\bigl(b_{1,m} - a b_{2,m}\bigr) \log\bigl(1 + (b_{1,m} - a b_{2,m})\, x + (1+a)\, b_{2,m}\, y\bigr), \tag{4.10}$$
and let $x = \gamma_j(y, s)$ be the solution of
$$d(x, y, s) = K_j(s), \qquad j = 1, 2, \quad y \in \mathbb{R}, \quad s \in (\mathbb{R}_0^+)^{2\times M}, \tag{4.11}$$
whence it can be verified that $\gamma_1(y, s) \ge \gamma_2(y, s)$ for all $y \in \mathbb{R}$ and $s \in (\mathbb{R}_0^+)^{2\times M}$.

4.3.3 Symmetric MIMO channel with isotropic fade

We now consider a symmetric channel model with random isotropic channel fade. We assume that the transmit apertures are subject to symmetric peak and average power constraints, i.e., $A_n = A$ and $\sigma_n = \sigma$, $n = 1, 2$. Furthermore, we assume that the receive apertures experience similar background radiation, so that $\lambda_{0,m} = \lambda_0$, $m = 1, \cdots, M$. The channel fade coefficients are assumed to be
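The defining equations for thresholds such as $\rho_n(s)$ and $\bar{\rho}(s)$ are monotone decreasing in $\rho$, so their roots can be recovered numerically by bisection, in the spirit of the computational nonsmooth-optimization route mentioned above. The sketch below solves a generic equation of this form; the coefficients are illustrative and not taken from the text:

```python
import numpy as np

def solve_threshold(f, lo=0.0, hi=1.0):
    """Bisection for the root of a strictly decreasing scalar function f,
    expanding hi until f(hi) <= 0.  Equations of the form
    sum_m c_m log((1 + a_m)/(1 + rho * b_m)) = 0 are decreasing in rho,
    so this recovers the unique nonnegative root."""
    while f(hi) > 0:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Illustrative coefficients (not taken from the text):
lam0 = np.array([0.1, 0.2])
b = np.array([1.5, 0.8])
f = lambda rho: float(np.sum(lam0 * b * np.log((1 + b) / (1 + rho * b))))
rho = solve_threshold(f)   # for this f the root is exactly rho = 1
```

Any of the scalar root equations in this section can be plugged in as `f`, since each is a finite sum of terms monotone in $\rho$.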

i.i.d. across both time and apertures, i.e., $S_{n,m}[k] \sim S$, $n = 1, 2$, $m = 1, \cdots, M$, $k = 1, 2, \cdots, K$, with a common cdf $F_S$. (Recall that the distribution function of the matrix $S[k]$ is denoted by $F_S$; the distinction between the scalar cdf $F_S$ and the matrix distribution $F_S$ should be clear from the context.) For this special channel model, we provide a simplified characterization of channel capacity. We begin with the following definition.

Definition 3 The states $s^{(1)} \in (\mathbb{R}_0^+)^{2\times M}$ and $s^{(2)} \in (\mathbb{R}_0^+)^{2\times M}$ are called mirror states if the rows of the respective channel fade matrices are permutations of each other, i.e.,
$$s_{1,m}^{(1)} = s_{2,m}^{(2)}, \qquad s_{2,m}^{(1)} = s_{1,m}^{(2)}, \qquad m = 1, \cdots, M.$$

The channel characteristics experienced by the transmit apertures are interchanged in the mirror states. An example of mirror states for a $2 \times 2$ MIMO Poisson channel is given in Figure 4.3.

Figure 4.3: Mirror states of a $2 \times 2$ MIMO Poisson channel.

We now summarize some interesting properties of mirror states.

Lemma 1 If $s^{(1)}$ and $s^{(2)}$ are mirror states, then

1. $F_S(s^{(1)}) = F_S(s^{(2)})$;

Let S0 = {s ∈ (IR+ )2×M : s = smir }. b2 ≥ 0. Consider next the following partitioning of the state set that separates the mirror states. s(2) ). a. therefore. N} and n P s(P ) is obtained by applying P on the rows of s. s(2) ) ≤ 2I( a1 +b2 . 1 2 Remarks: (i) The channel fades experienced by the two transmit apertures are interchanged in the mirror states. K1 (s(1) ) + K2 (s(2) ) = 0 = K1 (s(2) ) + K2 (s(1) ). and smir ∈ S2 }. where P is any permutation of {1. I(a. s(1) ) = I(b. (ii) We can generalize the notion of mirror states and interchange of roles of transmit apertures for the case of an arbitrary number N of transmit apertures. 4. 0 S1 = {s ∈ (IR+ )2×M : K1 (s) + K2 (s) ≥ 0. 2 s(1) ). b1 . Any permutation of the rows of the channel fade matrix leads to a mirror state. a2 . it is. N. b2 . 1 2 µ∗ (s(2) ) = µ∗ (s(1) ). s(1) )+ I(a2 . n = 1. 2 a2 +b1 . 0 S2 = {s ∈ (IR+ )2×M : K1 (s) + K2 (s) ≤ 0. I(a1 . µ∗ (s(1) ) = µ∗ (s(2) ). b. we conclude that Lemma 2 If s(1) and s(2) are mirror states. a. We state without proof the following generalization of Lemma 2: at optimality.13) (4. 3. · · · . b1 .2. µ∗ (s) = µ∗ (n) (s(P ) ). Given the symmetry of the channel model. and smir ∈ S1 }. · · · . From the symmetry of the channel model. not surprising that the role reversal of transmit apertures in the mirror states is reflected in the optimum power control law. 0 108 (4.12) . then at optimality. b ≥ 0. a1 .

we show that a necessary and sufficient condition for µ∗ (s) ≥ µ∗ (s) is K1 (s) + K2 (s) ≥ 0. 1] R 0 µ2 :(I + )2×M →[0. the condition 1(K1 (s) + K2 (s) ≥ 0) allows us to 0 partition the state set into channel states for which the channel conditions are “more favorable” for one transmit aperture than the other. then smir ∈ S2 . 2 1 s ∈ (IR+ )2×M . (4. µ2 (S). Clearly.where smir is the mirror state of s. i. Theorem 8 The capacity of the 2 × M MIMO isotropic block fading channel for perfect CSI at the transmitter is given by C = max µ1 :(I + )2×M →[0.15) where I1>2 (·. Heuristically. if s ∈ S2 . for which the channel conditions experienced by the transmit 109 .94).       0. Let    F (s). there exists another fade realization with the same probability of occurrence (which corresponds exactly to the mirror state). property 1.. I FS [I1>2 (µ1 (S) − µ2 (S). the one which experiences better channel conditions. and (4.14) ˆ FS (s) = ˆ By Lemma 1. the optimal power control law assigns higher average power to the “stronger” transmit aperture. ·) is as defined in (3. it follows that for each fade realization.13). property 2. it follows that FS is a valid pdf. and vice versa. we see that if s ∈ S1 . ·. Remark: For the symmetric MIMO channel with isotropic channel fade. if s ∈ S1 . S)].  S     if s ∈ S0 . 1] R 0 µ1 (S)≥µ2 (S) I F [µ1 (S)+µ2 (S)]≤2σ Eˆ S  2FS (s). By Lemma 1. Eˆ (4. Since the channel fade is isotropically distributed.e.

µ(s)) = κ2 (s.14).19) Remark: The optimization problem in (4. even though 110 .  2 κ1 (s. ·) and κ2 (·. if µ(s) ≥ ρ(s). 1]. if K2 (s) ≥ d(µ(s).15). and hence µ∗ (·) ≥ µ∗ (·).ˆ apertures are reversed. (4. ¯ (4. We now invoke the result of Theorem 6 to further simplify (4. 1 2 Note that the individual average power constraints on µ1 (·) and µ2 (·) in the right side of (4. µ(S)). For any s ∈ S0 ∪ S1 . 1] R 0 I F [µ(S)]≤σ Eˆ S where κ1 (·.16).16) involves a single mapping µ : S0 ∪S1 → [0. Eˆ (4.18) otherwise. then ¯    min{γ (µ(s). if µ(s) < ρ(s). By defining FS as in (4.17) 2.15). µ(s)) =   µ(s). 2µ(s)}. (4. then ¯ κ1 (s. primarily due to the following reason. s). µ(s)) = 2µ(s) − κ1 (s. Theorem 9 The capacity of the 2 × M MIMO isotropic block fading channel for perfect CSI at the transmitter is given by C = max I FS [I1>2 (κ1 (S. s).  κ2 (s.15).16) µ:(I + )2×M →[0. µ(S)). It is difficult to obtain a closed form expression for the optimal µ∗ (·) that maximizes the right side of (4. we are able to reduce the state set by picking the states for which the channel conditions are more favorable for transmit aperture 1. µ(s). µ(s)). as opposed to the bivariate optimization problem in (4. ·) are given as follows: 1. µ(s)) = ρ(s). S)]. κ2 (S.5) (with N = 2) are replaced by a constraint on their sum in (4. µ(S)) − κ2 (S.

κ1 (s. based on a suboptimal power control law.95)). 1] R0 I [µ(S)]≤σ E with I(·. 1] be the power control law that achieves 0 the maximum in CSOOK = max I E[I(µ(S). Corollary 8 Suppose µL : (IR+ )2×M → [0. We now determine a lower bound on channel capacity. (4. ·.21) Eˆ where κ1 (·. 0 ≤ y ≤ 1.17)–(4. S)]. This is in contrast to the optimum power control strategy for the symmetric MIMO Poisson channel with constant fades (see Remark (ii) following Theorem 5). (4.4 Proofs We begin this section with some additional definitions that will be needed in our proofs. µL(S)) − κ2 (S. Then. ·) as defined in (3.19). First. (4. the simultaneous ON-OFF keying transmission strategy is strictly suboptimal for the symmetric MIMO channel with isotropic fade. C ≥ CLB ≥ CSOOK .22) ∆ Remark: In an illustrative example below. ·. µ(S). µL (S)). we show that for some values of σ. the number of photon arrivals Nm (τ ) 111 . y) and κ2 (s. κ2 (·. κ2 (S. s) is differentiable everywhere with respect to its arguments (see (3.20) µ:(I + )2×M →[0. observe that for any τ > 0. ·). S)]. Let CLB = I FS [I1>2 (κ1 (S. y) are nondifferentiable functions of y. 4. and demonstrate that this new lower bound improves on the “simultaneous ON-OFF keying” lower bound proposed in [21]. µL (S)).I1>2 (·. ·) are as defined in (4.8) with N = 2.

1 . Therefore. the channel n=1 output (Nm (T ).24) where yss = {(nm . (4. YM }. sK ) M = m=1 M fNm (T ). τ ≥ 0. For an input signal xT = {xn (t). In order to write the channel output sample function m m=1 density conditioned only on the fade for a given joint distribution of (XT . [47]) fNm (T ). τ ] together with the corresponding (ordered) arrival times Tmm N (τ ) = τ {Tm. M. Tmm N (T ) ) at the mth receive aperture has the “conditional sample function density” (cf. · · · . Tm. YM are condi- tionally mutually independent [47. m [⌈t/Tc ⌉]xn (t) + λ0. SK (nm . m . SK ). Nm (τ ) } are sufficient statistics for Ym . 19]. TNm (T ) |XT . SK nm .2) and a fade vector SK = sK . so that conditioned on xT and sK . The channel of interest is characterized as follows. the conditional sample function density of Yss (T ) given xT and sK is given by fYss (T )|XT . e. · · · . tnm xT .. tnm )}M . 0 ≤ t ≤ T.23) λm (t) = n=1 sn. the processes Y1T .g. · · · . Therefore. i ). tnm |xT . sK ) m m T nm = m=1 exp − λm (τ )dτ 0 i=1 λm (tm. Recall our persistent assumption that the receive apertures are sufficiently separated T in space.i ). TNm (T ) |XT . Tmm N (τ ) )}M is a complete description of the m=1 τ random processes Y τ = {Y1τ . (4.during [0. 0 ≤ t ≤ T }N satisfying (4. m = 1. 112 . · · · . SK (yss |xT . the random vector Yss (τ ) = {(Nm (τ ). sK m m T nm = exp − where N λm (τ )dτ 0 i=1 λm (tm.

it follows that conditioned on SK . with xn (τ ) = I Xn (τ )|Yss (τ ) = yss (τ ). M.T consider the conditional mean of Xn (conditioned causally on the channel output and the fade) by ∆ ˆ Xn (τ ) = I Xn (τ )|Yss (τ ). M. S⌈τ /Tc ⌉ ) for n=1 notational convenience. 0 ≤ τ ≤ T. Tmm N (T ) ˆ ) is a self-exciting PCP with rate process ΛT . m .25) ˆ where we have suppressed the dependence of {Xn (τ )}N on (Yss (τ ). pp. n = 1. m = 1. i=1 (4. S⌈τ /Tc ⌉ . Tmm (T ) |SK (nm . E 0 ≤ τ ≤ T. and define N Λm (τ ) = n=1 ∆ Sn. m = 1.28) where yss = {(nm .27) By ([47]. · · · . and m m=1 N ˆ λm (τ ) = n=1 sn. tnm )}M . m [⌈τ /Tc ⌉]Xn (τ ) + λ0. m . S⌈τ /Tc ⌉ E N = n=1 ˆ Sn. m = 1. 113 . (4. 0 ≤ τ ≤ T. · · · . N. M.26) and ∆ ˆ Λm (τ ) = I Λm (τ )|Yss (τ ). i ). ˆ E 0 ≤ τ ≤ T. · · · . m [⌈τ /Tc ⌉]Xn (τ ) + λ0. so that the m output (conditional) sample function density is given by M fYss (T )|SK (yss |s ) = = K m=1 M fNm (T ). 425-427). (4. M. S⌈τ /Tc ⌉ = s⌈τ /Tc ⌉ . · · · . tnm |sK ) N m T nm m=1 exp − ˆ λm (τ )dτ 0 · ˆ λm (tm. m [⌈τ /Tc ⌉]ˆn (τ ) + λ0. · · · . m . x 0 ≤ τ ≤ T. m = 1. (4. · · · . the process (Nm (T ). N. n = 1.

n = 1.30) where Yss (t) = {(Nm (t). N. 0 ≤ t ≤ T. (4. 0 ≤ t ≤ T .29) I E[Xn (τ )]dτ ≤ σn An . n = 1. φ) ≤ ǫ. Let (Nm (T ). n = 1. φ) with Pe (f. and independent of SK . n = 1. Tmm N (T ) T ) be the channel output at the mth receive aperture when Xn is transmitted from the nth transmit aperture. · · · . · · · . SK = I W ∧ Yss (T )|SK = h(Yss (T )|SK ) − h(Yss (T )|W. With T = KTc . (4. · · · . consider a (W. I W ∧ φ(Yss (T ). 114 (4. Tmm )}M . By a standard argument. t. m = 1. SK ) ≤ I W ∧ Yss (T ). φ) ≤ ǫ must satisfy R 1 I W ∧ φ(Yss (T ). SK ). N. SK ) ≤ h(Yss (T )|SK ) − h(Yss (T )|X T .34) (4.3) then implies that 0 ≤ Xn (t) 1 T T 0 ≤ An . Note that given. (4.Proof of Theorem 7: Converse part: Let the rv W be uniformly distributed on the message set W = {1. φ) of rate R = ∆ 1 T log W . M.31). · · · . W }. Clearly. · · · .33) . N. T )–code (f. T (4. · · · . where 0 ≤ ǫ ≤ 1 is 0 ≤ t ≤ T. SK ) .32) (4. Denote Xn (t) = xn (W. the following Markov condition holds: W −◦− Xt S⌈t/Tc ⌉ −◦− Yss (t). U⌈t/Tc ⌉ ).31) Proceeding further with the right side of (4. N. K ∈ Z+ . and with Pe (f. T )–code (f. N (t) 0 ≤ t ≤ T. the m=1 rate R of the (W.

35) is by (4. and (4.37) is proved in Appendix C. λ0. M .where (4.35) = m=1 I E 0 M ˆ Λm (τ ) − Λm (τ ) dτ  Nm (T ) + m=1 M = m=1 M I  E T 0  Nm (T ) I  E i=1 ˆ log Λm (Tm. i ) m=1 exp − 0 Λm (τ )dτ i=1  = I log E T ˆ M Nm (T ) ˆ Λm (Tm. The difference between the conditional entropies of mixed rvs on the right side of (4. and (4.36)  i=1 ˆ log Λm (Tm. m [⌈τ /Tc ⌉]Xn (τ ). 0 ≤ τ ≤ T . i )  N n=1 N = m=1 I ζ E −I ζ E Sn. (4. · · · . 0 ≤ τ ≤ T } and {Λm (τ ).30). (4. m ] < ∞. by E[ ˆ E[Λ (4. 115 .24).32)). M implies the integrability of {Λm (τ ). · · · . m where (4. 0 ≤ τ ≤ T }. m = 1.28). m = E[S ˆ 1. · · · . · · · .32) is by the data processing result for mixed rvs (cf.34) is h(Yss (T )|SK ) − h(Yss (T )|X T . (2. m = 1. N . SK ) = I − log fYss (T )|SK (Yss (T )|SK ) E − I − log fYss (T )|XT . 2 The interchange is permissible as the assumed condition I n. i ) − log Λm (Tm.34) is by (4.33) is by the independence of W from SK .27). SK (Yss (T )|XT . m dτ.36) holds by an interchange of operations2 to get T I E 0 ˆ (Λm (τ ) − Λm (τ ))dτ = T 0 I Λm (τ ) − Λm (τ ) dτ. (4. E ˆ followed by noting that I Λm (τ )] = I m (τ )]. (4.1. SK ) E   T M Nm (T ) Λm (Tm. i )   (4. i ) − log Λm (Tm. i ) m=1 exp − 0 Λm (τ )dτ i=1 M T (4. n = 1. λ0. M.37) n=1 ˆ Sn. m [⌈τ /Tc ⌉]Xn (τ ).

in the right side of (4. U⌈τ /Tc ⌉−1 ). m [⌈τ /Tc ⌉] I [Xn (τ )| S[⌈τ /Tc ⌉]] . S⌈τ /Tc ⌉ E E = I [Xn (τ ) |S[⌈τ /Tc ⌉]] . we 116 .37). 0 ≤ t ≤ T.40) = I ζ E n=1 N = I ζ E n=1 where (4.42) (4. m E (4. (4.2 by setting A = (W. (4. m [⌈τ /Tc ⌉]Xn (τ ). E and (4.37).34). (4. λ0. (4.41) S[⌈τ /Tc ⌉] which is obtained from Appendix A.39) (4. U[⌈τ /Tc ⌉] ] E E = I [Xn (τ ) |U[⌈τ /Tc ⌉]] E by virtue of the Markov condition Xn (τ ) −◦− U[⌈τ /Tc ⌉] −◦− S[⌈τ /Tc ⌉]. m [⌈τ /Tc ⌉]Xn (τ ). λ0. m S[⌈τ /Tc ⌉] (4.40). m E Sn. y).Next.39) is from I Xn (τ ) |S[⌈τ /Tc ⌉] E ˆ = I I Xn (τ ) Yss (τ ). m n=1 N Sn. y ≥ 0.38) is by Jensen’s inequality applied to the convex function ζ(·. m [⌈τ /Tc ⌉] I Xn (τ ) S[⌈τ /Tc ⌉] .38) ≥ I ζ E = I ζ E I E n=1 N ˆ Sn.40) holds as I [Xn (τ ) |S[⌈τ /Tc ⌉]] = I [Xn (τ ) |S[⌈τ /Tc ⌉]. N I ζ E n=1 ˆ Sn. N. m N = I I ζ E E n=1 N ˆ Sn. λ0. λ0. λ0. λ0. · · · . B = S[⌈τ /Tc ⌉]. n = 1. m [⌈τ /Tc ⌉]Xn (τ ) S[⌈τ /Tc ⌉] . Summarizing collectively (4. C = U[⌈τ /Tc ⌉] and X = Xn (τ ). m E ˆ Sn. m [⌈τ /Tc ⌉] I [Xn (τ )| U[⌈τ /Tc ⌉]] .

get that I W ∧ φ(Yss (T ), SK )
T M

N

I ζ E
0 m=1 N n=1

Sn, m [⌈τ /Tc ⌉]Xn (τ ), λ0, m dτ. (4.43)

−I ζ E

n=1

Sn, m [⌈τ /Tc ⌉] I [Xn (τ )|U[⌈τ /Tc ⌉]] , λ0, m E

The right side of (4.43) is further bounded above by a suitable modification of Footnote 10, Chapter 2. Considering the integrand in (4.43), fix 0 ≤ τ ≤ T and condition on S[⌈τ /Tc ⌉] = s, s ∈ (IR+ )N ×M . Then 0
M N

I ζ E
m=1 N n=1

Sn, m [⌈τ /Tc ⌉]Xn (τ ), λ0, m S[⌈τ /Tc ⌉] = s

−ζ =

n=1 M

Sn, m [⌈τ /Tc ⌉] I E[Xn (τ )|U[⌈τ /Tc ⌉]], λ0, m
N

I ζ E
m=1 N n=1

sn, m Xn (τ ), λ0, m

S[⌈τ /Tc ⌉] = s

−ζ
M

n=1

sn, m I E[Xn (τ )|U[⌈τ /Tc ⌉] = h(s)], λ0, m
N

=
m=1

I ζ E
n=1 N

sn, m Xn (τ ), λ0, m

U[⌈τ /Tc ⌉] = h(s) (4.44)

−ζ

n=1

sn, m I E[Xn (τ )|U[⌈τ /Tc ⌉] = h(s)], λ0, m

by (4.42). Consider maximizing the right side of (4.44) over all (conditional) joint distributions of {Xn (τ ), n = 1, · · · , N} conditioned on U[⌈τ /Tc ⌉] = h(s) with fixed conditional means I E[Xn (τ )|U[⌈τ /Tc ⌉] = h(s)] = χn (τ, h(s))An , n = 1, · · · , N, say, and subject to the first constraint (alone) in (4.29). The right side of (4.44) then equals
M N

I ζ E
m=1 n=1

sn, m Xn (τ ), λ0, m 117

U[⌈τ /Tc ⌉] = h(s)

N

−ζ

sn, m χn (τ, h(s))An , λ0, m
n=1

,

(4.45)

and is maximized by considering the first term above. Let h(τ, s, χN (τ, h(s))) = max

0≤Xn (τ )≤An I [Xn (τ )|U[⌈τ /Tc ⌉]=h(s)]=χn (τ, h(s))An E

n=1, ···, N N

M

I E
m=1

ζ
n=1

sn, m Xn (τ ), λ0, m

U[⌈τ /Tc ⌉] = h(s) .

(4.46)

Noting the structural similarity of the optimization problems in (4.46) and (3.38), by (3.45), we conclude that
N

h(τ, s, χ (τ, h(s))) =
n=1

N

ψn, Pτ (s)χPτ (n) (τ, h(s)),

(4.47)

where Pτ is a permutation of {1, · · · , N} such that χPτ (n) (τ, h(s)) ≥ χPτ (n+1) (τ, h(s)), and
M n

n = 1, · · · , N − 1,

(4.48)

ψn, P (s) =
m=1

ζ −ζ

sP (k), m AP (k) , λ0, m
k=1 n−1

sP (k), m AP (k) , λ0, m
k=1

(4.49)

for n = 1, · · · , N and any permutation P of {1, · · · , N}. For notational brevity, we have suppressed the dependence of Pτ on h(s). Since (4.29) implies that 0 ≤ χn (τ, h(s)) ≤ 1,
1 T T 0

0 ≤ τ ≤ T,

s ∈ (IR+ )N ×M , 0

(4.50)

I n (τ, h(S[⌈τ /Tc ⌉]))]dτ ≤ σn , E[χ 118

n = 1, · · · , N,

we thus see from (4.43)–(4.47), (4.50) that 1 I(W ∧ φ(Yss (T ), SK )) T ≤ max
1 T χn :[0, T ]×U →[0, 1] T E 0 I [χn (τ,h(S[⌈τ /Tc ⌉]))]dτ ≤σn

1 T

T 0 T 0

I E[g(τ, S[⌈τ /Tc ⌉], χN (τ, h(S[⌈τ /Tc ⌉])))]dτ I E[g(τ, S[⌈τ /Tc ⌉], χN (τ, U[⌈τ /Tc ⌉]))]dτ, (4.51)

n=1, ···, N

=
1 T

max
χn :[0, T ]×U →[0, 1] T E 0 I [χn (τ,U[⌈τ /Tc ⌉])]dτ ≤σn

1 T

n=1, ···, N

where for 0 ≤ τ ≤ T , s ∈ (IR+ )N ×M , we have defined 0
N

g(τ, s, χN (τ, h(s))) =
n=1

ψn, Pτ (s)χPτ (n) (τ, h(s))
M N

ζ
m=1 n=1

sn, m χn (τ, h(s))An , λ0, m . (4.52)

In order to simplify the right side of (4.51), for u ∈ U, define ηn (k, u) = µn (u) = 1 Tc 1 K
kTc

I n (τ, U[k])|U[k] = u]dτ, E[χ
(k−1)Tc K

k = 1, · · · , K, (4.53) (4.54)

ηn (k, u),
k=1

n = 1, · · · , N,

whence by (4.50), we get 0 ≤ µn (u) ≤ 1, and σn ≥ = = = 1 T 1 K 1 K 1 K
T 0 K

u ∈ U,

n = 1, · · · , N,

(4.55)

I n (τ, U[⌈τ /Tc ⌉])]dτ E[χ 1 Tc
kTc

I n (τ, U[k])]dτ E[χ
(k−1)Tc

k=1 K

I n (k, U[k])] E[η
k=1 K

(4.56) (4.57) (4.58)

I n (k, U)] E[η
k=1

= I n (U)], E[µ 119

54).58) is by (4.57) holds by the i. note that 1 K K (where for notational brevity. λ0. (4. U[⌈τ /Tc ⌉]))]dτ K 1 = K = 1 K k=1 K 1 Tc 1 Tc M kTc (k−1)Tc kTc I E[g(τ. m [k]χn (τ. U[k]) dτ n ≤ I E νn (U) n=1 m=1 ζ j=1 SΠ(j). λ0. N} so that for u ∈ U. · · · . (4. m AΠ(j) .53). n = 1. U[k])An . if n = 1. χN (τ. and (4. Pτ (S[k])χPτ (n) (τ. U[k])An . m . if n = N. m dτ (4. In Appendix C.d. U[k]))]dτ N I E (k−1)Tc N n=1 ψn. · · · . (4. m [k]χn (τ.59) k=1 − ζ m=1 n=1 Sn.56) is by (4.2.i.60) where Π is a permutation of {1. we show that 1 K K k=1 1 Tc kTc N I E (k−1)Tc N n=1 M ψn.52).63) k=1 ≥ I E k=1 m=1 ζ n=1 120 . and    µ  Π(n) (u) − µΠ(n+1) (u). The time-averaged integral on the right side of (4. λ0. νn (u) = (4. 1 Tc 1 K kTc M N I E (k−1)Tc K M m=1 ζ n=1 N Sn. (4. S[⌈τ /Tc ⌉]. m by (4. m [k]ηn (k. m Sn. Pτ (S[k])χPτ (n) (τ. χN (τ. λ0. we have suppressed the dependence of Π on u). N − 1. · · · . S[k]. nature of channel fade S∞ .where (4.61) Furthermore.62)   µ  Π(N ) (u). U[k]) dτ. U[k])An . µΠ(n) (u) ≥ µΠ(n+1) (u).51) can be written as 1 T T 0 I E[g(τ. N − 1.

(4.67) This concludes the proof of the converse part of Theorem 7.1 = K ≥ I E = I E K M N I E k=1 M m=1 N ζ n=1 Sn. λ0.31). m . m AΠ(n) .66). (4.d.63) is by Jensen’s inequality applied to the convex function ζ(·. Fix L ∈ Z+ and set ∆ = Tc /L. (4. Summarizing collectively (4. (4. m µn (U)An . m AΠ(j) . in the channel fade sequence S∞ = {S[k]}∞ .59).55).64) holds by i.66) where (4. N − ζ m=1 n=1 νn (U) j=1 SΠ(j). each S[k] remains unvarying for a block of k=1 L consecutive ∆-duration subintervals within [(k − 1)Tc .53).d.60). m AΠ(j) . (4. kTc ]. λ0. (4. each of duration ∆. m AΠ(j) .i. Then.54) with the substitution xn = µΠ(n) (U). λ0. λ0. yn = SΠ(n). nature of S∞ . m n=1 N n ζ m=1 M = I E m=1 ζ n=1 νn (U) j=1 SΠ(j). 121 ∆ . Divide the time interval [0. T ]. (4. λ0. U)An . m (4.66) is by application of (3. m SΠ(n). λ0. y) and (4. we get N M n R max µn :U →[0. Now. and {S[k]}∞ varies k=1 across such blocks in an i. into KL equal subintervals. where T = KTc with K ∈ Z+ . m ηn (k. the channel input waveform at the nth transmit aperture.i. manner. (4. Achievability part: We closely follow Wyner’s approach [52]. (4. m .54).58). ···.65) is by Jensen’s inequality and (4. (4. m n=1. m µΠ(n) (U)AΠ(n) . 1] I [µn (U)]≤σn E I E n=1 M νn (U) m=1 N ζ j=1 n SΠ(j). consider the situation in which for each message w ∈ W and transmitter CSI uK ∈ U K . and (4.51).65) ζ m=1 M n=1 N Sn.64) (4.

· · · . l. N. output alphabet Y M L = {0. u⌈t/Tc ⌉ ) = 0. · · · . s) = l=1 m=1 WY ˜ (m) y xN ˜ N (˜m (l)|˜ (l). comprising ˜ Ym (l) = 1(Ym (l∆) − Ym ((l − 1)∆) = 1). l = 1. · · · . Define    0.69) ˜ Next. is restricted to be {0. m = 1. and S are X –. n = 1.(4. t.3) requires that 1 KL KL (4. at the mth receive aperture. yM L ∈ {0. state alphabet (IR+ )N ×M . s ∈ (IR+ )N ×M . · · · .71) 0 ˜ ˜ ˜ ˜ where Xn . Note that the condition (4. u⌈l/L⌉ ) ≤ σn . u       t ∈ [(l − 1)∆. l∆). 0 transition pmf L M W (L) (˜ y ML |˜ x NL . 0 ≤ t ≤ T }. u⌈l/L⌉ ) = ˜ ) = An .{xn (w. n = 1. Y –. An }–valued and piecewise constant in each of the KL time slots of duration ∆. t. m = 1. 1}M L . ˜ ˜ xN L ∈ {0. the capacity C – is clearly no smaller than C(L) . Tc where C(L) is the capacity of a (L –) block discrete memoryless channel (in nats per block channel use) with input alphabet ˜ ˜ X N L = {0. KL. 1}N L . 1}M L . u⌈t/Tc ⌉ ). M. t.  1. if xn (w. · · · . hence. The largest achievable rate of restricted (W. if x (w. N. uK ∈ U K . (4. Ym . S s).   n    ∆ ⌈t/Tc ⌉ xn (w. m |X . w ∈ W. (4. M. · · · . consider a decoder φ : ({0. 1}N L . and the channel transition pmf (corresponding to the mth 122 .68) l=1 xn (w. l. T )–codes as above – and. l = 1. 1}M )KL× ((IR+ )N ×M )K → W based on 0 restricted observations over the KL time slots.70) (with Ym (0) = 0) and SK . KL. and (IR+ )N ×M 0 –valued rvs respectively.

x (m) xN ˜ N (0|˜ . L). x (4. x y x ˜ ˜ ˜ xN L ∈ {0. it is readily obtained3 that C(L) = P ˜ NL : X |U max L I [X (l)]≤Lσ E ˜n n l=1 ˜ ˜ I(XN L ∧ Y M L |S). S (m) ˜ N (·|·. 1}N .75) By a standard argument which uses (4. m |X . 0 so that from (4. Remark A2 following Proposition 1). ···. M (4. s). xN ∈ {0. m |X . S is taken to be finite-valued. m xn An + λ0. m |X . s. S s). 1}N L . YM L |S (˜ N L . 0 (4. s ∈ (IR+ )N ×M .74) n=1. N m=1 ˜ ˜ I(XN ∧ Ym |S). and under constraint (4. yM L ∈ {0. (4. 0 123 .71).receive aperture) WY ˜ WY ˜ WY ˜ (m) xN ˜ N (1|˜ . s) Tc x x L L s) = 1 − ωm (˜ N . S where N wm (˜ N .74). the maximum in (4.74) is achieved by L PXNL |U (˜ x ˜ NL |h(s)) = PXN (l)|U (˜ N (l)|h(s)).74) can be seen to hold also when S is IR+ -valued. s. L).77) In [9]. N with U = h(S). x ˜ l=1 ˜ xN L ∈ {0. is given by 0 = ωm (˜ N . s ∈ (IR+ )N ×M . ···.76) C(L) = L 3 max P˜N X |U : I [Xn ]≤σn E ˜ n=1.69). however the result (4. 1}M L . m . s) = x n=1 ∆ ˜ sn. (4. 1}N L . where the joint conditional pmf PXNL . yM L |s) x ˜ ˜ = PXNL |U (˜ N L |h(s))W (L) (˜ M L |˜ N L . s) Tc exp −wm (˜ N .72) ∆ s) = wm (˜ N . YM L |S is given by ˜ ˜ ˜ PXNL . From ([9]. s ∈ (IR+ )N ×M . s ∈ (IR+ )N ×M . (4.73) ˜ 0 with transmitter CSI h (and perfect receiver CSI).

and i=0   M 2N −1 βL (p. E[β (4.77) and (4. 2N − 1. ···. s). 1}. N. S)]. y |s) = PXN |U (˜ N |h(s))WYm |XN . 2N − 1.81) 2N −1 i=0 2N −1 i=0 By (4. E[p where the last inequality follows from the constraints I Xn ] ≤ σn .79) N 2 −1 where bn (i) is the nth bit in the binary representation of i. S . ∆ y ∈ {0. i = 0. Ym |S is given by ˜ ˜ PXN . Clearly. 1] i=0. n = 1. E[ ˜ By (4. pi (h(S))hb ωm bN (i). N.72) and (4.79). 1]2 pi hb ωm bN (i). 1}N . s) = ∆ m=1 2N −1 hb  i=0 pi ωm bN (i).83) 124 .where PXN .80). S (˜|˜ N . N −1 . S pi (h(S))ωm bN (i). ˜ s ∈ (IR+ )N ×M . (4. n = 1. 2N −1 L I L (p(U). = 1. (4. L    − i=0 p ∈ [0. M. s. · · · . s ∈ (IR+ )N ×M . · · · . i = 0. Ym |S (˜ N . · · · . .81). define ˜ pi (u) = Pr{Xn = bn (i).82) where p(·) = {pi (·)}2 −1 satisfies (4. 0 (4. · · · . N|U = u}. x ˜ x yx ˜ ˜ ˜ ˜ ˜ ˜ xN ∈ {0. s. s . it then follows that for m = 1. n = 1. For u ∈ U. ˜ Pr{Ym = 1|S = s} ˜ ˜ H Y m |XN . (4. · · · . L  . it follows that C(L) = N max pi :U →[0. p(·) = {pi (·)}i=0 satisfies the constraints pi (u) 2N −1 i=0 pi (u) ≥ 0. S ˜ H(Ym |S) = = I E = I hb E 2N −1 i=0 pi (h(s))ωm bN (i).80) i:bn (i)=1 I i (U)] ≤ σn . · · · .78) 0 (4.

86) by (4. · · · . pi = µn .80) and (4. Setting 2N −1 N −ζm   N I  E pi (U)ζm 2N −1 Sn.85). ···. 1] i=0.89) pi = 1. N.m An n=1 i=0 pi (U)bn (i). 0 2 −1 with p = {pi }i=0 satisfying the constraints N pi 2N −1 i=0 ≥ 0. it follows that M N C ≥ where max µn :U →[0. (4.84) is by (4.m bn (i)An . 125 i:bn (i)=1 . ···. we have C ≥ ≥ = C(L) L→∞ Tc lim I L (p(U). m . N 2N −1 M N h(µ . · · · . where (4. N. 2N −1 max m=1 with p(·) = {pi (·)}2 −1 satisfying (4. s ∈ (IR+ )N ×M . 1] L→∞ Tc /L i=0. s) = max p i=0 N N pi m=1 N ζm n=1 sn. 1] . λ0. S) − E N ζm m=1 n=1 Sn. u ∈ U. (4.3. (4.85) is i=0 established in Appendix C. 2N −1  max lim M 2N −1 (4. S)] E[β pi :U →[0. m µn (U)An .m  . m . λ0. λ0.Since L ∈ Z+ was arbitrary.88) µ ∈ [0. m bn (i)An . i = 0.87) n=1. ···.m n=1 i=0 Sn. 2N − 1. · · · . 1] I [µn (U)]≤σn E I h(µ (U). λ0. i:bn (i)=1 = n = 1. n = 1.85) µn (u) = i=0 pi (u)bn (i) pi (u). (4.82). and (4.84) N pi :U →[0.80).  (4.

Comparing (4.87)–(4.89) with (3.53) and (3.54) (see (3.55) and (3.56) for similar applications of (3.54)), it follows that the optimum joint pmf p* that attains the maximum in (4.87) is given by

p*_i = 1 − µ_{Q(1)},               if i = 0,
p*_i = µ_{Q(k)} − µ_{Q(k+1)},      if i = Σ_{n=1}^k 2^{Q(n)−1}, k = 1, ···, N − 1,
p*_i = µ_{Q(N)},                   if i = 2^N − 1,
p*_i = 0,                          otherwise,   (4.90)

where Q is a permutation of {1, ···, N} such that µ_{Q(n)} ≥ µ_{Q(n+1)}, n = 1, ···, N − 1, and the corresponding largest value is

h(µ, λ_0, s) = Σ_{m=1}^M Σ_{n=1}^N µ_{Q(n)} ψ_{n,m},   (4.91)

where ψ_{n,m} ≜ ζ( Σ_{k=1}^n s_{Q(k),m} A_{Q(k)}, λ_0 ) − ζ( Σ_{k=1}^{n−1} s_{Q(k),m} A_{Q(k)}, λ_0 ). Summarizing collectively (4.84), (4.86), and (4.91), we get

C ≥ max_{µ_n: U → [0,1], E[µ_n(U)] ≤ σ_n, n=1,···,N} E[ Σ_{m=1}^M ( Σ_{n=1}^N ν_n(U) ζ( Σ_{k=1}^n S_{Π(k),m} A_{Π(k)}, λ_0 ) − ζ( Σ_{n=1}^N S_{n,m} µ_n(U) A_n, λ_0 ) ) ],   (4.92)–(4.93)

where Π(·) and {ν_n(·)}_{n=1}^N are as defined in (4.49) and (4.55). This concludes the proof of the achievability part of Theorem 7.
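The nested ON-set construction in (4.90) is easy to render in code. The sketch below (function names and structure are mine, not from the dissertation) builds, from given per-aperture duty cycles µ_1, ···, µ_N, a joint pmf on {0,1}^N supported on at most N + 1 "nested" ON-sets, and recovers the per-aperture marginals:

```python
def optimal_pmf(mu):
    """Build the (at most N+1)-point joint pmf p* over {0,1}^N of (4.90):
    sort apertures by duty cycle and place mass only on nested ON-sets."""
    N = len(mu)
    # Q orders apertures so that mu[Q[0]] >= mu[Q[1]] >= ... >= mu[Q[N-1]]
    Q = sorted(range(N), key=lambda n: -mu[n])
    ordered = sorted(mu, reverse=True)
    p = {(0,) * N: 1.0 - ordered[0]}      # all apertures OFF
    for k in range(N):
        x = [0] * N
        for n in range(k + 1):
            x[Q[n]] = 1                   # apertures Q[0..k] are ON
        nxt = ordered[k + 1] if k + 1 < N else 0.0
        p[tuple(x)] = ordered[k] - nxt    # mass mu_Q(k) - mu_Q(k+1)
    return p

def marginals(p, N):
    """Per-aperture ON-probabilities of a joint pmf over {0,1}^N."""
    return [sum(q for x, q in p.items() if x[n] == 1) for n in range(N)]
```

A quick check confirms that the marginal ON-probabilities match the prescribed duty cycles, and that the support has at most N + 1 points, consistent with the discussion following Theorem 7.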

Proof of Lemma 1:

Property 1: The first property follows directly from the definition of mirror states and the isotropic nature of the channel fade matrix S.

Property 2: We prove K_1(s^{(1)}) = −K_2(s^{(2)}); the proof of K_1(s^{(2)}) = −K_2(s^{(1)}) follows in a similar manner. Noting that A_1 = A_2 = A, we have, by the definition of mirror states,

b^{(1)}_{1,m} = s^{(1)}_{1,m} A / λ_0 = s^{(2)}_{2,m} A / λ_0 = b^{(2)}_{2,m},
b^{(1)}_{2,m} = s^{(1)}_{2,m} A / λ_0 = s^{(2)}_{1,m} A / λ_0 = b^{(2)}_{1,m},  m = 1, ···, M,   (4.94)

and

B^{(1)}_m = (s^{(1)}_{1,m} + s^{(1)}_{2,m}) A / λ_0 = (s^{(2)}_{1,m} + s^{(2)}_{2,m}) A / λ_0 = B^{(2)}_m,  m = 1, ···, M.   (4.95)

By (4.8), ρ_1(s^{(2)}) = ρ_2(s^{(1)}) and ρ_1(s^{(1)}) = ρ_2(s^{(2)}), so that by (4.9),

ρ̄(s^{(1)}) = ρ̄(s^{(2)}).   (4.96)

By (4.7) and (4.94)–(4.96), it then follows that K_1(s^{(1)}) = −K_2(s^{(2)}): the expression for K_1 evaluated at s^{(1)} coincides, term by term, with the negative of the expression for K_2 evaluated at s^{(2)}, since the roles of b_{1,m} and b_{2,m} are interchanged between the two mirror states, while B̄_m and ρ̄(·) remain unchanged.

Property 3: Without loss of generality, let a ≥ b ≥ 0. With N = 2, by (3.8),

I(a, b, s^{(1)}) = (a − b) Σ_{m=1}^M ζ( s^{(1)}_{1,m} A, λ_0 ) + b Σ_{m=1}^M ζ( (s^{(1)}_{1,m} + s^{(1)}_{2,m}) A, λ_0 ) − Σ_{m=1}^M ζ( a s^{(1)}_{1,m} A + b s^{(1)}_{2,m} A, λ_0 )

= (a − b) Σ_{m=1}^M ζ( s^{(2)}_{2,m} A, λ_0 ) + b Σ_{m=1}^M ζ( (s^{(2)}_{2,m} + s^{(2)}_{1,m}) A, λ_0 ) − Σ_{m=1}^M ζ( a s^{(2)}_{2,m} A + b s^{(2)}_{1,m} A, λ_0 )
= I(b, a, s^{(2)}),

where the second equality uses the mirror-state relations (4.94)–(4.96).

Property 4: By property 3, I(b_1, b_2, s^{(2)}) = I(b_2, b_1, s^{(1)}). Therefore, it suffices to prove that

D(a_1, a_2, b_1, b_2, s^{(1)}) ≜ 2 I( (a_1 + b_2)/2, (a_2 + b_1)/2, s^{(1)} ) − I(a_1, a_2, s^{(1)}) − I(b_2, b_1, s^{(1)}) ≥ 0.

From (3.8), after rearrangement, we get

D(a_1, a_2, b_1, b_2, s^{(1)}) = Z(a_1, a_2, b_1, b_2, s^{(1)}) + L(a_1, a_2, b_1, b_2, s^{(1)}),   (4.97)

where

Z(a_1, a_2, b_1, b_2, s^{(1)}) ≜ ( [a_1 − b_1]^+ + [b_2 − a_2]^+ − [a_1 − b_1 + b_2 − a_2]^+ ) × Σ_{m=1}^M ( ζ( (s^{(1)}_{1,m} + s^{(1)}_{2,m}) A, λ_0 ) − ζ( s^{(1)}_{1,m} A, λ_0 ) − ζ( s^{(1)}_{2,m} A, λ_0 ) )

and

L(a_1, a_2, b_1, b_2, s^{(1)}) ≜ Σ_{m=1}^M ( ζ( a_1 s^{(1)}_{1,m} A + a_2 s^{(1)}_{2,m} A, λ_0 ) + ζ( b_2 s^{(1)}_{1,m} A + b_1 s^{(1)}_{2,m} A, λ_0 ) − 2 ζ( ((a_1 + b_2)/2) s^{(1)}_{1,m} A + ((a_2 + b_1)/2) s^{(1)}_{2,m} A, λ_0 ) ),   (4.98)

where we use the notation [x]^+ ≜ max{x, 0}. By Jensen's inequality applied to the convex function ζ(·, λ_0), from (4.98) we get L(a_1, a_2, b_1, b_2, s^{(1)}) ≥ 0. Finally, noting that [x]^+ + [y]^+ ≥ [x + y]^+, x, y ∈ IR, from (4.97) we get Z(a_1, a_2, b_1, b_2, s^{(1)}) ≥ 0, so that D(a_1, a_2, b_1, b_2, s^{(1)}) ≥ 0.

Proof of Lemma 2: Suppose p_1: (IR_0^+)^{2×M} → [0,1] and p_2: (IR_0^+)^{2×M} → [0,1] constitute a feasible power control law, i.e., p_n(s) ∈ [0,1], s ∈ (IR_0^+)^{2×M}, and E[p_n(S)] ≤ σ, n = 1, 2. For every s ∈ (IR_0^+)^{2×M}, define

p'_1(s) = ½ ( p_1(s) + p_2(s_mir) ),  p'_2(s) = ½ ( p_1(s_mir) + p_2(s) ),

where s_mir is the mirror state of s. Clearly, p'_n(s) ∈ [0,1], n = 1, 2. By property 1 of Lemma 1, it follows that

E[p'_1(S)] = E[p'_2(S)] = ½ E[ p_1(S) + p_2(S) ] ≤ σ,

so that {p'_1(·), p'_2(·)} is also a feasible power control law. Furthermore, by property 4 of Lemma 1, it follows that for any s ∈ (IR_0^+)^{2×M} (and the corresponding mirror state s_mir),

I(p'_1(s), p'_2(s), s) = I( (p_1(s) + p_2(s_mir))/2, (p_1(s_mir) + p_2(s))/2, s ) ≥ ½ ( I(p_1(s), p_2(s), s) + I(p_1(s_mir), p_2(s_mir), s_mir) ),

so that E[I(p'_1(S), p'_2(S), S)] ≥ E[I(p_1(S), p_2(S), S)]. In other words, given any feasible power control law {p_1(·), p_2(·)} satisfying E[p_n(S)] ≤ σ, n = 1, 2, we can obtain another feasible power control law {p'_1(·), p'_2(·)} satisfying E[p'_n(S)] ≤ σ, n = 1, 2, the conditions p'_1(s) = p'_2(s_mir) and p'_2(s) = p'_1(s_mir) for all pairs of mirror states s and s_mir, and E[I(p'_1(S), p'_2(S), S)] ≥ E[I(p_1(S), p_2(S), S)]. This completes the proof of Lemma 2.
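The symmetrization step in the proof of Lemma 2 can be sketched in code. In this sketch (all names are mine; fade states are represented, for concreteness, as a 2-tuple of rows of a 2×M matrix, so that the mirror state swaps the rows), the symmetrized law preserves the average power and satisfies the mirror-state conditions:

```python
def mirror(s):
    """Mirror state of a 2xM fade matrix s = (row1, row2): swap the rows."""
    return (s[1], s[0])

def symmetrize(p1, p2, states):
    """Given a power-control law (p1, p2) on a finite state set (dicts
    state -> duty cycle), return the symmetrized law of Lemma 2:
      p1'(s) = (p1(s) + p2(s_mir)) / 2,  p2'(s) = (p1(s_mir) + p2(s)) / 2."""
    q1 = {s: 0.5 * (p1[s] + p2[mirror(s)]) for s in states}
    q2 = {s: 0.5 * (p1[mirror(s)] + p2[s]) for s in states}
    return q1, q2
```

On any state set closed under the mirror operation with a mirror-symmetric distribution (uniform in the check below), the construction leaves the average duty cycle unchanged while enforcing q_1(s) = q_2(s_mir).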

Proof of Theorem 8: Let µ*_n: (IR_0^+)^{2×M} → [0,1], n = 1, 2, constitute an optimal power control law in (4.5) with N = 2. By property 1 of Lemma 1 and Lemma 2, at optimality, the optimal power control law {µ*_1(·), µ*_2(·)} must satisfy µ*_1(s) = µ*_2(s_mir) and µ*_2(s) = µ*_1(s_mir) for all pairs of mirror states s and s_mir. Therefore, for every pair of mirror states {s, s_mir},

I(µ*_1(s), µ*_2(s), s) = I(µ*_2(s), µ*_1(s), s_mir) = I(µ*_1(s_mir), µ*_2(s_mir), s_mir),   (4.99)

where (4.99) is by Lemma 1, property 3. By (4.14), property 1 of Lemma 1, and Lemma 2,

E_{F_S}[µ*_1(S)] = ∫_{S_0} µ*_1(s) dF_S(s) + ∫_{S_1} µ*_1(s) dF_S(s) + ∫_{S_2} µ*_1(s) dF_S(s)   (4.100)
 = ∫_{S_0} ½ (µ*_1(s) + µ*_2(s)) dF̂_S(s) + ∫_{S_1} ½ (µ*_1(s) + µ*_2(s)) dF̂_S(s)   (4.101)
 = ½ E_{F̂_S}[ µ*_1(S) + µ*_2(S) ],   (4.102)

where (4.100) is by (4.5), (4.101) is by Lemma 1, property 1, and Lemma 2, and (4.102) is by (4.14). Similarly,

E_{F_S}[µ*_2(S)] = ½ E_{F̂_S}[ µ*_1(S) + µ*_2(S) ].   (4.103)

Furthermore, by (4.99) and property 1 of Lemma 1, we now get

E_{F_S}[ I(µ*_1(S), µ*_2(S), S) ] = E_{F̂_S}[ I(µ*_1(S), µ*_2(S), S) ].   (4.104)

Summarizing collectively (4.102)–(4.104), we get

C ≤ max_{µ_1: (IR_0^+)^{2×M} → [0,1], µ_2: (IR_0^+)^{2×M} → [0,1], E_{F̂_S}[µ_1(S)+µ_2(S)] ≤ 2σ} E_{F̂_S}[ I(µ_1(S), µ_2(S), S) ].   (4.105)

Note that the optimization problem on the right side of (4.105) bears a structural resemblance to the capacity formula of the MIMO Poisson channel with constant fade and an average sum power constraint (see (3.11)). Suppose that µ⋄_1: (IR_0^+)^{2×M} → [0,1] and µ⋄_2: (IR_0^+)^{2×M} → [0,1] constitute an optimal solution that maximizes the right side of (4.105). Then, by Corollary 6,

µ⋄_1(s) ≥ µ⋄_2(s),  s ∈ S_0 ∪ S_1.   (4.106)

For s ∈ (IR_0^+)^{2×M}, define

µ†_1(s) = µ⋄_1(s), if s ∈ S_0 ∪ S_1;  µ†_1(s) = µ⋄_2(s_mir), otherwise,   (4.107)
µ†_2(s) = µ⋄_2(s), if s ∈ S_0 ∪ S_1;  µ†_2(s) = µ⋄_1(s_mir), otherwise,   (4.108)

where s_mir is the mirror state of s. By (4.14), (4.107), and (4.108), it follows that

E_{F̂_S}[ µ⋄_1(S) + µ⋄_2(S) ] = E_{F̂_S}[ µ†_1(S) + µ†_2(S) ],
E_{F̂_S}[ I(µ⋄_1(S), µ⋄_2(S), S) ] = E_{F̂_S}[ I(µ†_1(S), µ†_2(S), S) ].

Therefore, the pair {µ†_1(·), µ†_2(·)} also constitutes an optimal solution that maximizes the right side of (4.105). Furthermore,

E_{F_S}[µ†_1(S)] = ∫_{S_0} µ⋄_1(s) dF_S(s) + ∫_{S_1} µ⋄_1(s) dF_S(s) + ∫_{S_2} µ⋄_2(s_mir) dF_S(s)   (4.109)
 = ∫_{S_0} ½ (µ⋄_1(s) + µ⋄_2(s)) dF̂_S(s) + ∫_{S_1} ½ (µ⋄_1(s) + µ⋄_2(s)) dF̂_S(s)   (4.110)
 = ½ E_{F̂_S}[ µ⋄_1(S) + µ⋄_2(S) ]   (4.111)
 ≤ σ,   (4.112)

where (4.109) is by (4.107), (4.110) is by Lemma 1, property 1, (4.111) is by (4.14), and (4.112) is by (4.105). Similarly, E_{F_S}[µ†_2(S)] ≤ σ. By (4.5), it thus follows that

C ≥ E_{F_S}[ I(µ†_1(S), µ†_2(S), S) ].   (4.113)

Proceeding further with the right side of (4.113), we get

E_{F_S}[ I(µ†_1(S), µ†_2(S), S) ]
 = ∫_{S_0 ∪ S_1 ∪ S_2} I(µ†_1(s), µ†_2(s), s) dF_S(s)   (4.114)
 = ∫_{S_0 ∪ S_1} I(µ⋄_1(s), µ⋄_2(s), s) dF_S(s) + ∫_{S_2} I(µ⋄_2(s_mir), µ⋄_1(s_mir), s) dF_S(s)   (4.115)
 = E_{F̂_S}[ I(µ⋄_1(S), µ⋄_2(S), S) ],   (4.116)

where (4.115) is by (4.107) and (4.108), and (4.116) is by Lemma 1, properties 1 and 3, and (4.14).

Summarizing collectively (4.105), (4.113), and (4.116), we get

C = max_{µ_1: (IR_0^+)^{2×M} → [0,1], µ_2: (IR_0^+)^{2×M} → [0,1], E_{F̂_S}[µ_1(S)+µ_2(S)] ≤ 2σ} E_{F̂_S}[ I(µ_1(S), µ_2(S), S) ].   (4.117)

Recall that F̂_S(s) = 0 for all s: K_1(s) + K_2(s) < 0. Since F̂_S(s) = 0 for all s ∈ S_2, it suffices to limit our attention to the states s ∈ S_0 ∪ S_1 in the solution of the optimization problem on the right side of (4.117); the constraint set can then be expanded to include all pairs {µ_1, µ_2} satisfying 0 ≤ µ_1 ≤ 1, 0 ≤ µ_2 ≤ 1 without changing the optimal solution, and by Corollary 6, it suffices to consider µ_1(S) ≥ µ_2(S) with probability 1 in the computation of the optimal solution. We thus get

C = max_{µ_1, µ_2: (IR_0^+)^{2×M} → [0,1], µ_1(S) ≥ µ_2(S), E_{F̂_S}[µ_1(S)+µ_2(S)] ≤ 2σ} E_{F̂_S}[ I_{1>2}( µ_1(S) − µ_2(S), µ_2(S), S ) ],   (4.118)

where I_{1>2}(·, ·, ·) is as defined in (3.94). This concludes the proof of Theorem 8.

Proof of Theorem 9: By (4.118), it follows that

C ≤ max_{µ: (IR_0^+)^{2×M} → [0,1], E_{F̂_S}[µ(S)] ≤ σ} E_{F̂_S}[ I_max(µ(S), S) ],   (4.119)

where, for 0 ≤ µ_0 ≤ 1 and s ∈ (IR_0^+)^{2×M},

I_max(µ_0, s) ≜ max_{µ_1 + µ_2 ≤ 2µ_0, 0 ≤ µ_2 ≤ µ_1 ≤ 1} I_{1>2}( µ_1 − µ_2, µ_2, s ).   (4.120)

By Corollary 6, the optimal pair {κ_1 = κ_1(µ_0, s), κ_2 = κ_2(µ_0, s)} that maximizes the right side of (4.120) is given as follows⁴:

⁴ By Corollary 6, with appropriate modifications to account for the average sum power constraint in (4.120).

1. If µ_0 ≥ ρ̄(s), then κ_1 = κ_2 = ρ̄(s).
2. If µ_0 < ρ̄(s), then

κ_1 = min{ 2γ_0(µ_0, s), 2µ_0 }, if K_2(s) ≥ d(µ_0, s);  κ_1 = µ_0, otherwise,
κ_2 = 2µ_0 − κ_1,  s ∈ S_0 ∪ S_1.

It can be verified that 0 ≤ κ_2(µ_0, s) ≤ κ_1(µ_0, s) ≤ 1, s ∈ S_0 ∪ S_1, 0 ≤ µ_0 ≤ 1, and that E_{F̂_S}[ κ_1(µ(S), S) + κ_2(µ(S), S) ] ≤ E_{F̂_S}[ 2µ(S) ] ≤ 2σ. Therefore, by (4.118),

C ≥ max_{µ: (IR_0^+)^{2×M} → [0,1], E_{F̂_S}[µ(S)] ≤ σ} E_{F̂_S}[ I_{1>2}( κ_1(S, µ(S)) − κ_2(S, µ(S)), κ_2(S, µ(S)), S ) ].

By (4.119) and (4.120), we thus get that

C ≤ max_{µ: (IR_0^+)^{2×M} → [0,1], E_{F̂_S}[µ(S)] ≤ σ} E_{F̂_S}[ I_{1>2}( κ_1(S, µ(S)) − κ_2(S, µ(S)), κ_2(S, µ(S)), S ) ],

whence we get the desired result. This concludes the proof of Theorem 9.

Proof of Corollary 8: Consider the class of "simultaneous ON-OFF keying" strategies, i.e., the power control laws that dictate all the transmit apertures to simultaneously remain in either the ON or the OFF state. It can be verified that the maximum rate achievable by any simultaneous ON-OFF keying strategy is given by (4.20), and the optimal power control law µ_L: (IR_0^+)^{2×M} → [0,1] that achieves the maximum in (4.20) has a closed-form structure, which we state next⁵. For ρ ≥ 0, let µ = µ_ρ(s)

⁵ The proof is similar to the proofs of Theorems 1 and 2, with appropriate modifications to account for the complexity of the MIMO channel, and is omitted here.

be the solution of the equation

Σ_{m=1}^M B_m log( (1 + α(B_m) B_m) / (1 + µ B_m) ) = ρ / λ_0,   (4.121)

with B_m ≜ (A / λ_0) Σ_{n=1}^2 s_{n,m}, m = 1, ···, M. If σ_0 ≜ E[µ_0(S)] > σ, let ρ = ρ* > 0 be the solution of the equation

E[ [µ_ρ(S)]^+ ] = σ.   (4.122)

Then µ_L is given by

µ_L(s) = µ_0(s), if σ_0 ≤ σ;  µ_L(s) = [µ_{ρ*}(s)]^+, if σ_0 > σ,  s ∈ (IR_0^+)^{2×M}.   (4.123)

Clearly, if s^{(1)} and s^{(2)} are mirror states, then by definition, B^{(1)}_m = B^{(2)}_m, m = 1, ···, M, so that by (4.121)–(4.123),

µ_L(s^{(1)}) = µ_L(s^{(2)}).   (4.124)

By (4.124) and property 1 of Lemma 1, we now get

C_SOOK = E[ I(µ_L(S), µ_L(S), S) ]   (4.125)
 = E_{F̂_S}[ I(µ_L(S), µ_L(S), S) ]   (4.126)
 = E_{F̂_S}[ I_{1>2}( 0, µ_L(S), S ) ]   (4.127)
 ≤ E_{F̂_S}[ I_max(µ_L(S), S) ]
 ≤ C,   (4.128)

where (4.125) is by (4.20), (4.126) is by (4.124) and property 1 of Lemma 1, (4.127) is by (3.8) and (3.94), and (4.128) is by (4.120) and Theorem 9. This concludes the proof.

4.5 Numerical examples

We now discuss a few illustrative examples. For simplicity, we consider two-valued state sets. In the first example, we show that for certain fade realizations, one transmit aperture enjoys significantly better channel conditions and hence should be assigned higher average power levels than the other transmit aperture; the new lower bound (C_LB) is shown to yield higher information rates than the simultaneous ON-OFF keying lower bound (C_SOOK) for some values of σ. In the second example, we demonstrate that simultaneous ON-OFF keying is optimal for all values of 0 ≤ σ ≤ 1, and achieves channel capacity.

Example 4.1: Consider a symmetric MIMO Poisson channel with N = 2, M = 2, A_1 = A_2 = 1.0, and λ_{0,1} = λ_{0,2} = λ_0 = 1.0. Let S_{n,m}, n = 1, 2, m = 1, 2, be independent and uniformly distributed on S = {0.1, 1.0}. A pictorial representation of the state set is provided in Figure 4.4. It is clear that for a state s ∈ S^{2×2} which satisfies the conditions K_1(s) ≥ 0, K_2(s) ≤ 0, and which is represented by a circle in Figure 4.4, the optimal average conditional duty cycles satisfy the condition µ*_1(s) = µ*_2(s) for any 0 ≤ σ ≤ 1. However, for the states which are represented by squares in Figure 4.4, the optimal average conditional duty cycles need not be equal. In Figure 4.5, we plot two lower bounds on channel capacity, C_SOOK and C_LB (see (4.20) and (4.21)). It is clear from this figure that the new lower bound (C_LB) on capacity yields higher rates than the simultaneous ON-OFF lower bound (C_SOOK).

Figure 4.4: State set diagram for Example 4.1. [The states are shown in the (K_1, K_2) plane, together with the lines K_1 + K_2 = 0 and K_1 = K_2.]

Figure 4.5: Lower bounds on capacity for Example 4.1. [The lower bounds C_LB (solid) and C_SOOK (dashed) are plotted against σ, 0 ≤ σ ≤ 1.]

Example 4.2: Consider the 2 × 2 MIMO channel from the previous example, with A = 1.0 and λ_0 = 1.0 as before, but now with S_{n,m}, n = 1, 2, m = 1, 2, independent and uniformly distributed on S = {0.1, 0.2}. A pictorial representation of the state set is provided in Figure 4.6. From Figure 4.6, it follows that for all s ∈ S^{2×2}, the conditions K_1(s) ≥ 0, K_2(s) ≤ 0 are satisfied, and hence the optimal conditional duty cycles satisfy µ*_1(s) = µ*_2(s). For this special case, both lower bounds, C_LB and C_SOOK, are tight for all 0 ≤ σ ≤ 1, as shown in Figure 4.7.

Figure 4.6: State set diagram for Example 4.2. [The states are shown in the (K_1, K_2) plane, together with the lines K_1 + K_2 = 0 and K_1 = K_2; all states satisfy K_1 ≥ 0, K_2 ≤ 0.]

Figure 4.7: Channel capacity for Example 4.2. [Channel capacity is plotted against σ, 0 ≤ σ ≤ 1; here C = C_LB = C_SOOK.]

4.6 Discussion

We have studied the shot-noise limited N × M MIMO Poisson fading channel with perfect receiver channel state information. A block fading channel model has been proposed to account for the slowly varying nature of fading in the free-space optical channel. The transmitted signals through the N apertures are, in general, correlated across apertures and i.i.d. in time, and the transmit apertures are subject to asymmetric peak and average power constraints. At each receive aperture, the optical fields received from different transmit apertures are assumed to be sufficiently separated in frequency or angle of arrival such that the received total power is the sum of the powers from the individual transmit apertures, scaled by the respective channel fades.

The capacity of the MIMO Poisson channel with random channel fade has been explicitly characterized, and several properties of optimum strategies for transmission over this channel have been discussed. It has been shown that a two-level signaling scheme (ON-OFF keying) with arbitrarily fast intertransition times through each transmit aperture is capacity-achieving. While the structure of the optimal transmission strategy is yet to be characterized in full generality, several interesting properties have been identified. Furthermore, the optimum set of transmission events for every realization of the random channel gains has at most N + 1 values (out of a possible 2^N values); each of these values corresponds to an event in which exactly k transmit apertures are ON and the remaining N − k are OFF, where k = 0, 1, ···, N.

For the special case of a symmetric MIMO channel with isotropically varying channel fade, a simpler characterization of channel capacity has been obtained based on the notion of "mirror states." In the mirror states, the channel characteristics experienced by the transmit apertures are interchanged. By the symmetry of the channel and input parameters, it follows that the optimal power control laws for the respective transmit apertures are also interchanged in the mirror states. This observation leads to a more tractable expression for channel capacity. A new, easily computable lower bound on channel capacity, based on a simple suboptimal power control law, has been proposed, which improves upon the previously known "simultaneous ON-OFF keying" lower bound [21].

Chapter 5

Conclusions

We now present some of the unresolved issues in this dissertation, and discuss possible directions for future research. We conclude by addressing some of the information theoretic issues associated with reliable communication over optical and RF wireless channels.

5.1 Directions for future research

We have addressed the general capacity problem of the block fading MIMO Poisson channel with peak and average transmitter power constraints. For this general setup, a single-letter characterization of capacity, which does not depend on the channel coherence time, has been obtained. It follows that i.i.d. two-level signaling ("ON-OFF keying") with arbitrarily fast intertransition times through each transmit aperture can achieve channel capacity. An exact characterization of the optimal power control law, which specifies the optimal average conditional duty cycles of the transmit apertures as a function of current transmitter CSI, is yet to be determined.

It is natural to ask what happens when the assumption of perfect receiver CSI is relaxed. In RF communication, it is well known (cf., e.g., [40]) that the conditional

Gaussianity of the channel transition pdf, and the optimality of i.i.d. Gaussian input signals, break down when the receiver CSI is imperfect. Consider now the SISO Poisson fading channel, for which the received signal {Y(t), 0 ≤ t ≤ T} is a doubly stochastic Poisson process with rate Λ(t) = S[⌈t/T_c⌉] x(t) + λ_0, 0 ≤ t ≤ T. Suppose that the receiver CSI is given by {D[k] = g(S[k]), k = 1, ···, K}, where K = T/T_c, and g: IR_0^+ → D is a given mapping, with D ⊂ IR_0^+ arbitrary. Then the channel output (represented by the sufficient statistics (N_T, T^{N_T}), cf. Section 2.3) "sample function density," conditioned on the receiver CSI D^K and channel input X^T, is given by

f_{N_T, T^{N_T} | X^T, D^K}( n_T, t^{n_T} | x^T, d^K ) = ∫_{s^K ∈ (IR_0^+)^K} f_{N_T, T^{N_T} | X^T, S^K}( n_T, t^{n_T} | x^T, s^K ) · Π_{k=1}^K 1( d[k] = g(s[k]) ) ds^K.

Thus, conditioned on the transmitted signal x^T and the receiver CSI D^K = d^K, the channel transition pdf is now a mixture of Poisson counting processes, and, unlike in Remark (ii), the received signal {Y_t, 0 ≤ t ≤ T} need no longer be independent. Hence the capacity may, in general, depend on the coherence time T_c; also, the optimum input distribution for the resulting channel with memory need no longer be i.i.d. In practice, the transmitter and receiver devices are limited in bandwidth. The effects of various forms of bandwidth constraints on the capacity of Poisson channels without any fading have been studied in [41, 42, 43, 31], and bounds on capacity have been obtained. It is worthwhile to address these issues for the

e.i. whose intensity can often far exceed the intensity of the transmitted signal. which has been addressed in this dissertation. It has been noted that given the slowly varying nature of optical fade and the existing high-data rate laser transmitters. In this setting...g. [40]). In free space optical communication systems. other notions of capacity. in such scenarios. A simple block fading channel model has been introduced in which the channel fade is assumed to remain unchanged for a fixed time interval of duration Tc . which can be further strengthened with the results discussed in this dissertation. the ambient background radiation is the primary source of receiver noise. The capacity problem for such channels has been studied in [23.Poisson fading channel.d. millions of consecutive transmission bits experience identical fade. capacity versus outage may be more relevant (cf. Some of these issues for the MIMO Poisson channel have been addressed in [21]. relies heavily on the ergodicity assumption T ≫ Tc . manner across successive such intervals. and varies in an i. the notion of ergodic capacity. 53]. when this ergodicity condition does not hold. Another assumption recurrent in this work is the shotnoise limited operation regime of the direct detection receiver. e.g. for delay-limited applications. 145 . but several issues remain unresolved. a Gaussian model for background noise is often used [26].

5.2 RF/optical wireless sum channel

5.2.1 Motivation

With the explosive growth in demand for RF bandwidth, and the increasing popularity of optical wireless systems, recent proposals have recommended the use of free space optics (FSO) as a viable alternative to RF communication [10, 38, 30]. There are several issues associated with the design and analysis of such heterogeneous communication systems. In the following, we introduce a simple "sum channel" model for communication over RF and optical wireless channels.

We are motivated by [46], in which Shannon demonstrated that by randomly switching between two DMCs, additional information can be conveyed by means of switching between the two channels. Consider a sum channel comprising two DMCs with disjoint input and output alphabets (see Figure 5.1). At each time instant, the switch at the transmitter chooses one of the two DMCs for transmission; information is transmitted using only one of the two channels at each time instant, and additional information is conveyed by switching between the channels. By virtue of the disjointness of the output alphabets, the switching

Figure 5.1: Block schematic of the DMC sum channel. [The message W drives, via a transmitter-side switch V_t, either Encoder 1 → DMC 1 → Decoder 1 (with input X_t^{(1)} and output Y_t^{(1)}) or Encoder 2 → DMC 2 → Decoder 2 (with input X_t^{(2)} and output Y_t^{(2)}); a receiver-side switch combines the decoder outputs to form the estimate Ŵ.]

information is losslessly conveyed to the receiver. The capacity of this sum channel is given by [46]

C_sum = max_{0 ≤ α ≤ 1} [ h_b(α) + α C_1 + (1 − α) C_2 ],   (5.1)

where α is the probability of using channel 1, and C_i is the capacity of channel i, i = 1, 2. The optimal α* that achieves the maximum on the right side of (5.1) is given by

α* = exp(C_1) / ( exp(C_1) + exp(C_2) ),   (5.2)

and the corresponding maximum value is given by

C_sum = log( exp(C_1) + exp(C_2) ) nats/channel use.   (5.3)

We propose a new sum channel model for communication over RF and optical wireless channels. In RF fading channels, it is well known (cf., e.g., [40]) that under severe fading conditions (i.e., when the multiplicative fade coefficient is very small), the optimal power control strategy dictates that the transmitter remain silent. If an optical wireless link is available, then the optical channel can be used under such circumstances as a backup for the RF channel. On the other hand, the RF channel can also be used as a backup for the optical channel under certain fade conditions. In practice, this channel model may be useful for conveying critical information over unreliable wireless links, although the amount of additional information conveyed by switching, and whether it would make a significant difference, remains to be seen. In our model, information is transmitted using the channel which experiences "better" instantaneous channel conditions.
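The closed forms (5.2)–(5.3) are elementary to verify numerically against the maximization in (5.1). A short sketch (function names are mine, not from the text):

```python
import math

def sum_capacity(C1, C2):
    """Closed-form capacity of Shannon's sum of two DMCs, in nats/use:
    C_sum = log(e^C1 + e^C2), achieved with switching probability
    alpha* = e^C1 / (e^C1 + e^C2)."""
    Z = math.exp(C1) + math.exp(C2)
    return math.log(Z), math.exp(C1) / Z

def hb(a):
    """Binary entropy in nats."""
    if a in (0.0, 1.0):
        return 0.0
    return -a * math.log(a) - (1 - a) * math.log(1 - a)

def objective(a, C1, C2):
    """The right side of (5.1) for switching probability a."""
    return hb(a) + a * C1 + (1 - a) * C2
```

Evaluating `objective` on a fine grid of α and comparing with `sum_capacity` confirms both the maximizer (5.2) and the log-sum-exp value (5.3); in particular C_sum always exceeds max(C_1, C_2), reflecting the extra information carried by the switch.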

Such a scheme may be particularly relevant in delay-limited applications, where using the sum channel to convey time-sensitive information may lead to higher communication throughput while incurring lower delay.

Figure 5.2: Block schematic of the combined RF/optical sum channel. [A channel state estimator produces the fade S_t and the transmitter CSI U_t; the message W is routed by a switch V_t either to the RF encoder → RF fading channel → RF decoder (input X_t^{(R)}, output Y_t^{(R)}), or to the optical encoder → optical channel → optical decoder (input X_t^{(O)}, output Y_t^{(O)}); a coordinated receiver-side switch yields the estimate Ŵ.]

5.2.2 Problem formulation

We consider a discrete-time memoryless sum channel model for communication over RF and optical wireless channels. A block schematic diagram of the channel model is given in Figure 5.2.

Figure 5.3: RF fading channel model. [The received signal is Y_t^{(R)} = S_t x_t^{(R)} + Z_t.]

For each use of the channel, the switch at the transmitter picks one of two channels, viz., a RF fading channel (see Figure 5.3) and an

optical channel (see Figure 5.4). The IR-valued received RF signal is given by

Y_t^{(R)} = S_t x_t^{(R)} + Z_t,  t ≥ 1,   (5.4)

where {x_t^{(R)}}_{t=1}^∞ is the IR-valued RF transmitted signal, which is proportional to the instantaneous amplitude of the transmitted RF waveform; {S_t}_{t=1}^∞ is the IR_0^+-valued i.i.d. RF channel fade with E[S_t^2] < ∞; and {Z_t}_{t=1}^∞ is the IR-valued i.i.d. Gaussian noise with E[Z_t] = 0 and E[(Z_t − E[Z_t])^2] = σ^2, with σ ≥ 0 fixed. The rvs {S_t}_{t=1}^∞ and {Z_t}_{t=1}^∞ are assumed to be mutually independent. We assume that the receiver possesses perfect channel state information (CSI) {S_t}_{t=1}^∞, while the transmitter CSI is given by

{U_t = h(S_t)}_{t=1}^∞,   (5.5)

with the mapping h: IR_0^+ → U, where U ⊆ IR_0^+ is arbitrary. The Z_0^+-valued received optical signal {Y_t^{(O)}, t ≥ 1} is a Poisson rv with intensity (or rate)¹ λ_t = x_t^{(O)} + λ_0, t ≥ 1,

¹ For simplicity, in this dissertation we focus on an optical channel model without fading; the ideas can, however, be extended (with appropriate modifications) to the case of an optical channel with an i.i.d. block fading structure, as discussed in the previous chapters.

i. sn ) n (O) = t=1 W (R) (R) Yt |Xt (R) .e. . depend on transmitter CSI Ut . since now the switching probability (Pr{Vt = 1} = 1 − Pr{Vt = 0}) will. We assume that the switches at the transmitter and the receiver are perfectly coordinated. Note that by allowing the switching to depend on transmitter CSI. st ) · 1(vt = 1) + W 150 (O) (O) Yt |Xt (O) (yt |xt ) · 1(vt = 0) . vn . in general.where {xt }∞ is the IR+ -valued transmitted optical signal. Vn . the t=1 optical and RF modulation and demodulation schemes are fundamentally different from one another. i. Physically. The channel transition pdf for n uses of the combined RF/optical sum channel is given by WYn |Xn . and also allows us to separate the respective input and output alphabets in our abstract model. and λ0 ≥ 0 is the (constant) background noise (“dark current”) rate. The switching device at the transmitter decides which channel to use at each time instant. the switching rv {Vt }∞ is available to the receiver. we have introduced here an additional form of coding. Sn (yn |xn . t ≥ 1.. if the optical channel is picked for use at time instant t. based on the available (causal) transmitter CSI.e. St (yt |xt . which justifies the perfectly coordinated switching assumption. then the switch sets Vt = 0.. The {0. otherwise. then the switch sets Vt = 1. 1}-valued switching rv {Vt = Vt (Ut )}∞ is given as follows: if the RF channel is picked for t=1 use at time instant t. t ≥ 1. which is proportional t=1 0 to the instantaneous intensity of the transmitted optical waveform. which may lead to higher achievable information rates than the DMC sum channel discussed earlier.

St (y|x.9) 0 and ≤ xt ≤ A. 0 and the optical channel transition pmf is given by   (x+λ0 )y exp(−(x+λ0 ))   . t = 1. where {xt }n satisfies the following t=1 t=1 peak and average power constraints2 : (5. n} : vt = 1}|. then xt is proportional to 2 In (5. y ∈ IR. t=1 and a IRn -valued transmitted signal {xt }n . yn ∈ IRn .10). · · · . we have considered an average power constraint involving both the RF and op- tical transmitted signals.7) x. t xt ≤ P (O) . if vt = 0. alternatively. t = 1.xn .8) The inputs to the sum channel are a {0. vn ∈ {0. t (5. n. (5. where P (R) ≥ 0 and P (O) ≥ 0 are fixed. Note that xt is proportional to the amplitude of the transmitted RF signal if vt = 1. 1}n -valued switching signal {vt }n . s) = √ 1 2πσ 2 exp − (y − sx)2 2σ 2 . whereas if vt = 0. individual average power constraints on the transmitted signals from both the channels can be considered as follows: (i) and (ii) 1 n−n1 t:vt =0 1 n1 t:vt =1 x2 ≤ P (R) .6) Yt |Xt (R) .10) where 0 ≤ A < ∞ and P ≥ 0 are fixed. 1}n . · · · . · · · . 0 0 y! (O) W (O) (O) (y|x) = Yt |Xt   0. if x ∈ IR+ . and n1 = |{t ∈ {1. · · · . y ∈ Z+ . t = 1. (5. s ∈ IR+ . 1 n n t=1 [x2 · 1(vt = 1) + xt · 1(vt = 0)] ≤ P. n. 0 where the RF channel transition pdf is given by W (R) (R) (5. 151 . n. sn ∈ (IR+ )n .  otherwise.

· · · . u )) =  (O)  x (w). w ∈ W. ut ) = 1.  t t t t xt (w. a 0 t=1 RF signal vector {xt (w. 152 . 0 0 For each message w ∈ W and transmitter CSI un ∈ U n corresponding to the fade vector sn ∈ (IR+ )n .  t otherwise. ut . · · · . the codebook comprises a set of W codewords f (w. ut ) (resp. This accounts for the difference in the exponents of xt in the expression of average power constraint in (5. xt (w)) if 3 (R) (O) (R) (O) The notation IR ⊎ Z+ is used to denote the physical separability of the output sets of the RF 0 and the optical detector signals. xt (w. the transmitter generates a switching vector {vt (w.12) xt (w. For each un ∈ U n . the transmitter sends xt (w.10). (5. n. ut ). At t=1 t=1 time instant t = 1. ut ) = 0) 2. ut )}n . φ) defined as follows. w ∈ W. (O) t = 1. ut ))}n .11) satisfies the following peak and average power constraints which follow from ≤ xt (w) ≤ A. W }. · · · . ut )}n . ut ) t=1 (O) (R) 2 · 1(vt (w. ut ) = 1) ≤ P. vt (w. (5. ut ). Let the set of messages be denoted by W = {1. and an optical signal vector {xt (w)}n .10): 0 and 1 n n (5. vt (w. where t=1   (R)  x (w.the intensity of the transmitted optical signal. w ∈ W. 1. u .13) +xt (w) · 1(vt (w. n. if v (w. The decoder is a mapping3 φ : (IR ⊎ Z+ )n × (IR+ )n → W. (5. A length-n block code for the channel under consideration is a pair (f. un ) = {vt (w.

ut ) = 0) over the RF (resp. w ∈ W. w ∈ W. vt (w. t : vt = 0} at the optical detector. Un )). Sn )| (5. We are interested in obtaining a “single-letter characterization” of the channel capacity Csum of the RF/optical sum channel in terms of the signal and channel parameters. Un . Ut ). Ut )). upon observing {yt .3 Channel capacity We provide below an expression of the channel capacity of the RF/optical sum channel in terms of the capacities of the individual RF and optical channels.14) vn (w. The rate of this length-n block code (f. optical) channel. t = 1. φ) = W W 1 n (O) (R) (R) (O) log W nats/channel use.2. t : vt = 0}. Sn }]. φ) is given by R = given by 1 Pe (f. · · · . {Yt (O) . {yt . Un )) = {xt (w. 5. and being provided with sn . Ut ) = 1}. n}. Ut . and the average probability of decoding error is I E[Pr{φ({Yt w=1 (R) . The receiver. t : ˆ vt = 1}. vn (w. 153 . where we have used the shorthand notation vn (w. vn (w. t : vt (w. n}. sn ). t = 1. Un . produces an output w = φ({yt . Un ). t : vt = 1} at the RF detector. · · · . {yt . t : vt (w. ut ) = 1 (resp. Ut ) = 0}. Un ) = {vt (w. vt (w. and xn (w.vt (w. xn (w.

0 ≤ m2 ≤ A. µ1 (U). m2 ) = hb (a) + aCRF (s. m1 ) = max I(X (R) ∧ Y (R) |S = s).17) PX (R) |U =h(s) :I [(X (R) )2 |U =h(s)]=m1 E x. (5. Y (O) are related by   (x+λ0 )y exp(−(x+λ0 ))   . y ∈ Z+ . y ∈ IR. (5. s. µ2 )]. a. 1]. m1 ) + (1 − a)CO (m2 ).18) (5. 0 and CO (m2 ) = P max X (O) :0≤X (O) ≤A I(X (O) ∧ Y (O) ). a ∈ [0.20) . m1 ∈ IR+ . S are related by WY (R) |X (R) . 1] µ1 :U →I + R 0 0≤µ2 ≤A I E[g(S. 0 ≤ m2 ≤ A.Theorem 10 The capacity of the RF/optical sum channel is given by Csum = max I [α(U )µ1 (U )+(1−α(U ))µ2 ]≤P E α:U →[0. 0 0 otherwise. α(U). m1 . s ∈ IR+ . s ∈ IR+ . 0 where X (R) . where g(s. ·) and CO (·) are obtained as follows: CRF (s. (5. Y (R) . S (y|x. m1 ∈ IR+ .19) I [X (O) ]=m2 E where X (O) .15) with U = h(S). (5. (5.  154 if x ∈ IR+ . s) = √ 1 2πσ 2 exp − (y − sx)2 2σ 2 .16) 0 0 and CRF (·. y! WY (O) |X (O) (y|x) =   0.

Proof: See Appendix D.

Remark: Note the structural similarity of the capacity formula (5.15) with the capacity formula for the DMC sum channel (5.1). The additional complexity in (5.15) is due to (a) the dependence of the switching rv on transmitter CSI, and (b) the power constraints on the transmitted signals.

It remains to determine the optimal switching and transmitter power control strategies α*: U → [0, 1], µ1*: U → IR₀⁺, and 0 ≤ µ2* ≤ A that achieve the maximum in (5.15). One major difficulty is the lack of availability of an exact expression for CO(·). The capacity problem for the discrete-time Gaussian channel is well documented (cf., e.g., [8], Chapter 10); it is easy to see from ([8], Chapter 10) that

CRF(s, m1) = (1/2) log(1 + s²m1/σ²),  s ∈ IR₀⁺, m1 ∈ IR₀⁺.

On the other hand, to the best of our knowledge, an exact expression for CO(·) is not available, although inner and outer bounds on the capacity of the discrete-time Poisson channel have been obtained (cf., e.g., [31] and the references therein).

Appendix A

A.1 Proof of (2.37)

We begin with the following proposition:

Proposition 1 If {Y(τ), 0 ≤ τ ≤ T} is a PCP with (deterministic) rate function {λ(τ), 0 ≤ τ ≤ T}, then for any function g(·) integrable on [0, T],

E[Σ_{i=1}^{N_T} g(T_i)] = ∫_0^T λ(τ)g(τ)dτ.  (A.1)

Proof: A proof of this proposition can be found in [19, 18]; here, we provide a simpler proof. Observe that

E[Σ_{i=1}^{N_T} g(T_i)] = Σ_{n=1}^∞ Pr{N_T = n} E[Σ_{i=1}^n g(T_i) | N_T = n] = Σ_{n=1}^∞ Pr{N_T = n} E[Σ_{i=1}^n g(U_i) | N_T = n],  (A.2)

where we denote the unordered arrival times on [0, T] by U_i, i = 1, ..., n. It can be verified (cf., e.g., [47, pp. 62–63]) that given N_T = n, the U_i, i = 1, ..., n, are i.i.d. ∼ U with pdf

f_U(u) = λ(u) / ∫_0^T λ(τ)dτ,  if 0 ≤ u ≤ T;  = 0, otherwise.  (A.3)

Here,

E[Σ_{i=1}^n g(U_i) | N_T = n] = n E[g(U)] = n ∫_0^T g(τ)λ(τ)dτ / ∫_0^T λ(τ)dτ,  (A.4)
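Proposition 1 is a form of Campbell's theorem for Poisson processes, and it can be sanity-checked by simulation. The sketch below samples the PCP by thinning a homogeneous process; the rate function λ(τ) = 1 + τ on [0, 1] and the function g(τ) = τ² are arbitrary illustrative choices, so the empirical mean should approach ∫₀¹ (1 + τ)τ² dτ = 7/12.

```python
import random

def sample_pcp(rate, rate_max, T):
    """Sample the arrival times of a Poisson counting process with rate
    function rate(t) on [0, T], by thinning a homogeneous process of
    intensity rate_max (requires rate(t) <= rate_max on [0, T])."""
    t, arrivals = 0.0, []
    while True:
        t += random.expovariate(rate_max)
        if t > T:
            return arrivals
        if random.random() < rate(t) / rate_max:
            arrivals.append(t)

random.seed(0)
rate = lambda t: 1.0 + t          # deterministic rate function on [0, 1]
g = lambda t: t * t
trials = 20000
est = sum(sum(g(ti) for ti in sample_pcp(rate, 2.0, 1.0))
          for _ in range(trials)) / trials
exact = 7.0 / 12.0                # integral of (1 + t) t^2 over [0, 1]
```

The Monte Carlo mean `est` concentrates around `exact` as the number of trials grows, in agreement with (A.1).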

so that by (A.2) and (A.4),

E[Σ_{i=1}^{N_T} g(T_i)] = (∫_0^T g(τ)λ(τ)dτ / ∫_0^T λ(τ)dτ) E[N_T] = ∫_0^T g(τ)λ(τ)dτ,  (A.5)

since N_T is a Poisson rv with mean ∫_0^T λ(τ)dτ.

Note that (N_T, T_1, ..., T_{N_T}) is a doubly stochastic PCP with rate process {Λ(t), 0 ≤ t ≤ T}, so that

E[Σ_{i=1}^{N_T} log Λ(T_i)] = E[E[Σ_{i=1}^{N_T} log Λ(T_i) | Λ^T]],

where, by (A.1), for any {λ(t), 0 ≤ t ≤ T} integrable on [0, T],

E[Σ_{i=1}^{N_T} log Λ(T_i) | Λ^T = λ^T] = ∫_0^T λ(τ) log λ(τ)dτ,

so that

E[Σ_{i=1}^{N_T} log Λ(T_i)] = E[∫_0^T Λ(τ) log Λ(τ)dτ].  (A.6)

Similarly, by noting that (N_T, T_1, ..., T_{N_T}) conditioned on S^K is a self-exciting PCP with rate process {Λ̂(t), 0 ≤ t ≤ T}, we get

E[Σ_{i=1}^{N_T} log Λ̂(T_i)] = E[∫_0^T Λ̂(τ) log Λ̂(τ)dτ].  (A.7)

Combining (A.6) and (A.7), we get

E[Σ_{i=1}^{N_T} (log Λ̂(T_i) − log Λ(T_i))] = E[∫_0^T (Λ̂(τ) log Λ̂(τ) − Λ(τ) log Λ(τ))dτ]
= E[∫_0^T (ζ(Ŝ_{⌈τ/Tc⌉}X(τ), λ0) − ζ(S_{⌈τ/Tc⌉}X(τ), λ0))dτ]  (A.8)

= ∫_0^T E[ζ(Ŝ_{⌈τ/Tc⌉}X(τ), λ0) − ζ(S_{⌈τ/Tc⌉}X(τ), λ0)]dτ,  (A.9)

where (A.8) is by (2.25), (2.26), and (2.3), and (A.9) holds by interchanging the order of operations¹ in (A.8), so that (2.37) follows.

¹The interchange is permissible as the assumed condition E[|ζ(SA, λ0)|] < ∞ ensures the integrability of {ζ(S_{⌈τ/Tc⌉}X(τ), λ0), 0 ≤ τ ≤ T} and {ζ(Ŝ_{⌈τ/Tc⌉}X(τ), λ0), 0 ≤ τ ≤ T}.

A.2 Proof of (2.42)

Consider rvs A, B, C, X with (arbitrary) alphabets A, B, C, X, where A is independent of B,

C = k(B),  X = l(A, C),  (A.10)

with k, l being given mappings. Then,

0 ≤ I(X ∧ B|C) ≤ I(X, A ∧ B|C) = I(A ∧ B|C) + I(X ∧ B|A, C) = I(A ∧ B|C) ≤ I(A ∧ B, C),  (A.11)

since by (A.10), I(X ∧ B|A, C) = 0. Since A is independent of B and C = k(B), A is independent of (B, C), so that I(A ∧ B, C) = 0 and hence I(X ∧ B|C) = 0.

Now set A = (W, U^{⌈τ/Tc⌉−1}), B = S_{⌈τ/Tc⌉}, C = U_{⌈τ/Tc⌉} and X = X(τ), 0 ≤ τ ≤ T. Clearly (A.10) is satisfied, so that (2.42) follows from (A.11).
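The chain of (in)equalities in A.2 rests only on the chain rule for mutual information and the functional dependences C = k(B), X = l(A, C). A small numerical check on a finite joint pmf is sketched below; the pmf values and the mappings k, l are arbitrary, and A and B are deliberately left dependent so that the quantities are nonzero (in the proof itself A is independent of B, which forces every term to vanish).

```python
import math
from collections import defaultdict

def mutual_info(joint, f, g):
    """I(f ∧ g) for a finite joint pmf {outcome: prob}."""
    pf, pg, pfg = defaultdict(float), defaultdict(float), defaultdict(float)
    for w, p in joint.items():
        pf[f(w)] += p; pg[g(w)] += p; pfg[(f(w), g(w))] += p
    return sum(p * math.log(p / (pf[u] * pg[v]))
               for (u, v), p in pfg.items() if p > 0.0)

def cond_mutual_info(joint, f, g, h):
    """I(f ∧ g | h) = sum over c of Pr{h = c} * I(f ∧ g | h = c)."""
    by_c = defaultdict(lambda: defaultdict(float))
    for w, p in joint.items():
        by_c[h(w)][w] += p
    total = 0.0
    for sub in by_c.values():
        pc = sum(sub.values())
        total += pc * mutual_info({w: p / pc for w, p in sub.items()}, f, g)
    return total

# Structure of the proof: C = k(B), X = l(A, C); the joint pmf of (A, B)
# here is arbitrary (and not a product, unlike in the proof).
k = lambda b: b % 2
l = lambda a, c: a ^ c
joint = {(0, 0): 0.10, (0, 1): 0.25, (0, 2): 0.15,
         (1, 0): 0.20, (1, 1): 0.05, (1, 2): 0.25}
A_ = lambda w: w[0]
B_ = lambda w: w[1]
C_ = lambda w: k(w[1])
X_ = lambda w: l(w[0], k(w[1]))
XA = lambda w: (X_(w), w[0])
AC = lambda w: (w[0], k(w[1]))
BC = lambda w: (w[1], k(w[1]))

lhs = cond_mutual_info(joint, X_, B_, C_)   # I(X ∧ B | C)
xa  = cond_mutual_info(joint, XA, B_, C_)   # I(X, A ∧ B | C)
ab  = cond_mutual_info(joint, A_, B_, C_)   # I(A ∧ B | C)
xb  = cond_mutual_info(joint, X_, B_, AC)   # I(X ∧ B | A, C) = 0 since X = l(A, C)
rhs = mutual_info(joint, A_, BC)            # I(A ∧ B, C)
```

Every step of (A.11) except the final bound is an identity, so the computed values reproduce the chain exactly up to floating-point error.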

A.3 Proof of (2.74)

In (2.71), we use the approximation exp(a/L) = 1 + a/L + O(L⁻²), a ∈ IR, for L ≫ 1, to get

w10(L) = λ0Tc/L + O(L⁻²),  w11(s, L) = (λ0 + sA)Tc/L + O(L⁻²),  s ∈ IR₀⁺.  (A.12)

In (2.64), we use the approximations [52]

hb(x) = −x log x + x + O(x²), for x ≪ 1;  hb(x + O(x²)) = hb(x) + O(x² log x).  (A.13)

Then, for L ≫ 1, we have from (2.71), (A.12), and (A.13) that for s ∈ IR₀⁺,

βL(s) = hb((λ0 + sµ(h(s))A)Tc/L + O(L⁻²)) − µ(h(s)) hb((λ0 + sA)Tc/L + O(L⁻²)) − (1 − µ(h(s))) hb(λ0Tc/L + O(L⁻²))
= (Tc/L)[µ(h(s))(λ0 + sA) log(λ0 + sA) + (1 − µ(h(s)))λ0 log λ0 − (λ0 + sµ(h(s))A) log(λ0 + sµ(h(s))A)] + O(L⁻² log L)  (A.14)
= (Tc/L)[µ(h(s))ζ(sA, λ0) − ζ(sµ(h(s))A, λ0)] + O(L⁻² log L),  (A.15)

where (A.14) is by (A.13), and (A.15) is by (2.3). Hence,

lim_{L→∞} βL(s)/(Tc/L) = µ(h(s))ζ(sA, λ0) − ζ(sµ(h(s))A, λ0),  s ∈ IR₀⁺,

whereby (2.74) follows.

Appendix B

B.1 Proof of (3.34)

By Proposition 1, noting that Ym is a self-exciting PCP with rate process {Λ̂m(t), 0 ≤ t ≤ T}, we get

E[Σ_{i=1}^{Nm(T)} log Λ̂m(Tm,i)] = E[∫_0^T Λ̂m(τ) log Λ̂m(τ)dτ].  (B.1)

Similarly, by (A.1), noting that Ym is a doubly stochastic PCP with rate process {Λm(t), 0 ≤ t ≤ T}, we get

E[Σ_{i=1}^{Nm(T)} log Λm(Tm,i)] = E[E[Σ_{i=1}^{Nm(T)} log Λm(Tm,i) | Λm^T]] = E[∫_0^T Λm(τ) log Λm(τ)dτ].  (B.2)

From (3.33), we thus get that

Σ_{m=1}^M E[Σ_{i=1}^{Nm(T)} (log Λ̂m(Tm,i) − log Λm(Tm,i))]
= Σ_{m=1}^M (E[∫_0^T Λ̂m(τ) log Λ̂m(τ)dτ] − E[∫_0^T Λm(τ) log Λm(τ)dτ])  (B.3)
= Σ_{m=1}^M E[∫_0^T (ζ(Σ_{n=1}^N ŝn,m Xn(τ), λ0,m) − ζ(Σ_{n=1}^N sn,m Xn(τ), λ0,m))dτ]  (B.4)
= Σ_{m=1}^M ∫_0^T E[ζ(Σ_{n=1}^N ŝn,m Xn(τ), λ0,m) − ζ(Σ_{n=1}^N sn,m Xn(τ), λ0,m)]dτ,  (B.5)

where (B.3) is by (B.1) and (B.2), (B.4) is by (2.3), and (B.5) holds by interchanging the order of operations¹ in (B.4).

¹The interchange is permissible as the assumed conditions 0 ≤ Xn(τ) ≤ 1, 0 ≤ τ ≤ T, n = 1, ..., N, and λ0,m, m = 1, ..., M, ensure the integrability of the rvs ζ(Σ_{n=1}^N ŝn,m Xn(τ), λ0,m) and ζ(Σ_{n=1}^N sn,m Xn(τ), λ0,m), 0 ≤ τ ≤ T.

B.2 Proof of (3.43)

From (3.42), after rearrangement we get

h(τ, χ1(τ), ..., χN(τ)) = Σ_{n=1}^N χn(τ) Σ_{m=1}^M ζ(sn,m An, λ0,m) + max_{q(τ)} Σ_{i=0}^{2^N−1} ci qi(τ),  (B.6)

where the maximization is with respect to the 2^N − 1 variables {qi(τ)}_{i=0}^{2^N−1}, and

ci = Σ_{m=1}^M [ζ(Σ_{n=1}^N sn,m bn(i)An, λ0,m) − Σ_{n=1}^N bn(i)ζ(sn,m An, λ0,m)],  i = 0, ..., 2^N − 1.  (B.7)

The following property of c = {ci}_{i=0}^{2^N−1} can be derived from (3.24), (3.25):

Ii ⊆ Ij ⇒ 0 ≤ ci ≤ cj,  i, j ∈ {0, ..., 2^N − 1}.  (B.8)

The linearity of the summand on the right side of (B.6) in {qi(τ)}, and the special property (B.8) of c, suggest the following N-step algorithm to compute the optimal pmf vector q*(τ), 0 ≤ τ ≤ T:

• Step 1: By (B.8), it follows that c_{2^N−1} ≥ ci for all i = 0, ..., 2^N − 1. Therefore, q*_{2^N−1}(τ) should be assigned the highest allowable value. By (3.41), it follows that q_{2^N−1}(τ) ≤ χn(τ), n = 1, ..., N. Therefore,

q*_{2^N−1}(τ) = min_{n=1,...,N} χn(τ) = χ_{Pτ(N)}(τ),  (B.9)

where Pτ is as defined in (3.44).

• Step 2: For all i ∈ {0, ..., 2^N − 1} with bPτ(N)(i) = 0, it follows by (3.41) that qi(τ) ≤ χn(τ) − χPτ(N)(τ), n: bn(i) = 1. Repeating the argument of Step 1, we get

q*_j(τ) = χPτ(N−1)(τ) − χPτ(N)(τ),  where j = Σ_{n=1}^{N−1} 2^{Pτ(n)−1},  (B.10)

and, by (3.41),

q*_i(τ) = 0,  for all i ≠ j with bPτ(N−1)(i) = 1, bPτ(N)(i) = 0.  (B.11)

• Step k, 2 < k < N: In this step, repeating the arguments of Step 2, by (3.41) we get

q*_i(τ) = χPτ(N−k+1)(τ) − χPτ(N−k+2)(τ),  for i = Σ_{n=1}^{N−k+1} 2^{Pτ(n)−1},  (B.12)

and

q*_i(τ) = 0,  for all i ≠ Σ_{n=1}^{N−k+1} 2^{Pτ(n)−1} with bPτ(N−k+1)(i) = 1 and bPτ(j)(i) = 0, j = N − k + 2, ..., N.  (B.13)

• Step N: The only remaining pmf value is q*_0(τ). It follows that

q*_0(τ) = 1 − Σ_{i=1}^{2^N−1} q*_i(τ) = 1 − χPτ(1)(τ).  (B.14)

Summarizing these facts, we obtain (3.43).
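The N-step assignment above amounts to a "nested support" (comonotone) coupling of the per-aperture ON probabilities. A short sketch follows, under the assumption that bit n of the pattern index i flags aperture n ON (a 0-based stand-in for the bn(i) of the text) and that the constraint (3.41) fixes each aperture's marginal ON probability at χn(τ):

```python
def optimal_onoff_pmf(chi):
    """Greedy N-step assignment of the joint ON-OFF pmf q* over the 2^N
    transmit-aperture patterns, given per-aperture ON probabilities chi.
    Mass is placed only on the nested ON-sets of the sorted apertures."""
    N = len(chi)
    order = sorted(range(N), key=lambda n: -chi[n])  # chi[order[0]] >= ... >= chi[order[-1]]
    q = [0.0] * (2 ** N)
    for k in range(1, N + 1):
        # pattern with the k apertures of largest chi switched ON
        idx = sum(1 << order[n] for n in range(k))
        nxt = chi[order[k]] if k < N else 0.0
        q[idx] = chi[order[k - 1]] - nxt
    q[0] = 1.0 - chi[order[0]]                       # all apertures OFF
    return q

chi = [0.7, 0.2, 0.5]            # illustrative marginal ON probabilities
q = optimal_onoff_pmf(chi)
```

By construction the masses telescope, so each aperture's marginal ON probability is recovered exactly and the entries sum to one.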

B.3 Proof of (3.52)

It suffices to prove that

Σ_{n=1}^N (1/T) ∫_0^T ψn,Pτ χPτ(n)(τ)dτ ≤ Σ_{n=1}^N ψn,Π µΠ(n).  (B.16)

Without loss of generality, we assume that Π(n) = n for all n = 1, ..., N. Let P denote the set of all possible permutations of {1, ..., N}; clearly, |P| = N!. We fix an indexing² {Pi, i = 1, ..., N!} of the elements of P with P1 = Π. Consider the following partition of [0, T]:

Ti = {τ ∈ [0, T] \ ∪_{j<i} Tj : χPi(1)(τ) ≥ ··· ≥ χPi(N)(τ)},  i = 1, ..., N!.  (B.17)

In other words, Ti is the set of time instants τ ∈ [0, T] at which the elements of χ^N(τ) satisfy the permutation Pi but no other permutation Pj, j < i. Let li be the proportion of Ti relative to the transmission interval [0, T]:

li = (1/T) ∫_{Ti} dτ,  i = 1, ..., N!,  (B.18)

so that li ≥ 0, i = 1, ..., N!, and Σ_{i=1}^{N!} li = 1. Define

Θn,i = ∫_{Ti} χn(τ)dτ / ∫_{Ti} dτ,  n = 1, ..., N, i = 1, ..., N!.  (B.19)

Clearly, ΘPi(n),i ≥ ΘPi(n+1),i, n = 1, ..., N − 1, i = 1, ..., N!. By (3.50), we have

µn = Σ_{i=1}^{N!} li Θn,i,  n = 1, ..., N,  (B.20)

and µn ≥ µn+1, n = 1, ..., N − 1.

²For the sake of brevity, we use similar notation to denote the permutations Pi, i ∈ {1, ..., N!} and Pτ, τ ∈ [0, T]. The distinction should be clear from the context.

By (B.17)–(B.20), with Π = P1,

Σ_{n=1}^N ψn,Π µΠ(n) − Σ_{n=1}^N (1/T) ∫_0^T ψn,Pτ χPτ(n)(τ)dτ
= Σ_{i=1}^{N!} li Σ_{n=1}^N ψPi(n),P1 ΘPi(n),i − Σ_{i=1}^{N!} li Σ_{n=1}^N ψPi(n),Pi ΘPi(n),i  (B.21)
= Σ_{i=1}^{N!} li Σ_{n=1}^N (ψPi(n),P1 − ψPi(n),Pi) ΘPi(n),i  (B.22)
= Σ_{i=1}^{N!} li [Σ_{n=1}^{N−1} (ΘPi(n),i − ΘPi(n+1),i) Σ_{k=1}^n (ψPi(k),P1 − ψPi(k),Pi)
  + ΘPi(N),i Σ_{k=1}^N (ψPi(k),P1 − ψPi(k),Pi)],  (B.23)

where (B.21) and (B.22) follow by (B.17)–(B.20), and (B.23) follows by (3.54) with the substitution xn = ΘPi(n),i and yn = ψPi(n),P1 − ψPi(n),Pi, n = 1, ..., N.

In order to establish the nonnegativity of this difference, we now show that Σ_{k=1}^n (ψPi(k),P1 − ψPi(k),Pi) ≥ 0 for every n = 1, ..., N and i = 1, ..., N!. By definition (see (3.46)),

Σ_{k=1}^n ψPi(k),Pi = Σ_{m=1}^M Σ_{k=1}^n [ζ(Σ_{j=1}^k sPi(j),m APi(j), λ0,m) − ζ(Σ_{j=1}^{k−1} sPi(j),m APi(j), λ0,m)]
= Σ_{m=1}^M ζ(Σ_{j=1}^n sPi(j),m APi(j), λ0,m),  (B.24)

and

Σ_{k=1}^n ψPi(k),P1 = Σ_{m=1}^M Σ_{k=1}^n [ζ(sPi(k),m APi(k) + Σ_{j=1}^{Pi(k)−1} sj,m Aj, λ0,m) − ζ(Σ_{j=1}^{Pi(k)−1} sj,m Aj, λ0,m)],  (B.25)

where (B.24) and (B.25) follow by (3.46).

Using the fact that ζ(x, y + z) ≥ ζ(x, y) for all x, y, z ≥ 0, by (B.24) and (B.25) we get

Σ_{k=1}^n (ψPi(k),P1 − ψPi(k),Pi) ≥ 0,  n = 1, ..., N, i = 1, ..., N!,  (B.26)

whence, by (B.23) and (B.26), it follows that

Σ_{n=1}^N ψn,Π µΠ(n) − Σ_{n=1}^N (1/T) ∫_0^T ψn,Pτ χPτ(n)(τ)dτ ≥ 0,

which is the desired result.

Remark: It can be seen from the preceding proof that (B.16) holds for the case of a discrete sum instead of an integral, i.e., the following inequality is true:

Σ_{n=1}^N (1/K) Σ_{k=1}^K ψn,Pk χPk(n)(k) ≤ Σ_{n=1}^N ψn,Π µΠ(n),  (B.27)

where µn = (1/K) Σ_{k=1}^K χn(k), n = 1, ..., N; Pk is a permutation of {1, ..., N} such that χPk(n)(k) ≥ χPk(n+1)(k), n = 1, ..., N − 1, k = 1, ..., K; and Π is a permutation of {1, ..., N} such that µΠ(n) ≥ µΠ(n+1), n = 1, ..., N − 1.

B.4 Proof of (3.75)

In (3.62), we use the approximation exp(a/L) = 1 + a/L + O(L⁻²), a ∈ IR, for L ≫ 1, to get

ωm(x̃^N, L) = wm(x̃^N) T/L + O(L⁻²),  x̃^N ∈ {0, 1}^N,  (B.28)

for m = 1, ..., M. For L ≫ 1, we have from (3.63) and (B.28) that

βL(p) = Σ_{m=1}^M [hb((T/L) Σ_{i=0}^{2^N−1} pi wm(b^N(i)) + O(L⁻²)) − Σ_{i=0}^{2^N−1} pi hb((T/L) wm(b^N(i)) + O(L⁻²))]  (B.29)
= (T/L) Σ_{m=1}^M Σ_{i=0}^{2^N−1} pi wm(b^N(i)) log(wm(b^N(i)) / Σ_{i'=0}^{2^N−1} pi' wm(b^N(i'))) + O(L⁻² log L)  (B.30)
= (T/L) Σ_{m=1}^M [Σ_{i=0}^{2^N−1} pi ζm(Σ_{n=1}^N sn,m bn(i)An, λ0,m) − ζm(Σ_{i=0}^{2^N−1} pi Σ_{n=1}^N sn,m bn(i)An, λ0,m)] + O(L⁻² log L),  (B.31)

where (B.30) is by (A.13), and (B.31) is by (2.3). Hence,

lim_{L→∞} βL(p) / (T/L) = Σ_{m=1}^M [Σ_{i=0}^{2^N−1} pi ζm(Σ_{n=1}^N sn,m bn(i)An, λ0,m) − ζm(Σ_{i=0}^{2^N−1} pi Σ_{n=1}^N sn,m bn(i)An, λ0,m)],  (B.32)

whence (3.75) follows.

Appendix C

C.1 Proof of (4.37)

By Proposition 1, noting that Ym is a self-exciting PCP with rate process {Λ̂m(t), 0 ≤ t ≤ T}, we get

E[Σ_{i=1}^{Nm(T)} log Λ̂m(Tm,i)] = E[∫_0^T Λ̂m(τ) log Λ̂m(τ)dτ].  (C.1)

Similarly, by (A.1), noting that Ym is a doubly stochastic PCP with rate process {Λm(t), 0 ≤ t ≤ T}, we get

E[Σ_{i=1}^{Nm(T)} log Λm(Tm,i)] = E[∫_0^T Λm(τ) log Λm(τ)dτ].  (C.2)

By (C.1) and (C.2), we then get

Σ_{m=1}^M E[Σ_{i=1}^{Nm(T)} (log Λ̂m(Tm,i) − log Λm(Tm,i))]
= Σ_{m=1}^M (E[∫_0^T Λ̂m(τ) log Λ̂m(τ)dτ] − E[∫_0^T Λm(τ) log Λm(τ)dτ])  (C.3)
= Σ_{m=1}^M ∫_0^T E[ζ(Σ_{n=1}^N Ŝn,m[⌈τ/Tc⌉]Xn(τ), λ0,m) − ζ(Σ_{n=1}^N Sn,m[⌈τ/Tc⌉]Xn(τ), λ0,m)]dτ,  (C.4)

where (C.3) is by (2.3), and (C.4) holds by interchanging the order of operations¹ in (C.3).

¹The interchange is permissible as the assumed conditions 0 ≤ Xn(τ) ≤ 1, 0 ≤ τ ≤ T, n = 1, ..., N, ensure that E[|ζ(Σ_{n=1}^N Sn,m[⌈τ/Tc⌉]Xn(τ), λ0,m)|] < ∞ and E[|ζ(Σ_{n=1}^N Ŝn,m[⌈τ/Tc⌉]Xn(τ), λ0,m)|] < ∞, 0 ≤ τ ≤ T, m = 1, ..., M.

C.2 Proof of (4.60)

For k = 1, ..., K and u ∈ U, let Qk be a permutation of {1, ..., N} such that ηQk(n)(k, u) ≥ ηQk(n+1)(k, u), n = 1, ..., N − 1, where we have suppressed the dependence of Qk on u for notational brevity. By (B.16), it follows that

Σ_{n=1}^N (1/Tc) ∫_{(k−1)Tc}^{kTc} ψn,Pτ(s) χPτ(n)(τ, u)dτ ≤ Σ_{n=1}^N ψn,Qk(s) ηQk(n)(k, u),  s ∈ (IR₀⁺)^{N×M},  (C.5)

for k = 1, ..., K, u ∈ U. Therefore,

(1/K) Σ_{k=1}^K (1/Tc) E[∫_{(k−1)Tc}^{kTc} Σ_{n=1}^N ψn,Pτ(S[k]) χPτ(n)(τ, U[k])dτ]
≤ (1/K) Σ_{k=1}^K E[Σ_{n=1}^N ψn,Qk(S[k]) ηQk(n)(k, U[k])]  (C.6)
= E[(1/K) Σ_{k=1}^K Σ_{n=1}^N ψn,Qk(S) ηQk(n)(k, U)]  (C.7)
≤ E[Σ_{n=1}^N ψn,Π(S) µΠ(n)(U)],  (C.8)

where (C.6) is by (C.5), (C.7) is by the i.i.d. nature of {S[k]}_{k=1}^∞, and (C.8) is by the application of (3.54) with the substitution xn = µΠ(n)(U), yn = ψn,Π(S) (cf. the remark (B.27)), whence (4.60) follows.

C.3 Proof of (4.85)

As in B.4, we use the approximation exp(a/L) = 1 + a/L + O(L⁻²), a ∈ IR, for L ≫ 1, to get

ωm(x̃^N, s, L) = wm(x̃^N, s) Tc/L + O(L⁻²),  x̃^N ∈ {0, 1}^N, s ∈ (IR₀⁺)^{N×M},  (C.10)

for m = 1, ..., M. For L ≫ 1, we have from (4.83) and (C.10) that

βL(p, s) = Σ_{m=1}^M [hb((Tc/L) Σ_{i=0}^{2^N−1} pi wm(b^N(i), s) + O(L⁻²)) − Σ_{i=0}^{2^N−1} pi hb((Tc/L) wm(b^N(i), s) + O(L⁻²))]
= (Tc/L) Σ_{m=1}^M [Σ_{i=0}^{2^N−1} pi ζm(Σ_{n=1}^N sn,m bn(i)An, λ0,m) − ζm(Σ_{i=0}^{2^N−1} pi Σ_{n=1}^N sn,m bn(i)An, λ0,m)] + O(L⁻² log L),  (C.11)

where (C.11) follows by the same steps as (B.29)–(B.31). Hence,

lim_{L→∞} βL(p, s) / (Tc/L) = Σ_{m=1}^M [Σ_{i=0}^{2^N−1} pi ζm(Σ_{n=1}^N sn,m bn(i)An, λ0,m) − ζm(Σ_{i=0}^{2^N−1} pi Σ_{n=1}^N sn,m bn(i)An, λ0,m)],  (C.12)

whence (4.85) follows.

Appendix D

Proof of Theorem 10

Denote the input and the output alphabets of the RF/optical sum channel by X = IR ⊎ IR₀⁺ and Y = IR ⊎ Z₀⁺ respectively, where we use the notation A ⊎ B to denote the physical separability of the sets A and B by virtue of the material differences between the respective modulation and demodulation schemes. It can be inferred from [5, 31, 9] that the capacity of the memoryless RF/optical sum channel is given by

Csum = max_{P_{X,V|U}: E[X²·1(V=1) + X·1(V=0)] ≤ P} I(X, V ∧ Y|S),  (D.1)

where the rvs X, Y, V, S, U take values in the sets X, Y, {0, 1}, IR₀⁺, U respectively, and the joint distribution of the rvs X, Y, V, S, U is determined by U = h(S), FS, PV|U,

X = X^(R) 1(V = 1) + X^(O) 1(V = 0),  Y = Y^(R) 1(V = 1) + Y^(O) 1(V = 0),  (D.2)

with the rvs X^(R), Y^(R), X^(O), Y^(O) taking values in the sets IR, IR, [0, A], Z₀⁺ respectively; the distribution function¹

W_{Y|X,V,S}(y|x, v, s) = W_{Y^(R)|X^(R),S}(y|x, s)·1(v = 1) + W_{Y^(O)|X^(O)}(y|x)·1(v = 0),  x, y ∈ IR, s ∈ IR₀⁺, v ∈ {0, 1},  (D.3)

where W_{Y^(R)|X^(R),S}(·|·, ·) and W_{Y^(O)|X^(O)}(·|·) are given by (5.18) and (5.20) respectively; and the Markov condition

X, V −◦− U −◦− S.  (D.4)

¹We denote by W_{Y|X,V,S} the conditional distribution function of a collection of discrete and continuous rvs.

Note that

I(X, V ∧ Y|S) = I(V ∧ Y|S) + I(X ∧ Y|V, S)
= H(V|S) − H(V|Y, S) + I(X ∧ Y|V, S)
= H(V|S) + I(X ∧ Y|V, S)  (D.5)
= E[hb(α(U))] + I(X ∧ Y|V, S),  (D.6)

where (D.5) follows by the physical separability of the RF and optical signals at the detector, so that H(V|Y, S) = 0; and (D.6) holds with α(u) = Pr{V = 1|U = u}, u ∈ U, since by (D.4), H(V|S) = H(V|U) = E[hb(α(U))].

Next, for any s ∈ IR₀⁺, note that

I(X ∧ Y|V = 1, S = s) = I(X^(R) ∧ Y^(R)|V = 1, S = s)  (D.7)
= I(X^(R) ∧ Y^(R)|S = s),  (D.8)

where (D.7) is by (D.2), and (D.8) follows from the fact that the RF encoding and decoding are independent of the switching operation, so that V is conditionally independent of (X^(R), Y^(R)) given S. Similarly, for s ∈ IR₀⁺,

I(X ∧ Y|V = 0, S = s) = I(X^(O) ∧ Y^(O)|V = 0, S = s)  (D.9)
= I(X^(O) ∧ Y^(O)),  (D.10)

where (D.9) is by (D.2), and (D.10) follows from the fact that optical encoding and decoding are independent of the switching operation as well as RF channel fade, so that (X^(O), Y^(O)) is independent of (V, S). Combining (D.6), (D.8), (D.10), we get that

I(X, V ∧ Y|S) = E[hb(α(U))] + E[α(U) I(X^(R) ∧ Y^(R)|S)] + E[(1 − α(U)) I(X^(O) ∧ Y^(O))].  (D.11)

Next, note that

E[X²·1(V = 1) + X·1(V = 0)] = E[(X^(R))²·1(V = 1) + X^(O)·1(V = 0)]  (D.12)
= E[E[(X^(R))²·1(V = 1) + X^(O)·1(V = 0)|S]]
= E[α(U) E[(X^(R))²|U] + (1 − α(U)) E[X^(O)]],  (D.13)

where (D.12) is by (D.2); and (D.13) holds by (D.4), the conditional independence of V and X^(R) given S, and the independence of (V, S) and X^(O). Summarizing collectively (D.1), (D.11), (D.13), we get that

Csum = max_{α: U→[0,1], 0 ≤ X^(O) ≤ A : E[α(U) E[(X^(R))²|U] + (1−α(U)) E[X^(O)]] ≤ P} E[hb(α(U)) + α(U) I(X^(R) ∧ Y^(R)|S) + (1 − α(U)) I(X^(O) ∧ Y^(O))].  (D.14)

Finally, setting CRF(·, ·) and CO(·) as in (5.17) and (5.19) respectively, by (D.14), we have

Csum = max_{α: U→[0,1], µ1: U→IR₀⁺, 0 ≤ µ2 ≤ A : E[α(U)µ1(U) + (1−α(U))µ2] ≤ P} E[hb(α(U)) + α(U) CRF(S, µ1(U)) + (1 − α(U)) CO(µ2)],

which is the desired result (5.15).

BIBLIOGRAPHY

[1] P. Bremaud. Point Processes and Queues: Martingale Dynamics. Springer-Verlag, New York, 1981.

[2] S. Bross, M. V. Burnashev, and S. Shamai. Error exponents for the two-user Poisson multiple-access channel. IEEE Trans. Inform. Theory, 47(5): 1999–2016, July 2001.

[3] S. Bross and S. Shamai. Capacity and decoding rules for the Poisson arbitrarily varying channel. IEEE Trans. Inform. Theory, 49(11): 3076–3093, Nov 2003.

[4] M. V. Burnashev and Y. Kutoyants. On the sphere-packing bound, capacity, and similar results for Poisson channels. Problems of Information Transmission, 35(2): 95–111, Apr–June 1999.

[5] G. Caire and S. Shamai. On the capacity of some channels with channel state information. IEEE Trans. Inform. Theory, 45(6): 2007–2019, Sept 1999.

[6] G. Caire, G. Taricco, and E. Biglieri. Optimum power control over fading channels. IEEE Trans. Inform. Theory, 45(5): 1468–1489, July 1999.

[7] F. Clarke. Optimization & Nonsmooth Analysis. John Wiley & Sons, New York, 1983.

[8] T. Cover and J. Thomas. Elements of Information Theory. John Wiley & Sons, New York, 1991.

[9] A. Das and P. Narayan. Capacities of time-varying multiple-access channels with side information. IEEE Trans. Inform. Theory, 48(1): 4–25, Jan 2002.

[10] C. Davis, I. Smolyaninov, and S. Milner. Flexible optical wireless links and networks. IEEE Commun. Mag., 41(3): 51–57, Mar 2003.

[11] M. Davis. Capacity and cutoff rate for Poisson-type channels. IEEE Trans. Inform. Theory, 26(6): 710–715, Nov 1980.

[12] M. Frey. Information capacity of the Poisson channel. IEEE Trans. Inform. Theory, 37(2): 244–256, Mar 1991.

[13] R. Gagliardi and S. Karp. Optical Communications. John Wiley & Sons, New York, 2nd edition, 1995.

[14] R. Gallager. Information Theory and Reliable Communication. John Wiley & Sons, New York, 1968.

[15] I. Gelfand and S. Fomin. Calculus of Variations. Prentice-Hall, New Jersey, 1963.

[16] A. Goldsmith, S. Jafar, N. Jindal, and S. Vishwanath. Capacity limits of MIMO channels. IEEE J. Select. Areas Commun., 21(5): 684–702, June 2003.

[17] J. Gowar. Optical Communication Systems. Prentice-Hall International, London, 1984.

[18] S. Haas. Capacity and coding for multiple-aperture, wireless, optical communications. Ph.D. thesis, Massachusetts Inst. Tech., Cambridge, MA, 2003.

[19] S. Haas and J. Shapiro. Capacity of the multiple-input, multiple-output Poisson channel. In B. Pasik-Duncan, editor, Proceedings of the Kansas Workshop on Stochastic Theory and Control, Lecture Notes in Control and Information Sciences. Springer, Lawrence, KS, Oct 2001.

[20] S. Haas and J. Shapiro. Capacity of wireless optical communications. IEEE J. Select. Areas Commun., 21(8): 1346–1357, Oct 2003.

[21] S. Haas, J. Shapiro, and V. Tarokh. Space-time codes for wireless optical communications. Eurasip Journal on Applied Signal Processing, 2002(3): 211–220, Mar 2002.

[22] J. Hecht. City of Light: The Story of Fiber Optics. Oxford University Press, Inc., 1999.

[23] S. Hranilovic and F. Kschischang. Capacity bounds for power- and bandlimited optical intensity channels corrupted by Gaussian noise. IEEE Trans. Inform. Theory, 50(5): 784–795, May 2004.

[24] A. Ishimaru. Wave Propagation and Scattering in Random Media. IEEE Press and Oxford University Press, 1997.

[25] Y. Kabanov. Capacity of a channel of the Poisson type. Theory of Probability and its Applications, 23(1): 143–147, 1978.

[26] J. Kahn and J. Barry. Wireless infrared communications. Proc. IEEE, 85(2): 265–298, Feb 1997.

[27] S. Karp, R. Gagliardi, S. Moran, and L. Stotts. Optical Channels: Fibers, Clouds, Water, and the Atmosphere. Plenum Press, New York, 1988.

[28] D. Killinger. Free space optics for laser communication through the air. Optics & Photonics News, 13(10): 36–42, Oct 2002.

[29] I. Kim, H. Hakakha, P. Adhikari, R. Mazumdar, and E. Korevaar. Scintillation reduction using multiple transmitters. In Free Space Laser Communication Technologies IX, Proceedings of SPIE, vol. 2990: 102–113, 1997.

[30] I. Kim and E. Korevaar. Availability of Free Space Optics (FSO) and hybrid FSO/RF systems. In Optical Wireless Communications IV, Proceedings of SPIE ITCOM 2001, vol. 4530, August 21, 2001.

[31] A. Lapidoth and S. Moser. Bounds on the capacity of the discrete-time Poisson channel. In Proceedings of 41st Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, Oct 2003.

[32] A. Lapidoth and S. Shamai. The Poisson multiple-access channel. IEEE Trans. Inform. Theory, 44(2): 488–501, Mar 1998.

[33] A. Lapidoth, I. E. Telatar, and R. Urbanke. On wideband broadcast channels. IEEE Trans. Inform. Theory, 49(12): 3250–3258, Dec 2003.

[34] E. Lee and V. Chan. Part 1: optical communication over the clear turbulent atmospheric channel using diversity. IEEE J. Select. Areas Commun., 22(9): 1896–1906, Nov 2004.

[35] D. Luenberger. Introduction to Linear and Nonlinear Programming. Addison Wesley, 1972.

[36] S. Masud. Ten hottest technologies. Telecommunications Americas, 35(5): 30–35, May 2001.

[37] R. McEliece and W. Stark. Channels with block interference. IEEE Trans. Inform. Theory, 30(1): 44–53, Jan 1984.

[38] The Optical & RF Combined Link Experiment (ORCLE). Available online at http://www.darpa.mil/ato/programs/orcle.htm.

[39] M. Pinsker. Information and Information Stability of Random Variables and Processes. Holden-Day, San Francisco, 1964.

[40] E. Biglieri, J. Proakis, and S. Shamai. Fading channels: information-theoretic and communications aspects. IEEE Trans. Inform. Theory, 44(6): 2619–2692, Oct 1998.

[41] S. Shamai. Capacity of a pulse-amplitude modulated direct detection photon channel. IEE Proceedings I, 137(6): 424–436, Dec 1990.

[42] S. Shamai. On the capacity of a direct-detection photon channel with intertransition-constrained binary input. IEEE Trans. Inform. Theory, 37(6): 1540–1550, Nov 1991.

[43] S. Shamai and A. Lapidoth. Bounds on the capacity of a spectrally constrained Poisson channel. IEEE Trans. Inform. Theory, 39(1): 19–29, Jan 1993.

[44] J. Shapiro. Imaging and optical communication through atmospheric turbulence. In J. Strohbehn, editor, Laser Beam Propagation in the Atmosphere, chapter 6. Springer-Verlag, Berlin, 1978.

[45] J. Carl Leader. Random medium propagation theory applied to communication and radar system analyses. Proceedings of SPIE, 410: 98–102, Apr 1983.

[46] C. Shannon. A mathematical theory of communication. Bell Sys. Tech. J., 27: 379–423, 1948.

[47] D. Snyder and M. Miller. Random Point Processes in Time and Space. Springer-Verlag, New York, 2nd edition, 1991.

[48] V. I. Tatarskii. Wave Propagation in a Turbulent Medium. Dover, New York, 1961.

[49] H. Willebrand and B. S. Ghuman. Fiber optics without fiber. IEEE Spectr., 38(8): 41–45, Aug 2001.

[50] H. Willebrand and B. S. Ghuman. Free Space Optics: Enabling Optical Connectivity in Today's Networks. SAMS Publishing, Indianapolis, IN, 2002.

[51] S. Wilson, M. Brandt-Pierce, Q. Cao, and J. Leveque. Free-space optical MIMO communication with Q-ary PPM. IEEE Trans. Commun., 53(8): 1402–1412, Aug 2005.

[52] A. Wyner. Capacity and error exponent for the direct detection photon channel, parts I & II. IEEE Trans. Inform. Theory, 34(6): 1449–1471, Nov 1988.

[53] R. You and J. Kahn. Upper-bounding the capacity of optical IM/DD channels with multiple subcarrier modulation and fixed bias using trigonometric moment space method. IEEE Trans. Inform. Theory, 48(2): 514–523, Feb 2002.

[54] X. Zhu and J. Kahn. Free-space optical communication through atmospheric turbulence channels. IEEE Trans. Commun., 50(8): 1293–1300, Aug 2002.
