
A Systematic Approach to Peak-to-Average Power Ratio in OFDM

Curt Schurgers, Mani B. Srivastava

Electrical Engineering Department, University of California at Los Angeles (UCLA)

ABSTRACT
OFDM multicarrier systems support high data rate wireless transmission using orthogonal frequency channels, require no extensive equalization, and offer excellent immunity against fading and inter-symbol interference. The major drawback of these systems is the large Peak-to-Average power Ratio (PAR) of the transmit signal, which renders a straightforward implementation very costly and inefficient. Existing approaches that attack this PAR issue are abundant, but no systematic framework or comparison between them exists to date. They sometimes even differ in the problem definition itself and consequently in the basic approach to follow. In this work, we provide a systematic approach that resolves this ambiguity and spans the existing PAR solutions. The basis of our framework is the observation that efficient system implementations require a reduced signal dynamic range. This range reduction can be modeled as hard limiting, also referred to as clipping, where the extra distortion has to be considered as part of the total noise tradeoff. We illustrate that the different PAR solutions manipulate this tradeoff in alternative ways in order to improve the performance. Furthermore, we discuss and compare a broad range of such techniques and organize them into three classes: block coding, clip effect transformation and probabilistic.

1. INTRODUCTION
Orthogonal Frequency Division Multiplexing (OFDM) belongs to the more general class of multicarrier modulation systems [1] and has found its way into the high-speed wireless and mobile communications arena. It has been standardized for Digital Audio Broadcast (DAB) in Europe, enabling the mobile reception of high-quality digital audio combined with data services with a total gross capacity of about 2.3 Mb/s [22, 23]. Furthermore, OFDM has been adopted for Wireless Local Area Networks (WLANs) to satisfy the high bit rate requirements of multimedia services [24, 25, 26] and is officially included in the IEEE 802.11a and BRAN standards [27]. It has also been proposed for a wide range of other applications, such as Flarion's Flash-OFDM for 3G systems [28] and Cisco's vectored OFDM for digital microwave communications [29].

In an OFDM system, the data is split into N streams, which are independently modulated on parallel, closely-spaced carrier frequencies or tones. In the applications mentioned above, typically 256 or more tones are used [2]. The frequency separation between carriers is 1/T, where T is the OFDM symbol duration. Practical implementations use an Inverse Fast Fourier Transform (IFFT) to generate a sampled version of the composite time signal. The most distinct advantage of OFDM over single carrier modulation is the easy mitigation of inter-symbol interference and fading, without having to resort to elaborate equalization [1]. However, spurious high amplitude peaks occur in the composite time signal when the signals from the different tones add constructively [2]. Compared to the average signal power, the instantaneous power of these peaks is high, and consequently, so is the Peak-to-Average power Ratio (PAR). The occurrence of these peaks seriously hampers practical implementations and is generally considered one of the major drawbacks of OFDM. Over the years, different solutions have been proposed to combat this problem. Although addressing the same basic issue, these solutions differ greatly in the specific approach taken. Furthermore, different authors do not entirely agree on the impact of the high signal peaks on the system performance, and therefore hold different viewpoints even on the exact problem that has to be attacked. As a consequence, no general overview or consistent treatment of this problem is available in the literature, to the best of our knowledge. First, we briefly identify the exact issues involved with high signal peaks in OFDM systems, to arrive at a general problem definition. This allows us to postulate a framework in which all existing solutions can be categorized, analyzed and compared. We show that three distinct classes of techniques can be identified, each with its own advantages and disadvantages. These classes are called block coding, clip effect transformation and probabilistic. Based on this framework and these classes, we provide a structured overview of several existing solutions to the PAR problem and offer a broad comparison between them.

2. PEAK POWER ISSUE IN OFDM


For clarity, we use upper case letters for frequency domain variables and lower case letters for time domain variables in the remainder of this paper. To make a distinction with individual vector components, vectors are underlined. Figure 1 presents the block diagram of a multi-carrier transmitter. The input bit stream is converted into N parallel streams, which are encoded and modulated, yielding the OFDM symbols. Such a symbol consists of N modulated tones, which can be represented by a vector X with elements Xn, 1 ≤ n ≤ N. A sampled version of the time domain waveform is generated via an IFFT, as expressed by (1). This time domain symbol is denoted by x with elements xk, 1 ≤ k ≤ N.

x = IFFT(X)   (1)

Figure 1: Block diagram of MCM transmitter (serial-to-parallel converter → encoder/modulator → N-point IFFT → parallel-to-serial converter → D/A → filters/front end)

These samples of the discrete time signal can exhibit large peaks, which are caused by the addition of the several independently modulated tones. Correspondingly, the Peak-to-Average power Ratio (PAR), defined by (2), is high. In this equation, E[.] denotes taking the expected value. Note that E[|xk|²] is equal to the variance σx², since the symbols are zero-mean.

PAR = max(1 ≤ k ≤ N) |xk|² / E[|xk|²]   (2)

The crest factor CF, widely used in the literature as well, is the square root of this PAR. Figure 2 illustrates what the amplitudes |xk| of one OFDM symbol could look like for a particular example that exhibits a large peak. A high PAR is undesirable, as it requires a large dynamic range of the D/A and A/D converters. Consequently, they are used very inefficiently, as most of the signal amplitudes are only a fraction of this dynamic range. In order to keep the quantization noise at an acceptable level, a large precision is required, or equivalently a large number of bits [2]. The continuous time transmit signal s(t) is obtained by supplying the discrete time samples to the D/A converter and filtering the outputs using a pulse shaping function. This time signal s(t) also exhibits these large peaks, even more so than the sampled signal [30]. Equivalently, the continuous time crest factor CFc, defined by (3), is even higher.

CFc = max|s(t)| / sRMS   (3)
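The sampled PAR of (2), and the fact that approximating s(t) can only raise the observed peak, are easy to check numerically. The sketch below (illustrative parameters) compares the PAR of a critically sampled symbol with a 4x oversampled version obtained by zero-padding the spectrum, which interpolates towards the continuous signal:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 256, 4                              # tones and oversampling factor
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
X = rng.choice(qpsk, size=N)

def par_db(x):
    """Peak-to-average power ratio of a sampled signal in dB, equation (2)."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

x = np.fft.ifft(X)                         # critically sampled symbol
# Zero-pad the spectrum to oversample; the factor L keeps the average
# power comparable (PAR itself is scale invariant anyway).
X_pad = np.concatenate([X[:N // 2], np.zeros((L - 1) * N, complex), X[N // 2:]])
x_over = L * np.fft.ifft(X_pad)

print(par_db(x), par_db(x_over))           # the oversampled PAR is never lower
```

The oversampled signal passes exactly through the original samples, so its peak, and hence its PAR, is at least as large, matching the observation that CFc exceeds CF.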

This high CFc could cause problems when the signal is applied to a non-linear device such as a power amplifier, since it results in in-band distortion and spectral spreading. To counteract these effects, the amplifier needs to be highly linear or operated with a large back-off. Both approaches result in a severe power efficiency penalty and are expensive [3]. An analysis of non-linear amplifiers, however, points out that an acceptable performance regarding the in-band distortion is obtained with only a small back-off [4]. The tolerable out-of-band radiation or spectral spreading sets the bound on the back-off that is needed [3]. In addition, regulatory bodies, such as the FCC or the CEPT, can specify the PAR limit for a certain band [2]. In conclusion, simply dimensioning the system components to cope with the worst-case signal peaks is practically impossible. That is why solutions have been proposed over the years to counteract the PAR problem.

Figure 2: Amplitude of one OFDM symbol for N = 256

3. GENERAL FRAMEWORK
Practical systems do not use the full signal amplitude range [2]. In our general framework, we therefore set the dynamic range of the system to A, which is smaller than the maximum signal level. When A is normalized by dividing it by the square root of the average signal power σx², we refer to it as λ (λ = A/σx). The dynamic range of the D/A and A/D now only needs to be equal to A. The power amplifier also has to deal with a smaller input signal range. When a signal is larger than this threshold A, it is clipped, which refers to hard amplitude limiting [2]. However, the conversion from the digital to the analog domain causes peak regrowth, and therefore some limiting could also be needed in the analog part of the system [5]. Another, and probably better, option to avoid peak regrowth is to consider an oversampled digital signal in the first place [6, 7]. In any case, clipping causes in-band signal degradation, called clipping noise, which is approximately white in the signal band. For an oversampled signal or analog clipping, out-of-band radiation is generated as well. Appropriate filtering can reduce this effect [5]. The amount of clipping noise depends on the probability that the signal is clipped, which is the probability that the signal amplitude is above A. Since each time sample calculated using (1) can be approximated as a large sum of independent contributions (for N > 64), the probability density function of the signal amplitude is approximately Gaussian [2, 3]. The probability P(λ) that at least one of the N elements of x is clipped is thus given by (4) in the case that no oversampling is used [3, 30, 36]. As mentioned above, this equation neglects regrowth. The probability that the maximum peak in the band limited OFDM signal is above A can be approximated by (5) [30].

P(λ) = 1 − (1 − e^(−λ²))^N   (4)

P(λ) ≈ 1 − exp(−√(π/3)·N·λ·e^(−λ²))   (5)
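Equation (4) is easy to verify by simulation. The sketch below is a Monte Carlo estimate with illustrative parameters; random-phase unit-power tones stand in for modulated data, which is sufficient for the Gaussian amplitude argument:

```python
import numpy as np

rng = np.random.default_rng(2)
N, trials, lam = 256, 2000, 2.5            # lam is the normalized threshold

clipped = 0
for _ in range(trials):
    X = np.exp(2j * np.pi * rng.random(N))     # unit-power tones, random phases
    x = np.fft.ifft(X, norm="ortho")           # E[|x_k|^2] = 1, so sigma_x = 1
    if np.abs(x).max() > lam:                  # at least one sample would clip
        clipped += 1

empirical = clipped / trials
predicted = 1 - (1 - np.exp(-lam ** 2)) ** N   # equation (4)
print(empirical, predicted)
```

The agreement is good for N = 256 because the Gaussian approximation of the sample amplitudes is already accurate; for small N both the approximation and (4) degrade.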

Setting the value of λ at such a level that the clipping noise is negligible (e.g. 50 dB below the signal level is obtained [5] when λ = 4) is not optimal. Lowering λ for a constant number of bits reduces the quantization noise quadratically. The clipping noise, however, increases, as the clipping probability is larger. Trading off one noise source versus the other minimizes the total noise and results in an increased overall SNR (Signal-to-Noise Ratio) [2]. Alternatively, for a constant SNR, the number of bits in the D/A and A/D can be decreased, lowering the implementation cost.
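This tradeoff can be sketched numerically. Assuming, for illustration only, a b-bit uniform quantizer over the range [−A, A] and unit-variance Gaussian samples on one signal rail, the sum of clipping noise and quantization noise has an interior minimum in λ = A/σ:

```python
import numpy as np

rng = np.random.default_rng(3)
b = 8                                    # D/A resolution in bits (illustrative)
x = rng.normal(size=200_000)             # unit-variance Gaussian samples

def total_noise(lam):
    """Clipping noise (Monte Carlo) plus uniform-quantizer noise for
    a b-bit converter spanning [-lam, lam]."""
    clipped = np.clip(x, -lam, lam)
    clip_noise = np.mean((x - clipped) ** 2)
    step = 2 * lam / 2 ** b              # quantizer step size
    return clip_noise + step ** 2 / 12   # standard quantization-noise model

lams = np.linspace(1.0, 6.0, 51)
noise = [total_noise(l) for l in lams]
best = lams[int(np.argmin(noise))]
print(best)                              # interior optimum, neither extreme
```

Small λ is dominated by clipping noise, large λ by quantization noise; the optimum sits in between and moves with the number of bits b.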

To recapitulate: our framework imposes a limited signal range and clips peaks accordingly, governed by the total noise tradeoff. Existing PAR reduction techniques fit into this framework, as they all improve this noise tradeoff in some way. Block coding schemes aim at completely eliminating the clipping noise by allowing only symbols that have a peak amplitude smaller than A, at the cost of a lower data rate. Another class of techniques, clip effect transformation, does not try to eliminate clipping, but to reduce the effect it has on the system. Finally, a third class reduces the clipping noise directly by lowering the probability of clipping. This class is therefore called probabilistic, in contrast to the deterministic behavior of block coding. We will discuss all three classes in more detail in the next three sections.

4. BLOCK CODING
This class of techniques limits the set of possible signals that can be transmitted. Only those signals with a peak amplitude below A are chosen. This results in a complete elimination of the clipping noise. A general analysis reveals that only limited redundancy is needed to achieve this goal [3]. This can be readily seen as P(λ) expresses the fraction of the symbols that needs to be discarded for a certain given λ. Exactly Rac redundant bits (called anticrest bits) can suppress a fraction (1 − 2^(−Rac)) of the entire set of symbols, which is sufficient if this fraction equals P(λ) [3]. Although this theoretical limit shows that only moderate redundancy is needed, no good codes for practical values of N (> 64) are known. A simple strategy is to exhaustively check all possibilities and use a table lookup [8]. Other options restrict the phase possibilities of certain tones [9] or only use part of the bits in a differential phase modulation scheme [10]. Some codes are chosen based on the observation that a symbol x with a small PAR has an instantaneous power that is most of the time close to the average power. X therefore has a close to flat spectrum, or alternatively an impulse-like autocorrelation [11, 12]. Two codes based on this criterion are Golay sequences [11, 13] and m-sequences [12]. All these block codes provide a low PAR (typically below 3 dB for the small number of carriers considered), but suffer from some serious drawbacks. They introduce a lot of overhead (25% to 50%) and are only available for a small number of carriers (4 to 16) and small constellation sizes (1 to 4 bits per carrier). Moreover, no efficient implementations are available yet. These drawbacks dramatically limit their usefulness with regard to real applications. In theory, the error correcting capabilities of these codes can also be exploited. However, analysis indicates that decoupling the clipping functionality and the channel coding improves the overall performance [13]. It is therefore a better idea to apply clipping, with or without one of the methods discussed in the next two sections, and independently add an error correcting code.
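The redundancy argument can be made concrete: choosing Rac so that the suppressed fraction 1 − 2^(−Rac) covers P(λ) gives the minimum number of anticrest bits. A small sketch, using (4) and therefore neglecting oversampling and regrowth:

```python
import numpy as np

def anticrest_bits(lam, N):
    """Minimum redundant (anticrest) bits so that the discarded fraction
    1 - 2**(-Rac) of symbols covers the clip probability P(lam) of (4)."""
    P = 1 - (1 - np.exp(-lam ** 2)) ** N
    return int(np.ceil(-np.log2(1 - P)))

# Even for N = 256 tones, moderate thresholds need only a few bits.
print(anticrest_bits(3.0, 256), anticrest_bits(2.0, 256))
```

The catch, as noted above, is that no practical code achieving this bound is known for large N.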

5. CLIP EFFECT TRANSFORMATION


5.1. Block Scaling

Block scaling determines the optimal clipping threshold out of a limited set on a symbol-by-symbol basis [14]. The selected threshold is conveyed to the receiver by means of a dedicated scaling tone. The improvement compared to having a fixed threshold is not reported, but it is expected to be smaller than that of the other methods we discuss next.

5.2. Receiver Reconstruction

Regular clipping is performed at the sender, and the receiver tries to undo some of its effects. This requires the receiver to estimate the clipping that has occurred and to compensate the received symbol accordingly. Typically at most one clip occurs per symbol, and the receiver therefore has to estimate only two parameters: the size and location of the clip [15]. One technique is based on measurements from the most reliable subcarriers (highest received SNR) and additional checks on the validity of the correction [15]. The complexity is approximately equivalent to one IFFT. The effect of the clip can also be compensated for with an iterative algorithm [16]. This last method results in significant BER improvements for AWGN and fading channels, but the algorithm performs an IFFT in each iteration step and is therefore likely to be too complex for most practical purposes.

5.3. Modification of the Clipping Peaks

Modification methods apply clipping and reduce its negative effects with extra signal processing. Peak windowing multiplies the large signal peaks by a small window, such as a Kaiser or Hamming window [17]. A similar approach uses an additive correction function in the vicinity of the clip [18]. Both approaches smooth the hard limiting effect, thereby decreasing the out-of-band distortion. They are therefore alternatives to the filter that normally follows the clipping. Peak windowing results in less out-of-band distortion, but more in-band noise, than the additive approach. An alternative is to shape the in-band noise such that it is located on those tones where it is the least harmful. In xDSL systems, this corresponds to the higher frequency tones, which have the lowest SNR [14]. However, in OFDM systems the tones with the lowest SNR are not known a priori, and this technique is therefore not directly amenable to wireless systems. We discuss it here since the Tone Reduction technique in section 6.4 can be viewed as an extension of this approach. No performance results are provided in [14], but it is illustrated that the noise spectrum undergoes the desired change. However, probabilistic techniques based on the same principle offer a better option; see section 6.
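Peak windowing is straightforward to sketch. The code below is a minimal illustration, not the exact method of [17], and the window length is an arbitrary choice: it multiplies the signal with a smooth Hamming-shaped dip around every sample exceeding the threshold A, limiting the amplitude without a hard discontinuity.

```python
import numpy as np

def peak_window(x, A, wlen=11):
    """Attenuate every sample above amplitude A with a Hamming-shaped
    multiplicative dip centred on the peak; overlapping dips combine by
    taking the deepest attenuation per sample."""
    w = np.hamming(wlen)                   # peaks at 1.0 in the centre
    half = wlen // 2
    scale = np.ones(len(x))
    for k in np.where(np.abs(x) > A)[0]:
        depth = 1 - A / np.abs(x[k])       # attenuation needed at the peak
        lo, hi = max(0, k - half), min(len(x), k + half + 1)
        seg = w[half - (k - lo): half + (hi - k)]
        scale[lo:hi] = np.minimum(scale[lo:hi], 1 - depth * seg)
    return x * scale

rng = np.random.default_rng(4)
x = np.fft.ifft(np.exp(2j * np.pi * rng.random(256)), norm="ortho")
y = peak_window(x, A=2.0)
print(np.abs(x).max(), np.abs(y).max())    # the windowed peak is at most A
```

Compared to hard clipping at A, the dip also touches the neighbours of each peak, trading a little extra in-band noise for much lower out-of-band distortion.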

6. PROBABILISTIC
The final class of techniques is classified as probabilistic. They do not aim at reducing the maximum signal amplitude, but rather the occurrence of peak values. As a consequence, the clipping noise is reduced [2]. The basic idea is thus to modify the function P(λ) to P′(λ) in such a way that large values occur with a lower probability. The general probabilistic approach is to introduce some limited redundancy. This resembles block coding, but the goal is not to eliminate the peaks, only to make them less frequent. The basic way to achieve this is by a linear transformation as in (6). In this equation, Yn are elements of the N-point input vector Y of the IFFT and Xn are elements of the original frequency domain data vector X.

Yn = An·Xn + Bn,   1 ≤ n ≤ N   (6)

The goal is to find the N-point vectors A and B, with elements An and Bn respectively, such that the transmit symbol y = IFFT(Y) has a small probability of peaks. We have mapped the existing techniques onto this basic formula. Selected Mapping (SLM) and Partial Transmit Sequences (PTS) try to select a good A, while B is equal to the zero vector. They both use the restriction that the N components of A all have unit amplitude: An = e^(jφn), φn ∈ [0, 2π), 1 ≤ n ≤ N. This results in a pure rotation vector. Tone Injection (TI) and Tone Reduction (TR) optimize B, while A is set to the all-one vector. Each of these four techniques has a different performance versus overhead and complexity tradeoff.

6.1. Selected Mapping (SLM)

The basic idea is to have D statistically independent vectors Yd represent the same information [3, 36]. The vector resulting in the time-domain symbol yd with the lowest PAR is selected for transmission. The probability PD(λ) that the peak amplitude of all these D independent symbols exceeds λ is expressed by (7), where P(λ) is given by (4).

P′(λ) = PD(λ) = [P(λ)]^D   (7)

These D independent vectors Yd are generated using D distinct pseudo-random, but fixed, rotation vectors Ad, with 1 ≤ d ≤ D. Without performance loss, the first modified vector Y1 can be chosen as the unchanged original X, which means that A1 is the all-one vector [3].

This approach was first proposed for only one alternative vector (D = 2) [2]. The implementation involves a serial evaluation of the two options. Only if the original symbol has an amplitude peak that is too large, the input vector is transformed as in (6) and the resulting symbol is transmitted. A generalization uses an arbitrary number D of these rotation vectors Ad [3]. For implementation purposes, it is advantageous to choose the N elements of the vectors Ad from {±1, ±j}, as these factors can be implemented without multiplications. The setup presented in [3] involves calculating all D alternative time domain symbols yd in parallel and selecting the one with the smallest PAR (see figure 3). However, this results in a costly implementation since D IFFTs operate in parallel. The interplay between this approach and clipping is studied in [31].

Figure 3: SLM block diagram (X is rotated by A1 … AD, one IFFT per branch yields y1 … yD, and the best yd is selected)
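A minimal SLM sketch follows, with rotation factors drawn from {±1, ±j} as suggested in [3]. In practice the D rotation vectors are fixed and known to both ends; here they are drawn per call for brevity, which is an assumption of this sketch.

```python
import numpy as np

def slm(X, D, rng):
    """Try D candidate rotations of X (the first being the all-one vector)
    and keep the time-domain candidate with the lowest PAR.  Returns the
    selected symbol, its PAR and the index d sent as side information."""
    best, best_par, best_d = None, np.inf, 0
    for d in range(D):
        A = (np.ones(len(X)) if d == 0
             else rng.choice(np.array([1, -1, 1j, -1j]), size=len(X)))
        y = np.fft.ifft(A * X, norm="ortho")
        p = np.abs(y) ** 2
        if p.max() / p.mean() < best_par:
            best, best_par, best_d = y, p.max() / p.mean(), d
    return best, best_par, best_d

rng = np.random.default_rng(5)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
X = rng.choice(qpsk, size=256)
y, par, d = slm(X, D=4, rng=rng)
p0 = np.abs(np.fft.ifft(X, norm="ortho")) ** 2
print(10 * np.log10(p0.max() / p0.mean()), 10 * np.log10(par))
```

Since candidate d = 0 is the untransformed symbol, the selected PAR can never exceed the original one; the cost is the D parallel IFFTs criticized above.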

A variant on this approach does not use random rotation vectors for the Ad, but rather D cyclically inequivalent m-sequences [19]. As explained in section 4, m-sequences have a reasonably flat spectrum. Multiplying X with an m-sequence is likely to result in a more flat spectrum as well, and therefore reduced amplitude peaks. No detailed performance comparison with the previous methods is available. More importantly, a heuristic is presented to select the optimal m-sequence out of the D possibilities based only on Yd, without having to actually run the IFFT. This greatly reduces the implementation complexity. For all variants, the receiver has to invert the operation performed at the sender and therefore needs to know which transformation vector was chosen. The easiest solution is to transmit the number d as side information, protected using channel coding [3]. In general, ⌈log2(D)⌉ bits of side information are needed. If the transmitted data is channel encoded, no side information is needed at all. In this case all the possibilities are tested and the most probable one is selected [36]. This option however seems too costly to be of practical use.

6.2. Partial Transmit Sequences (PTS)

PTS is based on the same principle as SLM, but the transformation vectors have a different structure. A vector X is now subdivided into V non-overlapping subvectors X(v) of size N/V [6, 32, 36]. Because they are non-overlapping, X = Σ(v=1..V) X(v). Each carrier in the subvector X(v) is multiplied with the same rotation factor Rd(v). The rotation factors of the different subvectors are statistically independent. This means that the vector Ad is now composed of only V different components. The derivation (8), which uses the linearity of the IFFT, illustrates the advantage of this approach: the D time domain vectors yd can be composed after the IFFT (see figure 4).
yd = IFFT(Yd) = IFFT( Σ(v=1..V) Rd(v)·X(v) ) = Σ(v=1..V) Rd(v)·IFFT(X(v)),   1 ≤ d ≤ D   (8)

Again, as in SLM, one block can be left unchanged without performance degradation, so Rd(1) = 1 for 1 ≤ d ≤ D. By choosing the other rotation factors Rd(v) from the same set of size W, there are D = W^(V−1) distinct possibilities. Only V IFFTs need to be calculated, however. Since each such IFFT has only N/V non-zero inputs, its implementation can be greatly simplified. It is also advantageous in practice to choose the rotation factors from the set {±1, ±j} [36]. Optimization results show that the subvectors should not be comprised of adjacent tones, but rather of a more random selection (while maintaining the requirement that subvectors do not overlap) [3]. However, the number of iterations can be strongly reduced by stopping as soon as a satisfactory PAR is achieved [33]. This adaptive approach reduces the complexity of PTS by about 98% for a very slight performance penalty.

Figure 4: PTS block diagram (each subvector X(v) is transformed by its own IFFT, rotated by Rd(v), and the branches are summed; the optimal d is selected)
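The PTS structure of (8) can be sketched as follows. For simplicity this sketch uses interleaved subvectors and rotation factors from {±1, ±j}, and searches all W^(V−1) combinations exhaustively, without the adaptive early stopping of [33]:

```python
import numpy as np
from itertools import product

def pts(X, V, W=(1, -1, 1j, -1j)):
    """Split X into V interleaved subvectors, run one IFFT per subvector,
    then exhaustively search the rotation factors (first one fixed to 1)
    for the combination with the lowest PAR, as in equation (8)."""
    N = len(X)
    parts = []
    for v in range(V):                         # subvector v: tones v, v+V, ...
        Xv = np.zeros(N, complex)
        Xv[v::V] = X[v::V]
        parts.append(np.fft.ifft(Xv, norm="ortho"))
    best, best_par = None, np.inf
    for rots in product(W, repeat=V - 1):      # W**(V-1) candidates
        y = parts[0] + sum(r * p for r, p in zip(rots, parts[1:]))
        p = np.abs(y) ** 2
        if p.max() / p.mean() < best_par:
            best, best_par = y, p.max() / p.mean()
    return best, best_par

rng = np.random.default_rng(6)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
X = rng.choice(qpsk, size=256)
y, par = pts(X, V=4)
p0 = np.abs(np.fft.ifft(X, norm="ortho")) ** 2
print(10 * np.log10(p0.max() / p0.mean()), 10 * np.log10(par))
```

Only V IFFTs are computed, each with N/V non-zero inputs; combining the candidates is a cheap weighted sum in the time domain, which is exactly the advantage derived in (8).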

The time-domain vector yd with the lowest peak amplitude is transmitted. The receiver again needs to know which rotation vectors were applied. In this case, (V−1)·log2(W) bits of side information are required. When differential modulation is used in each subvector, no side information needs to be conveyed. This also simplifies the receiver structure, since it does not need to perform the inverse transformation [3]. An alternative option for selecting the rotation vectors is proposed in [32]. It approximates the continuous phase vector that minimizes the peak amplitude in each sample, selects the best one amongst these vectors and uses a quantized version. The performance is actually better than that of the exhaustive search approach over a limited set of rotation vectors. A variant on PTS suggests looking at the autocorrelation function of the Yd(v) to determine the optimal vector to transmit [6]. This eliminates the extra IFFTs, but requires the calculation of the autocorrelation function. Another advantage of this technique is that it optimizes the CFc instead of the CF itself (see also section 3). Other variations include cyclic shifts in addition to rotations to create the new signal out of the composite blocks [34].

6.3. Tone Injection (TI)

Tone Injection uses an additive correction, which means that it optimizes B in (6). The basic idea is to extend the QAM constellation, such that the same data point corresponds to multiple possible constellation points [20, 35]. One option [20] in this class is illustrated in figure 5, where the original shaded constellation is replicated into several alternative ones. B is therefore a translation vector whose elements are multiples of the replication distance, such that a simple modulo operation maps Y back onto X. The receiver only needs to know how to map the redundant constellations onto the original one, but no extra side information is required. On the flip side, the alternative constellation points have an increased energy compared to the original ones. The calculation of the optimal translation vector of size N is too complex. An iterative algorithm is proposed instead. Each iteration step selects a translation for one tone, such that the PAR reduction is as large as possible and the extra power is minimized. The corrections are applied on y immediately, without having to do an IFFT for each iteration. Typically, setting a maximum bound in the order of 5 to 10 iterations generates sufficient PAR reduction [20].

Figure 5: TI extended constellation
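A greedy TI sketch is shown below. It assumes, as an illustration and not as the exact algorithm of [20], a QPSK constellation replicated at a hypothetical distance rho; per iteration it moves the single tone, by ±rho on the real or imaginary axis, that best reduces the current peak sample. The correction is applied directly on y, so no per-iteration IFFT is needed.

```python
import numpy as np

def tone_injection(X, rho, iters=10):
    """Greedy Tone Injection: each iteration translates one tone to an
    equivalent point of the extended constellation (a shift of +/-rho on
    the real or imaginary axis) so as to lower the current peak sample."""
    N = len(X)
    Y = X.astype(complex).copy()
    y = np.fft.ifft(Y, norm="ortho")
    idx = np.arange(N)
    for _ in range(iters):
        k = int(np.argmax(np.abs(y)))                  # current peak sample
        carriers = np.exp(2j * np.pi * idx * k / N) / np.sqrt(N)
        best_gain, best_n, best_d = 0.0, None, None
        for delta in (rho, -rho, 1j * rho, -1j * rho):
            cand = np.abs(y[k] + delta * carriers)     # peak if tone n shifts
            n = int(np.argmin(cand))
            if np.abs(y[k]) - cand[n] > best_gain:
                best_gain, best_n, best_d = np.abs(y[k]) - cand[n], n, delta
        if best_n is None:                             # no shift helps any more
            break
        Y[best_n] += best_d                            # update frequency domain
        y = y + best_d * np.exp(2j * np.pi * best_n * idx / N) / np.sqrt(N)
    return Y, y

rng = np.random.default_rng(7)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
X = rng.choice(qpsk, size=64)
rho = 4 / np.sqrt(2)                                   # replication distance
Y, y = tone_injection(X, rho)
```

The receiver undoes the shifts by mapping each tone back onto the base constellation modulo rho, so no side information is needed; the price is the higher energy of the outer constellation points. Being greedy, the sketch lowers the targeted peak sample each step but does not guarantee a monotone PAR decrease.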

An alternative strategy in this class of algorithms is to move the constellation points by applying an FFT to the clipped time signal [35]. Again, an iterative strategy is used, such that all constellation points are within specified boundaries and the PAR specification of the time signal is satisfied.

6.4. Tone Reduction (TR)

Tone Reduction is a clever extension of the clipping noise shaping technique of [14], which we discussed in section 5.3. The idea there was to modify the noise spectrum such that the noise is concentrated at the high frequencies, where the SNR is low anyway. Since bit allocation algorithms allocate bits only to those tones with sufficient SNR, typically some of them will carry no data [25]. It is desirable to shape the clipping noise in such a way that it is only present on those unused tones. This technique is applicable to every type of MCM system, provided some tones are unused. With respect to the general formula (6), the clipping noise is represented by B, and B is orthogonal to X, or equivalently B·X = 0. The problem of finding the transformation function that distributes the clipping noise over the unused tones is a convex set intersection problem [15]. An iterative algorithm can perform this functionality with limited complexity, requiring around 8 iterations as a maximum. During each iteration step, the clip is spread out using a shaping function, such that the clipping noise only appears on the unused tones. A gradient descent algorithm can also be used to allocate the clipping noise [20, 21]. In both cases, the operations are performed on the time-domain vector y, so no extra IFFT operations are needed. As a comparison, the shaping technique of [14] used only a single iteration, and its shaping function was an ad hoc high-pass filter. When the unused tones are all clustered together (as in xDSL), the PAR reduction is not as good as with randomly selected tones [20].
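A common way to realize this noise shaping is an iterative clip-and-project loop, sketched below. This is an illustrative variant, not the exact algorithm of [15] or [20, 21]: clip y at λσ, transform the clipping noise, and keep only its components on the reserved tones, so the data tones are never disturbed.

```python
import numpy as np

def tone_reduction(X, reserved, lam, iters=8):
    """Iteratively clip the time signal and project the clipping noise
    onto the reserved tones (boolean mask), leaving data tones intact."""
    Y = X.copy()
    Y[reserved] = 0                               # reserved tones carry no data
    sigma = np.sqrt(np.mean(np.abs(Y) ** 2))      # average power (ortho IFFT)
    A = lam * sigma
    y = np.fft.ifft(Y, norm="ortho")
    for _ in range(iters):
        amp = np.abs(y)
        if amp.max() <= A:
            break
        clipped = np.where(amp > A, y * (A / amp), y)
        Noise = np.fft.fft(clipped - y, norm="ortho")
        Noise[~reserved] = 0                      # shape noise onto reserved tones
        y = y + np.fft.ifft(Noise, norm="ortho")
    return y

rng = np.random.default_rng(8)
N = 256
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
X = rng.choice(qpsk, size=N)
reserved = np.zeros(N, bool)
reserved[rng.choice(N, size=N // 20, replace=False)] = True   # ~5% of tones
y = tone_reduction(X, reserved, lam=2.2)
```

Each iteration of this sketch costs one FFT/IFFT pair; demodulation at the receiver simply ignores the reserved tones, so no side information is needed.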
However, a random selection of tones can reduce the data rate, since these tones would normally have carried data in that case. An iterative algorithm explores this tradeoff between PAR reduction and data rate loss [20].

6.5. Comparison

A performance comparison between these four probabilistic techniques is presented in figure 6. The enhanced probability P′(λ) of the symbol amplitude exceeding the clipping threshold is plotted for the case N = 256. The curves for SLM are for two values of D, where it is clear that an increased number of alternatives leads to better performance at the cost of increased implementation complexity. For PTS, curves are plotted for two values of V, each with W = 4. Again, we notice the same tradeoff as in SLM. The TI approach is shown using a maximum bound of 10 iterations (which is close to optimal). Finally, the two curves for TR use randomly selected tones for positioning the clipping noise. The number of these tones is either 5% or 20% of the total N. As a reference, the original curve for clipping is shown as well.

Figure 6: Performance comparison of probabilistic methods: P′(λ) versus λ (dB) for N = 256, showing curves for SLM (D = 2 and D = 4), PTS (V = 2 and V = 4, W = 4), TI (maximum of 10 iterations), TR (5% and 20% reserved tones) and the original clipping curve

Except for TR, all methods have a small (in the case of SLM and PTS) or no (in the case of TI) loss in terms of data rate. In the case of TR, the data rate loss depends on the number of tones used for clipping mitigation and on the amount of data that would otherwise have been transmitted on these tones. The distribution of unused or lightly loaded carriers, and thus the data rate loss for a certain PAR reduction, depends on the characteristics of the channel. The implementation complexity of the different techniques is the major issue in preferring one or the other. Methods that involve multiple IFFTs are too complex in comparison to the other techniques. This means that SLM is only a worthwhile option if it chooses the appropriate A based on the frequency-domain vectors Yd, using the autocorrelation function. For PTS, either the parallel IFFTs have to efficiently exploit the fact that only N/V inputs are non-zero, or a frequency domain selection criterion should be used as well. Such frequency domain selection methods have an implementation complexity of the order O(N), which is expected to be comparable to one IFFT, although adaptive alternatives are available. The iterative methods of TR and TI operate on time-domain vectors and avoid extra IFFTs this way. The iterative TR algorithm requires O(N) operations. More specifically, the algorithm cost per iteration is about 25% of that of an IFFT [21]. For the expected number of iterations needed (about 8 to 10), this is a bit more expensive than the previous methods. All these iterations are not needed for each symbol, however, so the actual average use of resources is expected to be less. The gradient based algorithm is slightly more complex than the one from [15], but has a faster convergence. Finally, the TI method has a complexity approximately of the order O(N^(3/2)) per iteration. In absolute terms its complexity is also higher than that of TR, but because of the zero data rate loss, TI is useful for smaller numbers of tones [20, 21]. Again, not all symbols need the full range of iterations. This also brings us to the aspect of delay, which depends linearly on the number of iterations for TR and TI. A large number of iterations is, however, only necessary for a relatively small fraction of the data vectors. For the non-iterative schemes (SLM and PTS), the same selection operation between the possible options is needed for each symbol. The average delay is expected to be about the same for the iterative and non-iterative schemes. The delay jitter is, however, larger for the iterative schemes, while the delay is constant for the non-iterative ones. Depending on the application, this could be an issue. Real-time applications are, for example, more sensitive to delay jitter than bulk data transfers.

7. CONCLUSIONS
In this paper, we have investigated the PAR problem in OFDM systems. The spurious occurrence of very high amplitude peaks results in a practically infeasible implementation, if the system were designed to cope with these peaks without introducing distortion. We have tackled this problem in a general framework, by allowing the system to clip the signal and considering the generated noise as part of the total noise tradeoff. This results in an improved SNR for the same number of bits or, in the case of a constant SNR, a reduced number of bits (and therefore a reduced implementation cost). PAR reduction techniques aim at further improving this tradeoff. We have investigated three different classes of such techniques. Block coding is only applicable for a very small number of tones, and its practical usefulness is therefore severely limited. Receiver reconstruction is a form of clip effect transformation and recovers from clipping at the receiver side. Although this method seems insufficient on its own, it might be useful in combination with other ones, as it is orthogonal to those methods that modify the clipping noise at the sender. This technique can also be beneficial in asymmetric scenarios where the expensive signal processing needs to be concentrated at the receiver side (for example in wireless mobile-to-basestation traffic). The probabilistic class of techniques is the most interesting one. It reduces the clipping noise by lowering the probability of clip occurrences. We have tied the separate probabilistic methods to a general formula. Each method has its own implementation complexity versus clipping probability tradeoff. Overall, this class seems the most promising for resolving the PAR issue. It is expected that most practical systems will benefit most from ordinary clipping, either alone or enhanced with one of these probabilistic techniques.

REFERENCES
1. Bingham, J., Multicarrier modulation for data transmission: an idea whose time has come, IEEE Commun. Magazine, pp. 5-14, 1990.
2. Mestdagh, D., Spruyt, P., A method to reduce the probability of clipping in DMT-based transceivers, IEEE Trans. on Comm., Vol.44, No.10, pp. 1234-1238, 1996.
3. Müller, S., Bäuml, R., Fischer, R., Huber, J., OFDM with reduced peak-to-average power ratio by multiple signal representation, Annals of Telecommunications, Vol.52, No.1-2, pp. 58-67, 1997.
4. Liu, C.-L., The effect of nonlinearity on a QPSK-OFDM-QAM signal, IEEE Trans. on Consumer Electronics, Vol.43, No.3, pp. 443-447, 1997.
5. Gross, R., Veeneman, D., SNR and spectral properties for a clipped DMT ADSL signal, Proceedings SuperComm/ICC'94, New Orleans, LA, pp. 843-847, 1994.
6. Tellambura, C., Phase optimisation criterion for reducing peak-to-average power ratio in OFDM, Electronics Letters, Vol.34, No.2, pp. 169-170, 1998.
7. Tellado, J., Cioffi, J., Further results on peak-to-average ratio reduction, T1E1.4: VDSL/ADSL, T1E1.4/98-252, 1998.
8. Wilkinson, T., Jones, E., Minimisation of the peak to mean envelope power ratio of multicarrier transmission schemes by block coding, Proceedings VTC'95, Chicago, IL, pp. 25-28, 1995.
9. Kamerman, A., Krishnakumar, A., OFDM encoding with reduced crestfactor, Symp. on Comm. & Vehicular Technology in the Benelux, Louvain-La-Neuve, Belgium, pp. 182-186, 1994.
10. Friese, M., Multicarrier modulation with low peak-to-average power ratio, Electronics Letters, Vol.32, No.8, pp. 713-714, 1996.
11. Ochiai, H., Imai, H., Performance of block codes with peak power reduction for indoor multicarrier systems, Proceedings VTC'98, Ottawa, Canada, pp. 338-342, 1998.
12. Li, X., Ritcey, J., M-sequences for OFDM peak-to-average power ratio reduction and error correction, Electronics Letters, Vol.33, No.7, pp. 554-555, 1997.
13. Wulich, D., Goldfeld, L., Reduction of peak factor in orthogonal multicarrier modulation by amplitude limiting and coding, IEEE Trans. on Comm., Vol.47, No.1, pp. 18-21, 1999.
14. Chow, J., Bingham, J., Flowers, M., Mitigating clipping noise in multi-carrier systems, Proceedings ICC'97, Montreal, Canada, pp. 715-719, 1997.
15. Gatherer, A., Polley, M., Controlling clipping probability in DMT transmission, Proc. Conf. on Signals, Systems and Computers, Pacific Grove, CA, pp. 578-584, 1997.
16. Kim, D., Stüber, G., Clipping noise mitigation for OFDM by decision-aided reconstruction, IEEE Comm. Letters, Vol.3, No.1, pp. 4-6, 1999.
17. Van Nee, R., De Wild, A., Reducing the peak-to-average power ratio of OFDM, Proceedings VTC'98, Ottawa, Canada, pp. 2072-2076, 1998.
18. May, T., Rohling, H., Reducing the peak-to-average power ratio in OFDM radio transmission systems, Proceedings VTC'98, Ottawa, Canada, pp. 2474-2478, 1998.
19. Van Eetvelt, P., Wade, G., Tomlinson, M., Peak to average power reduction for OFDM schemes by selective scrambling, Electronics Letters, Vol.32, No.21, pp. 1963-1964, 1996.
20. Tellado, J., Cioffi, J., Peak power reduction for multicarrier transmission, Proceedings Globecom'98, Sydney, Australia, 1998.
21. Tellado, J., Cioffi, J., Revisiting DMT's peak-to-average ratio, VDSL ETSI/ANSI TM6: TD08, Antwerp, 1998.
22. Zou, W.Y., Wu, Y., COFDM: an overview, IEEE Transactions on Broadcasting, Vol.41, No.1, pp. 1-8, 1995.
23. Yamauchi, K., Kakiuchi, S., Takebe, A., Sugitomo, M., Digital audio broadcasting receiver development, International Broadcasting Conference IBC'95, Amsterdam, Netherlands, pp. 71-75, 1995.
24. O'Neill, R., Lopes, L.B., A study of novel OFDM transmission schemes for use in indoor environments, Proceedings VTC'96, Atlanta, GA, pp. 909-913, 1996.
25. Gyselinckx, B., Eberle, W., Engels, M., Schurgers, C., Thoen, S., Vandenameele, P., Van der Perre, L., A flexible architecture for future wireless local area networks, Proceedings ICT'98 - International Conference on Telecommunications, Chalkidiki, Greece, pp. 115-119, 1998.
26. Nobles, P., Halsall, F., OFDM for high bit rate data transmission over measured indoor radio channels, IEE Colloquium on 'Radio LANs and MANs', London, UK, pp. 5/1-5, 1995.
27. IEEE 802.11a draft: http://grouper.ieee.org/groups/802/11/
28. Flarion Technologies, http://www.flarion.com/
29. CISCO Systems, http://www.cisco.com
30. Ochiai, H., Imai, H., On the distribution of the peak-to-average power ratio in OFDM signals, IEEE Transactions on Communications, Vol.49, No.2, pp. 282-289, 2001.
31. Ochiai, H., Imai, H., Performance of the deliberate clipping with adaptive symbol selection for strictly band-limited OFDM systems, IEEE Journal on Selected Areas in Communications, Vol.18, No.11, pp. 2270-2277, 2000.
32. Tellambura, C., Improved phase factor computation for the PAR reduction of an OFDM signal using PTS, IEEE Communications Letters, Vol.5, No.4, pp. 135-137, 2001.
33. Jayalath, A., Tellambura, C., Adaptive PTS approach for reduction of peak-to-average power ratio of OFDM signal, Electronics Letters, Vol.36, No.14, pp. 1226-1228, 2000.
34. Hill, G., Faulkner, M., Singh, J., Reducing the peak-to-average power ratio in OFDM by cyclically shifting partial transmit sequences, Electronics Letters, Vol.36, No.6, pp. 560-561, 2000.
35. Jones, D., Peak power reduction in OFDM and DMT via active channel modification, Proceedings of 1999 Asilomar Conference, Pacific Grove, CA, pp. 1076-1079, 1999.
36. Müller, S., Huber, J., A comparison of peak power reduction schemes for OFDM, Proceedings Globecom'97, Phoenix, AZ, pp. 1-5, 1997.
