

HSDPA, HSUPA and MIMO-Aided Cross-Layer-Optimized FDD versus TDD Networking
Tsinghua University, Beijing
Speaker: Prof. L. Hanzo School of Electronics and Computer Science, University of Southampton, SO17 1BJ, UK. http://www-mobile.ecs.soton.ac.uk

A very warm welcome to you all - it is a real pleasure to see so many friendly, familiar faces here. This is a great opportunity for us to do some joint thinking as well as reminiscing but, even more importantly, looking forward to the future, trying to see where our glorious but demanding industry is heading. Let us get going in earnest and, if the topic is of interest to you, please visit our website, where you will find many related papers and book chapters for your reading pleasure. Acknowledgements: I cherish this opportunity to share my views with you and would like to thank you for inviting me. I would also very much like to thank the team back at base in Southampton, UK, with whom I have enjoyed an exhilarating ride - a ride which has been good and is getting better. I would also like to express my gratitude to the financial sponsors of my team - the EPSRC, the EU, the Virtual Centre of Excellence and so on.

Fig. 1.

Evolution of science & its impact on wireless standardization

Figure 1: This roadmap portrays how the diverse set of analogue national mobile phone systems evolved into a joint digital standard, namely the GSM standard, at the digital roundabout. Following this era, the success of GSM was so stunning that even the daring futurists could not have predicted it would be adopted right across the globe. Later on, it was renamed the Global System for Mobile Communications, which just shows the power of standardisation in terms of lining up behind a common worldwide solution. Things have moved on. There were less important national standards, such as the Digital Advanced Mobile Phone System (D-AMPS) and IS-95 in the US, and then there were the Far Eastern proposals. However, these all ended up being either blind alleys in our roadmap or rejoining the mainstream. The march towards Third Generation (3G) Plaza continued during the 1990s - another evolutionary process, since the capabilities of GSM in terms of bit rates were rather limited. It was great and it was fantastic, with practically ubiquitous global roaming. However, any wireless Internet-style services that were trying to exploit GSM, such as the old Wireless Application Protocol (WAP) based services, for example, invariably failed because the bit pipe was simply not thick enough; i.e. the affordable bit rates were not high enough. The 3G research started out with a much more ambitious objective and ended up with about 384 kilobits per second or so as the realistically achievable rate. The Code Division Multiple Access (CDMA) based 3G solution emerged partly from the experience gleaned from IS-95, the Pan-American system. Owing to lack of capacity, or throughput rather, they introduced a three-carrier version of IS-95, namely cdma2000, which then had a commensurately higher throughput - but, nonetheless, it was not a fully-fledged OFDM-style multicarrier system.
Despite the 40-year research history of OFDM, multicarrier cellular solutions are only emerging round about now in terms of the 3rd Generation Partnership Project's (3GPP) Long-Term Evolution (LTE) initiative, for example. Clearly, in recent years multicarrier solutions seem to have found their way into all the wireless standards, in terms of local area networks as well as wide-area-coverage fixed wireless access, such as WiMAX, for example, and so on. What is so beautiful about multicarrier solutions is the incredible flexibility that they are capable of providing. They have all these different parameters which allow us to tweak them and programme them, whatever the circumstances are - regardless of the propagation environment and regardless of the quality of service requirements, and so on. We are gradually approaching the capacity gate on our road map - we are approaching the limits, but these are really only the limits of the single-input/single-output systems. There is a great deal of headroom further beyond that, in terms of multiple-input multiple-output (MIMO) systems. So MIMO street, and capacity gate, and turbo street all join here, at Next Generation Plaza. What we are looking at researching today is this Telepresence Avenue, constituted by the next generation systems, which are capable of providing all the proverbial Swiss-army-knife-type of services.
TABLE I
Standardised video frame dimensions and their typical applications

Resolution | Dimension   | Pixel/sec at 30 f/s | Significance
-----------|-------------|---------------------|--------------------------------------
Sub-QCIF   | 128 x 96    | 0.37 M              | Hand-held mobile video
QCIF       | 176 x 144   | 0.76 M              | Video conf. via public phone network
CIF        | 352 x 288   | 3.04 M              | Consumer tape equivalent
CCIR 601   | 720 x 480   | 10.40 M             | TV
HDTV 1440  | 1440 x 960  | 47.00 M             | Consumer HDTV
HDTV       | 1920 x 1080 | 62.70 M             | Studio HDTV
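The pixel rates of Table I are easy to verify, and they also explain the remark below that even uncompressed QCIF needs roughly a megabyte per second. A minimal sketch; the 12 bit/pixel figure assumes 4:2:0 chroma subsampling, which is my assumption rather than something stated in the talk:

```python
# Back-of-the-envelope check of Table I: raw pixel rates at 30 frames/s,
# and the corresponding uncompressed byte rate at an assumed 12 bit/pixel
# (4:2:0 chroma subsampling).
formats = {
    "Sub-QCIF": (128, 96),
    "QCIF":     (176, 144),
    "CIF":      (352, 288),
    "CCIR 601": (720, 480),
}

for name, (w, h) in formats.items():
    pixels_per_sec = w * h * 30
    mbytes_per_sec = pixels_per_sec * 12 / 8 / 1e6   # 12 bit/pixel assumed
    print(f"{name:9s} {pixels_per_sec / 1e6:5.2f} Mpixel/s  "
          f"~{mbytes_per_sec:4.2f} MByte/s uncompressed")
```

For QCIF this gives 0.76 Mpixel/s and about 1.14 MByte/s uncompressed, matching the "nearly 1 Megabyte/sec" quoted in the text.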

Table I: What do wireless video communications and the related video frame sizes have to do with the world wide wait - or should I perhaps say world wide web? Our research community is endeavouring to provide flawless telepresence-like services, which require the transmission of video and audio, all high quality and crisp, even for high-motion video. However, the bit rates are tremendous and hence the provision of DVB-H services to our mobiles seemed unthinkable until very recently. Yet, it has its commercial applications by now. Even a system with tiny little pictures, like a (176 x 144)-pixel system, would require nearly 1 Megabyte/sec transmission speed if it is uncompressed, so compression is extremely important. I actually brought along a wireless camera, in the hope that I would be able to demonstrate the limitations of current technology but, unfortunately, I could not get onto the wireless internet here at the Royal Academy - so that will be the next lecture! Video communications and WWW demonstrations: To elaborate a little further on the wireless worldwide web - it is not only the wireless transmission of point, shoot & share-style high-quality video clips, but browsing the worldwide web, IP-TV, etc. which is likely to induce the world wide wait scenario, once wireless internet services are rolled out right across the country and people become fond of them and start using them routinely on the move. To illustrate this a little further, let me take you on a trip to Tokyo, Japan, for example. [Demonstrates Google Earth] Imagine the amount of video information that you have to grab from the net when finding the location you are looking for. Does anyone know what this great big green area is in the picture? [The Imperial Palace Garden] Exactly. Let me zoom in further. Can anyone guess the make of the Emperor's car? [Laughter] Perhaps we cannot quite make that out owing to the limited resolution of Google Maps... Shall we then impose further video compression complexity or shall we put up with the world wide wait? I was looking for a map of this region on the net but, unfortunately, I could not find a street map with English script on it - it turns out that this moat surrounding the Emperor's Palace is actually called the Hanzo Moat - believe it or not. My family goes back to Alsace, the often disputed territory between Germany and France, but Hanzo was also a medieval ninja, a heroic secret agent like 007 for us. Video communications history & standards - Figure 2: Here I simply wanted to demonstrate both the power and the transmission-rate requirements of the wireless world wide web. For example, we could go to any arbitrary location in London, but I will just carry on and come back to my little slide show here. Let us look at the power of video compression and highlight to what degree we would benefit from it.
This is a piece of history and I will not be able to tell you how all of this works, given the amount of time available to us, but it really just portrays that we needed about 25 years or so to move from the first video communication standard, namely H.120, to the H.264 and MPEG4 codecs of the present era. These codecs have been developed under the auspices of the International Telecommunications Union (ITU), the Joint Photographic Experts Group (JPEG), the Moving Picture Experts Group (MPEG), and so on. Virtually all standard video codecs tend to use the Discrete Cosine Transform (DCT) for compression and then they combine this with high-compression Huffman coding or other entropy coding methods, which are of course extremely vulnerable to transmission errors. The video therefore has to be protected extremely well, and all of this has very interesting and grave ramifications with regard to the lessons of the Shannonian source and channel separation theorem. The Shannonian source and channel separation theorem was really defined for transmission over the memoryless Gaussian channel, which exhibits random error statistics. Furthermore, it assumed potentially high-complexity, high-latency lossless compression using entropy coding, although all the above-mentioned standardised codecs exploit the psycho-visual and psycho-acoustic properties of the human eye and ear. Hence they constitute high-compression, low-delay lossy multimedia codecs maintaining lip-synchronisation. The individual bits representing the original video sequence inflict different subjective or perceived video degradations and hence they exhibit different sensitivities to their corruption. I will not be able to talk much about this, but it is a whole new research area, all culminating very recently in the definition of the MPEG-4 and H.264 standards, which require the joint design of source and channel coding.
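The DCT-based energy compaction that all of these standard codecs rely on can be illustrated in a few lines. This is a generic sketch of an 8x8 2-D DCT with a crude keep-the-large-coefficients quantiser, not the actual H.264 transform or entropy coder; the test block and the threshold are invented:

```python
import numpy as np

# Energy compaction of the 2-D DCT: transform a smooth 8x8 block, drop the
# tiny coefficients (a crude stand-in for quantisation), and reconstruct.
# Entropy (Huffman) coding of the few surviving coefficients is what makes
# the resulting bit stream so error-sensitive.
N = 8
# Orthonormal DCT-II matrix
C = np.array([[np.sqrt((1 if k == 0 else 2) / N) *
               np.cos(np.pi * (2 * n + 1) * k / (2 * N))
               for n in range(N)] for k in range(N)])

# A smooth test block, typical of natural-image content
x = np.fromfunction(lambda i, j: 100 + 10 * np.cos(np.pi * j / 8), (N, N))

X = C @ x @ C.T                  # forward 2-D DCT
mask = np.abs(X) >= 1.0          # keep only the large coefficients
x_hat = C.T @ (X * mask) @ C     # inverse 2-D DCT

print("coefficients kept:", int(mask.sum()), "of", N * N)
print("max reconstruction error:", float(np.abs(x - x_hat).max()))
```

Most of the block's energy ends up in a handful of coefficients, which is precisely what makes the subsequent entropy coding so effective.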
Interestingly, the complexity has gone up almost exponentially over the years - closely following Moore's law, of course, although that was originally surmised for predicting the pace of development in microelectronics. Nonetheless it is also true to a degree for the evolution of digital signal processing in wireless communications. The sort of feature which differentiates, for example, the newer codecs from the older ones is the motion compensation - the recent ones use a very high motion-compensation resolution of one-eighth of a pixel. It is a computationally demanding algorithm. A glimpse of post-Shannon history - Figure 3: Many of the advances achieved in wireless communications may be attributed in one way or another to those in channel coding and the turbo detection principle, as exemplified by joint coding and modulation, space-time coding, turbo equalisation, synchronisation, multiuser detection, etc. This illustration provides a whistle-stop tour of how information theory developed over the years, going back to Shannon's classic paper - which is probably the best-cited paper in our area. The left-hand side indicates the development of block codes, while the right-hand side shows the advances in convolutional codes. All I wanted to say here about the evolution of coding, without being able to explain the philosophy of each individual code as they developed over the years, is that, for example, the low-density parity-check (LDPC) codes invented in 1963 by Gallager in his PhD thesis were interestingly neglected until their

[The original figure is a timeline: 1984 CCITT H.120; 1986 JPEG codec; 1988 MPEG1 research commenced; 1989 H.261 first draft; 1990 MPEG2 research commenced; 1992 MPEG1 completed; 1993 MPEG4 concept conceived; 1994 H.263 research commenced; 1996 H.263 completed; 1998 MPEG approved; 2003 H.26L renamed as H.264.]

Fig. 2. Video communications history & standards

renaissance during the early 1990s. It is actually stunning how this ingenious code family could have been overlooked, and it was probably only the interest in avoiding the turbo coding patent of Claude Berrou which then drew the community's attention to LDPC codes. Recently a plethora of new algorithms have been proposed, such as, for example, the so-called generalised low-density parity-check codes, which are capable of incorporating arbitrary low-complexity constituent codes for creating powerful iteratively detected systems - a philosophy reminiscent of turbo codes. So, returning to the iterative detection era of Figure 3 - Claude Berrou's pioneering conference paper came out in 1993, followed by the journal version in 1995. This sparked off a whole new era of turbo detection research - really amazingly powerful schemes. According to these lessons, what we have to do is jointly estimate the channel, jointly synchronise and detect the data. Nobody is interested, as such, in synchronisation for the sake of it, or channel estimation for the sake of it - however, high-integrity data detection is infeasible without synchronisation, channel estimation and equalisation, which have to be carried out jointly and iteratively - an extremely powerful detection paradigm. Turbo detection in this broader sense became even more powerful and more important in the MIMO era, because we have to estimate a multiplicity of channels - just imagine, you have a 4 x 4 MIMO system, or an 8 x 8 MIMO system, which would require the estimation of 64 channels - a tremendous task, which is also prone to detection error propagation. Channel capacity - Figure 4: Just to give you a glimpse of the post-2000 history, let us move on. Briefly taking a step back in history again to Shannon, Figure 4 portrays the classic Shannon-Hartley law - the best ever performance we could hope to achieve for transmission over AWGN channels.
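The Shannon-Hartley law just mentioned, together with the HSDPA-style adaptive modulation discussed next, can be sketched as follows; the AQAM switching thresholds below are invented for illustration and are not taken from any standard:

```python
import math

# Shannon-Hartley capacity per unit bandwidth: C = log2(1 + SNR)
def capacity_bps_per_hz(snr_db: float) -> float:
    return math.log2(1.0 + 10.0 ** (snr_db / 10.0))

# Illustrative AQAM mode switching in the spirit of HSDPA link adaptation:
# pick the highest-throughput mode whose (invented) SNR threshold is met.
MODES = [("BPSK", 1, 0.0), ("4QAM", 2, 6.0),
         ("16QAM", 4, 12.0), ("64QAM", 6, 18.0)]

def select_mode(snr_db: float) -> str:
    chosen = "No Tx"
    for name, bits_per_symbol, threshold_db in MODES:
        if snr_db >= threshold_db:
            chosen = name
    return chosen

for snr in (0, 8, 14, 25):
    print(f"{snr:3d} dB: C = {capacity_bps_per_hz(snr):.2f} bit/s/Hz, "
          f"mode = {select_mode(snr)}")
```

The fixed-mode schemes of Figure 4 correspond to locking `select_mode` to a single entry; the adaptive scheme tracks the instantaneous SNR instead.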
Naturally, this level of performance is not readily achievable for transmission over the more hostile family of wireless channels. Nonetheless, as you see in the figure, the adaptive modulation aided High Speed Downlink Packet Access (HSDPA) mode of the 3G systems may come fairly close to it - within about 5-7 dB - provided that you do not require a very high transmission integrity. If you do, invariably you will be further and further away from these capacity limits. You can inch closer again if you can afford using more turbo iterations, which potentially requires more battery power, and if you can afford more delay in terms of interleaving. How close we may approach capacity depends on a complex interplay of the system parameters and the propagation environment, which I will elaborate on in greater detail as we go along. Suffice it to say that adaptivity is paramount, because the wireless channel is so time-variant that no fixed-mode transceiver could ever be expected to perform adequately whilst providing a constant grade of service (GOS). In fact, the GOS will always be characterised with the aid of a specific distribution. Let us now change direction and consider the new era of evolved signal processing required for sophisticated multiuser transmission and cooperative communications. Averting the WWW: MIMOs, cooperative communications and transmit processing: Moving forward to the future, the sort of things I see in terms of powerful enabling techniques are, first of all, the employment of MIMOs, co-operation and transmit pre-processing. Our community has carried out an awful lot of research designing powerful turbo detectors and sophisticated receivers, but we could actually shift much of the complexity to the base station, if you like, lumbering it with more processing - in the interests of ending up with implementationally simple mobile stations. For example, mobiles could collaborate and co-operate with each other because, after all, they all receive

[The original figure charts the evolution of block codes (left) and convolutional codes (right) on a common time axis, starting from the Shannon limit (1948): Hamming codes (around 1950); Elias' convolutional codes; BCH codes, Reed-Solomon codes, the PGZ algorithm, LDPC codes, the Berlekamp-Massey algorithm and RRNS codes (around 1960); the Viterbi algorithm; the Chase algorithm, Bahl's MAP algorithm and Wolf's trellis block codes (1970s); Ungerboeck's TCM and Hagenauer's SOVA algorithm (1980s); Koch's Max-Log-MAP algorithm, Berrou's turbo codes, Robertson's Log-MAP algorithm and TTCM, Pyndiah's SISO Chase algorithm, Hagenauer's turbo BCH code, Nickl's turbo Hamming code, Alamouti's space-time block code, Tarokh's space-time trellis code and Acikel's punctured turbo code (1990s to 2000).]

Fig. 3. A glimpse of post-Shannon channel coding history

each other's downlink signal from the base station. If the traffic is not extremely high, for example, then these mobiles have the chance to cooperatively exchange that information during the unallocated time-slots. Once again, this requires further study as to exactly how much side information we have to set aside for supporting transmit preprocessing, and under what conditions. This changes the whole broad picture in information theory and this is a very grave question. If you were a PhD student and you could crack this information-theoretic problem in the context of realistic scenarios, you would become as highly acclaimed as Gallager. Moving on, co-operative systems are important, but so are transmit pre-processors. Very simple manifestations of this are well known, for example, from Nyquist filtering. We split the Nyquist filter into a square-root Nyquist filter at the transmitter and a square-root Nyquist characteristic at the receiver - that is the simplest known example. Matched filtering is quite similar, but we could also think of other transmit preprocessing solutions, such as Tomlinson-Harashima precoding, from the University of Plymouth - we should celebrate these great British achievements. Instead of equalising the received signal at the receiver, provided the channel is known with the advent of accurate long-range prediction, we can pre-equalise it at the transmitter. The base station's (BS) complexity is less limited than that of the mobile station (MS). Of course, there are crest-factor and amplifier linearisation problems and all sorts of other related complex issues to deal with, which I have to gloss over owing to our limited time. Hence I simply limit myself to highlighting the

[The original figure plots the normalised capacity (bit/s/Hz) against Eb/N0 (dB), comparing the HSDPA-style AQAM scheme and the fixed-mode BPSK, 4QAM, 16QAM and 64QAM schemes at mean BERs of 1% and 0.01% against the Shannon limit over the TU channel.]

Fig. 4. Channel capacity upper bound of HSDPA-style AQAM and fixed modulation schemes over the COST 207 TU Rayleigh fading channel for BER = 1% and BER = 0.01%.

importance of transmit pre-processing. Suffice it to say that there is a whole new raft of research problems in designing these multi-user transmitters. As a simple illustration, I may argue that, provided I know the angular location of a particular receiver, I can - for example, with the aid of beamforming - directly cast the signal to the mobile, whilst avoiding the contamination of other mobiles' signals. Of course, you would need a relatively high-order transmit beamformer at the BS. Transmit preprocessing may even become realistic in ad hoc networks, where we can accommodate transmit beamformers in the back plate of a laptop, provided that the beamformer has a sufficiently high order to compensate for the potential propagation losses of the typically higher carrier frequencies of WLANs. We could go on about this topic endlessly, because it is very rich in terms of open problems - jointly designing, for example, transmit- and receive zero-forcing multi-user schemes, transmit- and receive Minimum Mean Squared Error (MMSE) schemes, transmit- and receive eigen-beamformers, etc. - but we obviously do not have the luxury of time. Let us therefore consider in Figure 5 how we can circumvent the limitations of classic Shannonian information theory in the MIMO era. Capacity of MIMO systems and their reconfigurability - Figure 5: We briefly considered the classic Shannon-Hartley law, but what is the equivalent of that for MIMO (multiple-input, multiple-output) systems? The idea is that you could create several individual bit streams from the individual transmit antennas to the mobile station. Every time you increase the number of transmitter and receiver antennas, you get an extra degree of freedom, and so the MIMO capacity curve seen in the illustration increases linearly upon increasing the number of transmit and receive antennas - more precisely, as a linear function of the smaller of the number of transmit and receive antennas.
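The claimed linear capacity growth with the smaller of the two antenna counts can be checked with a short Monte-Carlo estimate of the ergodic MIMO capacity over i.i.d. Rayleigh channels. Note that this is the unconstrained Gaussian-input capacity, not the 16QAM-constrained DCMC capacity of Figure 5:

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte-Carlo estimate of the ergodic MIMO capacity
#   C = E[ log2 det( I + (SNR/Nt) H H^H ) ]
# over i.i.d. Rayleigh channels, illustrating the roughly linear growth
# with min(Nt, Nr).
def ergodic_capacity(nt: int, nr: int, snr_db: float,
                     trials: int = 2000) -> float:
    snr = 10.0 ** (snr_db / 10.0)
    total = 0.0
    for _ in range(trials):
        h = (rng.standard_normal((nr, nt)) +
             1j * rng.standard_normal((nr, nt))) / np.sqrt(2.0)
        m = np.eye(nr) + (snr / nt) * h @ h.conj().T
        total += np.log2(np.linalg.det(m).real)
    return total / trials

for n in (1, 2, 4):
    print(f"{n}x{n} MIMO at 10 dB: "
          f"{ergodic_capacity(n, n, 10.0):.2f} bit/s/Hz")
```

Doubling the antenna count at both ends roughly doubles the capacity, in line with the min(Nt, Nr) scaling described above.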
Clearly, there are limitations - if you migrate, for example, to higher frequencies, then the wavelength is reduced and hence you can potentially accommodate more appropriately phased λ/2-spaced antennas in the back-plane of a laptop, but that move requires a good deal more research into the propagation aspects. The curves labelled as Diversity in the figure represent the capacity when designing for achieving the maximum possible diversity gain - in other words, when designing for attaining integrity improvements. Hence this design requires a relatively low signal-to-noise ratio. By

• The full-multiplexing-gain system has a higher asymptotic capacity as a benefit of its multiplexing gain.
• The gap between the capacity curves of the Rayleigh fading channel and the AWGN channel is narrower for the full-diversity system as a benefit of its diversity gain.

[The original figure plots the capacity C (bit/symbol) against the SNR (dB) for Nt = Nr = 2, comparing D = 2 and D = 4 dimensional 16-QAM designs for both the diversity-oriented and the multiplexing-oriented configurations, over the Rayleigh fading and AWGN channels.]

Fig. 5. Capacity of MIMO systems: the capacity of the D = 2 and D = 4 dimensional 16QAM-based MIMO Discrete-Input Continuous-Output Memoryless Channel (DCMC) over the uncorrelated Rayleigh-fading channel and the AWGN channel; Ng and Hanzo, IEEE TVT, 2006.

contrast, the two capacity curves distinguished by the bold dots and printed at the top of the illustration indicate that a higher signal-to-noise ratio is required, but we benefit from a higher MIMO throughput - so there is always this underlying diversity versus multiplexing gain trade-off. But can we perhaps combine these two benefits? The answer is yes - however, it requires a good deal of further research, once again. Reconfigurability is already on its way in, both in terms of multi-standard and HSDPA-style adaptive operation, as well as in terms of providing multi-rate operation and diverse qualities of service. You could therefore imagine the envelope of these two curves near their cross-over point in the figure being switched from operating in a space-time coded mode to a BLAST-type spatial multiplexing mode, which would allow you to use the available transmit antennas differently, exploiting their full potential, regardless of the instantaneous propagation and traffic conditions. Hence it may be instructive to think about it in this way: if the receiver already benefits from fourth-order diversity, then it is a waste of resources to use the antennas for achieving further transmit diversity, because you are already very close to the Gaussian limits - fourth-order receiver diversity gets you very close to that. You might as well double up in terms of the achievable throughput. There are many related aspects and this whole new area is referred to as multi-functional MIMO research - a burgeoning area. We recently found the related capacity formulae, but I have decided not to use any analytical formulae here, since I would like to cover the broader aspects of the field.
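Returning for a moment to the transmit beamforming mentioned earlier, and anticipating the Type I MIMOs below, here is a minimal sketch of a λ/2-spaced uniform linear array steered towards a desired user; the array size and the angles are invented for illustration:

```python
import numpy as np

# Narrowband transmit beamforming with a lambda/2-spaced uniform linear
# array: match the weights to the desired user's steering vector and
# compare the array gain towards the desired and an interfering direction.
def steering_vector(n_elements: int, angle_deg: float) -> np.ndarray:
    # lambda/2 element spacing gives a phase step of pi*sin(theta)
    n = np.arange(n_elements)
    return np.exp(1j * np.pi * n * np.sin(np.radians(angle_deg)))

N = 8
w = steering_vector(N, 20.0) / np.sqrt(N)   # weights matched to the user

gain_desired = abs(np.vdot(w, steering_vector(N, 20.0)))
gain_interf = abs(np.vdot(w, steering_vector(N, -45.0)))
print(f"array gain towards the desired user at 20 deg: {gain_desired:.2f}")
print(f"array gain towards an interferer at -45 deg:   {gain_interf:.2f}")
```

The matched direction enjoys the full coherent gain of sqrt(N), while off-beam directions are strongly attenuated - the angular filtering of Figures 7 and 8.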

Type I: Beamforming
Type II - SDMA: Spatial Division Multiple Access
Type III - SDM: Spatial Division Multiplexing
Type IV - Space-Time Coding: STTC, STBC and STS

Fig. 6. Evolution of the four MIMOs in wireless communications

Evolution of the four MIMOs in wireless communications - Figure 6: We briefly discussed the space-time coding saga and mentioned the spatial division multiplexing story. These are what we may refer to as Type III and Type IV MIMOs. There is such a lack of clarity in the whole community - we keep talking about MIMOs, but whose MIMOs are we talking about? There is in fact still no authoritative taxonomy paper in the literature which would set out unambiguously that a particular solution belongs to this or that category. The four types of solution have completely different design objectives. Figure 6 portrays the brief roadmap along which we shall expound further, and we will also consider the Type I and II MIMOs in a little more detail, although obviously glossing over the hows and just trying to look at the benefits and evolution of these different solutions. Beamforming and multipath diversity - Figures 7 and 8: In beamforming, the terminology is fairly plausible - we create an angularly selective beam in order to cast the information to the receiver. We do this with the aid of appropriately phased antenna array elements, where the elements tend to be λ/2-spaced, although we are not restricted to this - you could have not only linear arrays but also arbitrary array geometries, although again, linear arrays are the simplest and they are well documented in the classic literature.

Fig. 7. Type I MIMOs: Beamforming - indicating how an antenna array can support many users on the same carrier frequency and timeslot with the advent of angular filtering; Hanzo, Blogh and Ni, Wiley and IEEE Press, 2007

Another benefit of beamforming emerges from the slightly more refined diagram of Figure 8. The base station is capable of optimally combining the different multipath components. If, for example, the user load is not too high, then it is quite feasible to coherently combine these multipath beams at the base station, creating a null towards the interfering mobiles, as you see here, while creating a maximum towards the desired mobile. The interesting open problems are in the area of so-called rank-deficient systems. Forgive me for this very technical terminology, but it is important to mention where the open problems are. In this scenario you have a significantly higher number of transmitter antennas than receiver antennas. Only very sophisticated, non-linear receivers are capable of operating under such circumstances. Once again, in addition to the multifunctional MIMOs, this is an important area for further research. Space-time processing aided MIMO scenarios - Figure 9: This figure shows the same four MIMOs - namely Space-Time Coding (STC), Spatial Division Multiplexing (SDM), which is also often termed BLAST, as well as beamforming and Spatial Division Multiple Access (SDMA) - classifying them from a slightly different perspective. Rather different MIMO designs emerge when we consider, for example, point-to-point or point-to-multipoint communications - when we broadcast to a multiplicity of sensors or receivers. They require special design attention. The UpLink (UL) and DownLink (DL) are also rather different. Let me just mention one particular problem. When we consider, say, spatial division multiplexing, then per definition we are trying to assign the total throughput to a single user.
The difficulty with this is that the different antenna elements all experience very similar propagation environments - in other words, near-identical Channel Impulse Responses (CIRs). Hence the associated multi-antenna signal separation problem is difficult - more difficult than when you consider spatial division multiple access in the uplink. For the sake of illustration, when Professor Kumar is sitting in this corner and Professor Zhisheng Niu is sitting at the back, their impulse responses are sufficiently different to allow me to separate their transmitted signals much more easily than when they are sitting next to each other - think of their simultaneous speech utterances as an analogy. A rich novel research area is that of distributed MIMOs, where we treat the individual mobiles as cooperating antenna elements - again, provided that we are able to set aside some capacity for their communications, especially near the cell edges, for example, where the received signal quality fluctuates most widely. Evolution of spatial division multiplexing detection (SDMD/MUD) - Figure 10: Again, unfortunately we have insufficient time to delve into all of the different and often rather exotic transmit pre-processors as well as receivers, but I will try and just mention the classic MMSE detector - the cheap and cheerful, if you


Fig. 8. The multipath environments of both the uplink and downlink, showing the multipath components of the desired signals, the line-of-sight interference and the associated base station antenna array beam patterns; Hanzo, Blogh and Ni, Wiley and IEEE Press, 2007

like, and the complex Ferrari-style solution when cost does not matter, namely the Maximum Likelihood (ML) type of receivers. As an illustration, try to imagine the full-search-based complexity of the ML detector when we employ 64QAM transmissions, as in the toolboxes of WiFi and WiMAX. Then we have 6-bit-per-symbol transmissions and, with eight antennas, we have 64 to the power of 8 possible combinations of the superimposed 64QAM transmitted symbols, and therefore an exhaustive search is just not on. This is where, for example, Optimum Hierarchy Reduced Search Algorithms (OHRSA), such as sphere decoders, come into play. Genetic algorithm assisted minimum bit error rate (MBER) multiuser detection - Figure 11: This is a powerful but rather specific research highlight in the field of detection, hence I will not linger much on this. However, I will briefly come back to genetically inspired detection algorithms. Do not forget that the Darwinian motto spans the entire spine of the presentation. So I will say a little more about genetically inspired, Minimum Bit Error Rate (MBER) type Multi-User Detection (MUD), for example in the context of SDMA MIMOs. Again, this is a somewhat specific piece of research, so if you are not a researcher in this field, please look away now! Just to tell you what the underlying philosophy of SDMA systems is, we have already mentioned Professor Niu's and Professor Kumar's different channel impulse responses owing to their different positions in the room. For example, those in the figure are the impulse responses measured for a 2 x 2 MIMO, where there are four different channels. In the olden days, we used to say that we used a unique, user-specific signature sequence, as in CDMA, and differentiated users in this way. Obviously, the channel impulse responses are convolved with these transmitted user signatures, and hence the orthogonality of these spreading codes is typically destroyed.
Hence we need a complex multi-user detector in order to mitigate these effects.
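The two extremes just mentioned can be made concrete in a few lines of Python: the ML detector's search space for the 8-antenna 64QAM example, and a toy linear MMSE multi-user detector for a two-user SDMA uplink. The channel matrix and noise level below are invented, well-conditioned values chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# ML detection: an exhaustive search over all combinations of the
# superimposed transmitted symbols, i.e. 64**8 candidates for 64QAM
# and 8 transmit antennas.
print(f"64QAM, 8 antennas: {64 ** 8:.1e} candidate symbol vectors")

# Linear MMSE MUD: a single matrix inversion. Toy two-user SDMA uplink
# with BPSK users and a 2-antenna base station.
H = np.array([[1.0, 0.4],
              [0.3, 1.0]])                   # user-to-antenna channel gains
noise_std = 0.05
s = rng.choice([-1.0, 1.0], size=(2, 500))   # the two users' BPSK symbols
y = H @ s + noise_std * rng.standard_normal((2, 500))

W = np.linalg.inv(H.T @ H + noise_std ** 2 * np.eye(2)) @ H.T
s_hat = np.sign(W @ y)
print("symbol errors out of 1000:", int(np.sum(s_hat != s)))
```

With this benign, well-separated channel the MMSE MUD recovers both users error-free; the sophisticated OHRSA and sphere-decoding receivers earn their keep when the channel matrix becomes ill-conditioned or rank-deficient.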

[The original figure is a tree of space-time processing applications: point-to-point links (BLAST/SDM and STC) versus point-to-multipoint links, the latter split into the downlink (beamforming) and the uplink (SDMA), together with the associated detection methods (D-BLAST, SDMD and MUD).]

Fig. 9. Evolution of space-time processing and the four MIMOs; Akhtman and Hanzo, Chapter 10 in Hanzo & Keller: OFDM and MC-CDMA, Wiley, 2006

[The original figure is a tree classifying SDMD/MUD schemes into linear detection (LS and MMSE) and non-linear detection (ML, SIC, GA-MMSE, OHRSA-ML, Log-MAP, OHRSA-Log-MAP and SOPHIE).]

Fig. 10. Evolution of Spatial Division Multiplexing Detection (SDMD/MUD); Akhtman and Hanzo, Chapter 10 in Hanzo & Keller: OFDM and MC-CDMA, Wiley, 2006

You might as well say that these impulse responses are actually sufficiently unique for us, without their convolution with any spreading code - so could we just use these unique, user-specific impulse responses for differentiating the SDMA users, provided that we estimate them sufficiently accurately? And we should not write off the complexity of this MIMO channel estimation problem - we have already mentioned that we have 16 or 64 channel impulse responses to estimate. When you transform these CIRs to the frequency domain - since we are considering here a 3GPP LTE-style multiuser SDMA OFDM modem - we arrive at the magnitude plotted as a function of the frequency in the illustration. You can see that it is indeed sufficiently unique for us to differentiate the users on this basis. This is an important area of research to look at and it is becoming hot in the 3GPP Long-Term Evolution project, for example. MSE and BER surface - Figure 13: Let us continue by considering the MUD design for the 3GPP Long-Term Evolution (LTE)-style OFDM modem. So where is the classic


[Figure 11 plots: amplitude versus symbol index for (a) CIR 1: user 1, antenna 1; (b) CIR 2: user 1, antenna 2; (c) CIR 3: user 2, antenna 1; (d) CIR 4: user 2, antenna 2]

Fig. 11. Evolution from CDMA to SDMA: Four different channel impulse responses (CIR) recorded at the two receiver antennas for the two users supported; M.Y. Alias, S. Chen and L. Hanzo, Chapter 12 in Hanzo & Keller: OFDM and MC-CDMA, Wiley, 2006

[Figure 12 plots: magnitude versus subcarrier index for (a) CTF 1: user 1, antenna 1; (b) CTF 2: user 1, antenna 2; (c) CTF 3: user 2, antenna 1; (d) CTF 4: user 2, antenna 2]

Fig. 12. Channel Transfer Functions (CTF) for the CIRs seen in Figure 11 (a) CTF 1, (b) CTF 2, (c) CTF 3, and (d) CTF 4; M.Y. Alias, S. Chen and L. Hanzo, Chapter 12 in Hanzo & Keller: OFDM and MC-CDMA, Wiley, 2006
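The CIR-to-CTF relationship between the two figures is simply a discrete Fourier transform. A minimal sketch, with four invented sparse CIRs (the tap delays and gains are illustrative placeholders, not the measured values of the figures):

```python
import numpy as np

N = 128  # number of OFDM subcarriers, as in the 128-subcarrier example

# Two users x two receiver antennas: four sparse CIRs given as {delay: gain}.
cirs = {
    ("user1", "ant1"): {0: 0.9, 5: 0.4, 11: 0.2},
    ("user1", "ant2"): {2: 0.8, 7: 0.5, 13: 0.3},
    ("user2", "ant1"): {1: 0.7, 9: 0.6, 20: 0.3},
    ("user2", "ant2"): {3: 0.9, 12: 0.3, 25: 0.2},
}

ctfs = {}
for key, taps in cirs.items():
    h = np.zeros(N)
    for delay, gain in taps.items():
        h[delay] = gain
    ctfs[key] = np.fft.fft(h)   # frequency-domain channel transfer function

# Distinct CIRs yield distinct magnitude profiles across the subcarriers -
# the "signature" an SDMA MUD can exploit instead of a spreading code.
```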


[Figure 13 surfaces: MSE and log10(BER) at the array output, plotted over the real parts Re{w1} and Re{w2} of the two MUD weights]
Fig. 13. MSE and BER surface. The error surfaces at the receiver's output were calculated for five BPSK modulated sources having equal received power and communicating over AWGN channels at SNR = 10 dB. The imaginary part of both weights of the 2-element array was fixed. Is MBER detection the next stage of evolution?

versus the unorthodox trade-off? Obviously, it must be the associated complexity and that is the snag here. At the left we see the mean squared error surface recorded at the output of a two-user spatial division multiple access scheme - as a function of the MUD array weights. So the obvious minimum of the MUD MSE surface is at the bottom of the related paraboloid. To find the optimum MUD weights we have to set the derivatives of the MSE at the output of the array with respect to the MUD weights to zero. A realistic real-time solution may be initialised to commence its search for the MMSE MUD weights from a compromise weight set and would then adjust the weights according to a specific weight step-size in the direction of minimising the MSE, until a near-optimum set is found. That is all very well, but here comes the snag. If you look at the actual bit error rate surface recorded in the figure as a function of the array weights, you can see that the minimum of the BER surface is at a completely different point from the MMSE point. This is somewhat surprising and the reviewers of our related papers often give us a hard time, saying, "Aren't their minima directly linked?" Well, they are if and only if the MUD's output is Gaussian distributed, which is not the case in the presence of a single dominant interferer, for example - often the situation when using realistic finite-delay, finite-precision power-control schemes. For this reason it might be a better idea to estimate the derivative of the BER surface and slide down the BER surface to its minimum, instead of the MSE surface. Here comes the cardinal question: which is the more influential in an optimisation problem? Is it the choice of the objective function that we are optimising, or is it the choice of the actual optimisation procedure? They are both influential, but the choice of the objective function is often more important.
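That stepwise descent on the MSE surface can be sketched in a few lines of LMS-style adaptation. The two-user mixing matrix below is hypothetical and merely creates cross-talk between the array elements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-user scenario: desired user s[0], interferer s[1], two-element array.
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])   # hypothetical mixing matrix (noiseless, for clarity)

w = np.zeros(2)              # MUD array weights, starting from a compromise set
mu = 0.02                    # weight step-size
errs = []
for _ in range(2000):
    s = rng.choice([-1.0, 1.0], size=2)   # BPSK sources
    x = A @ s                              # array snapshot
    e = s[0] - w @ x                       # error w.r.t. the desired user
    w += mu * e * x                        # one step down the MSE surface
    errs.append(e * e)

mse_start = float(np.mean(errs[:100]))
mse_end = float(np.mean(errs[-100:]))      # shrinks as w approaches the MMSE set
```

This only finds the MSE minimum, of course - the point of the discussion above is that the BER minimum may lie elsewhere.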
We have therefore opted here for a better objective function than the MSE, which is unorthodox and not as well documented in the classic literature as MSE optimisation. So how do we find the MBER MUD weights? For example, you could use a genetic algorithm to find this MBER point. Do not forget that, in general, the surface could be multimodal, with lots of local minima in it. Thus, the global minimum is not actually easy to find in an interference-infested multiuser environment. Graduation ball, 1976 - initialisation of a genetic algorithm - Figure 14: Moving on, I can tell you


Fig. 14.

Graduation Ball, 1976 - Initialisation of a Genetic Algorithm

that I had to request clearance for the next couple of pictures from a higher authority - namely my wife. This story starts to become extremely personal, because it casts my mind back to the Technical University of Budapest in 1976. This was the graduation ball - a fancy dress ball, where I was acting as Tarzan, and the young lady behind me in this photo became my wife. 1982 - Genetically enhanced offspring - Figure 15: This is a long saga and I will not linger on it. A couple of years later, we ended up with a genetically enhanced offspring, namely Lajos the 2nd. He went on to inherit some of our motivation in terms of mathematics, physics and admiration for the beauty of engineering. Fancy dress ball 2004 - genetically enhanced offspring - Figure 16: So there was another fancy dress ball at the University of Southampton in 2004, namely the Graduation Ceremony, where I witnessed this young man graduating. He obviously combines the best of my wife's genes with my love for my profession, and hence he is likely to become more successful than either of us. So, that was my personal slant on the hitchhiker's guide to genetic algorithms (GAs). A technical portrayal of GAs - Figure 17: I could also guide you through the slightly less intriguing technical perspective on GAs, which is portrayed in Figure 17, but again, you may not want me to plough through this at this late hour of the evening? The punch line is that we start by genetically combining 8-bit patterns, each representing a specific MUD weight for example, in the interests of finding the optimum MUD weight set following a number of genetic operations. We may commence the search from the MMSE solution and may employ hill-climbing to tentatively invert all the bits of the MMSE weights in an attempt to create a so-called high-fitness initial GA population.
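As a toy-scale sketch of the idea - 8-bit-encoded weights evolved by a GA. The quadratic objective below is merely a smooth placeholder standing in for the BER cost surface, and the parameter values are illustrative:

```python
import random

def decode(bits, lo=-2.0, hi=2.0):
    """Map an 8-bit pattern to a real-valued MUD weight in [lo, hi]."""
    return lo + (hi - lo) * int("".join(map(str, bits)), 2) / 255.0

def ga_minimise(objective, pop_size=20, generations=30, p_mut=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(8)] for _ in range(pop_size)]
    best = min(pop, key=lambda c: objective(decode(c)))
    for _ in range(generations):
        new_pop = [best[:]]                          # elitism: keep the fittest
        while len(new_pop) < pop_size:
            # tournament selection of the mother and father individuals
            mother = min(rng.sample(pop, 3), key=lambda c: objective(decode(c)))
            father = min(rng.sample(pop, 3), key=lambda c: objective(decode(c)))
            x = rng.randrange(1, 8)                  # single-point cross-over
            child = mother[:x] + father[x:]
            child = [b ^ (rng.random() < p_mut) for b in child]   # mutation
            new_pop.append(child)
        pop = new_pop
        best = min(pop, key=lambda c: objective(decode(c)))
    return decode(best)

# Placeholder objective: a real BER-style surface may be multimodal.
w = ga_minimise(lambda v: (v - 0.7) ** 2)
```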
Then we may use genetic cross-over operations to improve the mother and father individuals, for example taking the first few bits from the mother and the rest from the father individual, representing the 8-bit array-weight values. The language tends to become politically incorrect, with all sorts of genetically inspired terms, such as cross-over, mutation, elitism and so on. These random guided GA-aided optimisation techniques commence from a potentially random position,


Fig. 15.

1982 - Genetically Enhanced Offspring

Fig. 16.

Fancy Dress Ball 2004 - Genetically Enhanced Offspring

and step by step combine the meritorious MUD weight sequences into better and better weight sets - or genes, if you like. There is rich experimental evidence that biologically inspired random guided algorithms have a tremendous power in terms of capturing the maximum likelihood solution, without the full complexity of the exhaustive ML search. Ant-colony based random optimisation techniques are equally important, for example. I do not have the time to discuss their pros and cons here in great detail - but simulated annealing and particle filtering are similarly powerful, offering a range of open research problems. GA-aided SDMA-MUD performance - Figure 18: Allow me now to show you one or two performance curves. The square legends indicate the classic, implementationally low-complexity MMSE solution, while the circles and triangles indicate the genetic algorithm aided and the conjugate gradient assisted MBER MUD algorithms. Please observe that there is quite a substantial gap of 15 dB or so between them and, once again, the ultimate limit of a single-user system communicating over the Gaussian channel is denoted by the dashed line - the ultimate dream, where we could hope to get to. Type III MIMOs: SDM ML sphere-detector for rank-deficient scenarios - I will now refrain from lingering on the associated MUD complexity. Instead, I would like to show you a useful WWW facility at http://www-mobile.ecs.soton.ac.uk/newcomms/?q=research/anatron/ber which indicates the power of the wireless internet again - anywhere, anytime... On the one hand, we try to compress information and convey


[Figure 17 flowchart: Start GA - Initialisation - Evaluation - Selection - Crossover - Mutation - Evaluation - is the termination criterion met? If no, loop back to Selection; if yes, the decision is taken, the binary string is converted to weight values and the GA finishes. The probability-of-error equation serves as the objective function.]
Fig. 17.

A technical portrayal of GAs: Flowchart of the BER optimisation using a GA.

it as efficiently as we possibly can, but the convincing convenience of the Wireless Internet is expected to popularise it further, which is then likely to result in an increased teletraffic on the net. Google search, Google Earth, Google Scholar, Multimap, currency converter, video and audio podcasts, news, trains, planes - anywhere, anytime on the WWW. [Demonstrating website] I will minimise this slide-show for a moment and go to our website at the University of Southampton. We will evaluate the performance of a few SDMA-type MIMOs together. Would anyone care to specify a particular MIMO? Dear Professors, would you care to tell me your favourite modulation scheme? [It must be QAM, I guess]. So would you be happy with 16 QAM? [Yes] Let us consider the 16 QAM option then. Do you want a fading channel or a Gaussian channel? [Fading] Fading - let us be vicious! Signal bandwidth - 3 MHz is fine, I presume. Interleaver depth - 10 OFDM symbols. What about the signal to noise ratio range? I guess we aim for values up to 20 dB, that would do. Let us try to specify the choice of the MIMO. At the moment we have a single-transmitter/single-receiver system, and I suggest that we plot the related BER curve to start with - and this shows the attainable performance. Let us now continue by considering a 2 x 2 MIMO. Could we have a show of hands: how many of you think that the BER performance will improve - despite doubling the effective throughput? [Show of hands] Let us wait and see what happens: indeed, it did improve. If we move on to a radical cutting-edge 8 x 8 MIMO now - do not forget that this is an 8 x 8 SDMA MIMO designed for maximum multiplexing gain. Let us evaluate the effective throughput. We are using 16 QAM, so four bits per symbol - and, times 8 as a benefit of the 8 x 8 MIMO, this yields 32 bits per symbol! Can you imagine the huge throughput that we are experiencing here? And yet, not only the throughput, but also the BER performance has improved.
Again, the reason for this seemingly irrational improvement is that, as a benefit of the 8 x 8 antennas, we have 64 channels, but we really only want to exploit a degree of freedom of 8. So the remaining degrees of freedom benefit us in


[Figure 18 plot: average BER versus average SNR (dB) for the MMSE detector, the CG MBER detector and the GA MBER detector, together with the 1-user 1-antenna AWGN benchmark]
Fig. 18. The average BER performance of the four different users in an SDMA system employing four receiver antennas and 128-subcarrier OFDM for communicating over the OFDM-symbol-invariant dispersive Gaussian channel given by h(z) = 0.8854 + 0.3504z^-6 + 0.2881z^-11.

terms of enhancing the achievable diversity gain. What more convincing evidence of the wonderful power of MIMOs? Channel variation in space-time coded OFDM - Figure 19: Let us now move on to a light-hearted demonstration and look at the space-time coded, or Type IV, MIMOs. In the previous example, we were looking at MIMOs where we were aiming to maximise the multiplexing gain. By contrast, we are now aiming for improving the integrity - in other words, achieving diversity gain. As before, we are considering a 3GPP LTE-style OFDM system, and we recorded the SNR as a function of both the frequency and time, when using different numbers of transmit and receive antennas. Specifically, for a single-transmitter/single-receiver LTE-style scenario the channel is extremely hostile, but it can be improved with the aid of the STC MIMO schemes when we invoke two transmitters, for example, or two transmitters and two receivers, or even 2 x 6-element MIMOs. It then becomes more or less a flat plateau. It is clear that, as a benefit of the increased diversity order, the system is now facing a more-or-less constant SNR, resulting in a near-Gaussian error distribution. Therefore again, in this STC scenario we are not actually increasing the throughput, but improving the integrity. Type IV: Space-Time Block Coding Demos - Please allow me to continue now with another little demonstration of the power of MIMOs augmented by the power of Mozart, just to entertain you on a less technical note, perhaps. [Plays Mozart] This is Mozart's Clarinet Concerto, Second Movement - Adagio - for transmission over a perfect channel. Would anyone care to listen to Mozart's music over a Rayleigh channel? I do not think so. What about Mozart over a 2 x 2 MIMO channel, at 8 dB signal to noise ratio? [Plays Mozart] Let us finally listen to the Clarinet Concerto when transmitting the audio signal using a 2 x 2 MIMO system and an iterative turbo receiver.
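The twin-antenna G2 scheme underlying these space-time block coding demonstrations is Alamouti's code; its encoding and linear combining can be sketched in a few lines (noiseless and single-receiver here, purely for clarity):

```python
def alamouti_g2(s1, s2, h1, h2, n1=0j, n2=0j):
    """Alamouti's G2 code over two flat-fading channel gains h1, h2.

    Slot 1 transmits (s1, s2); slot 2 transmits (-conj(s2), conj(s1)).
    A single receive antenna observes r1, r2; linear combining then
    recovers both symbols with diversity gain |h1|^2 + |h2|^2.
    """
    r1 = h1 * s1 + h2 * s2 + n1
    r2 = -h1 * s2.conjugate() + h2 * s1.conjugate() + n2
    s1_hat = h1.conjugate() * r1 + h2 * r2.conjugate()
    s2_hat = h2.conjugate() * r1 - h1 * r2.conjugate()
    gain = abs(h1) ** 2 + abs(h2) ** 2
    return s1_hat / gain, s2_hat / gain

# Noiseless sanity check: the symbols are recovered exactly.
a, b = alamouti_g2(1 + 1j, -1 + 1j, 0.8 + 0.3j, -0.2 + 0.9j)
```

The cross-terms cancel exactly in the combiner, which is why the two symbols decouple despite being superimposed on the air.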
[Plays Mozart] I could just go away and leave you to listen to this, but it is so beautiful that you need your privacy for that. Quo vadis? - Figure 21: Let me go back to our main slide-show. We have looked at a number of open wireless communications problems that we can all try to solve but, naturally, there are realistic, practical constraints that we have to obey. The most tangible trade-offs manifest themselves, for example, between the affordable implementational complexity and the achievable effective throughput, as well as the bit error ratio. There are many other contradictory design trade-offs, and this is probably why all of us survive in this very complex business. However, I can always improve the coding gain, for example, or the effective throughput, if I can afford more complexity - in other words, smarter signal processing. However, do not forget, once again, that moving from a 2 x 2 MIMO to a 4 x 4 MIMO implies that, instead of four channels,

[Figure 19 surfaces: instantaneous SNR (dB) versus subcarrier index and transmission frame (time) for the 1 Tx 1 Rx, 2 Tx 1 Rx, 2 Tx 2 Rx and 2 Tx 6 Rx scenarios]

Fig. 19. Instantaneous channel SNR of 512-subcarrier OFDM symbols for a single-transmitter single-receiver scheme and for the space-time block code G2 using one, two and six receivers over the shortened WATM channel. The average channel SNR is 10 dB; Hanzo, Liew, Yeap: Turbo Coding, Turbo Equalisation and Space-Time Coding, Wiley & IEEE Press, 2002

you have 16 channels to estimate, and have to invoke the corresponding four-channel data detection schemes. The whole of the community is trying to improve the associated design trade-offs and there are very interesting implementational trade-offs to be considered - again revisiting, for example, even Shannon's source and channel separation theorem in the context of psycho-visually and psycho-acoustically optimised lossy source codecs. Evolution towards a generic physical layer architecture - Figure 21: Following from the above-mentioned arguments, we may consider a range of interesting new architectures. Quite clearly, we are heading towards the design of next-generation wireless systems now, but any next-generation solutions should be capable of supporting backwards compatibility with GSM, GPRS, 3G etc. GPRS is already using adaptive modulation as well as coding, and so does the HSDPA/HSUPA mode of 3G. As argued before, adaptive modulation and coding is likely to find its way into most future systems and, in fact, it moves into MIMOs and influences the way we design MIMOs. I have already given you a couple of illustrations as to how this might take place, for example by switching the baseband MIMO algorithm from the STC mode to a BLAST-type SDM mode, when the receiver-diversity order is sufficiently high for attaining a near-Gaussian performance. As exemplified in Figure 21, interesting novel transceiver schemes emerge. For example, with the aid of Interleave Division Multiple Access (IDMA), where you have an interleaver controller, it is essentially the unique user-specific channel interleaver that is used for distinguishing the users - it is no longer the channel impulse response as in SDMA, and no longer the unique user-specific spreading code as in CDMA, but the interleaver.
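The mode-switching logic of such adaptive modulation and coding can be sketched in a few lines; the SNR thresholds below are illustrative placeholders, not the standardised switching values:

```python
# Candidate modes as (threshold_dB, name, bits_per_symbol), sorted by throughput.
MODES = [
    (1.0,  "BPSK",  1),
    (7.0,  "QPSK",  2),
    (13.0, "16QAM", 4),
    (19.0, "64QAM", 6),
]

def select_mode(snr_db: float):
    """Pick the highest-throughput mode whose switching threshold is met."""
    name, bps = "no transmission", 0
    for threshold, mode_name, mode_bps in MODES:
        if snr_db >= threshold:
            name, bps = mode_name, mode_bps
    return name, bps
```

For example, an instantaneous SNR of 15 dB would select 16QAM under these placeholder thresholds, while a deep fade below 1 dB would suspend transmission rather than drop the call.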
As a benet, very powerful turbo receivers emerge, but I have to speed up here a little, because I have touched upon the most important points in sufcient detail, noting that there are numerous open problems in the design of near-instantaneously adaptive modulation and coding schemes for IDMA,


Fig. 20. Factors affecting the design of MIMO-aided wireless communications schemes: implementational complexity, channel characteristics, system bandwidth, effective throughput, coding rate, coding/interleaving delay, coding/modulation scheme, bit error rate and coding gain; Hanzo, Liew, Yeap, John Wiley & IEEE Press, 2002.

[Figure 21 schematic components: interleaver control; adaptive modulation & coding; TD-spreading or partial response; TD-chip interleaving; adaptive bit-symbol mapping; frequency-domain spreading & interleaving; linear RF; generic MTMR front-end; side-information driven cognitive control module; FH synthesizer spanning carriers f0 ... fu]
Fig. 21.

Evolution towards a generic physical layer architecture

which have to be solved. Indeed, some sophisticated IDMA solutions are also on the way. There is also a cognitive control module in the schematic, which reflects Ofcom's efforts these days. It is capable of exploring those snippets of the radio spectrum where there are sufficiently quiet slots in the frequency band, to enable spectrum trading in the interest of further improving the overall spectral efficiency. Figure 22 illustrates an interesting set of IDMA results, for example, where multicarrier interleave division multiple access (MC-IDMA) is shown to support three times as many users as the number of chips in the chip-interleaver sequence. More explicitly, a G = 16-chip interleaver sequence supports up to 48 users, which corresponds to a high system throughput. Of course, you need a powerful iterative turbo receiver in order to mitigate the effects of the interference but, in the turbo era, this may be designed to become less complex than a relatively unsophisticated, one-shot-type MUD. Iterative turbo-receivers for MIMOs - Figure 23: Carrying out both channel and data estimation jointly and iteratively is, once again, a very important area to address. We have already mentioned that ideally synchronisation, channel estimation and data detection have to work in liaison, but the turbo concept may be further extended to multi-stage concatenated transceivers, where we have potentially a number of concatenated encoders and decoders. They are typically less complex than a conventional single-stage encoder and decoder pair, when aiming for a certain target BER. The concept of exchanging extrinsic information amongst the concatenated transceiver components is reminiscent of two knowledgeable people exchanging views and enhancing their understanding of a problem. By the same token, the concatenated

[Figure 22 plot: BER versus Eb/N0 (dB) for 1, 32, 40, 44, 45, 46, 47 and 48 users]
Fig. 22. BER of uncoded MC-IDMA, when communicating over a chip-spaced equal-power 4-path wideband Rayleigh fading channel for G = 16 and I = 10 iterations; Zhang & Hanzo, IEEE WCNC, 2007

[Figure 23 schematic: the received signal y and the channel estimate H feed a joint detector-decoder, in which the channel estimator, the space-time detector and the two turbo component decoders (Decoder 1, Decoder 2) iteratively exchange extrinsic information]
Fig. 23. Iterative turbo-receivers for MIMOs: a joint data detection and channel estimation based turbo architecture may be developed, where a succession of detection modules constituting a MIMO-OFDM receiver iteratively exchange soft information, thus resulting in a substantial system performance improvement. Akhtman & Hanzo, IEEE WCNC, 2007

component decoders exchange their soft-decision-based data estimates and powerfully improve them in a number of iterative steps. The wireless internet: FDD versus TDD modes - Figure 24: As I draw closer to the conclusion - since this is the Vodafone lecture - I would like to elaborate a little on the networking aspects of the wireless internet. In the olden days, when the Frequency Division Duplex (FDD) 3G spectrum licences were auctioned, there was some moderate euphoria that the operators were given free Time Division Duplex (TDD) licences, which were thrown in as a bonus with their FDD frequency bands. Naturally, this raised the interest in the TDD mode, which has a number of interesting features and many benefits, but also associated problems. Once again, this is not a sufficiently well researched area and, to my knowledge, there is no fully-fledged, large network which operates on this TDD basis - not for large-area coverage, at any rate. All these points motivate a closer inspection of the TDD mode of the operational 3G and future 4G systems. Perhaps the most dramatic benefit of TDD is the fact that we can use, for example, 14 out of the 15 time slots in the downlink if - just as I did - you click on a mouse button, and down comes a whopping big file from the wireless internet. It would be very wasteful to set aside seven time slots for the uplink, for clicking the mouse button, although this is potentially so in the FDD mode. However, there is a snag. Interference scenario in the TDD-based wireless internet - Figure 25: The problem is that the associated TDD-based interference patterns are so erratic. If you look at this mobile station here in the illustration,


[Figure 24 bullet points: Why Time Division Duplexing for the Wireless Internet? It guarantees flexible and efficient resource utilisation; the similar nature of the uplink and downlink channel renders it amenable to adaptive modulation; it is more suitable for the employment of UL/DL beamforming; and, while extensive study of FDD/CDMA has been carried out, there is a paucity of capacity results in the literature for TDD.]

Fig. 24. Initial TDD versus FDD considerations

roaming close to the edge of the cell and receiving, for example, in time slot zero from its serving base station - this other mobile, which is also far from its serving BS, might be transmitting in time slot zero to its distant base station. Hence the mobile receiving in time slot zero now detects its low-power desired signal potentially overwhelmed by the interference inflicted by this particular mobile over there. So, unless we use smart scheduling - which then inevitably requires an optical fibre connection between the base stations, or some other solution such as a low-delay microwave link - the achievable TDD throughput remains significantly lower than the FDD throughput. Other advanced techniques of mitigating this TDD-specific problem are constituted by HSDPA-style adaptive modulation and coding, beamforming, or genetic algorithm aided scheduling. The corresponding figure portrays the scheduler's memory matrix, where the column indices are given by the time slot index, and the row indices represent the potential target cells with which the mobile could communicate. Colloquially, we ought to throw all the time slots up in the air, if you like, and then find the ones that are best suited for supporting a particular uplink or downlink transmission request in terms of the total interference experienced by all mobile receivers, which we use as the GA's optimisation cost function. More explicitly, you would have to add up the interference imposed on all TDD time slots, in order to generate a system-wide objective function, and then find the best UL/DL timeslot allocation for all mobiles with the aid of the genetic algorithm-assisted optimiser.
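The system-wide cost function and the random-guided improvement can be sketched as follows. The interference costs here are randomly generated stand-ins for measured values, and single-bit UL/DL flips stand in for the GA's full genetic operators:

```python
import random

rng = random.Random(42)
N_CELLS, N_SLOTS = 4, 15

# Hypothetical interference cost of assigning UL (0) or DL (1) to each
# (cell, timeslot) pair; a real scheduler would derive these from measurements.
cost = [[[rng.uniform(0.0, 1.0) for _ in range(2)] for _ in range(N_SLOTS)]
        for _ in range(N_CELLS)]

def total_interference(alloc):
    """System-wide objective: interference summed over all cells and slots."""
    return sum(cost[c][t][alloc[c][t]]
               for c in range(N_CELLS) for t in range(N_SLOTS))

# Start from a random UL/DL allocation, as in the no-scheduling benchmark.
alloc = [[rng.randint(0, 1) for _ in range(N_SLOTS)] for _ in range(N_CELLS)]
initial = total_interference(alloc)

# Random-guided search: flip one UL/DL assignment at a time and keep the
# flip only if the system-wide interference does not increase.
current = initial
for _ in range(500):
    c, t = rng.randrange(N_CELLS), rng.randrange(N_SLOTS)
    alloc[c][t] ^= 1
    new = total_interference(alloc)
    if new > current:
        alloc[c][t] ^= 1          # revert a harmful flip
    else:
        current = new
```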

[Figure 25 diagram: MS0 served by BS0 in Cell 0 and MS1 served by BS1 in Cell 1, showing the desired signals, the mobile-to-mobile interference and the base-station-to-base-station interference]
Fig. 25. Inter-cell Interference in the TDD-aided wireless Internet; Wiley and IEEE Press, Hanzo, Blogh and Ni, 2007

GA-aided TDD Scheduling for the Wireless Internet - Figures 26 and 27: Let us now briefly consider the attainable performance of a GA-aided TDD scheduler. The curve marked by the diamond-


shaped legends at the top of the forced termination probability versus mean carried teletraffic illustration characterises the TDD wireless scenario using no sophisticated scheduling. More explicitly, this corresponds to allocating the TDD time slots to the mobiles on a random basis. By contrast, the group of three curves at the bottom characterises the beneficial effects of employing GA-aided time slot scheduling using different GA configurations. More explicitly, P represents the number of GA individuals in each generation, where an individual corresponds to a specific UL/DL time slot allocation indicated by the logical 1/0 values seen in the previous figure at a specific value of the offered teletraffic, which is quantified on the horizontal axis. Furthermore, G indicates the number of consecutive generations used by the GA-aided scheduler. Please observe the substantial forced call termination probability reduction achieved by the GA-assisted TDD scheduler. You can also see that, regardless of the actual configuration of the genetic algorithm - for example, using 10 generations and a population size of 10 for each generation, i.e. a total of only 100 objective function evaluations - we can increase the successfully conveyed teletraffic by a factor of four or so.
[Figure 26 matrix f_ij: rows indexed by the cell index 1 ... n, columns by the timeslot index 1 ... m, with logical 1/0 entries marking the UL/DL timeslot allocations]

Fig. 26.

UL/DL timeslot scheduling matrix used by the GA

Luby-transform codes for the wireless internet and ad hoc networks - Figure 28: Continuing in this wireless Internet context, I should mention that the emerging wireless IP networks are extremely inefficient, because they have to concatenate a 320-bit IP header to every transmission packet. Now imagine that you invoke a speech codec, or a video codec, operating at 20 Megaflops complexity in order to reduce the bit rate to 8 kbit/s for example, generating a 160-bit/20 ms speech packet. Then you end up with exactly 200 per cent inefficiency in terms of the IP overhead. IP header compression does not solve this problem, since any corrupted compressed header has to be retransmitted without compression, which is a frequent event in the hostile propagation environment of the wireless Internet. An alternative solution is to use Luby-Transform (LT) codes, for example. This appealingly simple solution allows us to generate a number of redundant packets, which may facilitate the recovery of corrupted packets without retransmission. As seen in the illustration, you would only use very simple modulo-2 additions - so at the top we see the source packets, while those at the bottom are the LT-encoded packets. The 2nd packet at the bottom, for example, is given by the modulo-2 combination of three source packets, namely packets S1, S2 and S3. Similarly, the 3rd packet at the bottom is given by the modulo-2 combination of two packets, namely of S2 and S3, etc. A similarly simple modulo-2 based algorithm allows you to detect and decode the packets one by one, provided that you have successfully decoded a so-called degree-one packet, such as the first LT packet seen at the bottom of the first box in the figure. We now detect this degree-one packet first, then remove its effect from all the others, as seen in the figure again, with the aid of simple modulo-2 additions.
More explicitly, since this degree-one packet becomes known, we can remove its effect from all the other LT packets which also contained the modulo-2 contribution of this degree-one packet. Following this simple procedure, we gradually end up detecting all the consecutive packets. It is a very simple solution, which is potentially quite powerful, operating without any Automatic Repeat Requests (ARQs) and without the associated ARQ backwards channel.
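The peeling procedure just described can be sketched directly. The packet contents below are illustrative 3-bit payloads (stored one byte per bit for readability), not the exact patterns of the figure:

```python
def xor(p, q):
    """Modulo-2 (bitwise XOR) combination of two equal-length payloads."""
    return bytes(a ^ b for a, b in zip(p, q))

def lt_decode(encoded, k):
    """Peeling decoder: encoded is a list of (source-index set, payload)."""
    pending = [(set(idx), bytes(pay)) for idx, pay in encoded]
    recovered = {}
    progress = True
    while len(recovered) < k and progress:
        progress = False
        for idx, pay in pending:
            if len(idx) == 1:                      # found a degree-one packet
                (j,) = idx
                recovered.setdefault(j, pay)
                # remove its modulo-2 contribution from every packet
                pending = [(d - {j}, xor(p, pay) if j in d else p)
                           for d, p in pending]
                progress = True
                break
    return [recovered.get(j) for j in range(k)]

# K = 3 source packets; four LT packets mirroring the structure described
# above: a degree-one packet, S1+S2+S3, S2+S3 and a copy of S2 (modulo 2).
S = [b"\x01\x01\x01", b"\x00\x01\x00", b"\x01\x00\x01"]
enc = [({0}, S[0]),
       ({0, 1, 2}, xor(xor(S[0], S[1]), S[2])),
       ({1, 2}, xor(S[1], S[2])),
       ({1}, S[1])]
```

Decoding `enc` peels off the degree-one packet, strips its contribution from the rest, and repeats until all three source packets are recovered - no ARQ needed.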


[Figure 27 plot: forced termination probability P_FT versus mean carried teletraffic (Erlangs/km²/MHz) for the GA configurations P=10 G=10, P=20 G=5 and P=4 G=25, as well as for the no-GA benchmark, with the 1% level marked]
Fig. 27. Call dropping probability versus mean carried traffic of the UTRA-like TDD/CDMA based cellular network, both with and without GA-assisted timeslot allocation, as well as with shadowing having a standard deviation of 3 dB for SF = 16.

[Figure 28 stages (a)-(e): the three source packets S1, S2, S3 and the received 3-bit LT packets, with the contribution of each newly decoded degree-one packet removed by modulo-2 addition at every stage]

Fig. 28. Decoding an LT code having K = 3 source packets and K' = 4 transmitted packets, each containing 3 bits; adapted from Luby et al.

Magic spreading sequences for ad hoc networks and their autocorrelations - Figure 29: Another thought is just to mention the power of so-called Large Area Synchronised (LAS) spreading codes in the context of both FDD and TDD wireless Internet scenarios. The ideal autocorrelation function is shown at the top left corner of the illustration. Of course, we cannot have such an ideal spreading code, unless we have an infinite spreading code length. Figures (b) and (c) characterise the popular practical spreading codes in the figure, namely Walsh codes and Gold codes. However, we also have the meritorious family of large area synchronised codes, which exhibit an interference-free window. As seen at the bottom left corner of the figure, the autocorrelation of the LAS sequences is specifically designed to have a zero-valued section in the middle, and hence any multipath or multi-user interference component which arrives within this window imposes no interference. You obviously need quasi-synchronous operation and relatively low propagation delays, i.e. small cells, to ensure that the potential interference does indeed arrive within this limited-duration Interference-Free Window (IFW). Fortunately, we could combine the employment of LAS codes with adaptive timing-advance control, which is already used in GSM. Let us consider a wireless Internet-style laptop-based ad hoc network for a moment. The problems are related, for example, to being unable to provide any power control, since there is no central controller in ad hoc networks, such as a BS. They have emerged, for example, in specialist applications such as military networks. They are also asynchronous since, owing to the lack of BS control, you cannot readily provide central synchronisation, unless the cost of Global Positioning System (GPS) based synchronisation is affordable.
Hence, the above-mentioned LAS spreading codes are also amenable to employment in quasi-synchronous ad hoc networks dispensing with power control, while rejecting the effects of interference,


[Figure 29 plots: autocorrelation versus offset for (a) a perfect sequence, (b) a Walsh code, (c) a Gold code and (d) a LAS code]

Fig. 29. Magic spreading codes and their autocorrelations

provided that the interfering multiplath contributions arrive within the IWF of the LAS code. They can also assist in circumventing the severe capacity limitation of ad hoc networks, which was outlined in their well-cited Information Theory Transaction paper by Gupta and Kumar. Explicitly, they reported that the per-node capacity of large ad hoc networks tends to zero, because you end up just relaying someone elses messages most of the time. You therefore have to set aside too much capacity for relaying messages. You cannot entirely eliminate this per-node capacity constraint, but you can mitigate the gravity of this limitation with the aid of LAS codes, as well as adaptive modulation and coding.
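The autocorrelation contrast illustrated in Figure 29 can be reproduced numerically. As a minimal sketch (not the LAS construction itself), the snippet below shows that a Walsh code has large off-peak autocorrelation sidelobes, whereas a Golay complementary pair - the kind of building block used in LS/LAS-style code design - has a summed autocorrelation that vanishes at every non-zero offset:

```python
def acorr(s, tau):
    """Aperiodic autocorrelation of sequence s at non-negative offset tau."""
    return sum(s[n] * s[n + tau] for n in range(len(s) - tau))

# Length-8 Walsh code (row 1 of an 8x8 Hadamard matrix): alternating signs.
walsh = [(-1) ** n for n in range(8)]
walsh_acorr = [acorr(walsh, t) for t in range(8)]
print(walsh_acorr)   # peak of 8 at tau=0, but large sidelobes elsewhere

# Golay complementary pair of length 2: their individual sidelobes cancel.
a, b = [1, 1], [1, -1]
summed = [acorr(a, t) + acorr(b, t) for t in range(2)]
print(summed)        # [4, 0]: peak at tau=0, zero at the non-zero offset
```

The cancellation of the pair's sidelobes is exactly the property exploited to engineer a zero-valued autocorrelation section, i.e. an interference-free window.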

Fig. 30. Differential chain coded handwriting for texting and emailing over the wireless internet

Differential chain coded handwriting for texting, email and the wireless internet - Figure 30: Perhaps as the last idea to mention tonight, there is another interesting application to look at. The Short Message Service, known as SMS, is perhaps one of the most successful novel services nowadays, but it is really somewhat cumbersome for most people to type on small handsets. This has resulted in the development of a specific parlance for expediting typing in wireless texting, for example. On modern handsets, we could very easily use a little touch-sensitive writing pad, and we could employ, for example, the ITU's chain coding standard to encode natural handwriting, drawings and other graphical information quite efficiently. Of course, SMS messages are short in any case, constituted by 300 characters, i.e. 300 bytes or so, but again, this could be more efficiently captured and transmitted. Hence this proposition also deserves some further research attention.

Networking demo: Let me close with a conceptually appealing demo to show the power of the adaptive transceivers and networks that we considered tonight. This demo is based on a 3.5G-type HSDPA network. Just to put you in the picture, the green dots indicate mobiles that are in active communication - in other words, they are generating information and talking or transmitting data via the wireless internet. Please observe that some of the mobiles are simultaneously communicating with two or three base stations. This is the so-called soft handover principle, where you set up a link for a potential handover whenever the signal quality becomes relatively low. Please also observe that three different-thickness active links are visible on the screen, corresponding to the three different link qualities. Accordingly, the HSDPA system activates three different-throughput adaptive modulation and coding modes, which allows the system to avoid dropping a call when a conventional fixed-mode system would have no other option. The last feature I would like to draw your attention to is when you incorporate a beamformer MIMO. The beamforming pattern indicates that you no longer radiate omni-directionally. You focus the beam with the aid of transmit pre-processing on the specific target mobile. This allows you to conserve valuable transmit power by creating nulls towards the mobiles where you do not intend to radiate energy. On that note, my time is up - please allow me to thank you for your kind attention.
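As a postscript, the differential chain coding idea of Figure 30 can be sketched in a few lines. This is a hypothetical minimal encoder, not the ITU scheme itself: each unit pen step is quantised to one of eight compass directions (a Freeman chain code), and then only the change in direction, modulo 8, is kept. For smooth handwriting the deltas cluster near zero, which is what makes the differential representation compress well.

```python
# 8-direction Freeman code: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
STEP_TO_DIR = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
               (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Absolute Freeman chain code of a unit-step pen trajectory."""
    return [STEP_TO_DIR[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

def differential(codes):
    """Differential chain code: first symbol absolute, the rest deltas mod 8."""
    return codes[:1] + [(c2 - c1) % 8 for c1, c2 in zip(codes, codes[1:])]

# A short, smooth "pen stroke" on the integer grid.
stroke = [(0, 0), (1, 0), (2, 0), (3, 1), (4, 2), (4, 3)]
codes = chain_code(stroke)
print(codes)                 # [0, 0, 1, 1, 2]
print(differential(codes))   # [0, 0, 1, 0, 1] -- small deltas dominate
```

An entropy coder applied to the delta stream would then exploit the heavy bias towards 0 and ±1, which is where the bit-rate saving over plain coordinate transmission comes from.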

