
3G LTE Long Term Evolution Tutorial & Basics

- Developed by 3GPP, LTE, Long Term Evolution, is the successor to 3G UMTS and HSPA, providing much higher data download speeds and setting the foundations for 4G LTE Advanced. Discover more about LTE basics in this tutorial.
IN THIS SECTION
LTE Introduction
OFDM, OFDMA, SC-FDMA
LTE MIMO
TDD & FDD
Frame & subframe
Physical logical & transport channels
Bands and spectrum
UE categories
SAE architecture
LTE SON
VoLTE
SRVCC
LTE-M
LTE-U / LAA
Security
See also
4G LTE Advanced

LTE, Long Term Evolution, the successor to UMTS and HSPA, is now being deployed and is the way forward for high speed cellular services.
In its first forms it is a 3G, or as some would call it a 3.99G, technology, but with further additions the technology can be migrated to a full 4G standard, where it is known as LTE Advanced.
There has been a rapid increase in the use of data carried by cellular services, and this increase will only become larger in what has been termed the "data explosion". To cater for this and the demand for higher data transmission speeds and lower latency, further development of cellular technology has been required.

The UMTS cellular technology upgrade has been dubbed LTE - Long Term Evolution. The idea is
that 3G LTE will enable much higher speeds to be achieved along with much lower packet latency (a
growing requirement for many services these days), and that 3GPP LTE will enable cellular
communications services to move forward to meet the needs for cellular technology to 2017 and well
beyond.
Many operators have not yet upgraded their basic 3G networks, and 3GPP LTE is seen as the next logical step for them; they will leapfrog straight from basic 3G to LTE, avoiding several stages of upgrade. The use of LTE will also provide the data capabilities that will be required for many years, until the full launch of the 4G standards known as LTE Advanced.

3G LTE evolution
Although there are major step changes between LTE and its 3G predecessors, it is nevertheless
looked upon as an evolution of the UMTS / 3GPP 3G standards. Although it uses a different form of
radio interface, using OFDMA / SC-FDMA instead of CDMA, there are many similarities with the
earlier forms of 3G architecture and there is scope for much re-use.

In determining what LTE is and how it differs from other cellular systems, a quick look at the specifications for the system can provide many answers. LTE can be seen to provide a further evolution of functionality, increased speeds and generally improved performance.
Parameter | WCDMA (UMTS) | HSPA (HSDPA / HSUPA) | HSPA+ | LTE
Max downlink speed (bps) | 384 k | 14 M | 28 M | 100 M
Max uplink speed (bps) | 128 k | 5.7 M | 11 M | 50 M
Latency (round trip time, approx) | 150 ms | 100 ms | 50 ms (max) | ~10 ms
3GPP releases | Rel 99/4 | Rel 5 / 6 | Rel 7 | Rel 8
Approx years of initial roll out | 2003 / 4 | 2005 / 6 (HSDPA), 2007 / 8 (HSUPA) | 2008 / 9 | 2009 / 10
Access methodology | CDMA | CDMA | CDMA | OFDMA / SC-FDMA

In addition to this, LTE is an all IP based network, supporting both IPv4 and IPv6. Originally there was also no basic provision for voice, although Voice over LTE, VoLTE, was later added and chosen by the GSMA as the standard for this. In the interim, techniques including circuit switched fallback, CSFB, were expected to be used.

LTE basics: specification overview


It is worth summarizing the key parameters of the 3G LTE specification. In view of the fact that there
are a number of differences between the operation of the uplink and downlink, these naturally differ
in the performance they can offer.
LTE BASIC SPECIFICATIONS

Parameter | Details
Peak downlink speed (64QAM, Mbps) | 100 (SISO), 172 (2x2 MIMO), 326 (4x4 MIMO)
Peak uplink speed (Mbps) | 50 (QPSK), 57 (16QAM), 86 (64QAM)
Data type | All packet switched data (voice and data). No circuit switched.
Channel bandwidths (MHz) | 1.4, 3, 5, 10, 15, 20
Duplex schemes | FDD and TDD
Mobility | 0 - 15 km/h (optimised), 15 - 120 km/h (high performance)
Latency | Idle to active less than 100 ms; small packets ~10 ms
Spectral efficiency | Downlink: 3 - 4 times Rel 6 HSDPA; Uplink: 2 - 3 times Rel 6 HSUPA
Access schemes | OFDMA (downlink), SC-FDMA (uplink)
Modulation types supported | QPSK, 16QAM, 64QAM (uplink and downlink)
These headline specifications give an overall view of the performance that LTE offers. It meets the requirements of industry for high data download speeds as well as reduced latency - a factor important for many applications from VoIP to gaming and interactive use of data. It also provides significant improvements in the use of the available spectrum.

Main LTE technologies


LTE has introduced a number of new technologies when compared to the previous cellular systems.
They enable LTE to operate more efficiently with respect to the use of spectrum, and also to provide the much higher data rates that are required.

OFDM (Orthogonal Frequency Division Multiplex): OFDM technology has been incorporated into LTE because it enables high data bandwidths to be transmitted efficiently while still providing a high degree of resilience to reflections and interference. The access schemes differ between the uplink and downlink: OFDMA (Orthogonal Frequency Division Multiple Access) is used in the downlink, while SC-FDMA (Single Carrier Frequency Division Multiple Access) is used in the uplink. SC-FDMA is used because its peak to average power ratio is small and the more constant power enables high RF power amplifier efficiency in the mobile handsets - an important factor for battery powered equipment. Read more about LTE OFDM / OFDMA / SC-FDMA

MIMO (Multiple Input Multiple Output): One of the main problems that previous telecommunications systems have encountered is that of the multiple signals arising from the many reflections that occur. By using MIMO, these additional signal paths can be used to advantage and are able to be used to increase the throughput.
When using MIMO, it is necessary to use multiple antennas to enable the different paths to be distinguished. Accordingly schemes using 2 x 2, 4 x 2, or 4 x 4 antenna matrices can be used. While it is relatively easy to add further antennas to a base station, the same is not true of mobile handsets, where the dimensions of the user equipment limit the number of antennas, which should be placed at least a half wavelength apart. Read more about LTE MIMO

SAE (System Architecture Evolution): With the very high data rate and low latency requirements for 3G LTE, it is necessary to evolve the system architecture to enable the improved performance to be achieved. One change is that a number of the functions previously handled by the core network have been transferred out to the periphery. Essentially this provides a much "flatter" form of network architecture. In this way latency times can be reduced and data can be routed more directly to its destination. Read more about LTE SAE
A fuller description of what LTE is and how the associated technologies work is addressed in much greater detail in the following pages of this tutorial.

LTE OFDM, OFDMA SC-FDMA & Modulation


- LTE, Long Term Evolution, uses the modulation format OFDM - orthogonal frequency division multiplex - adapted to provide a multiple access scheme using OFDMA and SC-FDMA.
One of the key elements of LTE is the use of OFDM, Orthogonal Frequency Division Multiplex, as the signal bearer, together with the associated access schemes, OFDMA (Orthogonal Frequency Division Multiple Access) and SC-FDMA (Single Carrier Frequency Division Multiple Access).
OFDM is used in a number of other systems from WLAN and WiMAX to broadcast technologies including DVB and DAB. OFDM has many advantages including its robustness to multipath fading and interference. In addition, even though it may appear to be a particularly complicated form of modulation, it lends itself to digital signal processing techniques.
In view of its advantages, the use of OFDM and the associated access technologies, OFDMA and SC-FDMA, is a natural choice for the new LTE cellular standard.

LTE modulation & OFDM basics


The use of OFDM is a natural choice for LTE. While the basic concepts of OFDM are used, it has naturally been tailored to meet the exact requirements for LTE. However its use of multiple carriers, each carrying a low data rate, remains the same.

Note on OFDM:
Orthogonal Frequency Division Multiplex (OFDM) is a form of transmission that uses a large number of close spaced
carriers that are modulated with low rate data. Normally these signals would be expected to interfere with each other,
but by making the signals orthogonal to each other there is no mutual interference. The data to be transmitted is split
across all the carriers to give resilience against selective fading from multi-path effects.

Click on the link for an OFDM tutorial

The actual implementation of the technology will be different between the downlink (i.e. from base
station to mobile) and the uplink (i.e. mobile to the base station) as a result of the different
requirements between the two directions and the equipment at either end. However OFDM was
chosen as the signal bearer format because it is very resilient to interference. Also in recent years a
considerable level of experience has been gained in its use from the various forms of broadcasting
that use it along with Wi-Fi and WiMAX. OFDM is also a modulation format that is very suitable for
carrying high data rates - one of the key requirements for LTE.
In addition to this, OFDM can be used in both FDD and TDD formats. This becomes an additional
advantage.

LTE channel bandwidths and characteristics


One of the key parameters associated with the use of OFDM within LTE is the choice of bandwidth.
The available bandwidth influences a variety of decisions including the number of carriers that can
be accommodated in the OFDM signal and in turn this influences elements including the symbol
length and so forth.
LTE defines a number of channel bandwidths. Obviously the greater the bandwidth, the greater the
channel capacity.
The channel bandwidths that have been chosen for LTE are:
1. 1.4 MHz
2. 3 MHz
3. 5 MHz

4. 10 MHz
5. 15 MHz
6. 20 MHz
In addition to this the subcarrier spacing is 15 kHz, i.e. the LTE subcarriers are spaced 15 kHz apart from each other. To maintain orthogonality, this gives a symbol period of 1 / 15 kHz = 66.7 µs.
Each subcarrier is able to carry data at a maximum rate of 15 ksps (kilosymbols per second). This gives a 20 MHz bandwidth system a raw symbol rate of 18 Msps. In turn this is able to provide a raw data rate of 108 Mbps as each symbol using 64QAM is able to represent six bits.
It may appear that these rates do not align with the headline figures given in the LTE specifications.
The reason for this is that actual peak data rates are derived by first subtracting the coding and
control overheads. Then there are gains arising from elements such as the spatial multiplexing, etc.
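As a rough illustration, the arithmetic behind the 18 Msps and 108 Mbps figures quoted above can be sketched as follows; the 1200-subcarrier figure for a 20 MHz channel (100 resource blocks of 12 subcarriers, as covered later in this tutorial) is assumed here.

```python
# Minimal sketch of the raw-rate arithmetic quoted above (illustrative only).
SYMBOLS_PER_SEC_PER_SUBCARRIER = 15e3   # each subcarrier carries up to 15 ksps
SUBCARRIERS_20MHZ = 1200                # assumed: 100 resource blocks x 12 subcarriers
BITS_PER_SYMBOL_64QAM = 6               # 64QAM carries 6 bits per symbol

raw_symbol_rate = SUBCARRIERS_20MHZ * SYMBOLS_PER_SEC_PER_SUBCARRIER  # 18 Msps
raw_data_rate = raw_symbol_rate * BITS_PER_SYMBOL_64QAM               # 108 Mbps

print(f"Raw symbol rate: {raw_symbol_rate / 1e6:.0f} Msps")
print(f"Raw 64QAM data rate: {raw_data_rate / 1e6:.0f} Mbps")
```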

LTE OFDM cyclic prefix, CP


One of the primary reasons for using OFDM as a modulation format within LTE (and many other
wireless systems for that matter) is its resilience to multipath delays and spread. However it is still
necessary to implement methods of adding resilience to the system. This helps overcome the intersymbol interference (ISI) that results from this.
In areas where inter-symbol interference is expected, it can be avoided by inserting a guard period
into the timing at the beginning of each data symbol. It is then possible to copy a section from the
end of the symbol to the beginning. This is known as the cyclic prefix, CP. The receiver can then
sample the waveform at the optimum time and avoid any inter-symbol interference caused by
reflections that are delayed by times up to the length of the cyclic prefix, CP.
The length of the cyclic prefix, CP is important. If it is not long enough then it will not counteract the multipath reflection delay spread. If it is too long, then it will reduce the data throughput capacity. For LTE, the standard length of the cyclic prefix has been chosen to be 4.69 µs. This enables the system to accommodate path variations of up to 1.4 km, with the symbol length in LTE set to 66.7 µs.
The symbol length is defined by the fact that for OFDM systems the symbol length is equal to the reciprocal of the carrier spacing so that orthogonality is achieved. With a carrier spacing of 15 kHz, this gives the symbol length of 66.7 µs.
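The timing relationships described above can be sketched numerically; this is a minimal illustration assuming free-space propagation at roughly 3 x 10^8 m/s.

```python
# Minimal sketch of the LTE OFDM timing figures quoted above (illustrative only).
SPEED_OF_LIGHT_M_S = 3e8          # approximate propagation speed
SUBCARRIER_SPACING_HZ = 15e3      # LTE subcarrier spacing
NORMAL_CP_S = 4.69e-6             # standard cyclic prefix length

symbol_length = 1 / SUBCARRIER_SPACING_HZ           # 66.7 us, reciprocal of spacing
path_difference = SPEED_OF_LIGHT_M_S * NORMAL_CP_S  # ~1.4 km tolerated delay spread

print(f"Symbol length: {symbol_length * 1e6:.1f} us")
print(f"Path difference absorbed by CP: {path_difference / 1e3:.1f} km")
```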

LTE OFDMA in the downlink


The OFDM signal used in LTE comprises a maximum of 2048 different sub-carriers with a spacing of 15 kHz. Although it is mandatory for the mobiles to have the capability to receive all 2048 sub-carriers, not all need to be transmitted by the base station, which only needs to be able to support the transmission of 72 sub-carriers. In this way all mobiles will be able to talk to any base station.
Within the OFDM signal it is possible to choose between three types of modulation for the LTE
signal:
1. QPSK (= 4QAM) 2 bits per symbol
2. 16QAM 4 bits per symbol
3. 64QAM 6 bits per symbol

Note on QAM, Quadrature Amplitude Modulation:


Quadrature amplitude modulation, QAM is widely used for data transmission as it enables better levels of spectral efficiency than other forms of modulation. QAM uses two carriers on the same frequency shifted by 90° which are modulated by two data streams - the I (In-phase) and Q (Quadrature) elements.

The exact LTE modulation format is chosen depending upon the prevailing conditions. The lower order forms of modulation (QPSK) do not require such a large signal to noise ratio but are not able to send the data as fast. Only when there is a sufficient signal to noise ratio can the higher order modulation formats be used.
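As a purely illustrative sketch of this adaptive behaviour, the snippet below maps the three LTE modulation formats to their bits per symbol and picks one from a reported SNR; the threshold values are invented round numbers, not 3GPP-defined figures (LTE actually selects a modulation and coding scheme from CQI reports).

```python
# Illustrative only: bits per symbol for the LTE modulation formats, and a
# made-up SNR-threshold selection (real LTE uses CQI-based MCS selection).
BITS_PER_SYMBOL = {"QPSK": 2, "16QAM": 4, "64QAM": 6}

def pick_modulation(snr_db: float) -> str:
    if snr_db > 20:        # assumed threshold, not a 3GPP value
        return "64QAM"
    if snr_db > 12:        # assumed threshold, not a 3GPP value
        return "16QAM"
    return "QPSK"

for snr in (5, 15, 25):
    mod = pick_modulation(snr)
    print(f"SNR {snr} dB -> {mod}, {BITS_PER_SYMBOL[mod]} bits per symbol")
```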

Downlink carriers and resource blocks


In the downlink, the subcarriers are split into resource blocks. This enables the system to be able to
compartmentalize the data across standard numbers of subcarriers.
Resource blocks comprise 12 subcarriers, regardless of the overall LTE signal bandwidth. They
also cover one slot in the time frame. This means that different LTE signal bandwidths will have
different numbers of resource blocks.

Channel bandwidth (MHz) | 1.4 | 3 | 5 | 10 | 15 | 20
Number of resource blocks | 6 | 15 | 25 | 50 | 75 | 100
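The relationship between channel bandwidth, resource blocks and occupied spectrum in the table above can be sketched as follows; this is a minimal illustration using the 12 subcarriers per resource block and 15 kHz spacing described earlier.

```python
# Minimal sketch of the bandwidth / resource block relationship shown above.
RESOURCE_BLOCKS = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}  # MHz -> RBs
SUBCARRIERS_PER_RB = 12
SUBCARRIER_SPACING_HZ = 15e3

for bw_mhz, n_rb in RESOURCE_BLOCKS.items():
    subcarriers = n_rb * SUBCARRIERS_PER_RB
    occupied_mhz = subcarriers * SUBCARRIER_SPACING_HZ / 1e6
    print(f"{bw_mhz:>4} MHz channel: {n_rb:>3} RBs, {subcarriers:>4} subcarriers, "
          f"{occupied_mhz:.2f} MHz occupied")
```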

LTE SC-FDMA in the uplink


For the LTE uplink, a different concept is used for the access technique. Although still using a form of
OFDMA technology, the implementation is called Single Carrier Frequency Division Multiple Access
(SC-FDMA).
One of the key parameters that affects all mobiles is that of battery life. Even though battery
performance is improving all the time, it is still necessary to ensure that the mobiles use as little
battery power as possible. With the RF power amplifier that transmits the radio frequency signal via
the antenna to the base station being the highest power item within the mobile, it is necessary that it
operates in as efficient mode as possible. This can be significantly affected by the form of radio
frequency modulation and signal format. Signals that have a high peak to average ratio and require linear amplification do not lend themselves to the use of efficient RF power amplifiers. As a result it is necessary to employ a mode of transmission that has as near a constant power level as possible when operating. Unfortunately OFDM has a high peak to average ratio. While this is not a problem for the base station, where power is not a particular issue, it is unacceptable for the mobile. As a result, LTE uses a transmission scheme known as SC-FDMA - Single Carrier Frequency Division Multiple Access - which is a hybrid format. This combines the low peak to average ratio offered by single-carrier systems with the multipath interference resilience and flexible subcarrier frequency allocation that OFDM provides.
By Ian Poole

LTE MIMO: Multiple Input Multiple Output Tutorial


- MIMO is used within LTE to provide better signal performance and / or higher data rates by making use of the radio path reflections that exist.
MIMO, Multiple Input Multiple Output is another of the LTE major technology innovations used to
improve the performance of the system. This technology provides LTE with the ability to further
improve its data throughput and spectral efficiency above that obtained by the use of OFDM.
Although MIMO adds complexity to the system in terms of processing and the number of antennas required, it enables far higher data rates to be achieved along with much improved spectral efficiency. As a result, MIMO has been included as an integral part of LTE.

LTE MIMO basics


The basic concept of MIMO utilizes the multipath signal propagation that is present in all terrestrial
communications. Rather than providing interference, these paths can be used to advantage.

General Outline of MIMO system


The transmitter and receiver have more than one antenna and, using the processing power available at either end of the link, they are able to utilize the different paths that exist between the two entities to provide improvements in data rate or signal to noise ratio.

Note on MIMO:
Two major limitations in communications channels can be multipath interference, and the data throughput limitations
as a result of Shannon's Law. MIMO provides a way of utilising the multiple signal paths that exist between a
transmitter and receiver to significantly improve the data throughput available on a given channel with its defined
bandwidth. By using multiple antennas at the transmitter and receiver along with some complex digital signal
processing, MIMO technology enables the system to set up multiple data streams on the same channel, thereby
increasing the data capacity of a channel.

Click on the link for a MIMO tutorial

MIMO is being used increasingly in many high data rate technologies including Wi-Fi and other
wireless and cellular technologies to provide improved levels of efficiency. Essentially MIMO
employs multiple antennas on the receiver and transmitter to utilise the multi-path effects that always
exist to transmit additional data, rather than causing interference.
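As a toy illustration of the spatial multiplexing idea, the sketch below sends two symbols over an assumed 2x2 channel matrix and recovers them with a simple zero-forcing style solve; the channel values are invented and real LTE receivers use considerably more sophisticated detection.

```python
# Toy 2x2 spatial multiplexing sketch (illustrative only, invented channel).
import numpy as np

H = np.array([[0.9, 0.3],        # assumed channel gains between antenna pairs
              [0.2, 1.1]])
x = np.array([1.0, -1.0])        # two simultaneous symbols, one per TX antenna

y = H @ x                        # what the two receive antennas observe
x_hat = np.linalg.solve(H, y)    # separate the streams using knowledge of H

print("Transmitted streams:", x)
print("Recovered streams:  ", x_hat)
```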

LTE MIMO

The use of MIMO technology has been introduced successively over the different releases of the
LTE standards.
MIMO has been a cornerstone of the LTE standard, but initially, in releases 8 and 9, multiple transmit antennas on the UE were not supported because, in the interest of power reduction, only a single RF power amplifier was assumed to be available.
It was in Rel 10 that a number of new schemes were introduced, including closed loop spatial multiplexing for SU-MIMO as well as multiple transmit antennas on the UE.

LTE MIMO modes


There are several ways in which MIMO is implemented in LTE. These vary according to the
equipment used, the channel function and the equipment involved in the link.

Single antenna: This is the form of wireless transmission used on most basic wireless links. A single data stream is transmitted on one antenna and received by one or more antennas. It may also be referred to as SISO (Single In Single Out) or SIMO (Single In Multiple Out) depending upon the antennas used. SIMO is also called receive diversity.

Transmit diversity: This form of LTE MIMO scheme utilizes the transmission of the same information stream from multiple antennas; LTE supports two or four transmit antennas for this technique. The information is coded differently using Space Frequency Block Codes. This mode provides an improvement in signal quality at reception but does not improve the data rate. Accordingly this form of LTE MIMO is used on the Common Channels as well as the Control and Broadcast channels.

Open loop spatial multiplexing: This form of MIMO used within the LTE system involves sending two information streams which can be transmitted over two or more antennas. There is no precoding feedback from the UE, although a TRI, Transmit Rank Indicator, sent from the UE can be used by the base station to determine the number of spatial layers.

Closed loop spatial multiplexing: This form of LTE MIMO is similar to the open loop version, but as the name indicates it has feedback incorporated to close the loop. A PMI, Pre-coding Matrix Indicator, is fed back from the UE to the base station. This enables the transmitter to pre-code the data to optimize the transmission and enable the receiver to more easily separate the different data streams.

Closed loop with pre-coding: This is another form of LTE MIMO, where a single code word is transmitted over a single spatial layer. It can be used as a fall-back mode for closed loop spatial multiplexing and it may also be associated with beamforming.

Multi-User MIMO, MU-MIMO: This form of LTE MIMO enables the system to target different spatial streams to different users.

Beam-forming: This is the most complex of the MIMO modes and it is likely to use linear
arrays that will enable the antenna to focus on a particular area. This will reduce
interference, and increase capacity as the particular UE will have a beam formed in their
particular direction. In this a single code word is transmitted over a single spatial layer. A
dedicated reference signal is used for an additional port. The terminal estimates the channel
quality from the common reference signals on the antennas.

LTE frequency bands & spectrum

There is a growing number of frequency bands being designated for use with LTE. Many of the LTE frequency bands are already in use for other cellular systems, whereas other LTE bands are new and are being introduced as other users are re-allocated spectrum elsewhere.

FDD and TDD LTE frequency bands


FDD spectrum requires paired bands, one for the uplink and one for the downlink, while TDD requires a single band as uplink and downlink are on the same frequency but separated in time.
As a result, there are different LTE band allocations for TDD and FDD. In some cases these bands may overlap, and it is therefore feasible, although unlikely, that both TDD and FDD transmissions could be present on a particular LTE frequency band.
The greater likelihood is that a single UE or mobile will need to detect whether a TDD or FDD
transmission should be made on a given band. UEs that roam may encounter both types on the
same band. They will therefore need to detect what type of transmission is being made on
that particular LTE band in its current location.
The different LTE frequency allocations or LTE frequency bands are allocated numbers. Currently
the LTE bands between 1 & 22 are for paired spectrum, i.e. FDD, and LTE bands between 33 & 41
are for unpaired spectrum, i.e. TDD.
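A trivial helper reflecting this numbering convention is sketched below; it simply encodes the FDD and TDD ranges mentioned above and is illustrative only.

```python
# Illustrative helper encoding the band numbering convention described above.
def duplex_mode(band: int) -> str:
    if 1 <= band <= 22:
        return "FDD (paired spectrum)"
    if 33 <= band <= 41:
        return "TDD (unpaired spectrum)"
    return "later / other allocation"

for b in (3, 8, 38, 40):
    print(f"LTE band {b}: {duplex_mode(b)}")
```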

LTE frequency band definitions

FDD LTE frequency band allocations


There are a large number of allocations of radio spectrum that have been reserved for FDD, frequency division duplex, LTE use.
The FDD LTE frequency bands are paired to allow simultaneous transmission on two
frequencies. The bands also have a sufficient separation to enable the transmitted signals not to
unduly impair the receiver performance. If the signals are too close then the receiver may be
"blocked" and the sensitivity impaired. The separation must be sufficient to enable the roll-off of the
antenna filtering to give sufficient attenuation of the transmitted signal within the receive band.
FDD LTE BANDS & FREQUENCIES

LTE band number | Uplink (MHz) | Downlink (MHz) | Width of band (MHz) | Duplex spacing (MHz) | Band gap (MHz)
1 | 1920 - 1980 | 2110 - 2170 | 60 | 190 | 130
2 | 1850 - 1910 | 1930 - 1990 | 60 | 80 | 20
3 | 1710 - 1785 | 1805 - 1880 | 75 | 95 | 20
4 | 1710 - 1755 | 2110 - 2155 | 45 | 400 | 355
5 | 824 - 849 | 869 - 894 | 25 | 45 | 20
6 | 830 - 840 | 875 - 885 | 10 | 35 | 25
7 | 2500 - 2570 | 2620 - 2690 | 70 | 120 | 50
8 | 880 - 915 | 925 - 960 | 35 | 45 | 10
9 | 1749.9 - 1784.9 | 1844.9 - 1879.9 | 35 | 95 | 60
10 | 1710 - 1770 | 2110 - 2170 | 60 | 400 | 340
11 | 1427.9 - 1452.9 | 1475.9 - 1500.9 | 20 | 48 | 28
12 | 698 - 716 | 728 - 746 | 18 | 30 | 12
13 | 777 - 787 | 746 - 756 | 10 | -31 | 41
14 | 788 - 798 | 758 - 768 | 10 | -30 | 40
15 | 1900 - 1920 | 2600 - 2620 | 20 | 700 | 680
16 | 2010 - 2025 | 2585 - 2600 | 15 | 575 | 560
17 | 704 - 716 | 734 - 746 | 12 | 30 | 18
18 | 815 - 830 | 860 - 875 | 15 | 45 | 30
19 | 830 - 845 | 875 - 890 | 15 | 45 | 30
20 | 832 - 862 | 791 - 821 | 30 | -41 | 71
21 | 1447.9 - 1462.9 | 1495.5 - 1510.9 | 15 | 48 | 33
22 | 3410 - 3500 | 3510 - 3600 | 90 | 100 | 10
23 | 2000 - 2020 | 2180 - 2200 | 20 | 180 | 160
24 | 1625.5 - 1660.5 | 1525 - 1559 | 34 | -101.5 | 135.5
25 | 1850 - 1915 | 1930 - 1995 | 65 | 80 | 15
26 | 814 - 849 | 859 - 894 | 30 / 40 | - | -
27 | 807 - 824 | 852 - 869 | 17 | 45 | 28
28 | 703 - 748 | 758 - 803 | 45 | 55 | 10
29 | n/a | 717 - 728 | 11 | - | 10
30 | 2305 - 2315 | 2350 - 2360 | 10 | 45 | 35
31 | 452.5 - 457.5 | 462.5 - 467.5 | 10 | - | -
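The derived columns in the table above follow directly from the band edges; a minimal sketch using band 1 as the worked example is shown below, assuming the duplex spacing is measured between the lower band edges and the band gap between the top of the uplink and the bottom of the downlink (which matches the band 1 figures).

```python
# Sketch of how the derived FDD band columns relate to the band edges (band 1).
uplink_mhz = (1920.0, 1980.0)
downlink_mhz = (2110.0, 2170.0)

width = uplink_mhz[1] - uplink_mhz[0]               # 60 MHz
duplex_spacing = downlink_mhz[0] - uplink_mhz[0]    # 190 MHz
band_gap = downlink_mhz[0] - uplink_mhz[1]          # 130 MHz

print(f"Width: {width:.0f} MHz, duplex spacing: {duplex_spacing:.0f} MHz, "
      f"band gap: {band_gap:.0f} MHz")
```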

TDD LTE frequency band allocations


With the interest in TDD LTE, there are several unpaired frequency allocations that are being prepared for LTE TDD use. The TDD LTE bands are unpaired because the uplink and downlink share the same frequency, being time multiplexed.
TDD LTE BANDS & FREQUENCIES

LTE band number | Allocation (MHz) | Width of band (MHz)
33 | 1900 - 1920 | 20
34 | 2010 - 2025 | 15
35 | 1850 - 1910 | 60
36 | 1930 - 1990 | 60
37 | 1910 - 1930 | 20
38 | 2570 - 2620 | 50
39 | 1880 - 1920 | 40
40 | 2300 - 2400 | 100
41 | 2496 - 2690 | 194
42 | 3400 - 3600 | 200
43 | 3600 - 3800 | 200
44 | 703 - 803 | 100

There are regular additions to the LTE frequency bands / LTE spectrum allocations as a result of negotiations at the ITU regulatory meetings. These LTE allocations are resulting in part from the digital dividend, and also from the pressure caused by the ever growing need for mobile communications. Many of the new LTE spectrum allocations are relatively small, often 10 - 20 MHz in bandwidth, and this is a cause for concern. With LTE-Advanced needing bandwidths of 100 MHz, channel aggregation over a wide set of frequencies may be needed, and this has been recognised as a significant technological problem.

LTE UE categories

In the same way that a variety of other systems adopted different categories for the handsets or user equipment, so too there are 3G LTE UE categories. These LTE categories define the standards to which a particular handset, dongle or other equipment will operate.

LTE UE category rationale


The LTE UE categories or UE classes are needed to ensure that the base station, or eNodeB, eNB
can communicate correctly with the user equipment. By relaying the LTE UE category information to
the base station, it is able to determine the performance of the UE and communicate with it
accordingly.
As the LTE category defines the overall performance and the capabilities of the UE, it is possible for
the eNB to communicate using capabilities that it knows the UE possesses. Accordingly the eNB will
not communicate beyond the performance of the UE.

LTE UE category definitions


There are five different LTE UE categories defined. As can be seen in the tables below, the different LTE UE categories have a wide range in the supported parameters and performance. LTE category 1, for example, does not support MIMO, but LTE UE category 5 supports 4x4 MIMO.
It is also worth noting that UE class 1 does not offer the performance offered by that of the highest
performance HSPA category. Additionally all LTE UE categories are capable of receiving
transmissions from up to four antenna ports.
A summary of the different LTE UE category parameters is given in the tables below.
HEADLINE DATA RATES FOR LTE UE CATEGORIES (Mbps)

Link | Cat 1 | Cat 2 | Cat 3 | Cat 4 | Cat 5
Downlink | 10 | 50 | 100 | 150 | 300
Uplink | 5 | 25 | 50 | 50 | 75

While the headline rates for the different LTE UE categories or UE classes show the maximum data
rates achievable, it is worth looking in further detail at the underlying performance characteristics.
UL AND DL PARAMETERS FOR LTE UE CATEGORIES

Parameter | Cat 1 | Cat 2 | Cat 3 | Cat 4 | Cat 5
Max number of DL-SCH transport block bits received in a TTI | 10 296 | 51 024 | 102 048 | 150 752 | 302 752
Max number of bits of a DL-SCH block received in a TTI | 10 296 | 51 024 | 75 376 | 75 376 | 151 376
Total number of soft channel bits | 250 368 | 1 237 248 | 1 237 248 | 1 827 072 | 3 667 200
Max number of supported layers for spatial multiplexing in DL | 1 | 2 | 2 | 2 | 4
Max number of bits of an UL-SCH transport block transmitted in a TTI | 5 160 | 25 456 | 51 024 | 51 024 | 75 376
Support for 64-QAM in UL | No | No | No | No | Yes

From this it can be seen that the peak downlink data rate for a Category 5 UE using 4x4 MIMO is
approximately 300 Mbps, and 150 Mbps for a Category 4 UE using 2x2 MIMO. Also in the Uplink,
LTE UE category 5 provides a peak data rate of 75 Mbps using 64-QAM.
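The link between the per-TTI transport block sizes in the table above and the headline category rates can be sketched as follows; with a 1 ms TTI, the maximum DL-SCH bits per TTI map directly onto the approximate peak downlink rates.

```python
# Sketch relating max DL-SCH bits per TTI to the headline category rates.
TTI_SECONDS = 1e-3   # one LTE subframe
MAX_DL_SCH_BITS_PER_TTI = {1: 10_296, 2: 51_024, 3: 102_048, 4: 150_752, 5: 302_752}

for cat, bits in MAX_DL_SCH_BITS_PER_TTI.items():
    peak_mbps = bits / TTI_SECONDS / 1e6
    print(f"Category {cat}: ~{peak_mbps:.0f} Mbps peak downlink")
```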

Note:
DL-SCH = Downlink shared channel
UL-SCH = Uplink shared channel
TTI = Transmission Time Interval

LTE Category 0
With the considerable level of development being undertaken into the Internet of Things, IoT and
general machine to machine, M2M communications, there has been a growing need to develop an
LTE category focussed on these applications. Here, much lower data rates are needed, often only in
short bursts and an accompanying requirement is for the remote device or machine to be able to
draw only low levels of current.
To enable the requirements of these devices to be met using LTE, a new LTE category was developed. Referred to as LTE Category 0, or simply Cat 0, this new category has a reduced performance requirement that meets the needs of many machines while significantly reducing complexity and current consumption. Whilst Category 0 offered a reduced specification, it still complied with the LTE system requirements.

LTE CATEGORY 0 PERFORMANCE SUMMARY

Parameter | Category 0 performance
Peak downlink rate | 1 Mbps
Peak uplink rate | 1 Mbps
Max number of downlink spatial layers | 1
Number of UE RF chains | 1
Duplex mode | Half duplex
UE receive bandwidth | 20 MHz
Maximum UE transmit power | 23 dBm

The new LTE Category 0 was introduced in Rel 12 of the 3GPP standards, and it is being advanced in further releases.
One major advantage of LTE Category 0 is that the modem complexity is considerably reduced
when compared to other LTE Categories. It is expected that the modem complexity for a Cat 0
modem will be around 50% that of a Category 1 modem.

LTE UE category summary


In the same way that category information is used for virtually all cellular systems from GPRS onwards, so the LTE UE category information is of great importance. While users may not be particularly aware of the category of their UE, it will match the performance and allow the eNB to communicate effectively with all the UEs that are connected to it.

LTE SAE: System Architecture Evolution

Along with 3G LTE - Long Term Evolution, which applies more to the radio access technology of the cellular telecommunications system, there is also an evolution of the core network, known as SAE - System Architecture Evolution. This new architecture has been developed to provide a considerably higher level of performance that is in line with the requirements of LTE.
As a result it is anticipated that operators will commence introducing hardware conforming to the
new System Architecture Evolution standards so that the anticipated data levels can be handled
when 3G LTE is introduced.
The new SAE, System Architecture Evolution has also been developed so that it is fully compatible
with LTE Advanced, the new 4G technology. Therefore when LTE Advanced is introduced, the
network will be able to handle the further data increases with little change.

Reason for SAE System Architecture Evolution


The SAE System Architecture Evolution offers many advantages over previous topologies and systems used for cellular core networks. As a result it is anticipated that it will be widely adopted by the cellular operators.
SAE System Architecture Evolution will offer a number of key advantages:
1. Improved data capacity: With 3G LTE offering data download rates of 100 Mbps, and
the focus of the system being on mobile broadband, it will be necessary for the network to be
able to handle much greater levels of data. To achieve this it is necessary to adopt a system
architecture that lends itself to much greater levels of data transfer.
2. All IP architecture: When 3G was first developed, voice was still carried as circuit switched
data. Since then there has been a relentless move to IP data. Accordingly the new SAE,
System Architecture Evolution schemes have adopted an all IP network configuration.
3. Reduced latency: With increased levels of interaction being required and much faster
responses, the new SAE concepts have been evolved to ensure that the levels of
latency have been reduced to around 10 ms. This will ensure that applications using 3G
LTE will be sufficiently responsive.
4. Reduced OPEX and CAPEX: A key element for any operator is to reduce costs. It is therefore essential that any new design reduces both the capital expenditure (CAPEX) and the operational expenditure (OPEX). The new flat architecture used for SAE System Architecture Evolution means that only two node types are used. In addition to this, a high level of automatic configuration is introduced and this reduces the set-up and commissioning time.

SAE System Architecture Evolution basics


The new SAE network is based upon the GSM / WCDMA core networks to enable simplified operations and easy deployment. Despite this, the SAE network brings in some major changes, and allows far more efficient and effective transfer of data.

There are several common principles used in the development of the LTE SAE network:

a common gateway node and anchor point for all technologies.

an optimized architecture for the user plane with only two node types.

an all IP based system with IP based protocols used on all interfaces.

a split in the control / user plane between the MME, mobility management entity and
the gateway.

a radio access network / core network functional split similar to that used on
WCDMA / HSPA.

integration of non-3GPP access technologies (e.g. cdma2000, WiMAX, etc) using client as well as network based mobile-IP.

The main element of the LTE SAE network is what is termed the Evolved Packet Core or EPC. This
connects to the eNodeBs as shown in the diagram below.

LTE SAE Evolved Packet Core


As seen within the diagram, the LTE SAE Evolved Packet Core, EPC consists of four main
elements as listed below:

Mobility Management Entity, MME: The MME is the main control node for the LTE SAE access network, handling a number of features:

- Idle mode UE tracking
- Bearer activation / de-activation
- Choice of SGW for a UE
- Intra-LTE handover involving core network node relocation
- Interacting with the HSS to authenticate the user on attachment and implementing roaming restrictions
- Acting as a termination point for the Non-Access Stratum (NAS)
- Providing temporary identities for UEs
- The SAE MME acts as the termination point for ciphering protection for NAS signalling. As part of this it also handles the security key management. Accordingly the MME is the point at which lawful interception of signalling may be made.
- Paging procedure
- The S3 interface terminates in the MME, thereby providing the control plane function for mobility between LTE and 2G/3G access networks.
- The SAE MME also terminates the S6a interface to the home HSS for roaming UEs.

It can therefore be seen that the SAE MME provides a considerable level of overall control
functionality.

Serving Gateway, SGW: The Serving Gateway, SGW, is a data plane element within the LTE SAE. Its main purpose is to manage the user plane mobility and it also acts as the main border between the Radio Access Network, RAN and the core network. The SGW also maintains the data paths between the eNodeBs and the PDN Gateways. In this way the SGW forms an interface for the data packet network at the E-UTRAN.
Also when UEs move across areas served by different eNodeBs, the SGW serves as a mobility anchor, ensuring that the data path is maintained.

PDN Gateway, PGW: The LTE SAE PDN gateway provides connectivity for the UE to external packet data networks, fulfilling the function of entry and exit point for UE data. The UE may have connectivity with more than one PGW for accessing multiple PDNs.

Policy and Charging Rules Function, PCRF: This is the generic name for the entity within the LTE SAE EPC which detects the service flow and enforces the charging policy. For applications that require dynamic policy or charging control, a network element entitled the Applications Function, AF is used.

LTE SAE PCRF Interfaces

LTE SAE Distributed intelligence


In order that requirements for increased data capacity and reduced latency can be met, along with
the move to an all-IP network, it is necessary to adopt a new approach to the network structure.
For 3G UMTS / WCDMA the UTRAN (UMTS Terrestrial Radio Access Network, comprising the Node Bs or base stations and Radio Network Controllers) employed low levels of autonomy. The Node Bs were connected in a star formation to the Radio Network Controllers (RNCs) which carried out the majority of the management of the radio resource. In turn the RNCs connected to the core network.
To provide the required functionality within LTE SAE, the basic system architecture sees the removal
of a layer of management. The RNC is removed and the radio resource management is devolved to
the base-stations. The new style base-stations are called eNodeBs or eNBs.
The eNBs are connected directly to the core network gateway via a newly defined "S1 interface". In
addition to this the new eNBs also connect to adjacent eNBs in a mesh via an "X2 interface". This
provides a much greater level of direct interconnectivity. It also enables many calls to be routed very
directly as a large number of calls and connections are to other mobiles in the same or adjacent
cells. The new structure allows many calls to be routed far more directly and with only
minimum interaction with the core network.

In addition to the new Layer 1 and Layer 2 functionality, eNBs handle several other functions. This
includes the radio resource control including admission control, load balancing and radio mobility
control including handover decisions for the mobile or user equipment (UE).
The additional levels of flexibility and functionality given to the new eNBs mean that they are more
complex than the UMTS and previous generations of base-station. However the new 3G LTE SAE
network structure enables far higher levels of performance. In addition to this their flexibility enables
them to be updated to handle new upgrades to the system including the transition from 3G LTE to
4G LTE Advanced.
The new System Architecture Evolution, SAE for LTE provides a new approach for the core network, enabling far higher levels of data to be transported so that it can support the much higher data rates that will be possible with LTE. In addition to this, it offers other features that enable the CAPEX and OPEX to be reduced when compared to existing systems, thereby enabling higher levels of efficiency to be achieved.
LTE SON: Self Organizing Networks

With LTE requiring smaller cell sizes to enable the much greater levels of data traffic to be handled, the networks have become considerably more complicated, and trying to plan and manage the network centrally is not as viable. Coupled with the need to reduce costs by reducing manual input, there has been a growing impetus to implement self-organizing networks.
Accordingly LTE can be seen as one of the major drivers behind the self-organizing network, SON philosophy, and 3GPP developed many of the requirements for LTE SON to sit alongside the basic functionality of LTE. As a result the standards for LTE SON are embedded within the 3GPP standards.

LTE SON development


The term SON came into frequent use after it was adopted by the Next Generation Mobile Networks, NGMN alliance. The idea came about as a result of the need within LTE to be able to deploy many more cells. Femtocells and other microcells are an integral part of the LTE deployment strategy. With revenue per bit falling, costs for deployment must be kept to a minimum, as well as ensuring the network is operating at its greatest efficiency.
3GPP, the Third Generation Partnership Project, has created the standards for SON and as they are generally first to be deployed with LTE, they are often referred to as LTE SON.

While 3GPP has generated the standards, they have been based upon long term objectives for a
'SON-enabled broadband mobile network' set out by the NGMN.
NGMN has defined the necessary use cases, measurements, procedures and open interfaces to
ensure that multivendor offerings are available. 3GPP has incorporated these aspirations into
useable standards.

Major elements of LTE SON (Self Organizing Network)


Although LTE is one of the major drivers for the generic SON, self-optimising network, technology, the basic requirements remain the same whatever the technology to which it is applied.
The main elements of SON include:

Self configuration: The aim for the self configuration aspects of LTE SON is to enable new
base stations to become essentially "Plug and Play" items. They should need as little manual
intervention in the configuration process as possible. Not only will they be able to organize
the RF aspects, but also configure the backhaul as well.

Self optimisation: Once the system has been set up, LTE SON capabilities will enable the
base station to optimise the operational characteristics to best meet the needs of the overall
network.

Self-healing: Another major feature of LTE SON is to enable the network to self-heal. It will
do this by changing the characteristics of the network to mask the problem until it is fixed.
For example, the boundaries of adjacent cells can be increased by changing antenna
directions and increasing power levels, etc..

Typically an LTE SON system is a software package with relevant options that is incorporated into an
operator's network.

Note on SON, Self Organizing Networks:


SON mainly came out of the requirements of LTE and the more complicated networks that will arise. However the concepts behind SON can be applied to any network, enabling its efficiency to be increased while keeping costs low.

Accordingly, it is being used increasingly to reduce operational and capital expenditure by adding software to the
network to enable it to organise and run itself.

Click on the link for further information about Self Organising Networks, SON

LTE SON and 3GPP standards


LTE SON has been standardised in the various 3GPP standards. It was first incorporated into 3GPP release 8, and further functionality has been progressively added in subsequent releases of the standards.
One of the major aims of the 3GPP standardization of SON features is to ensure that multi-vendor network environments operate correctly with LTE SON. As a result, 3GPP has defined a set of LTE SON use cases and the associated SON functions.
As the functionality of LTE advances, the LTE SON standardisation effectively tracks the LTE network evolution stages. In this way SON will be applicable to the LTE networks.

Voice over LTE, VoLTE

The Voice over LTE, VoLTE scheme was devised as a result of operators seeking a standardized system for transferring voice traffic over LTE.
Originally LTE was seen as a completely IP cellular system just for carrying data, and operators would be able to carry voice either by reverting to 2G / 3G systems or by using VoIP in one form or another.

From around 2014, phones like the iPhone 6 incorporated VoLTE as standard

However it was seen that this would lead to fragmentation and incompatibility not allowing all phones
to communicate with each other and this would reduce voice traffic. Additionally SMS services are
still widely used, often proving a means of set-up for other applications.
Even though revenue from voice calls and SMS is falling, it was still necessary to have a viable and standardized scheme for carrying voice and SMS over LTE in order to protect this revenue.

Options for LTE Voice


When looking at the options for ways of carrying voice over the LTE system, a number of possible solutions were investigated. A number of alliances were set up to promote different ways of providing the service. The main systems proposed are outlined below:

CSFB, Circuit Switched Fall Back: The circuit switched fall-back, CSFB option for
providing voice over LTE has been standardized under 3GPP specification 23.272.
Essentially LTE CSFB uses a variety of processes and network elements to enable the
circuit to fall back to the 2G or 3G connection (GSM, UMTS, CDMA2000 1x) before a circuit
switched call is initiated.
The specification also allows for SMS to be carried as this is essential for very many set-up
procedures for cellular telecommunications. To achieve this the handset uses an interface
known as SGs which allows messages to be sent over an LTE channel.

SV-LTE - Simultaneous Voice LTE: SV-LTE allows packet switched LTE services to run simultaneously with a circuit switched voice service. It provides the facilities of CSFB at the same time as running a packet switched data service. It has the disadvantage that it requires two radios to run at the same time within the handset, which has a serious impact on battery life - already a major issue.

VoLGA, Voice over LTE via GAN: The VoLGA standard was based on the existing 3GPP
Generic Access Network (GAN) standard, and the aim was to enable LTE users to receive a
consistent set of voice, SMS (and other circuit-switched) services as they transition between
GSM, UMTS and LTE access networks. For mobile operators, the aim of VoLGA was to
provide a low-cost and low-risk approach for bringing their primary revenue generating
services (voice and SMS) onto the new LTE network deployments.

One Voice / later called Voice over LTE, VoLTE: The Voice over LTE, VoLTE scheme for
providing voice over an LTE system utilises IMS enabling it to become part of a rich media
solution. It was the option chosen by the GSMA for use on LTE and is the standardised
method for providing SMS and voice over LTE.

Voice over LTE, VoLTE formation


Originally the concept for an SMS and voice system over LTE using IMS had been opposed by many
operators because of the complexity of IMS. They had seen it as far too expensive and burdensome
to introduce and maintain.
However, the One Voice profile for Voice over LTE was developed by a collaboration of over forty operators and vendors including AT&T, Verizon Wireless, Nokia and Alcatel-Lucent.
At the 2010 GSMA Mobile World Congress, GSMA announced that they were supporting the One
Voice solution to provide Voice over LTE.
To achieve a workable system, a cut down variant of IMS was used. It was felt that this would be acceptable to operators while still providing the functionality required.
The VoLTE system is based on the IMS MMTel concepts that were previously in existence. It has
been specified in the GSMA profile IR 92.

Voice over LTE, VoLTE basics


VoLTE, Voice over LTE is an IMS-based specification. Adopting this approach, it enables the system
to be integrated with the suite of applications that will become available on LTE.

Note on IMS:
The IP Multimedia Subsystem or IP Multimedia Core Network Subsystem, IMS is an architectural framework for delivering Internet Protocol, IP multimedia services. It enables a variety of services to be run seamlessly rather than having several disparate applications operating concurrently.

Click for an IMS tutorial

In order that IMS could be implemented in a fashion that would be acceptable to operators, a cut down version was defined. This not only reduced the number of entities required in the IMS network, but it also simplified the interconnectivity - focussing on the elements required for VoLTE.

Reduced IMS network for VoLTE


As can be seen there are several entities within the reduced IMS network used for VoLTE:

IP-CAN, IP Connectivity Access Network: This consists of the E-UTRAN and the MME.

P-CSCF, Proxy Call State Control Function: The P-CSCF is the user to network proxy. In
this respect all SIP signalling to and from the user runs via the P-CSCF whether in the home
or a visited network.

I-CSCF, Interrogating Call State Control Function: The I-CSCF is used for forwarding an initial SIP request to the S-CSCF when the initiator does not know which S-CSCF should receive the request.

S-CSCF, Serving Call State Control Function: The S-CSCF undertakes a variety of actions within the overall system, and it has a number of interfaces to enable it to communicate with other entities within the overall system.

AS, Application Server: It is the application server that handles the voice as an application.

HSS, Home Subscriber Server: The IMS HSS or home subscriber server is the main subscriber database used within IMS. The IMS HSS provides details of the subscribers to the other entities within the IMS network, enabling users to be granted access or not
The IMS calls for VoLTE are processed by the subscriber's S-CSCF in the home network. The
connection to the S-CSCF is via the P-CSCF. Dependent upon the network in use and overall
location within a network, the P-CSCF will vary, and a key element in the enablement of voice calling
capability is the discovery of the P-CSCF.
An additional requirement for VoLTE enabled networks is to have a means of handing back to circuit switched legacy networks in a seamless manner, while only having one transmitting radio in the handset to preserve battery life. A system known as SRVCC - Single Radio Voice Call Continuity - is required for this. Read more about SRVCC - Single Radio Voice Call Continuity

VoLTE codecs

As with any digital voice system, a codec must be used. The VoLTE codec is that specified by 3GPP
and is the adaptive multi-rate, AMR codec that is used in many other cellular systems from GSM
through UMTS and now to LTE. The AMR-wideband codec may also be used.
The use of the AMR codec for VoLTE also provides advantages in terms of interoperability with legacy systems. No transcoders are needed as most legacy systems are now moving towards the AMR codec.
In addition to this, support for dual tone multi-frequency, DTMF signalling is also mandatory as this is
widely used for many forms of signalling over analogue telephone lines.

VoLTE IP versions
With the update from IPv4 to IPv6, the version of IP used in any system is of importance.
VoLTE devices are required to operate in dual stack mode catering for both IPv4 and IPv6.
If the IMS application profile assigns an IPv6 address, then the device is required to prefer that address and also to specifically use it during the P-CSCF discovery phase.
One of the issues with voice over IP type calls is the overhead resulting from the IP header. To overcome this issue VoLTE requires that IP header compression using the RoHC, Robust Header Compression, protocol is applied to voice data packet headers.

SRVCC: Single Radio Voice Call Continuity

SRVCC - Single Radio Voice Call Continuity is a level of functionality that is required within VoLTE systems to enable packet domain calls on LTE to be handed over to legacy circuit switched voice systems like GSM, UMTS and CDMA 1x in a seamless manner.
As LTE systems deploy, VoLTE coverage will be limited and it is anticipated that it will be many years before complete LTE coverage will be available.
As a result it is necessary for operators to have a system whereby this complicated handover can be
accommodated in a seamless fashion. This scheme needs to be in place as soon as they start to
deploy VoLTE.

What is SRVCC?
SRVCC, Single Radio Voice Call Continuity, is a scheme that enables Inter Radio Access Technology, Inter RAT, handover as well as a handover from packet switched to circuit switched voice calls.
By using SRVCC operators are able to make the handovers while maintaining existing quality of
service, QoS and also ensuring that call continuity meets the critical requirements for emergency
calls.

Some ideas for handover require that the handset has two active radios to facilitate handover. This is
not ideal because it requires additional circuitry to enable the two radios to be active simultaneously
and it also adds considerably to battery drain.
The SRVCC requires only a single active radio in the handset and requires some upgrades to the
supporting network infrastructure.

SRVCC network architecture


The concept for SRVCC was originally included in the 3GPP specification Release 8. Since then it
has evolved to take account of the various issues and changing requirements. As a result GSMA
recommends that 3GPP Rel 10 or later is implemented as this ensures a considerably lower level of
voice interruption and dropped calls.
Network upgrades are needed in both the LTE network and the legacy network or networks. SRVCC requires software upgrades to the MSS - Mobile SoftSwitch subsystem in the legacy MSC - Mobile Switching Centre, the IMS subsystem and the LTE/EPC subsystem. No upgrades are required for the radio access network of the legacy system, meaning that the majority of the legacy system remains unaffected.
The upgrades required for the MSC are normally relatively easy to manage. The MSC is normally centrally located and not dispersed around the network, and this makes upgrades easier to manage. If the existing MSCs are not easily accessible then a new dedicated MSC can be used that has been upgraded to handle the SRVCC requirements.

How SRVCC works


The SRVCC implementation controls the transfer of calls in both directions.

LTE to legacy network handover


Handover from LTE to the legacy network is required when the user moves out of the LTE coverage
area. Using SRVCC, the handover is undertaken in two stages.

Radio Access Technology transfer: The handover of the radio access network; this is a well-established protocol that is in use for transfers from 3G to 2G, for example.

Session transfer: The session transfer is the new element that is required for SRVCC. It is
required to move the access control and voice media anchoring from the Evolved Packet
Core, EPC of the packet switched LTE network to the legacy circuit switched network.

During the handover process the CSCF within the IMS architecture maintains the control of the
whole operation.

Voice handover using SRVCC on LTE


The SRVCC handover process takes place in a number of steps:
1. The handover process is initiated by a request for session transfer from the IMS CSCF.
2. The IMS CSCF responds simultaneously with two commands, one to the LTE network, and
the other to the legacy network.
3. The LTE network receives a Radio Access Network handover execution command through the MME and LTE RAN. This instructs the user device to prepare to move to a circuit switched network for the voice call.
4. The destination legacy circuit switched network receives a session transfer response
preparing it to accept the call from the LTE network.
5. After all the commands have been executed and acknowledged the call is switched to the
legacy network with the IMS CSCF still in control of the call.

Legacy network to LTE


When returning a call to the LTE network much of the same functionality is again used.
To ensure the VoLTE device is able to return to the LTE RAN from the legacy RAN, there are two
options the legacy RAN can implement to provide a swift and effective return:

Allow LTE information to be broadcast on the legacy RAN so the LTE device is able to
perform the cell reselection more easily.

Simultaneously release the connection to the user device and redirect it to the LTE RAN.

SRVCC interruption performance


One of the key issues with VoLTE and SRVCC is the interruption time when handing over from an
LTE RAN to a legacy RAN.
The key methodology behind reducing the time is to simultaneously perform the redirections of the RAN and the session. In this way the user experience is maintained and the actual interruption time is not unduly noticeable.
It has been found that the session redirection is the faster of the two handovers, and therefore it is necessary for the overall handover methodology to accommodate the fact that there are differences between the two.
By Ian Poole

M2M
The Internet of Things, IoT and machine to machine, M2M communications are growing rapidly.
LTE, the Long Term Evolution cellular system is well placed to carry a lot of the traffic for machine to
machine communications.
The issue is that LTE is a complex system designed to carry high data rates, which makes standard LTE devices unnecessarily costly and power hungry for simple M2M use.
To overcome this, a "variant" of LTE, often referred to as LTE-M, has been developed for LTE M2M communications.

LTE-M key issues


There are several requirements for LTE M2M applications if the cellular system is to be viable in
these scenarios:

Wide spectrum of devices: Any LTE machine to machine system must be able to support
a wide variety of different types of devices. These may range from smart meters to vending
machines and automotive fleet management to security and medical devices. These different
devices have many differing requirements, so any LTE-M system needs to be able to be
flexible.

Low cost of devices: Most M2M devices need to be small and fit into equipment that is
very cost sensitive. With many low cost M2M systems already available, LTE-M needs to
provide the benefits of a cellular system, but at low cost.

Long battery life: Many M2M devices will need to be left unattended for long periods of time in areas where there may be no power supply. Maintaining batteries is a costly business and therefore devices should offer a time between battery changes of up to ten years. This means that the LTE-M system must drain very little battery power.

Enhanced coverage : LTE-M applications will need to operate within a variety of locations
- not just where reception is good. They will need to operate within buildings, often in
positions where there is little access and where reception may be poor. Accordingly LTE-M
must be able to operate under all conditions.

Large volumes - low data rates: As it is anticipated that the volumes of remote devices will be enormous, LTE-M must be structured so that networks are able to accommodate vast numbers of connected devices, each of which may only require small amounts of data to be carried, often in short bursts and at low data rates.

Rel 12 updates for LTE-M


A number of updates were introduced in 3GPP Rel 12 to accommodate the LTE-M requirements.
These updates mean that the cost of a low cost M2M modem could be 40 to 50% that of a regular LTE device, making it comparable with an EGPRS modem.
To accommodate these requirements a new UE category, LTE Category 0, has been introduced. These categories define the broad capabilities of the device so that the base station is able to communicate with it properly. Read more about LTE UE categories.
These low cost LTE-M, M2M modems have limited capability in the following areas:

Antennas: There is the capability for only one receive antenna compared to two receive antennas for other device categories.

Transport Block Size: There is a restriction on the transport block size: these low cost LTE-M devices are allowed to send or receive up to 1000 bits of unicast data per sub-frame. This reduces the maximum data rate to 1 Mbps in both the uplink and the downlink (see the short calculation after this list).

Duplex: Half duplex FDD devices are supported as an optional feature - this provides cost savings because the RF switches and duplexers needed for full performance modems can be removed. It also means there is no need for a second phase locked loop for the frequency conversion, although having only one PLL means that switching times between receive and transmit are longer.
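As a quick check on the figure quoted above, the Category 0 peak rate follows directly from the transport block limit and the 1 ms subframe duration:

    # Worked check of the Category 0 peak data rate quoted above.
    transport_block_bits = 1000   # maximum unicast bits per subframe for Category 0
    subframe_duration_s = 0.001   # an LTE subframe lasts 1 ms

    peak_rate_bps = transport_block_bits / subframe_duration_s
    print(peak_rate_bps)          # 1000000.0 bit/s, i.e. 1 Mbps in each direction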

LTE-M features planned for Rel 13

There are several features that are being proposed and prepared for the next release of the 3GPP
standards in terms of LTE M2M capabilities. These include some of the following capabilities:

Reduce bandwidth to 1.4 MHz for uplink and downlink

Reduce transmit power to 20 dBm

Reduce support for downlink transmission modes

Relax requirements that demand high levels of processing, e.g. simplify the downlink modulation scheme and relax the downlink HARQ timeline

It should be stated that these Rel 13 points are currently only proposals and have not yet been implemented.

With a number of competing low power wide area M2M wireless systems such as LoRa and Sigfox being deployed, LTE needs its own M2M capability to ensure that it is able to compete with these growing standards. Otherwise LTE may not be suitable for carrying this form of low data rate data from devices that require long battery life, etc. LTE-M is the cellular operators' answer to this.
By Ian Poole

LTE-U Unlicensed, LTE-LAA


- LTE-U (LTE-Unlicensed), or as it is also known, LTE-LAA (LTE License Assisted Access), utilises unlicensed spectrum, typically in the 5 GHz band, to provide additional radio spectrum.
LTE networks are carrying an increasing amount of data. Although cells can be made smaller to help
accommodate this, it is not the complete solution and more spectrum is needed.
One approach is to use unlicensed spectrum alongside the licensed bands. Known in 3GPP as LTE-LAA - LTE License Assisted Access, or more generally as LTE-U - LTE Unlicensed, it enables access to unlicensed spectrum, especially in the 5 GHz unlicensed band.

LTE-U background
There is a considerable amount of unlicensed spectrum available around the globe. These bands
are used globally to provide unlicensed access for short range radio transmissions. These bands,
called ISM - Industrial, Scientific and Medical bands are allocated in different parts of the
spectrum and are used for a wide variety of applications including microwave ovens, Wi-Fi,
Bluetooth, and much more.
The frequency band of most interest for LTE-U, Unlicensed / LTE-LAA, License Assisted Access is
the 5GHz band. Here there are several hundred MHz of spectrum bandwidth available, although the
exact bands available depend upon the country in question.

5GHz bands for LTE-U / LTE-LAA


In addition to the basic frequency limits, the use of the 5GHz bands for applications such as LTE-U
or LTE-LAA carries some regulatory requirements.
One of the main requirements for access to these frequencies is that of being able to coexist with
other users of the band - a method of Clear Channel Assessment, CCA, or Listen Before Talk, LBT is
required. This often means that instantaneous access may not always be available when LTE-U is
being implemented.
Another requirement is that different power levels are allowed depending upon the country and the part of the band being used. Typically, between 5150 and 5350 MHz there is a maximum power limit of 200 mW and operation is restricted to indoor use only, while the upper frequencies often allow power levels of up to 1 W.

LTE-U / LTE-LAA basics


The use of LTE-U (Unlicensed) / LTE-LAA (License Assisted Access) was first introduced in Rel 13 of the 3GPP standards. Essentially, LTE-U is built upon the carrier aggregation capability of LTE-Advanced that has been deployed since around 2013. Carrier aggregation seeks to increase the overall bandwidth available to a user equipment by enabling it to use more than one channel, either in the same band, or within another band.
There are several ways in which LTE-U can be deployed:

Downlink only: This is the most basic form of LTE-U and it is similar in approach to some of the first LTE carrier aggregation deployments. In this mode the primary cell link is always located in the licensed spectrum bands.
Also when operating in this mode, the LTE eNodeB performs most of the necessary operations to ensure reliable operation is maintained and interference is not caused to other users by ensuring the channel is free.

Uplink and downlink: Full TDD LTE-U operation with the user equipment having an uplink
and downlink connection in the unlicensed spectrum requires the inclusion of more features.

FDD / TDD aggregation: LTE-CA allows the use of carrier aggregation mixes between FDD and TDD. This provides for much greater levels of flexibility when selecting the band to be used within unlicensed spectrum for LTE-LAA operation.
LTE-U relies on the existing core network for the backhaul, and other capabilities like security and authentication. As such no changes are needed to the core network. Some changes are needed to the base station so that it can accommodate the new frequencies and also incorporate the capabilities required to ensure proper sharing of the unlicensed frequencies. In addition to this, the handsets or UEs will need to have the new LTE-U / LTE-LAA capability incorporated into them so they can access LTE on these additional frequencies.

LTE-U / Wi-Fi coexistence


One of the great fears that many have is that the use of LTE-U will swamp the 5 GHz unlicensed band and that Wi-Fi using these frequencies will suffer along with other users.
The LTE-U system is designed to overcome this issue: using a listen before talk, LBT, approach, all users should be able to coexist without undue levels of interference.
There will be cases where LTE-U and Wi-Fi use different channels, and under these circumstances there will be only minimal levels of interference.
It is also possible to run LTE-U and Wi-Fi on the same channel. Under these circumstances both are able to operate, although with a lower data throughput. It is also possible to place a "fairness" algorithm into the eNodeB to ensure that the Wi-Fi signal is not unduly degraded and is still able to support a good data throughput.
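The listen before talk behaviour can be pictured with a minimal sketch, assuming a simple energy-detection threshold and a random back-off; the threshold and contention window below are illustrative values, not figures taken from the 3GPP or ETSI requirements.

    import random

    # Minimal listen-before-talk sketch: energy detection plus random back-off.
    ED_THRESHOLD_DBM = -72.0      # assumed clear channel energy threshold
    MAX_BACKOFF_SLOTS = 15        # assumed contention window

    def channel_is_clear(measured_energy_dbm: float) -> bool:
        # Clear Channel Assessment: the channel counts as idle when the
        # measured energy is below the detection threshold.
        return measured_energy_dbm < ED_THRESHOLD_DBM

    def listen_before_talk(sense_channel) -> int:
        # Defer for a random number of idle sensing slots before transmitting.
        # Returns how many sensing slots were needed in total.
        backoff = random.randint(1, MAX_BACKOFF_SLOTS)
        slots_used = 0
        while backoff > 0:
            slots_used += 1
            if channel_is_clear(sense_channel()):
                backoff -= 1      # count down only while the channel is idle
        return slots_used

    # Example: a channel that is busy (-60 dBm) 30% of the time, quiet otherwise.
    slots = listen_before_talk(lambda: -60.0 if random.random() < 0.3 else -95.0)
    print(f"transmitted after {slots} sensing slots")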

LTE Security
- overview, about the basics of LTE security including the techniques used for LTE
authentication, ciphering, encryption, and identity protection.
LTE security is an issue that is of paramount importance. It is necessary to ensure that LTE security
measures provide the level of security required without impacting the user as this could drive users
away.
Nevertheless with the level of sophistication of security attacks growing, it is necessary to ensure
that LTE security allows users to operate freely and without fear of attack from hackers. Additionally
the network must also be organised in such a way that it is secure against a variety of attacks.

LTE security basics


When developing the LTE security elements there were several main requirements that were borne
in mind:

LTE security had to provide at least the same level of security that was provided by 3G
services.

The LTE security measures should not affect user convenience.

The LTE security measures taken should provide defence from attacks from the Internet.

The security functions provided by LTE should not affect the transition from existing 3G
services to LTE.

The USIM currently used for 3G services should still be used.

To ensure these requirements for LTE security are met, it has been necessary to add further
measures into all areas of the system from the UE through to the core network.
The main changes that have been required to implement the required level of LTE security are
summarised below:

A new hierarchical key system has been introduced in which keys can be changed for different purposes (a simplified sketch of the resulting key hierarchy is given after this list).

The LTE security functions for the Non-Access Stratum, NAS, and Access Stratum, AS have
been separated. The NAS functions are those functions for which the processing is
accomplished between the core network and the mobile terminal or UE. The AS functions
encompass the communications between the network edge, i.e. the Evolved Node B, eNB
and the UE.

The concept of forward security has been introduced for LTE security.

LTE security functions have been introduced between the existing 3G network and the LTE
network.
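The hierarchical key system mentioned in the first point can be pictured as a chain of one-way derivations. The sketch below uses a generic HMAC-SHA-256 step purely for illustration; it is not the key derivation function defined in 3GPP TS 33.401 and the labels are simplified.

    import hashlib
    import hmac

    def derive(parent_key: bytes, label: bytes) -> bytes:
        # Illustrative one-way derivation step (generic HMAC-SHA-256, not the
        # 3GPP KDF): knowing a child key does not reveal its parent.
        return hmac.new(parent_key, label, hashlib.sha256).digest()

    # Simplified view of the LTE key hierarchy.
    k_asme = derive(b"CK||IK derived from the USIM / AuC", b"ASME")  # held in the core
    k_enb = derive(k_asme, b"eNB")                                   # given to the serving eNB

    # NAS keys protect signalling between the UE and the core network.
    nas_keys = {name: derive(k_asme, name.encode()) for name in ("K_NASenc", "K_NASint")}

    # AS keys protect RRC signalling and user plane data between the UE and the eNB.
    as_keys = {name: derive(k_enb, name.encode()) for name in ("K_RRCenc", "K_RRCint", "K_UPenc")}

    # Forward security: at handover a fresh eNB key is derived so the new cell
    # never learns the keys that were used with the previous one.
    k_enb_star = derive(k_enb, b"handover parameters")
    print(len(k_enb_star))   # 32-byte key in this sketch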

LTE USIM
One of the key elements within the security of GSM, UMTS and now LTE was the concept of the
subscriber identity module, SIM. This card carried the identity of the subscriber in an encrypted
fashion and this could allow the subscriber to keep their identity while transferring or upgrading
phones.
With the transition from 2G - GSM to 3G - UMTS, the idea of the SIM was upgraded and a USIM - UMTS Subscriber Identity Module - was used. This gave more functionality, had a larger memory, etc.
For LTE, only the USIM may be used - the older SIM cards are not compatible and may not be used.
By Ian Poole

4G LTE Advanced Tutorial


- overview, information and tutorial about the basics of LTE Advanced, the technology developed under 3GPP to meet the 4G requirements known as IMT Advanced.
With the standards definitions now available for LTE, the Long Term Evolution of the 3G services,
eyes are now turning towards the next development, that of the truly 4G technology named IMT
Advanced. The new technology being developed under the auspices of 3GPP to meet these
requirements is often termed LTE Advanced.

In order that the cellular telecommunications technology is able to keep pace with technologies that
may compete, it is necessary to ensure that new cellular technologies are being formulated and
developed. This is the reasoning behind starting the development of the new LTE Advanced
systems, proving the technology and developing the LTE Advanced standards.

In order that the correct solution is adopted for the 4G system, the ITU-R (International
Telecommunications Union - Radiocommunications sector) has started its evaluation process to
develop the recommendations for the terrestrial components of the IMT Advanced radio interface.
One of the main competitors for this is the LTE Advanced solution.
One of the key milestones is October 2010 when the ITU-R decides the framework and key
characteristics for the IMT Advanced standard. Before this, the ITU-R will undertake the evaluation of
the various proposed radio interface technologies of which LTE Advanced is a major contender.

Key milestones for ITU-R IMT Advanced evaluation


The ITU-R has set a number of milestones to ensure that the evaluation of IMT Advanced
technologies occurs in a timely fashion. A summary of the main milestones is given below and this
defines many of the overall timescales for the development of IMT Advanced and in this case LTE
Advanced as one of the main technologies to be evaluated.

KEY MILESTONES ON THE DEVELOPMENT OF 4G LTE-ADVANCED

MILESTONE                                                                          DATE

Issue invitation to propose Radio Interface Technologies.                         March 2008
ITU date for cut-off for submission of proposed Radio Interface Technologies.     October 2009
Cutoff date for evaluation report to ITU.                                          June 2010
Decision on framework of key characteristics of IMT Advanced Radio Interface
Technologies.                                                                      October 2010
Completion of development of radio interface specification recommendations.       February 2011

LTE Advanced development history


With 3G technology established, it was obvious that the rate of development of cellular technology
should not slow. As a result initial ideas for the development of a new 4G system started to be
investigated. In one early investigation which took place on 25 December 2006 with information
released to the press on 9 February 2007, NTT DoCoMo detailed information about trials in which
they were able to send data at speeds up to approximately 5 Gbit/s in the downlink within a 100MHz
bandwidth to a mobile station moving at 10km/h. The scheme used several technologies to achieve
this including variable spreading factor spread orthogonal frequency division multiplex, MIMO,
multiple input multiple output, and maximum likelihood detection. Details of these new 4G trials were
passed to 3GPP for their consideration.
In 2008 3GPP held two workshops on IMT Advanced, where the "Requirements for Further
Advancements for E-UTRA" were gathered. The resulting Technical Report 36.913 was then
published in June 2008 and submitted to the ITU-R defining the LTE-Advanced system as their
proposal for IMT-Advanced.
The development of LTE Advanced / IMT Advanced can be seen to follow an evolution from the 3G services that were developed using UMTS / W-CDMA technology.

COMPARISON OF LTE-A WITH OTHER CELLULAR TECHNOLOGIES

                                   WCDMA      HSPA              HSPA+         LTE                LTE ADVANCED
                                   (UMTS)     (HSDPA / HSUPA)                                    (IMT ADVANCED)

Max downlink speed (bps)           384 k      14 M              28 M          100 M              1 G
Max uplink speed (bps)             128 k      5.7 M             11 M          50 M               500 M
Latency (round trip, approx)       150 ms     100 ms            50 ms (max)   ~10 ms             less than 5 ms
3GPP releases                      Rel 99/4   Rel 5 / 6         Rel 7         Rel 8              Rel 10
Approx years of initial roll out   2003 / 4   2005/6 HSDPA,     2008 / 9      2009 / 10          2014 / 15
                                              2007/8 HSUPA
Access methodology                 CDMA       CDMA              CDMA          OFDMA / SC-FDMA    OFDMA / SC-FDMA

LTE Advanced is not the only candidate technology. WiMAX is also there, offering very high data
rates and high levels of mobility. However it now seems less likely that WiMAX will be adopted as the
4G technology, with LTE Advanced appearing to be better positioned.

LTE Advanced key features


With work starting on LTE Advanced, a number of key requirements and key features are coming to
light. Although not fixed yet in the specifications, there are many high level aims for the new LTE
Advanced specification. These will need to be verified and much work remains to be undertaken in
the specifications before these are all fixed. Currently some of the main headline aims for LTE
Advanced can be seen below:
1. Peak data rates: downlink - 1 Gbps; uplink - 500 Mbps.
2. Spectrum efficiency: 3 times greater than LTE.
3. Peak spectrum efficiency: downlink - 30 bps/Hz; uplink - 15 bps/Hz.
4. Spectrum use: the ability to support scalable bandwidth use and spectrum aggregation
where non-contiguous spectrum needs to be used.
5. Latency: from Idle to Connected in less than 50 ms and then shorter than 5 ms one way for
individual packet transmission.
6. Cell edge user throughput to be twice that of LTE.
7. Average user throughput to be 3 times that of LTE.
8. Mobility: same as that in LTE.
9. Compatibility: LTE Advanced shall be capable of interworking with LTE and 3GPP legacy
systems.
These are many of the development aims for LTE Advanced. Their actual figures and the actual
implementation of them will need to be worked out during the specification stage of the system.
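As a quick check on aims 1 and 3 above, the headline downlink figure is consistent with the bandwidth and spectral efficiency targets:

    # Quick check of the downlink aims listed above.
    peak_rate_bps = 1e9            # aim 1: 1 Gbps downlink
    max_bandwidth_hz = 100e6       # IMT Advanced upper limit of 100 MHz

    # Average spectral efficiency needed to hit the headline rate.
    print(peak_rate_bps / max_bandwidth_hz)     # 10.0 bps/Hz

    # Aim 3 targets a downlink peak of 30 bps/Hz, so the headline rate leaves
    # margin when the full 100 MHz can be aggregated.
    print(30 * max_bandwidth_hz / 1e9)          # 3.0 Gbps theoretical peak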

LTE Advanced technologies


There are a number of key technologies that will enable LTE Advanced to achieve the high data
throughput rates that are required. MIMO and OFDM are two of the base technologies that will be
enablers. Along with these there are a number of other techniques and technologies that will be
employed.

Orthogonal Frequency Division Multiplex, OFDM: OFDM forms the basis of the radio bearer. Along with it there is OFDMA (Orthogonal Frequency Division Multiple Access) along with SC-FDMA (Single Carrier Frequency Division Multiple Access). These will be used in a hybrid format. However the basis for all of these access schemes is OFDM.

Note on OFDM:
Orthogonal Frequency Division Multiplex (OFDM) is a form of transmission that uses a large number of closely spaced carriers that are modulated with low rate data. Normally these signals would be expected to interfere with each other, but by making the signals orthogonal to each other there is no mutual interference. The data to be transmitted is split across all the carriers to give resilience against selective fading from multipath effects.

Click on the link for an OFDM tutorial
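The principle in the note can be sketched in a few lines of numpy: data symbols are placed on orthogonal subcarriers with an inverse FFT and a cyclic prefix is prepended. The sizes used are illustrative only and are not the LTE numerology.

    import numpy as np

    num_subcarriers = 64    # number of closely spaced orthogonal carriers (illustrative)
    cp_length = 16          # cyclic prefix guards against multipath

    # QPSK symbols: each subcarrier carries low rate data.
    bits = np.random.randint(0, 2, 2 * num_subcarriers)
    qpsk = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

    # The IFFT places one symbol on each orthogonal subcarrier.
    time_domain = np.fft.ifft(qpsk)

    # Prepend the cyclic prefix: copy the tail of the symbol to its front.
    ofdm_symbol = np.concatenate([time_domain[-cp_length:], time_domain])

    # The receiver strips the prefix and an FFT recovers the subcarrier data.
    recovered = np.fft.fft(ofdm_symbol[cp_length:])
    print(np.allclose(recovered, qpsk))   # True: the subcarriers do not interfere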

Multiple Input Multiple Output, MIMO: One of the other key enablers for LTE Advanced
that is common to LTE is MIMO. This scheme is also used by many other technologies
including WiMAX and Wi-Fi - 802.11n. MIMO - Multiple Input Multiple Output enables the
data rates achieved to be increased beyond what the basic radio bearer would normally
allow.

Note on MIMO:
Two major limitations in communications channels can be multipath interference, and the data throughput
limitations as a result of Shannon's Law. MIMO provides a way of utilising the multiple signal paths that exist
between a transmitter and receiver to significantly improve the data throughput available on a given channel
with its defined bandwidth. By using multiple antennas at the transmitter and receiver along with some
complex digital signal processing, MIMO technology enables the system to set up multiple data streams on
the same channel, thereby increasing the data capacity of a channel.

Click on the link for a MIMO tutorial
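The capacity gain mentioned in the note can be illustrated with the standard MIMO channel capacity expression; the random channel below is idealised and the figures are for illustration only.

    import numpy as np

    def mimo_capacity_bps_per_hz(n_tx: int, n_rx: int, snr_linear: float) -> float:
        # Shannon capacity of one random Rayleigh channel realisation with the
        # transmit power shared equally across the antennas:
        # C = log2 det( I + (SNR / n_tx) * H * H^H )  bits/s/Hz
        h = (np.random.randn(n_rx, n_tx) + 1j * np.random.randn(n_rx, n_tx)) / np.sqrt(2)
        m = np.eye(n_rx) + (snr_linear / n_tx) * (h @ h.conj().T)
        return float(np.log2(np.linalg.det(m).real))

    np.random.seed(0)
    snr = 10 ** (20 / 10)    # 20 dB signal to noise ratio
    print(round(mimo_capacity_bps_per_hz(1, 1, snr), 1))   # single antenna baseline
    print(round(mimo_capacity_bps_per_hz(4, 4, snr), 1))   # several times higher with 4x4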

For LTE Advanced, the use of MIMO is likely to involve further and more advanced techniques including the use of additional antennas in the matrix to enable additional paths to be used, although as the number of antennas increases, the overhead increases and the return per additional path is less.
In addition to the numbers of antennas increasing, it is likely that techniques such as beamforming may be used to enable the antenna coverage to be focused where it is needed.

Carrier Aggregation, CA: As many operators do not have sufficient contiguous spectrum
to provide the required bandwidths for the very high data rates, a scheme known as carrier
aggregation has been developed. Using this technology operators are able to utilise multiple
channels either in the same bands or different areas of the spectrum to provide the required
bandwidth. Read more about Carrier Aggregation, CA

Coordinated Multipoint : One of the key issues with many cellular systems is that of poor
performance at the cell edges. Interference from adjacent cells along with poor signal quality
lead to a reduction in data rates. For LTE-Advanced a scheme known as coordinated
multipoint has been introduced. Read more about Coordinated Multipoint, CoMP

LTE Relaying: LTE relaying is a scheme that enables signals to be forwarded by remote
stations from a main base station to improve coverage. Read more about LTE Relaying

Device to Device, D2D: LTE D2D is a facility that has been requested by a number of users, in particular the emergency services. It enables swift access via direct communication - a facility that is essential for the emergency services when they may be on the scene of an incident. Read more about Device to Device communications

With data rates rising well above what was previously available, it will be necessary to ensure that
the core network is updated to meet the increasing requirements. It is therefore necessary to further
improve the system architecture.
These and other technologies will be used with LTE Advanced to provide the very high data rates that are being sought along with the other performance characteristics that are needed.
By Ian Poole

LTE CA: Carrier Aggregation Tutorial


- 4G LTE Advanced CA, carrier aggregation or channel aggregation enables multiple LTE
carriers to be used together to provide the high data rates required for 4G LTE Advanced.
LTE Advanced offers considerably higher data rates than even the initial releases of LTE. While the
spectrum usage efficiency has been improved, this alone cannot provide the required data rates that
are being headlined for 4G LTE Advanced.
To achieve these very high data rates it is necessary to increase the transmission bandwidths over
those that can be supported by a single carrier or channel. The method being proposed is termed
carrier aggregation, CA, or sometimes channel aggregation. Using LTE Advanced carrier
aggregation, it is possible to utilise more than one carrier and in this way increase the overall
transmission bandwidth.

These channels or carriers may be in contiguous elements of the spectrum, or they may be in
different bands.
Spectrum availability is a key issue for 4G LTE. In many areas only small bands are available, often
as small as 10 MHz. As a result carrier aggregation over more than one band is contained within the
specification, although it does present some technical challenges.
Carrier aggregation is supported by both formats of LTE, namely the FDD and TDD variants. This
ensures that both FDD LTE and TDD LTE are able to meet the high data throughput requirements
placed upon them.

LTE carrier aggregation basics


The target figure for downlink data throughput is 1 Gbps for 4G LTE Advanced. Even with the improvements in spectral efficiency it is not possible to provide this headline rate within a single maximum 20 MHz channel. The only way to achieve the higher data rates is to increase the overall bandwidth used. IMT Advanced sets the upper limit at 100 MHz, with an expectation of 40 MHz being used for minimum performance. In the future it is possible the top limit of 100 MHz could be extended.
It is well understood that spectrum is a valuable commodity, and it takes time to re-assign it from one use to another because the cost of forcing users to move is huge, as new equipment needs to be bought. Accordingly, sections of the spectrum are generally only re-assigned as they fall out of use, and this leads to significant levels of fragmentation.
To an LTE terminal, each component carrier appears as an LTE carrier, while an LTE-Advanced
terminal can exploit the total aggregated bandwidth.
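The arithmetic behind carrier aggregation is simply additive: each component carrier contributes its own bandwidth and throughput. The per-carrier figure used below is an assumed round value (roughly a 150 Mbps carrier in 20 MHz), not a specification figure.

    # Illustrative carrier aggregation arithmetic (assumed per-carrier figures).
    component_carriers_mhz = [20, 20, 10]   # e.g. two 20 MHz carriers plus one 10 MHz carrier
    peak_rate_per_mhz_mbps = 7.5            # assumed: roughly 150 Mbps in a 20 MHz carrier

    aggregated_bw_mhz = sum(component_carriers_mhz)
    peak_rate_mbps = aggregated_bw_mhz * peak_rate_per_mhz_mbps

    print(aggregated_bw_mhz)   # 50 MHz of aggregated bandwidth
    print(peak_rate_mbps)      # 375.0 Mbps with these assumed figures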

RF aspects of carrier aggregation


There are a number of ways in which LTE carriers can be aggregated:

Types of LTE carrier aggregation

Intra-band: This form of carrier aggregation uses a single band. There are two main formats for this type of carrier aggregation:

Contiguous: Intra-band contiguous carrier aggregation is the easiest form of LTE carrier aggregation to implement. Here the carriers are adjacent to each other.

Contiguous aggregation of two uplink component carriers

The aggregated channel can be considered by the terminal as a single enlarged channel from the RF viewpoint. In this instance, only one transceiver is required within the terminal or UE, whereas more are required where the channels are not adjacent. However as the RF bandwidth increases it is necessary to ensure that the UE in particular is able to operate over such a wide bandwidth without a reduction in performance. Although the performance requirements are the same for the base station, the space, power consumption, and cost requirements are considerably less stringent, allowing greater flexibility in the design. Additionally for the base station, multi-carrier operation, even if non-aggregated, is already a requirement in many instances, requiring little or no change to the RF elements of the design. Software upgrades would naturally be required to cater for the additional capability.

Non-contiguous: Non-contiguous intra-band carrier aggregation is somewhat more complicated than the instance where adjacent carriers are used. No longer can the multi-carrier signal be treated as a single signal and therefore two transceivers are required. This adds significant complexity, particularly to the UE where space, power and cost are prime considerations.

Inter-band non-contiguous: This form of carrier aggregation uses different bands. It will be of particular use because of the fragmentation of bands - some of which are only 10 MHz wide. For the UE it requires the use of multiple transceivers within the single item, with the usual impact on cost, performance and power. In addition to this there are also additional complexities resulting from the requirements to reduce intermodulation and cross modulation from the two transceivers.

The current standards allow for up to five 20 MHz carriers to be aggregated, although in practice two
or three is likely to be the practical limit. These aggregated carriers can be transmitted in parallel to
or from the same terminal, thereby enabling a much higher throughput to be obtained.

Carrier aggregation bandwidths


When aggregating carriers for an LTE signal, there are several definitions required for the bandwidth
of the combined channels. As there are several bandwidths that need to be described, it is necessary to define them to reduce confusion.

LTE Carrier Aggregation Bandwidth Definitions for Intra-Band Case

LTE carrier aggregation bandwidth classes


There is a total of six different carrier aggregation, CA, bandwidth classes being defined. The three that are fully specified are listed below; the aggregated transmission bandwidth configuration is expressed in resource blocks.

CARRIER AGGREGATION    AGGREGATED TRANSMISSION      NUMBER OF COMPONENT
BANDWIDTH CLASS        BW CONFIGURATION (RBs)       CARRIERS

A                      up to 100                    1
B                      up to 100                    2
C                      100 - 200                    2

NB: classes D, E, & F are in the study phase.

LTE aggregated carriers

When carriers are aggregated, each carrier is referred to as a component carrier. There are two
categories:

Primary component carrier: This is the main carrier in any group. There will be a primary
downlink carrier and an associated uplink primary component carrier.

Secondary component carrier: There may be one or more secondary component carriers.
There is no definition of which carrier should be used as a primary component carrier - different
terminals may use different carriers. The configuration of the primary component carrier is terminal
specific and will be determined according to the loading on the various carriers as well as other
relevant parameters.
In addition to this the association between the downlink primary carrier and the corresponding uplink primary component carrier is cell specific. Again there are no definitions of how this must be organised. The information is signalled to the terminal or user equipment as part of the overall signalling between the terminal and the base station.

Carrier aggregation cross carrier scheduling


When LTE carrier aggregation is used, it is necessary to be able to schedule the data across the carriers and to inform the terminal, via the downlink control information, DCI, of the allocations for the different component carriers. This information may be implicit, or it may be explicit, dependent upon whether cross carrier scheduling is used.
Enabling of the cross carrier scheduling is achieved individually via the RRC signalling on a per
component carrier basis or a per terminal basis.
When no cross carrier scheduling is arranged, the downlink scheduling assignments are made on a per carrier basis, i.e. they are valid for the component carrier on which they were transmitted.
For the uplink, an association is created between one downlink component carrier and an uplink
component carrier. In this way when uplink grants are sent the terminal or UE will know to which
uplink component carrier they apply.
Where cross carrier scheduling is active, the PDSCH on the downlink or the PUSCH on the uplink is transmitted on an associated component carrier other than the one carrying the PDCCH; the carrier indicator in the PDCCH provides the information about the component carrier used for the PDSCH or PUSCH.

It is necessary to be able to indicate to which component carrier in any aggregation scheme a grant
relates. To facilitate this, component carriers are numbered. The primary component carrier is
numbered zero, for all instances, and the different secondary component carriers are assigned a
unique number through the UE specific RRC signalling. This means that even if the terminal or user equipment and the base station, eNodeB, have different understandings of the component carrier numbering during reconfiguration, transmissions on the primary component carrier can still be scheduled.
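The numbering rule above can be pictured with a small sketch: the primary component carrier is always index zero and the secondaries carry UE-specific indices assigned through RRC signalling, so a carrier indicator received on the PDCCH can be resolved unambiguously. The data structures below are simplified placeholders, not the 3GPP encoding.

    from typing import Optional

    # UE-specific component carrier numbering (simplified placeholders).
    ue_carrier_map = {
        0: "PCC  - primary component carrier",
        1: "SCC1 - secondary carrier, index assigned via RRC",
        2: "SCC2 - secondary carrier, index assigned via RRC",
    }

    def resolve_grant(carrier_indicator: Optional[int], received_on: int = 0) -> str:
        # With cross carrier scheduling the PDCCH carries a carrier indicator
        # field; without one, the grant applies to the carrier it arrived on.
        if carrier_indicator is None:
            return ue_carrier_map[received_on]
        return ue_carrier_map[carrier_indicator]

    print(resolve_grant(None))   # no indicator: grant is for the carrier it was sent on
    print(resolve_grant(2))      # indicator 2: grant is for SCC2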

4G LTE CoMP, Coordinated Multipoint Tutorial


- 4G LTE Advanced CoMP, coordinated multipoint is used to send and receive data to and from
a UE from several points to ensure the optimum performance is achieved even at cell edges.
LTE CoMP or Coordinated Multipoint is a facility that is being developed for LTE Advanced - many of
the facilities are still under development and may change as the standards define the different
elements of CoMP more specifically.
LTE Coordinated Multipoint is essentially a range of different techniques that enable the dynamic
coordination of transmission and reception over a variety of different base stations. The aim is to
improve overall quality for the user as well as improving the utilisation of the network.
Essentially, LTE Advanced CoMP turns the inter-cell interference, ICI, into useful signal, especially at
the cell borders where performance may be degraded.

Over the years the importance of inter-cell interference, ICI, has been recognised, and various techniques have been used since the days of GSM to mitigate its effects. Initially interference averaging techniques such as frequency hopping were utilised. However as technology has advanced, much tighter and more effective methods of combating and utilising the interference have gained support.

LTE CoMP and 3GPP


The concepts for Coordinated Multipoint, CoMP, have been the focus of many studies by 3GPP for
LTE-Advanced as well as the IEEE for their WiMAX, 802.16 standards. For 3GPP there are studies
that have focussed on the techniques involved, but no conclusion has been reached regarding the
full implementation of the scheme. However basic concepts have been established and these are
described below.
CoMP has not been included in Rel.10 of the 3GPP standards, but as work is on-going, CoMP is
likely to reach a greater level of consensus. When this occurs it will be included in future releases of
the standards.
Despite the fact that Rel.10 does not provide any specific support for CoMP, some schemes can be
implemented in LTE Rel.10 networks in a proprietary manner. This may enable a simpler upgrade
when standardisation is finally agreed.

LTE CoMP - the advantages


Although LTE Advanced CoMP, Coordinated Multipoint is a complex set of techniques, it brings
many advantages to the user as well as the network operator.

Makes better utilisation of network: By providing connections to several base stations at once, using CoMP, data can be passed through the least loaded base stations for better resource utilisation.

Provides enhanced reception performance: Using several cell sites for each connection
means that overall reception will be improved and the number of dropped calls should be
reduced.

Multiple site reception increases received power: The joint reception from multiple base
stations or sites using LTE Coordinated Multipoint techniques enables the overall received
power at the handset to be increased.

Interference reduction: By using specialised combining techniques it is possible to utilise the interference constructively rather than destructively, thereby reducing interference levels.

What is LTE CoMP? - the basics


Coordinated multipoint transmission and reception actually refers to a wide range of techniques that enable dynamic coordination of transmission and reception across multiple geographically separated eNBs. Its aim is to enhance the overall system performance, utilise the resources more effectively and improve the end user service quality.
One of the key parameters for LTE as a whole, and in particular 4G LTE Advanced is the high data
rates that are achievable. These data rates are relatively easy to maintain close to the base station,
but as distances increase they become more difficult to maintain.
Obviously the cell edges are the most challenging. Not only is the signal lower in strength because
of the distance from the base station (eNB), but also interference levels from neighbouring eNBs are
likely to be higher as the UE will be closer to them.
4G LTE CoMP, Coordinated Multipoint requires close coordination between a number of
geographically separated eNBs. They dynamically coordinate to provide joint scheduling and transmissions as well as providing joint processing of the received signals. In this way a UE at the edge of a cell is able to be served by two or more eNBs to improve signal reception and transmission and increase throughput, particularly under cell edge conditions.

Concept of LTE Advanced CoMP - Coordinated Multipoint


In essence, 4G LTE CoMP, Coordinated Multipoint falls into two major categories:

Joint processing: Joint processing occurs where there is coordination between multiple
entities - base stations - that are simultaneously transmitting or receiving to or from UEs.

Coordinated scheduling or beamforming: This is often referred to as CS/CB (coordinated scheduling / coordinated beamforming). It is a form of coordination where a UE communicates with a single transmission or reception point - base station. However the communication is made with an exchange of control information among several coordinated entities.

To achieve either of these modes, highly detailed feedback is required on the channel properties in a
fast manner so that the changes can be made. The other requirement is for very close coordination
between the eNBs to facilitate the combination of data or fast switching of the cells.
The techniques used for coordinated multipoint, CoMP are very different for the uplink and downlink.
This results from the fact that the eNBs are in a network, connected to other eNBs, whereas the
handsets or UEs are individual elements.

Downlink LTE CoMP


The downlink LTE CoMP requires dynamic coordination amongst several geographically separated
eNBs transmitting to the UE. The two formats of coordinated multipoint can be divided for the
downlink:

Joint processing schemes for transmitting in the downlink : Using this element of LTE
CoMP, data is transmitted to the UE simultaneously from a number of different eNBs. The
aim is to improve the received signal quality and strength. It may also have the aim of
actively cancelling interference from transmissions that are intended for other UEs.
This form of coordinated multipoint places a high demand onto the backhaul network
because the data to be transmitted to the UE needs to be sent to each eNB that will be
transmitting it to the UE. This may easily double or triple the amount of data in the network
dependent upon how many eNBs will be sending the data. In addition to this, joint
processing data needs to be sent between all eNBs involved in the CoMP area.

Coordinated scheduling and/or beamforming: Using this concept, data to a single UE is transmitted from one eNB. The scheduling decisions as well as any beams are coordinated to control the interference that may be generated.
The advantage of this approach is that the requirements for coordination across the backhaul network are considerably reduced for two reasons:

UE data does not need to be transmitted from multiple eNBs, and therefore only needs to be directed to one eNB.

Only scheduling decisions and details of beams need to be coordinated between multiple eNBs.

Uplink LTE CoMP

Joint reception and processing: The basic concept behind this format is to utilise antennas at different sites. By coordinating between the different eNBs it is possible to form a virtual antenna array. The signals received by the eNBs are then combined and processed to produce the final output signal (a small illustration of this combining is given after this list). This technique allows signals that are very low in strength, or masked by interference in some areas, to be received with few errors.
The main disadvantage of this technique is that large amounts of data need to be transferred between the eNBs for it to operate.

Coordinated scheduling: This scheme operates by coordinating the scheduling decisions amongst the eNBs to minimise interference.
As in the case of the downlink, this format provides a much reduced load on the backhaul network because only the scheduling data needs to be transferred between the different eNBs that are coordinating with each other.
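The "virtual antenna array" idea in the joint reception bullet can be illustrated with a small numpy sketch: two eNBs each receive a weak copy of the same uplink symbol, and combining them (here with simple maximum ratio combining) adds the per-site signal to noise ratios. The channel and noise values are purely illustrative.

    import numpy as np

    np.random.seed(1)

    # Two geographically separated eNBs receive the same uplink symbol through
    # different, fairly weak channels.
    tx_symbol = 1 + 0j
    h = np.array([0.4 * np.exp(1j * 0.3), 0.3 * np.exp(-1j * 1.1)])   # per-site channels
    noise_power = 0.02
    noise = np.sqrt(noise_power / 2) * (np.random.randn(2) + 1j * np.random.randn(2))
    rx = h * tx_symbol + noise          # what each eNB receives individually

    # Joint processing: maximum ratio combining across the virtual antenna array.
    combined_estimate = np.vdot(h, rx) / np.vdot(h, h)

    per_site_snr = np.abs(h) ** 2 / noise_power
    combined_snr = np.sum(np.abs(h) ** 2) / noise_power   # MRC adds the per-site SNRs

    print(np.round(per_site_snr, 1))        # [8.  4.5] for these illustrative channels
    print(round(float(combined_snr), 1))    # 12.5 - better than either site alone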

Overall requirements for LTE CoMP


One of the key requirements for LTE is that it should be able to provide a very low level of latency.
The additional processing required for multiple site reception and transmission could add
significantly to any delays. This could result from the need for the additional processing as well as
the communication between the different sites.
To overcome this, it is anticipated that the different sites may be connected together in a form of
centralised RAN, or C-RAN.
By Ian Poole

4G LTE Advanced Relay


- 4G LTE Advanced relay technology, how LTE relaying works and details about relay nodes,
RNs.
Relaying is one of the features being proposed for the 4G LTE Advanced system. The aim of LTE
relaying is to enhance both coverage and capacity.
The idea of relays is not new, but LTE relays and LTE relaying is being considered to ensure that the
optimum performance is achieved to enable the expectations of the users to be met while still
keeping OPEX within the budgeted bounds.

Need for LTE relay technology


One of the main drivers for the use of LTE is the high data rates that can be achieved. However all
technologies suffer from reduced data rates at the cell edge where signal levels are lower and
interference levels are typically higher.
The use of technologies such as MIMO, OFDM and advanced error correction techniques improve
throughput under many conditions, but do not fully mitigate the problems experienced at the cell
edge.

As cell edge performance is becoming more critical, with some of the technologies being pushed
towards their limits, it is necessary to look at solutions that will enhance performance at the cell edge
for a comparatively low cost. One solution that is being investigated and proposed is that of the use
of LTE relays.

LTE relay basics


LTE relaying is different to the use of a repeater, which simply re-broadcasts the signal. A relay actually receives, demodulates and decodes the data, applies any error correction to it and then re-transmits a new signal. In this way, the signal quality is enhanced with an LTE relay, rather than suffering degradation from a reduced signal to noise ratio when using a repeater.
For an LTE relay, the UEs communicate with the relay node, which in turn communicates with a
donor eNB.
Relay nodes can optionally support higher layer functionality, for example decode user data from the
donor eNB and re-encode the data before transmission to the UE.
The LTE relay is a fixed relay - infrastructure without a wired backhaul connection, that relays
messages between the base station (BS) and mobile stations (MSs) through multihop
communication.
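The difference between a repeater and a decode-and-forward relay described above can be seen from the end-to-end signal to noise ratio of a simple two-hop link; the per-hop figure below is illustrative.

    import numpy as np

    hop_snr = 10.0   # assumed signal to noise ratio of each hop (10 dB)

    # Repeater (amplify and forward): the noise picked up on hop 1 is amplified
    # and re-broadcast over hop 2, so the end-to-end SNR is worse than either hop.
    repeater_snr = (hop_snr * hop_snr) / (hop_snr + hop_snr + 1)

    # Relay node (decode and forward): the data is decoded, error corrected and
    # re-transmitted as a clean signal, so the link is limited by the weaker hop.
    relay_snr = min(hop_snr, hop_snr)

    print(round(10 * np.log10(repeater_snr), 1))   # about 6.8 dB end to end
    print(round(10 * np.log10(relay_snr), 1))      # 10.0 dB end to end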
There are a number of scenarios where LTE relay will be advantageous.

Increase network density: LTE relay nodes can be deployed very easily in situations where the aim is to increase network capacity by increasing the number of eNBs to ensure good signal levels are received by all users. LTE relays are easy to install as they require no separate backhaul and they are small, enabling them to be installed in many convenient areas, e.g. on street lamps, on walls, etc.

LTE relay used to increase network density

Network coverage extension: LTE relays can be used as a convenient method of filling small holes in coverage. With no need to install a complete base station, the relay can be quickly installed so that it fills in the coverage blackspot.

LTE relay coverage extension - filling in coverage hole

Additionally LTE relay nodes may be used to increase the coverage outside the main area. With suitable high gain antennas, and if the antenna for the link to the donor eNB is placed in a suitable location, the relay will be able to maintain good communications and provide the required coverage extension.

LTE relay coverage extension - extending coverage

It can be noted that relay nodes may be cascaded to provide considerable extensions of the
coverage.

Rapid network roll-out: Without the need to install backhaul, or possibly install large masts, LTE relays can provide a very easy method of extending coverage during the early roll-out of a network. More traditional eNBs may be installed later as the traffic volumes increase.

LTE relay to provide fast rollout & deployment

LTE relaying full & half duplex


LTE relay nodes can operate in one of two scenarios:

Half-Duplex: A half-duplex system provides communication in both directions, but not simultaneously - the transmissions must be time multiplexed. For LTE relay, this requires careful scheduling. It requires that the RN coordinates its resource allocation with the UEs in the uplink and the assigned donor eNB in the downlink. This can be achieved using static pre-assigned solutions, or more dynamic ones requiring more intelligence and communication for greater flexibility and optimisation.

Full Duplex: For full duplex, the systems are able to transmit and receive at the same time.
For LTE relay nodes this is often on the same frequency. The relay nodes will receive the
signal, process it and then transmit it on the same frequency with a small delay, although this
will be small when compared to the frame duration. To achieve full duplex, there must be
good isolation between the transmit and receive antennas.

When considering full or half duplex systems for LTE relay nodes, there is a trade-off between
performance and the relay node cost. The receiver performance is critical, and also the antenna
isolation must be reasonably high to allow the simultaneous transmission and reception when only
one channel is used.

LTE relay types


There is a number of different types of LTE relay node that can be used. However before defining
the relay node types, it is necessary to look at the different modes of operation.
One important feature or characteristic of an LTE relay node is the carrier frequency it operates on.
There are two methods of operation:

Inband: An LTE relay node is said to be "Inband" if the link between the base station and the relay node is on the same carrier frequency as the link between the LTE relay node and the user equipment, UE, i.e. the BS-RN link and the RN-UE link are on the same carrier frequency.

Outband: For Outband LTE relay nodes, RNs, the BS-RN link operates on a different carrier frequency to that of the RN-UE link.

For the LTE relay nodes themselves there are two basic types that are being proposed, although
there are subdivisions within these basic types:

Type 1 LTE relay nodes: These LTE relays control their cells with their own identity including the transmission of their own synchronisation channels and reference symbols. Type 1 relays appear as if they are a Release 8 eNB to Release 8 UEs. This ensures backwards compatibility. The basic Type 1 LTE relay provides half duplex with Inband transmissions. There are two further sub-types within this category:

Type 1.a: These LTE relay nodes are outband RNs which have the same properties
as the basic Type 1 relay nodes, but they can transmit and receive at the same time,
i.e. full duplex.

Type 1.b: This form of LTE relay node is an inband form. They have a sufficient
isolation between the antennas used for the BS-RN and the RN-UE links. This
isolation can be achieved by antenna spacing and directivity as well as specialised
digital signal processing techniques, although there are cost impacts of doing this.
The performance of these RNs is anticipated to be similar to that of femtocells.

Type 2 LTE relay nodes: These LTE relaying nodes do not have their own cell identity and
look just like the main cell. Any UE in range is not able to distinguish a relay from the main
eNB within the cell. Control information can be transmitted from the eNB and user data from
the LTE relay.
LTE RELAY CLASS    CELL ID    DUPLEX FORMAT

Type 1             Yes        Inband half duplex
Type 1.a           Yes        Outband full duplex
Type 1.b           Yes        Inband full duplex
Type 2             No         Inband full duplex

Summary of Relay Classifications & Features in 3GPP Rel.10


There is still much work to be undertaken on LTE relaying. The exact manner of LTE relays is to be
included in Release 10 of the 3GPP standards and specifications.
By Ian Poole

4G LTE Device to Device, D2D


- 4G LTE Advanced device to device, D2D communication for high data rate local direct
communications using LTE devices.
One of the schemes that is being researched and considered for 4G LTE Advanced is the concept of
Device to Device communications.
This form of communication using the LTE system is used where direct communications are needed
within a small area.
LTE D2D communications is a peer to peer link which does not use the cellular network
infrastructure, but enables LTE based devices to communicate directly with one another when they
are in close proximity.
One of the particular applications for LTE device to device communications is the emergency services. With proprietary systems like TETRA being expensive to maintain because of the separate infrastructure required, LTE is becoming increasingly attractive as a result of cost and performance. The main issue is that of reliability.
LTE device to device communication is also being investigated for applications where peer discovery
is required for commercial applications in the presence of network support.
LTE D2D was a feature that appeared in LTE Rel 12.

Benefits of D2D communications


Direct communications between devices can provide several benefits to users in various applications
where the devices are in close proximity:

Data rates: Devices may be remote from the cellular infrastructure and may therefore not be able to support the high data rate transmissions that may be required via the network; a direct link over a short distance can support much higher rates.

Reliable communications: LTE Device to Device can be used to communicate locally between devices to provide high reliability communications, especially if the LTE network has failed for any reason - even as a result of a disaster.

Instant communications: As D2D communication does not rely on the network infrastructure, the devices could be used for instant communications between a set number of devices in the same way that walkie-talkies are used. This is particularly applicable to the way communications may be used by the emergency services.

Use of licensed spectrum: Unlike other device to device systems including Wi-Fi, Bluetooth, etc, LTE would use licensed spectrum and this would make the frequencies used less subject to interference, thereby allowing more reliable communications.

Interference reduction: By not having to communicate directly with a base station, fewer links are required (i.e. essentially only between devices) and this has an impact on the amount of data being transmitted within a given spectrum allocation. This reduces the overall level of interference.

Power saving: Using device to device communication provides energy savings for a variety of reasons. One major area is that if the two devices are in close proximity then lower transmission power levels are required.

LTE D2D basics


4G LTE device to device, D2D, would enable the direct linking of a device, user equipment, UE, etc to another device using the cellular spectrum. This could allow large volumes of media or other data to be transferred from one device to another over short distances using a direct connection. This form of device to device transfer would enable the data to be transferred without the need to run it via the cellular network itself, thereby avoiding problems with overloading the network.
Other examples of direct communication include Wi-Fi Direct, Bluetooth, etc. Networks can be
formed in many ways.

LTE device to device, D2D concept


The D2D system would operate in a manner where devices within a locality would be able to provide
direct communications rather than transmitting via the network. The cellular infrastructure, if present,

may assist with issues like peer discovery, synchronisation, and the provision of identity and security
information.

LTE D2D issues


The addition of the LTE D2D or device to device communication capability impacts the whole of the network and is therefore not a trivial addition. Issues like authorisation and authentication are currently handled by the network, and the overall LTE system would need to be extended to accommodate device to device communication without the network necessarily being present.
Another issue would be that of direct communication between devices that are under subscriptions with different operators, although this is less likely to arise for public safety or emergency services use.
By Ian Poole

LTE Advanced Heterogeneous Networks, HetNet

- LTE heterogeneous network, HetNet technology, how LTE HetNets work and details about their operation and deployment.
LTE heterogeneous networks, HetNet are fast becoming a reality.

Within LTE and LTE Advanced, operators see the need to very significantly increase the data
capacity of all areas of the network while also reducing the costs as cost per bit rates are falling.
Whilst LTE HetNet technology is starting to be defined, many operators are seeking to utilise the
concepts to ensure that the delivery of service to the users meets expectations under the very
varying conditions and scenarios that users are placing on the networks.

LTE heterogeneous network basics


To achieve this, LTE and LTE Advanced operators need to adopt a variety of approaches to meet the needs of a host of scenarios that will occur within the network.
Different types of user will need to use the network in different places and for different applications.
Coupled to this operators introducing LTE and LTE Advanced networks will have many legacy
systems available. In any LTE heterogeneous network it will be necessary to accommodate other
radio access technologies including HSPA, UMTS and even EDGE and GPRS. In addition to this
other technologies including Wi-Fi also need to be accommodated.
These solutions for LTE heterogeneous networks need to incorporate not only the radio access
network solutions, but also the core network as well. In this way a truly heterogeneous network can
become functional.
To ensure the best use is made of the available capabilities, all the various elements need to be
operated in a manner that is truly seamless to the user. The user should be given the best
experience using the best available technology at any given time. The performance and hence the
user experience should also be very much the same whatever the location and whatever the
application.

Note on Heterogeneous Networks, HetNet:


The concept of the Heterogeneous Network or HetNet has arisen out of the need for cellular telecommunications
operators to be able to operate networks consisting of a variety of radio access technologies, formats of cells and
many other aspects, and combining them to operate in a seamless fashion.

Click on the link for further information about Heterogeneous Networks, HetNet

LTE HetNet features


There are a number of features for LTE that can be incorporated into an LTE heterogeneous network above and beyond what may be termed the basic wireless heterogeneous network techniques. Although they could conceivably be used with other forms of wireless heterogeneous network, they are currently found in LTE.

Carrier aggregation: With spectrum allocated for 4G networks, operators often find they
have a variety of small bands that they have to piece together to provide the required overall
bandwidth needed for 4G LTE. Making these bands work seamlessly is a key element of the
LTE heterogeneous network operation.

Coordinated multipoint: In order to provide the proper coverage at the cell edges, signals from two or more base stations may be needed. Again, providing the same level of service regardless of network technology and area within the cell can prove to be challenging. Adopting a heterogeneous network approach can assist in providing the same service quality regardless of the position within the cell, and the possibly differing cell and backhaul technologies used for the different base stations.

Heterogeneous networks are now an established concept within LTE networks. The requirement to
provide a better level of coverage and performance in a greater variety of situations means that a
greater variety of techniques is required. Making all the different technologies from radio access
networks to base station technologies and backhaul paths all come together needs careful planning.
Early cellular systems had a far more standard approach, where base stations were characterised
by the mast and antennas. Now a much greater variety of approaches is needed.
By Ian Poole

LTE Physical, Logical and Transport Channels


- overview, information, tutorial about the physical, logical, control and transport channels used
within 3GPP, 3G LTE and the LTE channel mapping.
In order that data can be transported across the LTE radio interface, various "channels" are used.
These are used to segregate the different types of data and allow them to be transported across the
radio access network in an orderly fashion.
Effectively the different channels provide interfaces to the higher layers within the LTE protocol
structure and enable an orderly and defined segregation of the data.

3G LTE channel types


There are three categories into which the various data channels may be grouped:

Physical channels: These are transmission channels that carry user data and control messages.

Transport channels: The physical layer transport channels offer information transfer to the Medium Access Control (MAC) layer and higher layers.

Logical channels: These provide services for the Medium Access Control (MAC) layer within the LTE protocol structure.

3G LTE physical channels

The LTE physical channels vary between the uplink and the downlink as each has different
requirements and operates in a different manner.

Downlink:

Physical Broadcast Channel (PBCH): This physical channel carries system information for UEs requiring access to the network. It only carries what are termed Master Information Block, MIB, messages. The modulation scheme is always QPSK and the information bits are coded and rate matched - the bits are then scrambled using a scrambling sequence specific to the cell to prevent confusion with data from other cells.
The MIB message on the PBCH is mapped onto the central 72 subcarriers, or six central resource blocks, regardless of the overall system bandwidth (a short sketch of this central subcarrier mapping is given after the list of physical channels below). A PBCH message is repeated every 40 ms, i.e. one TTI of PBCH includes four radio frames.
The PBCH transmission has 14 information bits, 10 spare bits, and 16 CRC bits.

Physical Control Format Indicator Channel (PCFICH) : As the name implies the
PCFICH informs the UE about the format of the signal being received. It indicates the
number of OFDM symbols used for the PDCCHs, whether 1, 2, or 3. The information
within the PCFICH is essential because the UE does not have prior information
about the size of the control region.
A PCFICH is transmitted on the first symbol of every sub-frame and carries a
Control Format Indicator, CFI, field. The CFI contains a 32 bit code word that
represents 1, 2, or 3. CFI 4 is reserved for possible future use.
The PCFICH uses a (32, 2) block code, which results in a 1/16 coding rate, and it always uses QPSK modulation to ensure robust reception.

Physical Downlink Control Channel (PDCCH): The main purpose of this physical channel is to carry scheduling information of different types:

Downlink resource scheduling

Uplink power control instructions

Uplink resource grants

Indication for paging or system information

The PDCCH contains a message known as the Downlink Control Information, DCI, which carries the control information for a particular UE or group of UEs. The DCI has several different formats which are defined with different sizes. The different format types include: Format 0, 1, 1A, 1B, 1C, 1D, 2, 2A, 2B, 2C, 3, 3A, and 4.

Physical Hybrid ARQ Indicator Channel (PHICH): As the name implies, this channel is used to report the Hybrid ARQ status. It carries the HARQ ACK/NACK signal indicating whether a transport block has been correctly received. The HARQ indicator is 1 bit long - "0" indicates ACK, and "1" indicates NACK.
The PHICH is transmitted within the control region of the subframe and is typically only transmitted within the first symbol. If the radio link is poor, then the PHICH is extended over a number of symbols for robustness.

Uplink:

Physical Uplink Control Channel (PUCCH): The Physical Uplink Control Channel, PUCCH, provides the various control signalling requirements. There are a number of different PUCCH formats defined to enable the channel to carry the required information in the most efficient format for the particular scenario encountered. It includes the ability to carry SRs, Scheduling Requests.
The basic formats are summarised below:
PUCCH FORMAT | UPLINK CONTROL INFORMATION | MODULATION SCHEME | BITS PER SUB-FRAME
Format 1 | SR (Scheduling Request) | N/A | N/A
Format 1a | 1 bit HARQ ACK/NACK with or without SR | BPSK | 1
Format 1b | 2 bit HARQ ACK/NACK with or without SR | QPSK | 2
Format 2 | CQI/PMI or RI | QPSK | 20
Format 2a | CQI/PMI or RI and 1 bit HARQ ACK/NACK | QPSK + BPSK | 21
Format 2b | CQI/PMI or RI and 2 bit HARQ ACK/NACK | QPSK + BPSK | 22
Format 3 | Provides support for carrier aggregation (Rel 10) | QPSK | 48

Physical Uplink Shared Channel (PUSCH): This physical channel found on the LTE uplink is the uplink counterpart of the PDSCH.

Physical Random Access Channel (PRACH): This uplink physical channel is used for random access functions. This is the only non-synchronised transmission that the UE can make within LTE. The downlink and uplink propagation delays are unknown when the PRACH is used and therefore it cannot be synchronised.
A PRACH transmission is made up of a cyclic prefix, a preamble sequence and a guard period. The preamble sequence may be repeated to enable the eNodeB to decode the preamble when link conditions are poor.
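
As a small illustration of the PBCH mapping mentioned above, the sketch below lists the central 72 subcarriers (six resource blocks) used for the MIB for a given downlink bandwidth; it ignores the DC subcarrier and the exact resource element mapping, and the function name is illustrative only.

```python
def pbch_central_subcarriers(n_rb_dl):
    """Return the indices of the central 72 subcarriers (6 RBs) that carry
    the PBCH/MIB, for a downlink bandwidth of n_rb_dl resource blocks of
    12 subcarriers each."""
    centre = 12 * n_rb_dl // 2
    return list(range(centre - 36, centre + 36))

# e.g. 10 MHz = 50 RBs = 600 subcarriers: the PBCH occupies subcarriers 264..335
print(pbch_central_subcarriers(50)[0], pbch_central_subcarriers(50)[-1])
```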

LTE transport channels


The LTE transport channels vary between the uplink and the downlink as each has different
requirements and operates in a different manner. Physical layer transport channels offer information
transfer to medium access control (MAC) and higher layers.

Downlink:

Broadcast Channel (BCH): This LTE transport channel maps to the Broadcast Control Channel (BCCH).

Downlink Shared Channel (DL-SCH): This transport channel is the main channel for downlink data transfer. It is used by many logical channels.

Paging Channel (PCH): Used to convey the PCCH.

Multicast Channel (MCH): This transport channel is used to transmit MCCH information in order to set up multicast transmissions.

Uplink:

Uplink Shared Channel (UL-SCH) : This transport channel is the main channel for
uplink data transfer. It is used by many logical channels.

Random Access Channel (RACH) : This is used for random access requirements.

LTE logical channels


The logical channels cover the data carried over the radio interface. They are provided at the Service Access Points, SAPs, between the MAC sublayer and the RLC sublayer.

Control channels: these LTE control channels carry the control plane information:

Broadcast Control Channel (BCCH): This control channel provides system information to all mobile terminals connected to the eNodeB.

Paging Control Channel (PCCH): This control channel is used for paging information when searching for a unit on the network.

Common Control Channel (CCCH): This channel is used for random access information, e.g. for actions including setting up a connection.

Multicast Control Channel (MCCH): This control channel is used for information needed for multicast reception.

Dedicated Control Channel (DCCH): This control channel is used for carrying user-specific control information, e.g. for controlling actions including power control, handover, etc.

Traffic channels: these LTE traffic channels carry the user-plane data:

Dedicated Traffic Channel (DTCH): This traffic channel is used for the transmission of user data.

Multicast Traffic Channel (MTCH): This channel is used for the transmission of multicast data.

It will be seen that many of the LTE channels bear similarities to those used in previous generations of mobile telecommunications.

LTE Frame and Subframe Structure


- information, overview, or tutorial about the LTE frame and subframe structure including LTE
Type 1 and LTE Type 2 frames.
In order that the 3G LTE system can maintain synchronisation and manage the different types of information that need to be carried between the base station, or eNodeB, and the User Equipment, UE, the 3G LTE system has a defined frame and subframe structure for the E-UTRA, or Evolved UMTS Terrestrial Radio Access, i.e. the air interface for 3G LTE.
The frame structures for LTE differ between the Time Division Duplex, TDD and the
Frequency Division Duplex, FDD modes as there are different requirements on segregating
the transmitted data.
There are two types of LTE frame structure:
1. Type 1: used for the LTE FDD mode systems.

2. Type 2: used for the LTE TDD systems.

Type 1 LTE Frame Structure


The basic Type 1 LTE frame has an overall length of 10 ms. This is then divided into a total of 20 individual slots, each 0.5 ms long. LTE subframes consist of two slots - in other words there are ten 1 ms LTE subframes within a frame.

Type 1 LTE Frame Structure ( 10ms )
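
As a quick illustration of this structure, the short Python sketch below maps a running 1 ms subframe count onto the system frame number (SFN), subframe number and slot numbers; the 1024-frame SFN wrap is standard LTE numbering, while the function and variable names are simply illustrative.

```python
def lte_timing(absolute_subframe):
    """Map a running 1 ms subframe count onto the Type 1 frame structure:
    a 10 ms frame holds 10 subframes, i.e. 20 slots of 0.5 ms each."""
    sfn = (absolute_subframe // 10) % 1024    # system frame number, wraps at 1024
    subframe = absolute_subframe % 10         # subframe 0..9 within the frame
    slots = (2 * subframe, 2 * subframe + 1)  # the two 0.5 ms slots of the subframe
    return sfn, subframe, slots

# e.g. subframe count 17 falls in SFN 1, subframe 7, slots 14 and 15
print(lte_timing(17))
```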

Type 2 LTE Frame Structure


The frame structure for the type 2 frames used on LTE TDD is somewhat different. The 10 ms frame
comprises two half frames, each 5 ms long. The LTE half-frames are further split into five
subframes, each 1ms long.

Type 2 LTE Frame Structure


(shown for 5ms switch point periodicity).
The subframes may be divided into standard subframes or special subframes. The special subframes consist of three fields:

DwPTS - Downlink Pilot Time Slot

GP - Guard Period

UpPTS - Uplink Pilot Time Slot.

These three fields are also used within TD-SCDMA and they have been carried over into LTE TDD
(TD-LTE) and thereby help the upgrade path. The fields are individually configurable in terms of
length, although the total length of all three together must be 1ms.

LTE TDD / TD-LTE subframe allocations


One of the advantages of using LTE TDD is that it is possible to dynamically change the up and
downlink balance and characteristics to meet the load conditions. In order that this can be achieved
in an ordered fashion, a number of standard configurations have been set within the LTE standards.

A total of seven up / downlink configurations have been set, and these use either 5 ms or 10 ms
switch periodicities. In the case of the 5ms switch point periodicity, a special subframe exists
in both half frames. In the case of the 10 ms periodicity, the special subframe exists in the first
half frame only. It can be seen from the table below that the subframes 0 and 5 as well as DwPTS
are always reserved for the downlink. It can also be seen that UpPTS and the subframe immediately
following the special subframe are always reserved for the uplink transmission.
UPLINK-DOWNLINK CONFIGURATION | DOWNLINK-TO-UPLINK SWITCH-POINT PERIODICITY | SUBFRAME NUMBER 0 1 2 3 4 5 6 7 8 9
0 | 5 ms | D S U U U D S U U U
1 | 5 ms | D S U U D D S U U D
2 | 5 ms | D S U D D D S U D D
3 | 10 ms | D S U U U D D D D D
4 | 10 ms | D S U U D D D D D D
5 | 10 ms | D S U D D D D D D D
6 | 5 ms | D S U U U D S U U D
Where:
D is a subframe for downlink transmission
S is a "special" subframe used for a guard time
U is a subframe for uplink transmission
Uplink / Downlink subframe configurations for LTE TDD (TD-LTE)
By Ian Poole

LTE Frequency Band Notes


- additional notes and information about the LTE frequency bands.
There are many different bands that are being allocated for use with LTE. These bands are defined
on the previous page.
On this page, additional notes and information are given about these different LTE bands.

LTE bands overview


The number of bands allocated for LTE use has increased as the pressure on spectrum has grown.

It has not been possible for all LTE band allocations to be the same across the globe, because of the different regulatory positions in different countries; global allocations could not be agreed.
In some cases bands appear to overlap. This is because of the different levels of availability around the globe.
This means that roaming with LTE may have some limitations, as not all handsets or UEs will be able to access the same frequencies.

Notes accompanying LTE band tabulations


There are a few notes that can give some background to the LTE bands defined in the table on the
previous page.

LTE Band 1: This is one of the paired bands that was defined for the 3G UTRA and 3GPP
rel 99.

LTE Band 4: This LTE band was introduced as a new band for the Americas at the World
(Administrative) Radio Conference, WRC-2000. This international conference is where
international spectrum allocations are agreed. The downlink of band 4 overlaps with the
downlink for Band 1. This facilitates roaming.

LTE Band 9: This band overlaps with Band 3 but has different band limits and it is also only intended for use in Japan. This enables roaming to be achieved more easily, and many terminals are defined such that they are dual Band 3 + 9.

LTE Band 10: This band is an extension to Band 4 and may not be available everywhere. It
provides an increase from 45 MHz bandwidth (paired) to 60 MHz paired.

LTE Band 11: This "1500 MHz" band is identified by 3GPP as a Japanese band, but it is
allocated globally to the mobile service on a "co-primary basis".

LTE Band 12: This band was previously used for broadcasting and has been released as a
result of the "Digital Dividend."

LTE Band 13: This band was previously used for broadcasting and has been released as a
result of the "Digital Dividend." The duplex configuration is reversed from the standard,
having the uplink higher in frequency than the downlink.

LTE Band 14: This band was previously used for broadcasting and has been released as a
result of the "Digital Dividend." The duplex configuration is reversed from the standard,
having the uplink higher in frequency than the downlink.

LTE Band 15: This LTE band has been defined by ETSI for use in Europe, but this has not
been adopted by 3GPP. This band combines two nominally TDD bands to provide one FDD
band.

LTE Band 16: This LTE band has been defined by ETSI for use in Europe, but this has not
been adopted by 3GPP. This band combines two nominally TDD bands to provide one FDD
band.

LTE Band 17: This band was previously used for broadcasting and has been released as a
result of the "Digital Dividend."

LTE Band 20: The duplex configuration is reversed from the standard, having the uplink
higher in frequency than the downlink.

LTE Band 21: This "1500 MHz" band is identified by 3GPP as a Japanese band, but it is
allocated globally to the mobile service on a "co-primary basis".

LTE Band 24: The duplex configuration is reversed from the standard, having the uplink
higher in frequency than the downlink.

LTE Band 33: This was one of the bands defined for unpaired spectrum in Rel 99 of the
3GPP specifications.

LTE Band 34: This was one of the bands defined for unpaired spectrum in Rel 99 of the
3GPP specifications.

LTE Band 38: This band is in the centre band spacing between the uplink and downlink
pairs of LTE band 7.

Although 3GPP can define bands for use by LTE or any other mobile service, the actual allocations are made on an international basis by the ITU at World Radio Conferences, and then the individual country administrations can allocate spectrum use in their own countries. 3GPP has no legal basis, and can only work with the various country administrations.
Frequency bands may be allocated on a primary and secondary basis. Primary users have the first
access to a band, secondary users, in general, may use the band provided they do not cause
interference to the primary users.

LTE and LTE Advanced Concepts and Questions


What is PUCCH Mixed Mode in LTE and what are the PUCCH Format Types in LTE?

What are the different PUCCH formats and what is PUCCH Mixed Mode in LTE?
We should know which PUCCH formats are available in LTE or LTE-A before exploring PUCCH Mixed Mode.
Basically PUCCH formats are of two types, Format 1 and Format 2 (Format 3 was introduced in LTE Advanced release 10; it uses QPSK modulation and 48 bits per subframe).

PUCCH Format 1 (Rel 8):

Format Type | Control Information | Modulation Scheme | No. of bits / Subframe
1 | SR (Scheduling Request) | Not Applicable | Not Applicable
1a | HARQ ACK/NACK | BPSK | 1 bit
1b | HARQ ACK/NACK (for MIMO) | QPSK | 2 bits

PUCCH Format 2:

Format Type | Control Information | Modulation Scheme | No. of bits / Subframe
2 | CSI (Channel State Info.) | QPSK | 20 bits
2a | CSI + HARQ ACK/NACK | QPSK + BPSK | 21 bits
2b | CSI + HARQ ACK/NACK (for MIMO) | QPSK + BPSK | 22 bits

The PUCCH resources are located at the edges of the allocated bandwidth. To provide frequency diversity, the PUCCH frequency resources hop between the band edges at the slot boundary (as shown in the figure below).

Mapping of modulation symbols for the physical uplink control channel

Why are the PUCCH resources located at the edges of the bandwidth? Placing them there leaves a contiguous block of RBs that can be assigned to a single terminal for PUSCH data transmission, while the control signalling still gains increased frequency diversity.

You might be wondering what the maximum value of m could be. The value of m depends on the number of UEs in the eNB or macro eNB coverage area. To control more UEs, more control signalling and hence more PUCCH RBs are required, so the value of m increases. The maximum value of m could in principle equal the total number of RBs (50 in the case of a 10 MHz bandwidth), but that is not practical.

Now how do we derive the value of m?


For Format 1, the index m is derived from higher-layer parameters; refer to 36.211 section 5.4.3 (N1_PUCCH, N_RB_SC, N_UL_RB, N2_RB, c, Ncs, delta_pucch_shift).

For Format 2, the index m is derived from higher-layer parameters; refer to 36.211 section 5.4.3 (N2_PUCCH, N_RB_SC, N_UL_RB).
Where,

N1_PUCCH is Resource index for PUCCH formats 1/1a/1b.


N_RB_SC is Resource block size in the frequency domain, expressed as a number of subcarriers.
N_UL_RB is Uplink bandwidth configuration, expressed in multiples of N_RB_SC.
N2_RB is Bandwidth available for use by PUCCH formats 2/2a/2b, expressed in multiples of N_RB_SC.
Ncs is Number of cyclic shifts used for PUCCH formats 1/1a/1b in a resource block with a mix of
formats 1/1a/1b and 2/2a/2b.
N2_PUCCH is Resource index for PUCCH formats 2/2a/2b.

You can explore the calculation of m further in 36.211 section 5.4.3.
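
As a rough illustration, the sketch below evaluates the two expressions from 36.211 section 5.4.3 using the parameter names defined above; it is written from a reading of the specification rather than taken from it, so the thresholds and divisions should be verified against 36.211 before use.

```python
from math import ceil, floor

def pucch_m_format2(n2_pucch, n_rb_sc=12):
    # Formats 2/2a/2b: m = floor(N2_PUCCH / N_RB_SC)
    return n2_pucch // n_rb_sc

def pucch_m_format1(n1_pucch, n2_rb, ncs, delta_pucch_shift, normal_cp=True,
                    n_rb_sc=12):
    # Formats 1/1a/1b: c = 3 for normal CP, 2 for extended CP
    c = 3 if normal_cp else 2
    if n1_pucch < c * ncs / delta_pucch_shift:
        return n2_rb
    return (floor((n1_pucch - c * ncs / delta_pucch_shift)
                  / (c * n_rb_sc / delta_pucch_shift))
            + n2_rb + ceil(ncs / 8))
```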

So, what is PUCCH Mixed Mode? In my view, PUCCH mixed mode occurs when the same resource block is shared between two or more UEs, with PUCCH Format 1 transmitted by the first (or second) UE and PUCCH Format 2 transmitted by the second (or first) UE.

In other words, PUCCH Mixed Mode means that some UEs are transmitting either SR or HARQ ACK/NACK in a resource block while other UEs are transmitting CQI/PMI/RI, with or without HARQ ACK/NACK, in the same resource block.

To enable PUCCH mixed mode, the Ncs parameter value should not be set to 0 (it should be between 1 and 7) and the resource index parameter should be the same for both UE profile configurations. Also, at most one resource block in each slot can support a mix of Format 1 and Format 2 (for example, m=0 in slot 1 and m=0 in slot 2 of the subframe, in the figure above).

What is the benefit of using PUCCH Mixed Mode in LTE?

For a small cell bandwidth it is not efficient to allocate different RBs to the different format types (for example, with 1.4 MHz, 2 of the 6 RBs would be used for PUCCH for the different formats). To minimise this overhead it is preferable to mix Format 1 and Format 2 in the same resource block. However, to achieve this, some phase rotations (cyclic shifts) are used as a guard to separate ACK/NACK and CQI, so the efficiency in mixed mode is slightly lower.

Questions are welcome.


Posted by Abhishek Kumar

Friday, 12 July 2013

What is Rank Indication in LTE


Rank Indication is one of the important inputs to the eNB when selecting the number of transmission layers for downlink data transmission. Even though the system is configured in transmission mode 3 (open-loop spatial multiplexing) for a particular UE, if that UE reports a Rank Indication value of 1 to the eNB, the eNB will start sending data to the UE in Tx diversity mode. If the UE reports Rank Indication 2, the eNB will start sending the downlink data in MIMO mode (Transmission Mode 3).

Why do we need RI in LTE? When the UE experiences poor SNR and would find it difficult (error prone) to decode the transmitted downlink data, it gives an early warning to the eNB by reporting a Rank Indication value of 1. When the UE experiences good SNR, it passes this information to the eNB by indicating a rank value of 2.

For this reason, you may have observed that the data transmitted by the eNB is sometimes in Tx diversity mode even though MIMO was configured, and hence a lower downlink throughput than expected.

However, it is not necessary that the eNB will always change the transmission scheme based on the RI value; it can be an implementation-specific decision.
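
A minimal sketch of the eNB-side behaviour described above might look as follows; this is purely illustrative, since the actual scheduler decision is implementation specific, and the function name is invented for the example.

```python
def downlink_scheme_tm3(reported_ri):
    """Illustrative choice of downlink scheme in transmission mode 3
    (open-loop spatial multiplexing) based on the RI reported by the UE."""
    if reported_ri >= 2:
        return "spatial multiplexing (2 layers)"   # UE reports good SNR
    return "transmit diversity (1 layer)"          # UE warns of poor SNR
```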

Questions are welcome.


Posted by Abhishek Kumar

Thursday, 11 July 2013

What is CQI PMI RI in LTE?

We have already discussed uplink channel state information via the uplink reference signals (SRS and DMRS) in "Difference between SRS and DMRS". Now, to achieve 1 Gbps or more downlink speed in LTE with effective utilisation of the full available bandwidth, CQI, PMI, RI and several other parameters play a very important role. So what are CQI, PMI and RI in LTE?

CQI (Channel Quality Indicator) is reported by the UE to the eNB. The UE indicates the modulation scheme and coding scheme which, if used, would allow it to demodulate and decode the transmitted downlink data with a block error rate of at most 10%. CQI feedback from the UE is an input used to predict the downlink channel condition. CQI reporting can be based on PMI and RI. The higher the CQI value (from 0 to 15) reported by the UE, the higher the modulation scheme (from QPSK to 64QAM) and the higher the coding rate used by the eNB to achieve greater efficiency.

PMI (Precoding Matrix Indicator): the UE indicates to the eNB which precoding matrix should be used for downlink transmission; the set of candidate matrices is determined by the RI.

RI (Rank Indicator): the UE indicates to the eNB the number of layers that should be used for downlink transmission to that UE.

RI and PMI reporting can be configured to support MIMO operation (closed-loop and open-loop spatial multiplexing). Both of these transmission modes use precoding from a well-defined codebook (a lookup table of cross-coupling factors used for precoding, shared between the UE and the eNB) to form the transmission layers. In the case of transmit diversity, PMI and RI need not be reported to the eNB.

In wideband CQI reporting the UE reports one wideband CQI for the full system bandwidth. However, the UE can also report CQI values per sub-band.

Now, what about the periodicity of CQI, PMI and RI reporting? These reports can be periodic or aperiodic. The eNB configures the type of CQI reporting by RRC signalling. Aperiodic reporting is on request (by the eNB) and always goes on the PUSCH.

Periodic CQI reporting can go on both the PUCCH and the PUSCH (along with data). The minimum periodicity is 2 ms. The periodicities are defined in 36.213 for the different values of cqi-pmi-ConfigIndex (Table 7.2.2-1A for FDD); the range of cqi-pmi-ConfigIndex is 0 to 1023. The periodicity of RI is based on ri-ConfigIndex (Table 7.2.2-1B for FDD) and on the CQI/PMI periodicity; the range of ri-ConfigIndex is 0 to 1023.

Example: from Table 7.2.2-1A of 36.213, for cqi-pmi-ConfigIndex 17 the CQI reporting periodicity is 20 ms (call it X). From Table 7.2.2-1B of 36.213, for ri-ConfigIndex 483 the multiplier is 8, so the RI periodicity is 8 times X (20 ms) = 160 ms.
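
The example above can be reproduced with a small lookup, sketched below; the index ranges are reproduced from memory of 36.213 Tables 7.2.2-1A and 7.2.2-1B (FDD) and should be verified against the specification, and only the periodicity (not the reporting offset) is computed.

```python
def cqi_pmi_period_fdd(cqi_pmi_config_index):
    """CQI/PMI reporting period in ms for FDD (36.213 Table 7.2.2-1A)."""
    ranges = [(0, 1, 2), (2, 6, 5), (7, 16, 10), (17, 36, 20), (37, 76, 40),
              (77, 156, 80), (157, 316, 160), (318, 349, 32), (350, 413, 64),
              (414, 541, 128)]
    for low, high, period in ranges:
        if low <= cqi_pmi_config_index <= high:
            return period
    raise ValueError("reserved or invalid cqi-pmi-ConfigIndex")

def ri_period_fdd(ri_config_index, cqi_period):
    """RI period = M_RI x CQI/PMI period (36.213 Table 7.2.2-1B)."""
    ranges = [(0, 160, 1), (161, 321, 2), (322, 482, 4), (483, 643, 8),
              (644, 804, 16), (805, 965, 32)]
    for low, high, m_ri in ranges:
        if low <= ri_config_index <= high:
            return m_ri * cqi_period
    raise ValueError("reserved or invalid ri-ConfigIndex")

x = cqi_pmi_period_fdd(17)          # 20 ms, as in the example above
print(x, ri_period_fdd(483, x))     # 20 160
```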

What if CQI/PMI/RI collides with either an ACK/NACK or an SR in the same subframe? If CQI/PMI/RI collides with a positive SR, the CQI/PMI/RI will be dropped. If CQI/PMI/RI collides with an ACK/NACK and simultaneousACKNACKandCQI is false, the CQI/PMI/RI will be dropped; otherwise the CQI/PMI/RI will be multiplexed with the ACK/NACK.

It is the eNB alone which decides the time and frequency on which the UE transmits the CQI, PMI and RI.

Questions are welcome.


Posted by Abhishek Kumar

Wednesday, 10 July 2013

Difference between SRS and DMRS


There are two types of reference signals used in the LTE uplink to estimate the uplink channel quality. They allow the eNB to take informed decisions on resource allocation for uplink transmission and link adaptation, and to decode the data transmitted by the UE.

The Sounding Reference Signal (SRS) supports the first of these decisions. SRS is transmitted by the UE on the last symbol of a subframe (which subframes is discussed later). The SRS reports the channel quality over the whole bandwidth, and using this information the eNB assigns the UE, for uplink transmission, the resources that have better channel quality compared with the rest of the bandwidth.

So is SRS optional in LTE? Yes. SRS is configurable, and in fact SRS is not needed at all if the eNB assigns all resource blocks, i.e. the full bandwidth, or has no choice.

On the basis of configuration and node, there are two types of SRS (refer to 36.211): cell specific (common SRS) and UE specific (dedicated SRS). The eNB notifies the UE about the configuration of the SRS parameters by RRC messages.

There are also two types of SRS on the basis of periodicity: periodic and aperiodic (the latter introduced in Rel. 10, LTE Advanced). The minimum periodicity of SRS is 2 ms (1 ms = 1 subframe) and the maximum is 320 ms (values beyond 320 ms are reserved according to 36.213).

Now you might be wondering what happens if all UEs transmit SRS with the same interval and periodicity - in other words, how the eNB distinguishes UE-specific SRS when SRS transmissions overlap. In that case the eNB distinguishes and decodes the different UE-specific SRS using the transmission-comb and cyclic-shift parameters signalled in RRC Connection Setup and RRC Connection Reconfiguration.

The demodulation reference signal (DMRS) in uplink transmission is used for channel estimation and coherent demodulation, and it accompanies the PUSCH and PUCCH. If the DMRS is poor or for some reason not decoded properly by the base station, the PUSCH or PUCCH will not be decoded either. Hence DMRS is not optional, unlike SRS.

DMRS only indicates the channel quality of the frequency region in which the PUSCH or PUCCH is being transmitted. So what about the positioning of DMRS in the resource grid - is it fixed? The answer is both yes and no. When DMRS is sent by the UE with the PUCCH, the position of the reference signal varies according to the PUCCH format. In the case of PUSCH it is always the centre symbol of a slot (the 3rd symbol of slot 0 and the 10th symbol of slot 1).

To support a large number of UEs (user terminals), a large number of DMRS sequences is needed, and this is achieved by cyclic shifts of a base sequence. Since LTE Advanced introduces MIMO in the uplink as well, DMRS has to be enhanced for MIMO transmission and each UE will use different DMRS sequences.

DMRS is always mapped to the PUSCH in multiples of 12 subcarriers, whereas DMRS mapped to the PUCCH always occupies exactly 12 subcarriers.

The main similarity between SRS and DMRS is that both use Constant Amplitude Zero Autocorrelation (CAZAC) sequences.

You may observe lower throughput when SRS is enabled during data transmission, because to report SRS during uplink data transmission the eNB schedules some RBs to the UE which could otherwise have been used for actual data.

You may have several questions in mind, so please post those questions here and we will try to learn and explore more.

> What if SRS and CQI coincide in the same subframe?
In that case the UE shall not transmit SRS whenever SRS and PUCCH format 2/2a/2b transmissions (CQI, or CQI with 1 or 2 bit HARQ ACK/NACK) happen to coincide in the same subframe [3GPP 36.213 Section 8.2]. Having said that, does it mean the L2 scheduler will not schedule SRS and CQI in the same subframe?

Tuesday, 23 September 2014

Positioning Reference Signal PRS LTE


The Positioning Reference Signal (PRS) was introduced as one of the LTE release 9 features to determine the location of the User Equipment (UE) based on radio access network information. You might be wondering why PRS is needed when smartphones and other cellular equipment already have GPS built in. Consider, though, that GPS may not always be accurate, GPS services may not be available in all geographical areas, and the accuracy of GPS also depends on what has been paid for the service and on the quality of the GPS device. The end-user applications of the PRS feature are location-based services such as navigation (directions to a hotel etc.) and emergency calls.

Process of finding the UE location using PRS:

The overall process of finding the UE location is based on three major steps.

Step 1. The UE receives PRS from cells (the reference cell and neighbour cells).

Step 2. Based on the received PRS, the UE measures the observed time difference of arrival (OTDOA) and reports the RSTD (described in my previous article, LTE UE Measurement RSRP RSSI RSRQ RSTD) to the cell.

Step 3. Based on the UE-reported reference signal time difference (RSTD), the eNodeB may calculate the longitude and latitude of the UE (which can be based on any specific algorithm; this is not standardised).

The Positioning Reference Signal is transmitted in downlink subframes (as per the higher-layer configuration, discussed later in this article) on antenna port 6. The PRS must not be sent on resource elements used for the PBCH, PSS or SSS. The PRS sequence is generated on the basis of the slot number, OFDM symbol number, cell ID, and normal or extended CP.

Position of PRS in terms of OFDM symbols (resource elements):

If both MBSFN (Multicast Broadcast Single Frequency Network) and normal downlink subframes are configured for PRS, the OFDM symbols configured for PRS use the same cyclic prefix as subframe 0.
If only MBSFN subframes are configured for PRS, the OFDM symbols configured for PRS use the extended cyclic prefix.
The starting position of the PRS OFDM symbols in a subframe is identical to that in a subframe in which all OFDM symbols have the same CP length as the PRS OFDM symbols, provided the subframe is configured for PRS transmission (the PRS subframe configuration is explained below). For more detail of the mapping of PRS resource elements onto the resource grid, please refer to 3GPP 36.211 section 6.10.4.2 for both normal CP and extended CP.

PRS subframe configuration (PRS periodicity, PRS subframes, PRS Configuration Index, number of consecutive PRS subframes):

From specification 36.211 of release 9, the configuration of PRS subframes is explained below, where:
Nprs is the number of consecutive downlink subframes with PRS (configured by higher layers as 1, 2, 4 or 6 subframes).
Iprs is the PRS Configuration Index (any value between 0 and 2399; values 2400 to 4095 are reserved).
Tprs is the periodicity of PRS in terms of subframes. This is one of 160, 320, 640 or 1280, depending on the configuration of Iprs.
Dprs is the delta PRS subframe offset (Iprs, Iprs-160, Iprs-480 or Iprs-1120, depending on the Iprs configuration index).

Please refer to the table below for the corresponding values of Iprs, Tprs and Dprs:

Now take an example to understand the PRS subframe configuration:

Suppose Nprs is configured by higher layers as 2 and Iprs is configured as 160. From the table above, the value of Dprs is then 0 and Tprs is 320. The first DL subframe of each group of Nprs PRS subframes satisfies the formula below:

(10*Nf + floor(Ns/2) - Dprs) mod Tprs = 0 -------- Equation 1

where Nf is the system frame number and Ns is the slot number (so floor(Ns/2) is the subframe number).

Putting the values of Tprs and Dprs into the above equation:

(10*Nf + floor(Ns/2) - 0) mod 320 = 0 ------------ Equation 2

Hence all the values of Nf and Ns which satisfy Equation 2 give the first downlink subframe which carries PRS. Equation 2 is satisfied, for example, for an Nf value of 32 and an Ns (slot) value of 0 (that is, subframe #0). Hence PRS is carried in subframe 0 of system frame 32 and subframe 1 of system frame 32 (because Nprs is configured as 2 by higher layers, giving two consecutive PRS subframes).
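
The small Python sketch below reproduces this calculation directly from the Iprs/Tprs/Dprs relationships and Equation 1 above; the function names are illustrative only.

```python
def prs_config(i_prs):
    """Derive (Tprs, Dprs) from the PRS configuration index Iprs,
    using the value ranges quoted above (36.211, release 9)."""
    if 0 <= i_prs <= 159:
        return 160, i_prs
    if 160 <= i_prs <= 479:
        return 320, i_prs - 160
    if 480 <= i_prs <= 1119:
        return 640, i_prs - 480
    if 1120 <= i_prs <= 2399:
        return 1280, i_prs - 1120
    raise ValueError("reserved Iprs value")

def prs_subframes(i_prs, n_prs, max_frames=64):
    """List (SFN, subframe) pairs that carry PRS: the first subframe of each
    PRS occasion satisfies (10*Nf + subframe - Dprs) mod Tprs == 0, followed
    by n_prs - 1 further consecutive PRS subframes."""
    t_prs, d_prs = prs_config(i_prs)
    occasions = []
    for nf in range(max_frames):
        for sf in range(10):
            if (10 * nf + sf - d_prs) % t_prs == 0:
                start = 10 * nf + sf
                occasions += [((start + k) // 10, (start + k) % 10)
                              for k in range(n_prs)]
    return occasions

# The example above: Iprs=160, Nprs=2 -> PRS in subframes 0 and 1 of SFN 0, 32, ...
print(prs_subframes(160, 2))
```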
Questions are welcome.

Posted by Abhishek Kumar

Friday, 19 September 2014

LTE 4G Smartphones
Company | Model

Apple | iPhone 6, iPhone 6 Plus, iPhone 5c, iPhone 5s, iPhone 5

Samsung | Galaxy Note3 Neo, Galaxy Note3, Galaxy Express2, Galaxy Golden, Galaxy Mega 2

Motorola | PHOTON Q, MOTO G 4G

Nokia | Lumia 635, Lumia 930, Lumia 1320, Lumia 1520, Lumia 1020, Lumia 925

LG | LG D722K, LG G3, LG Pro 2, LG G Flex, LG Nexus 5
Posted by Abhishek Kumar

Wednesday, 17 September 2014

LTE UE Measurement RSRP RSSI RSRQ RSTD


In LTE, or any other cellular radio network, the UE reports various signal measurements to the base station for decision making. These can be used for better downlink scheduling (using CSI), uplink scheduling (using SRS), cell selection, handover, cell reselection, calculation of uplink and downlink path loss for power control, multipath propagation, uplink interference and location-based services.

All of these are supported by the parameters called RSRP, RSSI, RSRQ and RSTD.

RSRP:
RSRP, Reference Signal Received Power, is the average power received by the UE from the cell-specific reference signal resource elements, averaged across the measurement bandwidth. It is calculated by the UE for cell selection, handover, cell reselection and for the path loss calculation used in power control. The power measurement is the energy of the OFDMA symbol excluding the energy of the cyclic prefix. The measurement of RSRP may be based on the energy of the reference signal transmitted by the first antenna port, or by the first and second antenna ports. The UE learns which antenna ports can be used for the measurement when it decodes SIB3.
The range of RSRP reported by the UE is between -140 dBm and -44 dBm (-140 dBm < RSRP <= -44 dBm). For each 1 dB step from -140 dBm, the UE reports an integer value (ranging from 0 to 97) to the base station. Example:
Value 0 is reported when the UE measures RSRP less than -140 dBm (RSRP < -140 dBm).

Value 1 is reported when the UE measures RSRP between -140 dBm and -139 dBm (-140 dBm <= RSRP < -139 dBm).

Value 97 is reported when the UE measures RSRP greater than or equal to -44 dBm (RSRP >= -44 dBm).

RSSI:
RSSI, Received Signal Strength Indicator, is the total received signal power from all sources (the power of all resource elements), which includes thermal noise as well, unlike RSRP. RSSI is never reported by the UE to the base station, but it is an input for calculating RSRQ.

RSRQ:
RSRQ, Reference Signal Received Quality, is also used for cell selection, reselection and handover, when RSRP alone is not sufficient for making the decision. RSRQ is mathematically defined as (N*RSRP)/RSSI, where N is the number of resource blocks of the LTE carrier RSSI measurement bandwidth. To calculate RSRQ, the numerator and denominator are measured over the same set of RBs (for example, for 5 MHz, the RSRP and RSSI calculations are done over the same 25 RBs at a time). Since the calculation of RSRQ uses RSSI, it enables the combined reporting of signal strength and interference. The range of RSRQ is from -19.5 dB to -3 dB (integer values from 0 to 34). For each 0.5 dB step the UE reports an integer value in the RRC message. Example:
Value 0 is reported by the UE to the base station when the measured RSRQ is less than -19.5 dB (RSRQ < -19.5 dB).

Value 1 is reported by the UE to the base station when the measured RSRQ is between -19.5 dB and -19 dB (-19.5 dB <= RSRQ < -19 dB).

Value 34 is reported when the measured RSRQ is greater than or equal to -3 dB (RSRQ >= -3 dB).
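
The reporting examples above can be captured in a couple of small helper functions, sketched below; the mapping follows the 1 dB and 0.5 dB steps described in the text (the formal mapping tables are in 36.133), and the function names are illustrative.

```python
from math import floor

def rsrp_report_value(rsrp_dbm):
    """Map a measured RSRP in dBm onto the reported integer 0..97 (1 dB steps)."""
    if rsrp_dbm < -140:
        return 0
    if rsrp_dbm >= -44:
        return 97
    return floor(rsrp_dbm + 140) + 1

def rsrq_report_value(rsrq_db):
    """Map a measured RSRQ in dB onto the reported integer 0..34 (0.5 dB steps)."""
    if rsrq_db < -19.5:
        return 0
    if rsrq_db >= -3:
        return 34
    return floor((rsrq_db + 19.5) * 2) + 1

print(rsrp_report_value(-139.2), rsrq_report_value(-10.3))   # 1 19
```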

Integer value of RSRP and RSRQ reported by UE is included in RRC message (measurement
report of serving cell) shown below:

Measurement of RSRP, RSSI and RSRQ from different antenna port are shown below:

RSTD:
RSTD (see 3GPP 36.133 and 36.214), Reference Signal Time Difference, measures the subframe timing difference between a reference cell and a neighbour cell. RSTD is used for location-based services and was introduced in LTE release 9. The RSTD measurement is done by the UE and uses the power received in the positioning reference signal (PRS) transmitted by the eNodeB. PRS was also introduced in LTE release 9.

Questions are welcome.


Posted by Abhishek Kumar

Thursday, 4 September 2014

Downlink Assignment Index (DAI)


DAI (Downlink Assignment Index) is an index communicated to the UE by the eNB to prevent ACK/NACK reporting errors arising from the HARQ ACK/NACK bundling procedure performed by the UE. To understand how DAI works we need to look at how ACK/NACK reporting is done in LTE TDD.
In LTE TDD, the UE can send a single ACK/NACK for multiple PDSCH subframes, using one bit for each codeword, CW0 and CW1.

The UE performs a logical AND operation over each codeword, CW0 and CW1 (CRC passed/failed), of each PDSCH received and reports the result in two bits (00, 01, 10, 11) on a specific uplink subframe. Below is the table which shows which PDSCH subframes are bundled for ACK/NACK reporting on which uplink subframe, for each TDD UL/DL configuration (shown only for config 1 and config 2, in green).

For example:

UL/DL Configuration 1:
The k values for the 2nd (uplink) subframe are 7 and 6 (according to Table 10.1.3.1-1 of 36.213), hence on this uplink subframe the ACK/NACKs of the 5th and 6th (PDSCH) subframes of the previous radio frame (shown in green) can be bundled and reported.

For the 3rd uplink subframe the number of bundled subframes is 1 (for the 9th DL subframe of the previous radio frame).
UL/DL Configuration 2:
According to the table above, on the 2nd uplink subframe the HARQ ACK/NACKs of DL subframes 4, 5, 8 and 6 of the previous radio frame can be bundled.

In the same manner, on the 7th uplink subframe the bundled HARQ ACK/NACK can cover the 9th DL subframe of the previous radio frame and the 0th, 3rd and 1st DL subframes of the current radio frame.

So the DAI (Downlink Assignment Index) ensures that the number of HARQ ACK/NACKs bundled and reported by the UE corresponds exactly to the number of PDSCH/PDCCH subframes received by the UE. Now consider a situation where the eNB schedules two subsequent subframes to the UE, but the UE misses the transmission in the first subframe and successfully decodes the second subframe. The UE would transmit one ACK, covering only the second transmission, but the eNB would interpret it as both transmissions having been successfully decoded. The DAI plays an important role in preventing such errors.
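
A highly simplified sketch of the bundling and of the consistency check the DAI enables is shown below; real UE behaviour is defined in 36.213 section 7.3, and the function names here are invented for illustration.

```python
def bundled_ack_bits(decode_results):
    """TDD HARQ-ACK bundling: decode_results is a list of (cw0_ok, cw1_ok)
    tuples, one per bundled PDSCH subframe; the UE ANDs the results per
    codeword and reports the two bundled bits on one uplink subframe."""
    ack_cw0 = all(cw0 for cw0, _ in decode_results)
    ack_cw1 = all(cw1 for _, cw1 in decode_results)
    return ack_cw0, ack_cw1

def assignments_missed(dai_from_last_dci, assignments_detected):
    """The DAI tells the UE how many downlink assignments the eNB actually
    sent in the bundling window, so a missed PDCCH/PDSCH can be detected
    before the bundled ACK is generated (simplified check)."""
    return assignments_detected < dai_from_last_dci

# Missing the first of two scheduled subframes: the UE detects one assignment,
# the DAI says two, so the UE can report NACK/DTX instead of a misleading ACK.
print(assignments_missed(dai_from_last_dci=2, assignments_detected=1))   # True
```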

In the UE log you can see which DAI was communicated to the UE in the DCI information, and you can also check how many subframes were bundled and transmitted on the PUCCH or PUSCH in the TDD ACK/NACK report. If these do not match, there is a likelihood of DAI mismatch.

Example: for TDD UL/DL configuration 1, the maximum number of downlink subframes that can be bundled is 1, 2 or 0 (when no PDSCH or PDCCH is scheduled to the UE), hence the DAI values can be 1, 2 or 4 (according to Table 7.3-X: Value of Downlink Assignment Index of 36.213), as can be seen in the LTE DCI information of the UE log.

For more information on DAI Mismatch please refer section 7.3 of 3GPP specs 36.213.

Note: this DAI field (2 bits) is present only in TDD operating mode. The above explanation of the DAI value applies only to TDD UL/DL configurations other than 0. In UL/DL configuration 0, this DAI field is used as an uplink index to signal for which uplink subframe(s) the grant is valid.

Please share this post if you found it helpful, and add more information in the comments section.

Thanks for visiting. Questions are welcome.


Posted by Abhishek Kumar

Monday, 28 October 2013

KMIMO LTE
KMIMO is a parameter used in the bit collection, selection and transmission of downlink data (specifically to calculate the size of the soft buffer partition used for storing a transport block). It is equal to 2 if the UE is configured to receive PDSCH transmissions based on transmission mode 3, 4 or 8, as defined in section 7.1 of 3GPP 36.213, and 1 otherwise. In other words, it represents the maximum number of transport blocks that may be transmitted to the UE in a single TTI (Transmission Time Interval, i.e. 1 ms or one subframe).
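
The definition above reduces to a one-line condition, sketched here for clarity; the function name is purely illustrative.

```python
def k_mimo(transmission_mode):
    """K_MIMO = 2 for transmission modes 3, 4 or 8 (up to two transport
    blocks per TTI), and 1 otherwise, as described above."""
    return 2 if transmission_mode in (3, 4, 8) else 1
```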

Questions are welcome.


Posted by Abhishek Kumar

Thursday, 3 October 2013

CQI/PMI and RI on the same subframe, SRS and PUCCH format 1/1a/1b or 2/2a/2b on the same subframe, SR and CQI/PMI on the same subframe

If SRS and PUCCH format 2/2a/2b messages coincide in the same subframe, the UE shall not transmit SRS.

If SRS and PUCCH format 1/1a/1b (ACK/NACK and/or SR) coincide in the same subframe, the UE shall transmit SRS only if simultaneousSRSACKNACK is true.

If SRS and a PUSCH transmission corresponding to a RARG (Random Access Response Grant) coincide in the same subframe, the SRS will be dropped.

If SRS and a retransmission of the same transport block (as part of a contention-based random access procedure) coincide in the same subframe, the SRS will be dropped.

If SR and CQI/PMI/RI coincide in the same subframe, the CQI/PMI/RI will be dropped only if the UE sends an SR (triggered by a BSR); otherwise the CQI/PMI/RI will be reported by the UE in that subframe.

If CQI/PMI and RI are configured on the same subframe and coincide, MAC will schedule RI on that subframe and hence the UE will report RI instead of CQI/PMI. (One possible reason is that the periodicity of RI is always greater than or equal to that of CQI/PMI, which means the eNB receives RI input less frequently than CQI/PMI input from the UE, so priority is given to RI.)

(Example: configure the higher-layer parameters cqi-pmi-ConfigIndex and ri-ConfigIndex such that CQI/PMI and RI coincide on the same subframe and verify the reporting using UE logs as well as the FAPI interface. cqi-pmi-ConfigIndex = 17 and ri-ConfigIndex = 483 could be one valid configuration to simulate this scenario, according to Tables 7.2.2-1A and 7.2.2-1B of 36.213.)
Please share if you find this information useful.

Questions are welcome.
