
Hi optimplanner,

I would check the following items:

1. Are the UEs in the cell capable of more than 7.5 Mbps?
2. What is the actual IuB configuration in the database, even though you
physically have Fast Ethernet?
3. What were the CQI values while you were troubleshooting? The CQI
value must be higher than 25 to achieve throughput above 7.5 Mbps.
4. Have you tried enabling and disabling flow control at the NodeB to see
whether it makes any difference to the throughput?
5. What is the problematic NodeB's topological relation to the RNC? Is it
directly connected to the RNC, or are several NodeBs chained in series?
6. Have you tested your Layer 2 Ethernet with a VLAN tester?
Isolate the RNC-NodeB connectivity by plugging one VLAN tester in place of
the RNC and the other in place of the NodeB; the testers can adjust the
sending rate manually (e.g., on a 100 Mbps FE link, a 3% sending rate
means 3 Mbps). There may be a problem with the Ethernet itself; sometimes
the error rate increases once the load exceeds a certain level, for
example 2 Mbps.
7. Have you checked the Ethernet cable connections at the RNC and NodeB?
The cable may not be up to standard.
8. The UEs themselves may be the problem; certain vendors' UEs can have
stability problems.

Hope the above helps narrow down or solve the problem.


David Zhang

Data Throughput Issues

CQI vs Throughput for UMTS

In a live HSDPA network, the network sends data with different transport block sizes depending
on the CQI value reported by the UE. For this mechanism to work properly, there must be an
agreement between the UE and the network about which CQI value means which transport
block size. This agreement is defined in the following tables of TS 25.214:
Table 7A: CQI mapping table A.

Table 7B: CQI mapping table B.
Table 7C: CQI mapping table C.
Table 7D: CQI mapping table D.
Table 7E: CQI mapping table E.
Table 7F: CQI mapping table F.
Table 7G: CQI mapping table G.

The next question is: which table do I have to use for which case? The answer is in the
following table from TS 25.214. As you see, we use a different table depending on UE
category, modulation scheme, and MIMO. For example, if a UE is a Category 14 device that
uses 64 QAM and does not use MIMO, it uses Table 7G for the CQI-to-transport block size mapping.
I put Table 7G here as an example. As you see in the table, the range of CQI values is 0~30;
30 means the best channel quality, and lower numbers indicate poorer channel quality. The
network has to send the data with the proper transport block size according to the reported CQI.
For example,
i) If the UE reports CQI 15, the network is expected to send data with a transport block
size of 3328 bits/TTI, which is equivalent to around 1.6 Mbps.
ii) If the UE reports CQI 30, the network is expected to send data with a transport block
size of 38576 bits/TTI, which is equivalent to around 19 Mbps.
One thing you may notice is that the transport block size for the highest CQI value does not
amount to the ideal max throughput defined in 25.306 Table 5.1a. It implies that you would
never get the ideal max throughput under live network conditions, where the network operates
according to the CQI table defined in 3GPP. (This is not a problem in a real communication
environment, since your device would not report CQI 30 in most cases.)
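The two examples above can be checked with a few lines of arithmetic. This is only a sketch: the TBS values for CQI 15 and CQI 30 are the ones quoted above, and the helper name is mine, not anything from the spec.

```python
# Throughput implied by a CQI report: transport block size (bits) per TTI.
TTI_MS = 2  # HSDPA TTI is 2 ms

def throughput_mbps(tbs_bits: int, tti_ms: float = TTI_MS) -> float:
    """Peak throughput if a block of this size is sent every TTI."""
    return tbs_bits / (tti_ms / 1000.0) / 1e6

print(throughput_mbps(3328))   # CQI 15 -> 1.664  (around 1.6 Mbps)
print(throughput_mbps(38576))  # CQI 30 -> 19.288 (around 19 Mbps)
```

Note that these are PHY-layer peak figures; the application-layer rate is lower once headers and retransmissions are accounted for.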

However, many UE manufacturers/developers want to see whether their device can really reach
the ideal max throughput. In that case, we normally use a special network simulator that
allows us to set the largest transport block size for each UE category. It is even better if
the network simulator allows us to define the CQI-to-transport-block mapping table
arbitrarily. Fortunately, I have access to this kind of equipment, and I performed an
experiment as described below using the network simulator and an HSDPA Category 10 UE.

First I defined a CQI-to-transport-block-size table very similar to Table 7D, but I changed
the transport block sizes for the high-end CQIs (30, 29, 28, 27) to allocate larger transport
blocks than the ones specified in Table 7D, to push toward the ideal max throughput.
I then programmed the network simulator to decrease the downlink power in certain steps.
As the downlink power (cell power) goes down, the UE reports lower CQI values and the
network simulator transmits smaller transport blocks.
The result is as follows.
In the upper plot, you see three traces: green, red, and blue. The green trace is the average
CQI value the UE reported within 500 ms. The red trace indicates the amount of data in kbps
that the network emulator transmitted to the UE within a second. The blue trace indicates the
amount of data in kbps that the UE successfully decoded. If the red and blue traces overlap,
the UE successfully decoded all the data transmitted by the network; if the blue trace is
lower than the red trace, the UE failed to decode some of the data transmitted by the
network. The black line shown in sections A, B, and C is the data rate defined in Table 7D,
but I intentionally allocated a higher data rate for sections A, B, and C to push the data
rate closer to the ideal max throughput.
In the lower plot, you again see three traces: green, red, and blue. The green trace is the
average CQI value the UE reported within 500 ms. The red trace indicates the number of ACKs
within 500 ms, and the blue trace indicates the number of NACKs within 500 ms.
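The quantities in these plots are easy to reproduce from per-interval counters. A minimal sketch, assuming for simplicity that every transport block in an interval has the same size (the real simulator varies the TBS with the reported CQI, so this is an approximation):

```python
# Deriving the plotted quantities from ACK/NACK counters per 500 ms interval.

def decoded_throughput_kbps(acks: int, tbs_bits: int, interval_ms: int = 500) -> float:
    """Blue trace of the upper plot: data the UE successfully decoded, in kbps."""
    return acks * tbs_bits / interval_ms  # bits per ms == kbps

def residual_bler(acks: int, nacks: int) -> float:
    """Fraction of transport blocks the UE failed to decode."""
    total = acks + nacks
    return nacks / total if total else 0.0

# In section A every block is ACKed (all 250 TTIs in 500 ms), so the red and
# blue traces overlap and the residual BLER is zero:
print(decoded_throughput_kbps(acks=250, tbs_bits=38576))  # 19288.0 kbps
print(residual_bler(acks=250, nacks=0))                   # 0.0
```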

There are a couple of things you may notice (the notes here may differ from what you observe
with your own device and test setup):
i) Section A is the only region in which the UE shows 100% data decoding without any failure.
It means you have to make sure that your test equipment configuration and the cable
connection between the test equipment and the UE are set up properly so that the channel
quality falls in this area. (I would say "CQI should be much higher than 30". I know 30 is
the max CQI value; what I mean is that the channel quality should be much better than the
quality at which the UE barely reports CQI 30.)
ii) In section B, you see huge drops in throughput and a huge increase in the number of
NACKs. The main reason would be that I allocated too large a transport block size for CQI 29
and 28. There may also be some UE issues in this range.
iii) Sections C, D, and E show fairly normal trends. Ideally we should expect exact overlap
of the red and blue traces, but reality never matches the ideal :-)

Whenever I receive inquiries about HSDPA-related throughput problems, I go through the
following checklist.

i) Does the network (or network emulator) define the TFRI table for max throughput?
ii) Is the TFRI index selected at each transmission for max throughput?
iii) Does the UE report the proper category information in RRC Connection Setup Complete?
iv) Is the HARQ memory model properly configured in Radio Bearer Setup? (e.g., implicit
vs. explicit, number of HARQ processes, HARQ memory size, etc.)
v) Does the PHY layer show any HARQ retransmissions?
vi) Does RLC show any retransmissions?
vii) Does the PC inject packets big enough to fully utilize the data pipe defined by the network?
viii) Does the PC inject data packets frequently enough to fully utilize the data pipe?

Now you may understand why I put such an emphasis on having proper logging tools for
throughput troubleshooting. Almost none of the items on this list can be checked without a
proper logging tool. The best option is to have such a logging tool on both the network side
and the UE side, but if not, you should have the tools on at least one side (UE or network).

Now let's look into the overall data flow. In the HSDPA case, the packet size at the input
stage (the IP packet size) is similar to the final L1 frame size, even though the final L1
frame size can be a little smaller or larger than the input packet size depending on the
HSDPA category. But MAC-d is still involved in the data path, and the MAC-d packet size is
much smaller than both the IP packet size and the L1 frame size. It means the IP packet must
be split into multiple small chunks to go through MAC-d and then reassembled before it gets
to L1. I don't think this is a very efficient process, but we cannot get rid of MAC-d because
of the current UMTS network architecture. Technically, this kind of split/combine process can
be a source of problems.
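The split/reassemble path described above can be sketched as follows. The 42-byte (336-bit) PDU size is an assumption for illustration (336 bits is a commonly used MAC-d PDU size), and real RLC adds headers and padding that are omitted here:

```python
# Sketch: one IP packet is segmented into fixed-size MAC-d PDUs, and the
# PDUs are later reassembled into a transport-block payload.
PDU_BYTES = 42  # 336 bits; an illustrative MAC-d PDU size

def segment(ip_packet: bytes, pdu_bytes: int = PDU_BYTES) -> list[bytes]:
    """Split an IP packet into MAC-d-sized chunks (the last may be short)."""
    return [ip_packet[i:i + pdu_bytes] for i in range(0, len(ip_packet), pdu_bytes)]

def reassemble(pdus: list[bytes]) -> bytes:
    """Concatenate PDUs back into a transport-block payload."""
    return b"".join(pdus)

packet = bytes(1500)               # a typical 1500-byte IP packet
pdus = segment(packet)
print(len(pdus))                   # 36 PDUs (ceil(1500 / 42))
assert reassemble(pdus) == packet  # lossless round trip
```

Every extra segmentation/reassembly boundary like this is a place where sequencing or buffering bugs can hide, which is why the split/combine process is worth checking when throughput is poor.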

In the HSDPA case, there is another issue that makes the situation complicated. In the R99
case, the most common L1 transmission timing is 10 ms (1 TTI = 10 ms), but in the HSDPA case
the most common L1 transmission timing is 2 ms (1 TTI = 2 ms). It means that if the L1 frame
size is similar to one IP packet, the PC tool must be able to create 500 IP packets per
second, and the network's internal layers must operate fast enough to pass all those packets
down to L1. It implies that PC performance or PC configuration can be a bottleneck for a
throughput test (especially in the HSDPA Category 8 and 10 cases).
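The "500 packets per second" figure falls directly out of the TTI length; a trivial check:

```python
# One transport block is drained every TTI, so the packet source must keep
# up with this many transmission opportunities per second.

def ttis_per_second(tti_ms: float) -> float:
    """Number of L1 transmission opportunities per second."""
    return 1000.0 / tti_ms

print(ttis_per_second(10))  # R99:   100.0 transmissions/s
print(ttis_per_second(2))   # HSDPA: 500.0 transmissions/s
```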

For your reference, I created a table showing the maximum (or near-maximum) throughput for
the most commonly used HSDPA categories. For these throughput issues, let's focus on TTI,
TBS, and PDU.
TTI shows how often the network transmits a chunk through the PHY layer. For example,
TTI = 2 means the network transmits a PHY-layer data chunk every 2 ms.
TBS is the Transport Block Size. The unit in the 3GPP table is bits, but I added another
column showing the TBS in bytes so you can easily compare it with the IP packet size, which
is normally expressed in bytes. For example, if TTI = 2 and TBS = 3630, the network transmits
a data chunk of 3630 bits (about 454 bytes) every 2 ms.
PDU is the data chunk of MAC-d, so the PDU size is the size of the data chunk coming out of
MAC-d. If you compare the PDU size and the TBS, you will notice that the TBS (the PHY data
chunk) is much bigger than the PDU size. If you compare the PDU size and the common IP packet
size (1500 bytes), you will notice the IP packet size is much bigger than the PDU size.
Putting all this together, you will see that in this process an IP packet is split into many
PDUs, and those PDUs are reassembled into a single transport block (TB) and then transmitted
through the antenna. This is the meaning of the diagram shown above.

Another important thing you can notice from the table above is that from Category 8 onward,
one transport block becomes bigger than one IP packet. It means the PC has to transmit one or
more IP packets every 2 ms. If you look at Category 10, you will notice that the PC (data
server) must be able to transmit more than 2 IP packets every 2 ms. So in this case, PC
performance greatly influences the overall throughput.
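To put rough numbers on this, here is how many 1500-byte IP packets one transport block holds. The peak TBS values used below (Cat 6: 7298 bits, Cat 8: 14411 bits, Cat 10: 27952 bits) are the familiar figures from 25.306, but treat them as illustrative rather than authoritative:

```python
# IP packets per 2 ms TTI implied by the peak transport block size.
IP_PACKET_BYTES = 1500

def packets_per_tti(tbs_bits: int) -> float:
    """How many full-size IP packets fit in one transport block."""
    return (tbs_bits / 8) / IP_PACKET_BYTES

print(round(packets_per_tti(7298), 2))   # Category 6:  0.61
print(round(packets_per_tti(14411), 2))  # Category 8:  1.2  (> 1 IP packet per TTI)
print(round(packets_per_tti(27952), 2))  # Category 10: 2.33 (> 2 IP packets per TTI)
```

Anything above 1.0 means the data server must source multiple IP packets inside a single 2 ms window, which is where PC performance starts to matter.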

So my recommendation, especially for the high data rate categories, is to check the PC
settings/performance and see whether the PC is good enough for this testing.
(Connect the client and server PCs directly with a LAN cable, do a PC-to-PC wireline
throughput test, and make sure that the throughput is well above the expected UE throughput.)
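The wireline sanity check above is usually done with a dedicated traffic generator; as a minimal sketch of the idea only, this pushes bytes over a local TCP socket for one second and measures the achieved rate (everything here, names included, is my own illustration):

```python
# Minimal PC-to-PC throughput sketch: a receiver thread counts bytes while
# the sender blasts data over a TCP socket, then the rate is computed.
import socket
import threading
import time

PAYLOAD = b"\x00" * 65536
DURATION_S = 1.0

def receiver(listener: socket.socket, result: dict) -> None:
    conn, _ = listener.accept()
    received, start = 0, time.monotonic()
    while True:
        chunk = conn.recv(65536)
        if not chunk:  # sender closed the connection
            break
        received += len(chunk)
    result["mbps"] = received * 8 / (time.monotonic() - start) / 1e6
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # loopback here; a direct LAN cable in practice
listener.listen(1)
result: dict = {}
t = threading.Thread(target=receiver, args=(listener, result))
t.start()

sender = socket.socket()
sender.connect(listener.getsockname())
end = time.monotonic() + DURATION_S
while time.monotonic() < end:
    sender.sendall(PAYLOAD)
sender.close()
t.join()
print(f"wireline throughput: {result['mbps']:.0f} Mbps")  # should be well above the UE's peak rate
```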