Deprioritization of Heavy Users in Wireless Networks
Hai Zhou, Kevin Sparks, Nandu Gopalakrishnan, Pantelis Monogioudis, Francis Dominique, Peter Busschbach, and Jim Seymour, Alcatel-Lucent
The explosive demand for wireless data services that followed the introduction of application phones continues to create significant challenges for mobile operators. Measurement campaigns indicate that the consumption of wireless network capacity follows a power law, where 20 percent of the users consume more than 80 percent of capacity. This creates unfairness among users in terms of the data volume they are allowed to consume and, more importantly, degrades the quality of experience of all users during congested periods. In response, many operators are attempting to control demand by gradually moving away from unlimited data plans through the introduction of volume caps and tiered charging. Many operators throttle heavy users after they have exceeded their cap, and are considering doing the same during times of congestion. This article evaluates the concept of deprioritization of heavy users in wireless networks for congestion management, the difference between deprioritization and throttling, and the enabling technologies needed to implement the feature in real-world networks.
EVOLUTION OF WIRELESS NETWORKS
Over the last two years the release of high-speed packet access (HSPA)-capable application phone platforms, such as iOS and Android, created an explosion in the demand for mobile data services, primarily by consumers, and signaled a worldwide race to upgrade wireless networks to satisfy this new consumer market.

Over the last five years, the spectral efficiency of Third Generation Partnership Project (3GPP) cellular systems in the downlink (DL) and uplink (UL) has improved considerably. In the DL, relative to the baseline high-speed DL packet access (HSDPA) R5 performance (0.5 b/s/Hz), reception diversity at the user equipment (UE) together with spatio-temporal equalizers almost doubled the spectral efficiency in environments with significant multipath propagation such as in cities (0.9 b/s/Hz). The introduction of multiple-input multiple-output (MIMO) provided further benefits in small cells that typically offer higher signal-to-interference-plus-noise ratios (SINRs) than macrocells. In the UL, relative to the baseline R5 (0.12 b/s/Hz), the introduction of the enhanced dedicated channel (E-DCH) in Release 6 provided a significant performance boost (0.25 b/s/Hz), mainly because of fast channel-aware rate control. Other significant proprietary enhancements such as four-way receive diversity and intracell successive interference cancellation (SIC) can further boost UL performance. The ongoing introduction of Long Term Evolution (LTE) Release 8 boosts the DL spectral efficiency by 20 percent (1.2 b/s/Hz) and the UL by 50 percent (0.61 b/s/Hz). The UL gain is due to LTE's frequency-division multiple access (FDMA) nature, achieved without the complicated intracell SIC baseband processing of HSUPA. Standards are evolving, but technological progress alone is not enough to cope with the adoption rate of application phones, and mobile operators have turned to pricing strategies in an attempt to modulate demand accordingly. This is addressed next.

Many operators continue to allow so-called unlimited usage for a flat monthly payment. Usually "unlimited" carries a relatively high volume cap, typically 5 Gbytes per pay cycle, associated with the fair usage policy (FUP) enforced by many operators. The well-known problem with this model is that usage across data subscribers follows the so-called power law, also known as the "80-20" rule, where 20 percent of the users cause 80 percent of the air interface loading in some instances. Because of this, as well as the decoupling between revenue and usage, this pricing model is currently being revised. One popular option is to move to tiered pricing, where consumers are offered data buckets of various sizes (200 Mbytes, 2 Gbytes, etc.) and decide which is best suited for them. When a data bucket is exhausted, the consumer is simply charged for another data bucket of the same or lesser size (size step). Effectively, this corresponds to marginal revenue charging tailored to multiple consumer segments.
0163-6804/11/$25.00 © 2011 IEEE
IEEE Communications Magazine • October 2011
In the flat rate "unlimited" pricing model that is attractive to consumers due to its ease of individual usage management, the problem is exacerbated by heavy users who consume a disproportionate amount of scarce radio resources such as bandwidth, time, and power relative to the revenue they generate, often congesting the network and limiting the opportunity for other "normal" users to satisfactorily transfer their desired or budgeted quantum of data in any progressive pricing structure. The poor QoE that thereby ensues for normal users leads to undesirable consequences such as increased churn. As mentioned, apart from heavy users, the sheer number of typical users is also a problem, but one that can only be dealt with by explicitly increasing the capacity of the network. Unlimited data plans are popular with consumers, and operators need to balance customer satisfaction along with network cost containment and monetization priorities to remain competitive. Operators design the pricing model parameters, which include the number of segments and the corresponding bucket and step sizes, balancing heavy usage and the desired market size expansion for always-on applications. So in addition to pricing strategies, the problem of resource-abusive heavy users can also be addressed with additional tools such as deprioritization, the subject of this article.

We provide qualitative insight into the traffic generated by today's application phones. In a later section, we make precise the definition of heavy users as users that consistently have exceeded certain volume thresholds over a period of time, or constitute a suitably determined upper tail of a long-term usage volume distribution. We review the tools that are available in Release 8 networks to safeguard the quality of experience (QoE) of these applications and how throttling is done today in networks. We review fair allocation schemes as used in the medium access control (MAC) layer of (e)Node-Bs and show how deprioritization can result from changing the entitlements of users in the radio interface resources. We conclude by highlighting the key differences of deprioritization and throttling using a realistic 3G simulator.

CONGESTION DETECTION AND ALLEVIATION
Analysis of datasets created by probes at standardized interfaces of real-world mobile networks reveals interesting properties of aggregate packet data traffic. On aggregate, traffic exhibits what is called a busy hour, inherited from the behavior of circuit-switched (CS) voice users to place calls more heavily during specific hours of the day. Since in many wireless networks packet data applications share the carrier with low-latency voice applications that have the highest priority, the resources left over for higher-latency packet data applications would necessarily follow busy-hour voice patterns. Notably, field data points to the existence of a few "heavy" users transferring large volumes of data.

USER AND CONTROL PLANE CONGESTION
The applications end users run vary widely from cell to cell. In an era where thousands of applications are developed and launched in application stores every year, it is impossible to characterize their traffic intensity individually. While the subject of this article is user-plane congestion, in many cases congestion is instigated by the signaling that applications generate. Mobile applications may congest the network disproportionally to the generated volume due to air-interface signaling. Because many popular applications are "chatty," talking constantly to Internet application servers, even a simple Twitter or Facebook status update can generate many additional signaling exchanges over the air interface. An example for web browsing and email applications is shown in Table 1, where a relatively thin application such as email is responsible for 4 percent of the network volume but about 65 percent of the signaling volume. In addition, interference management control loops are almost entirely based on long-term estimates and are slow to react to frequent session activations.

Table 1. Traffic share by application:
                           HTTP   Email
Air interface signaling    12%    65%
Network volume             70%    4%

CONGESTION ANALYTICS
To collect, analyze, and act on events that may be caused by congestion, network monitoring and analytics (NM&A) capabilities are required. The positioning of such an analytics function in a functional architecture is shown in Fig. 1. For example, it can be used to identify congestion in the network, including the air interface, network element failures, virus attacks, and so on. Congestion over the air can be indirectly measured from the behavior of backhaul protocols that employ flow control algorithms; the latter allow the radio network controller (RNC) or packet data network (PDN) gateway (PGW) to understand how much data can be sent to the (e)NB so that the transmission buffers are not exhausted due to air interface congestion. The analytics function is able to do regression analysis to identify strong correlations between variables of interest, such as time of day, social network degree, indoor state, and other secondary factors. In conjunction with charging data from the online and offline charging servers, NM&A provides and processes all input required for defining new policies.

How operators can make use of such analytics depends on the desired degree of automation. The results can be used offline to craft new policies and pricing strategies. Alternatively, newly defined policies can be applied online, closing the loop with the policy control resource function (PCRF).
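The signaling-versus-volume asymmetry shown in Table 1 suggests a simple analytics check. The sketch below flags applications whose share of air-interface signaling far exceeds their share of user-plane volume; the factor-of-2 flagging threshold is an illustrative assumption, not a value from the article:

```python
# Per-application shares as observed at the probes (Table 1 figures).
apps = {
    "http":  {"signaling_share": 0.12, "volume_share": 0.70},
    "email": {"signaling_share": 0.65, "volume_share": 0.04},
}

def chattiness(app):
    """Ratio of air-interface signaling share to user-plane volume share.
    Values well above 1 mark applications that load the control plane
    disproportionately to the data they actually move."""
    return app["signaling_share"] / app["volume_share"]

for name, shares in apps.items():
    # 2.0 is an illustrative threshold an operator would tune.
    tag = "chatty" if chattiness(shares) > 2.0 else "normal"
    print(f"{name}: chattiness {chattiness(shares):.2f} -> {tag}")
```

Under these numbers email, despite its tiny volume share, stands out as the chatty application, which matches the qualitative discussion above.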
Figure 1. Network monitoring and analytics in a functional architecture. (Steps: 1, network monitoring and analysis; 2, passing of cost function value to the PCRF; 3, PCRF policy enforcement to the relevant gateway. Elements shown: HSS, MME, PCRF, and SGW/PGW with operator IP services for the LTE RAN macro/metro/home, and RNC, SGSN, and GGSN for the 3G RAN macro/metro/home.)

ADDRESSING USER-PLANE CONGESTION
A typical approach to reducing the volume consumed by heavy users is throttling, which in its simplest incarnation involves a blind reduction in the peak data rate that a heavy user can transmit. This limitation is enforced by rate policing functions at the PGW or gateway general packet radio service (GPRS) support node (GGSN) level in the core network. The rate of flow of heavy user traffic out of the core network is thus diminished before it enters the radio access network (RAN) itself, curtailing its inflow into the base station (eNB) buffers just ahead of the presumed air interface bottleneck. This reduces congestion for contending normal users. Apart from congestion considerations, throttling can also be applied based on application identifiers (e.g., peer-to-peer traffic). Such simple throttling of the heavy user can, however, be heavy handed (i.e., overdone or even unnecessary), as it does not consider congestion conditions locally at the cell to which the heavy user is attached, and the so-called bottleneck may not even exist at specific times in specific cells. Simple throttling can be improved by allowing for RAN-level congestion sensing and feeding back such information to the central entity (at the PGW or GGSN level), which will then enforce selective throttling of heavy user traffic. But such selective throttling extracts a price for its sophistication in the form of higher implementation cost and complexity, requiring a small latency between detection and action.

Deprioritization — This article introduces an alternative approach called deprioritization, a technique known before in wireline networks, which encompasses centralized detection and tagging of heavy users, perhaps based on congestion conditions inferred at that level by the network analytics function, and deprioritized scheduling at the RAN level. By deprioritization we generally mean lowering the priority rank of a user or user application (bearer or flow) tagged as heavy, as compared to what is ordinarily assigned by a scheduling algorithm in the MAC layer of the (e)Node-B. When the cell is congested, the end result of deprioritization is that the scheduler shifts resources from the heavy users to the typical users, increasing the typical users' resource availability and hence their throughput and QoE. If there is no such congestion at certain times (i.e., plenty of air interface [radio] resources are left over), deprioritization does not unduly penalize the heavy users, by exploiting the natural resource filling property of typical MAC scheduling algorithms. Deprioritization uses priority weight functions that underweight the heavy users in the scheduler to achieve a similar, if not higher, increase in the throughputs experienced by the non-heavy users, while maintaining long-term fairness.

As shown in Fig. 2, the simpler and cheaper "open loop" approach of centralized heavy user tagging and localized deprioritization accomplishes the same goals as a costlier and more complex "closed loop" intelligent throttling approach. Notably, the inherent local congestion-aware action of the deprioritization approach, which automatically holds back from over-penalizing heavy users when the cell loading is light, obviates the need for costly local congestion detection and/or feedback to a centralized heavy user management entity. The benefits of deprioritization come at the expense of the requirement that the network implement quality of service (QoS) control; this is in contrast to throttling, which can be applied in networks that do not offer QoS. The required QoS enablers are detailed in the next section.
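The simple throttling described above amounts to rate policing, which is commonly modeled as a token bucket: packets of the heavy user are forwarded only while tokens are available. The sketch below illustrates the mechanism; the rate, burst size, and packet sizes are illustrative, not values from any product:

```python
class TokenBucketPolicer:
    """Token-bucket rate policer of the kind a PGW/GGSN could apply to a
    throttled heavy user: traffic beyond the policed rate is dropped."""

    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps        # sustained policed rate
        self.burst = burst_bits     # bucket depth (allowed burst)
        self.tokens = burst_bits    # bucket starts full
        self.last = 0.0             # arrival time of previous packet (s)

    def allow(self, now, packet_bits):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True             # forward the packet
        return False                # police (drop) the packet

# Police a stream of 12,000-bit packets, one per second, to 8 kb/s.
policer = TokenBucketPolicer(rate_bps=8_000, burst_bits=12_000)
passed = sum(policer.allow(float(t), 12_000) for t in range(100))
# Half the packets are dropped, holding the flow below the policed rate.
```

Note that the policer is blind to the state of the air interface: it keeps dropping packets even when the cell the user is attached to has spare capacity, which is exactly the drawback the article attributes to simple throttling.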
Figure 2. Representation of the effects of deprioritization on heavy users. (Heavy and active users contend for RAN resources. During congestion, heavy users' throughput is reduced while active users' QoE, in terms of throughput and responsiveness, improves; light users receive good performance with or without congestion management. Congestion management shifts priority from heavy users to, primarily, active users.)

More intelligent versions of throttling can also be performed by monitoring for congestion and selectively applying the throttling at the gateway only during congestion. It is worth mentioning that implementation of DL throttling can simply be done outside of a QoS framework by effectively limiting the offered rate at the gateway level.

CONTROLLING QUALITY OF SERVICE
In this section we provide a summary of the QoS architecture, outlining the key concepts, and discuss the implementation of throttling in the network. Although the principles are applicable to earlier 3GPP releases, we focus on the Release 8 QoS architecture.

At the highest level of the QoS architecture is the service, which is defined as the offering that the operator makes to a subscriber. Internet access and voice are two examples of such services, with the latter supported by a guaranteed bit rate (GBR) resource type. Most of the discussion in this article refers to non-GBR resource types, since this is the type assigned to support volume-demanding mobile applications (e.g., video streaming, large file transfers). QoS is guaranteed at the level of the bearer, which may contain multiple packet flows that receive the same packet forwarding treatment. Providing different packet forwarding treatment for reasons other than application requirements calls for the establishment of a separate bearer or the modification of the existing one, both cases necessitating the usage of a different QCI.

Each bearer is assigned a unique QoS class ID (QCI) by the network. The QCI is effectively a pointer to a list of QoS characteristics that each operator usually preconfigures on a node-by-node basis. Table 2 shows a standardized QCI mapping as specified in 3GPP TS 23.203.

With respect to the bit rate, the parameter UE aggregate maximum bit rate (UE-AMBR) provides the upper aggregate limit for all non-GBR bearers that simultaneously terminate (DL) or originate (UL) at the UE. These UL/DL UE-AMBRs are subscription-level quantities and as such are known at the home subscriber server (HSS). They propagate via UE registration procedures down to the RAN to enforce the data rate to/from the specific UE via rate policing. Since the UE may be communicating with multiple services at the same time — for example, a virtual private network (VPN) service and a non-VPN service, each configured as a separate access point name (APN) — a second parameter, the APN-AMBR, was also introduced to configure the DL/UL rate policing function at the gateway level for each of the APNs. For example, we can have a DL APN-AMBR of 50 Mb/s for the VPN service and 5 Mb/s for the non-VPN service. The APN and UE AMBR values are key tools for implementing throttling, either selectively or on all non-GBR bearers of the UE, down to an AMBR value.

QCIs are thus used for communicating the heavy user tag, that is, in the case of deprioritization, information about the extent of deprioritization a user, flow, or bearer is to be subjected to, from the central heavy user detection entity in the core network to the MAC scheduler in the specific eNodeB associated with this user. Under policy control, best effort bearers of the identified heavy users will be mapped to a specific QCI value. This QCI is propagated down to the RAN and is interpreted at the MAC scheduler level as an indication of a weight priority function.
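The standardized QCI characteristics are naturally expressed as node-local configuration. The sketch below keys them by QCI and shows the kind of remapping a heavy user tag implies; remapping best effort bearers to QCI 9 is an illustrative operator choice, not something mandated by the standard:

```python
# Standardized QCI characteristics (Table 2): resource type, scheduling
# priority, packet delay budget (ms), packet error loss rate.
QCI_TABLE = {
    1: ("GBR",     2, 100, 1e-2),   # conversational voice
    2: ("GBR",     4, 150, 1e-3),   # conversational video (live streaming)
    3: ("GBR",     3,  50, 1e-3),   # real-time gaming
    4: ("GBR",     5, 300, 1e-6),   # non-conversational video (buffered)
    5: ("non-GBR", 1, 100, 1e-6),   # IMS signaling
    6: ("non-GBR", 6, 300, 1e-6),   # video, TCP-based (www, email, ftp, p2p)
    7: ("non-GBR", 7, 100, 1e-3),   # voice, live streaming, interactive gaming
    8: ("non-GBR", 8, 300, 1e-6),   # video, TCP-based (progressive video)
    9: ("non-GBR", 9, 300, 1e-6),   # default best effort traffic
}

def remap_heavy_user(qci):
    """Illustrative heavy-user tagging under policy control: move a best
    effort (non-GBR) bearer to the lowest-priority class, QCI 9, while
    leaving GBR bearers untouched."""
    resource_type, *_ = QCI_TABLE[qci]
    return 9 if resource_type == "non-GBR" else qci
```

In a real network the PCRF would signal such a bearer modification, and the eNodeB scheduler would interpret the new QCI as a pointer to a deprioritized weight function.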
Table 2. Standardized QoS class ID characteristics:
QCI  Resource type  Priority  Packet delay budget  Packet error loss rate  Example services
1    GBR            2         100 ms               10^-2                   Conversational voice
2    GBR            4         150 ms               10^-3                   Conversational video (live streaming)
3    GBR            3         50 ms                10^-3                   Real-time gaming
4    GBR            5         300 ms               10^-6                   Non-conversational video (buffered streaming)
5    Non-GBR        1         100 ms               10^-6                   IMS signaling
6    Non-GBR        6         300 ms               10^-6                   Video (buffered streaming); TCP-based (e.g., www, email, chat, ftp, p2p file sharing, progressive video)
7    Non-GBR        7         100 ms               10^-3                   Voice, video (live streaming), interactive gaming
8    Non-GBR        8         300 ms               10^-6                   Video (buffered streaming); TCP-based (e.g., progressive video)
9    Non-GBR        9         300 ms               10^-6                   Video (buffered streaming); TCP-based (e.g., progressive video)

CLASSIFICATION AND DEPRIORITIZATION RANKING OF USERS
Either monitoring tools or offline/online charging servers, depending on the type and granularity of usage data required, can be used to identify heavy users. As an example, a simple classification function would check whether the consumed volume of a UE exceeded a certain threshold over a measurement period that can span from minutes to days. The classification function produces a dynamic score that represents the heaviness level of the UE (e.g., a simple binary score would classify the users as either heavy or typical). The function can take a number of input arguments that include many PCRF policy affecting factors. In reality, the classification function will likely be customized to meet the business and network needs of any specific operator.

The execution of the classification function is online. The tuning of its parameters (thresholds, measuring intervals, etc.) can be done offline, may require advanced statistical processing, and can be performed by monitoring and analytics functions as well as by mining charging server data. The end goal of online decision making and offline tuning of parameters is to optimize the assignment of scarce resources to achieve the concept of long-term user classification fairness.

The fairness issue can best be illustrated by an example that, for simplicity, is based only on volumetric parameters. For addressing fairness, we need to consider statistical information across the UE population of a market (e.g., a metropolitan area). In Fig. 3 we depict sample CDF diagrams of the normalized volume consumed over a long period (e.g., one or more billing cycles). Normalization is performed by dividing the consumed volume by the median volume across the population of the UE segment. Normalization ensures that events like New Year's Eve, which considerably increase the volume consumption across the population, keep the cumulative distribution function (CDF) relatively unchanged. If we have tiered data plans, only the CDF for the highest volume tier needs to be considered.

Ideally, the normalized consumed volume is a step function — an indication that all UEs got exactly the median volume. In reality, two factors determine the spread in the consumed volume. The first is the average channel variability between users. For example, two users may consume the majority of their data volume while at work, but their individual workplaces have quite different coverage qualities. The bulk of the spread, though, is attributed to differences in their generated offered load, that is, the extent to which they use their application phones.

Classification fairness based on volume metrics can be defined by using a sigmoid curve in the (normalized volume, percentile) plane. Operators estimate a desired sigmoid curve that represents the desired spread of the upper tail of consumed normalized volume, which ensures that the proportion of users with high volume consumption relative to the median lies between the ideal and desired fairness curves. This curve can be estimated by correlating the field-collected long-range volume statistics against weak/normal user complaints or satisfaction levels. Each measurement cycle may reveal a fair distribution across UEs, which itself manifests as a low or tolerable user dissatisfaction, or an unfair distribution otherwise. In the latter case, the unfairly heavy users can be identified and deprioritized or throttled.
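A minimal sketch of the binary classification just described, normalizing each UE's consumed volume by the population median; the threshold of four times the median is an illustrative parameter that would be tuned offline against the desired fairness curve:

```python
from statistics import median

def classify(volumes_by_ue, threshold=4.0):
    """Binary heavy/typical classification over one measurement period.
    A UE is tagged heavy when its consumed volume exceeds `threshold`
    times the population median (threshold value is illustrative)."""
    med = median(volumes_by_ue.values())
    return {ue: ("heavy" if vol / med > threshold else "typical")
            for ue, vol in volumes_by_ue.items()}

# Hypothetical per-UE volumes (Gbytes) over one billing cycle.
usage_gb = {"ue1": 0.3, "ue2": 1.1, "ue3": 0.9, "ue4": 12.5, "ue5": 1.0}
tags = classify(usage_gb)
# ue4, at 12.5x the median volume, is the only UE tagged heavy.
```

A production classifier would replace the single threshold with a score derived from the long-term CDF and the operator's desired sigmoid curve, but the normalization-then-threshold structure is the same.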
In the authors' opinion, how to specifically estimate the desired sigmoid curve, and how to solve the open question of long-term fairness, are topics that warrant further research. So are the topics of generalizing the fairness concept to parameters other than volume, including user application dependency, operator revenue, and so on. Notably, fairness based on relatively short-term timescales, on the order of one second, can result in cases where users are paying more than others for resources while the operator deprioritizes them because they have been classified as heavy. Once the heavy users have been classified, deprioritizing them, described in the next section, is a task for the MAC scheduler at the (e)Node-B.

Figure 3. CDF of the normalized volume consumed per UE per billing cycle. (The plot shows Pr[X < abs] versus normalized volume, 0 to 5, for ideal, desired, fair, and unfair distributions.)

MAC SCHEDULING
Since the publication of F. P. Kelly on the subject of fair allocation of resources, proportional fairness (PF) has dominated MAC scheduling algorithms for best effort applications. For simplicity, we stick to best effort applications and review the intuition behind weighted PF (WPF) by treating a simpler allocation scheme that is amenable to simple calculations. We start by explaining the concept of max-min fairness using an intuitive example.

Let us assume that we have four users of equal priority with resource demands x1 = 1/8, x2 = 1/4, x3 = 2/3, and x4 = 1, such that x1 < x2 < x3 < x4, and we would like to allocate an available capacity of C = 1 among them. Without considering their demands, the fair allocation to which each UE is entitled is C/4 = 1/4 of the available capacity. Obviously, if we start with this proposed assignment, UE1 will be offered 1/4 of the capacity. This, of course, is more than UE1 wants or can make use of, and such an allocation is inefficient; because UE1 wants only 1/8, it will return 1/8 as surplus to the scheduler. The returned surplus creates a new fair share for the remaining three users, each now entitled to 1/4 + (1/8)/3, because the surplus is to be equally distributed among the remaining three users. UE2 will then be assigned what it needs (i.e., 1/4) and will return as surplus (1/8)/3 to the scheduler, which in turn will split this equally between the two remaining users. This means that the new fair share for UE3 and UE4 will be 1/4 + (1/8)/3 + (1/2)(1/8)/3, which results in a deficit for UE3 and UE4. This will be the final allocation.

UE Entitlements — In the previous max-min example, all UEs had the same entitlement to the resource (i.e., each weight was equal to 1.0). In many instances UEs may have different a priori priorities and therefore be more or less entitled than others. Figure 4 shows example priority weight functions for heavy and typical (not heavy) users. The reduction of the entitlement of the heavy user is far more aggressive: its maximum rate parameter is set far lower compared to the normal (typical) user's, causing its priority weight to drop below the norm (unity) much earlier. As a result, the heavy user achieves a smaller average data rate than the typical user if there is contention for resources between the two. On the contrary, if there is little or no contention for resources due to light offered traffic from the normal user(s), the heavy user has full access to all the leftover resources (in plenty) to sustain high throughput for his/her traffic, in spite of the fact that it may far exceed the maximum rate parameter. The maximum rate parameter is hence not an absolute hard threshold that completely cuts off scheduling of resources once it is exceeded; instead, it acts only as a soft threshold that influences relative priorities in the presence of other contending traffic.

Figure 4. Example user entitlements (priority weight) as a function of the current average data rate assigned by the scheduler. (User weight holds at 1.0 at low average data rates and decays toward zero as the rate grows, with the heavy user's weight decaying at a much lower rate threshold than the typical user's; x-axis 0 to 800 kb/s.)

PERFORMANCE ANALYSIS
To model the behavior during deprioritization and throttling, an extensive simulation effort was conducted based on a state-of-the-art proprietary simulator. HSPA was used as the radio access technology, with the most important simulation assumptions listed in Table 3.
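The priority weight functions of Fig. 4 and their effect on resource shares can be sketched in code. The weight shape, the maximum rate parameters, and the one-shot proportional split (standing in for the per-TTI WPF scheduling decision) are all illustrative simplifications:

```python
def priority_weight(avg_rate_kbps, max_rate_kbps, floor=0.05):
    """Priority weight versus current average scheduled rate, shaped like
    the curves of Fig. 4: full weight (1.0) below the soft maximum rate
    parameter, then a decay toward a small floor. The exact shape and
    parameter values are illustrative assumptions."""
    if avg_rate_kbps <= max_rate_kbps:
        return 1.0
    return max(floor, max_rate_kbps / avg_rate_kbps)

def schedule(capacity_kbps, users):
    """Split cell capacity in proportion to each backlogged user's weight,
    a one-shot stand-in for the per-TTI weighted PF decision.
    `users` maps name -> (current average rate, max rate parameter)."""
    weights = {u: priority_weight(r, m) for u, (r, m) in users.items()}
    total = sum(weights.values())
    return {u: capacity_kbps * w / total for u, w in weights.items()}

# Heavy user policed at a 100 kb/s soft maximum, typical user at 400 kb/s;
# both currently averaging 300 kb/s while contending for a 1 Mb/s cell.
contended = schedule(1000, {"heavy": (300, 100), "typical": (300, 400)})
# With the typical user idle, the heavy user captures the whole cell.
alone = schedule(1000, {"heavy": (300, 100)})
```

Under contention the heavy user's weight has already decayed (1/3 here) while the typical user's has not, so the heavy user receives the smaller share; with no contention the soft threshold has no effect, which is exactly the resource-gap-filling behavior that distinguishes deprioritization from throttling.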
Given the continuing dominance of DL demand compared to UL, we focused on the DL direction, although similar benefits can be seen in the UL as well. In the simulation, we randomly flag a subset of users as heavy and assign to them appropriate traffic models, typically FTP and streaming models.

Table 3. Simulation assumptions:
Release compliance: single-carrier UMTS R7 with all protocol layers of UTRAN, including TCP/UDP at the application layer
Path loss: typical urban (UMTS 30.03)
Shadowing: correlated, with decorrelation length 50 m
UE mobility: UEs are static for all DL simulations, but Doppler spread is experienced in both cases
Traffic models: based on NGMN models
DL PS transport channels: HS-DSCH with associated DPCH carrying DCH at 3.4 kb/s
UE category: 8

In Fig. 5 the benefit of deprioritizing heavy users on the average throughput of the rest of the UE population is shown. Twelve heavy users' data rates were reduced by close to 70 percent, and this benefited the 72 other UEs by 34 to 60 percent, depending on their traffic model.

Figure 5. Deprioritization of heavy users (HU): average throughput in kb/s per user group (MU FTP x8, MU streaming x16, MU HTTP x36, LU email x12, HU FTP x4, HU streaming x8) for the normal (no congestion control) and HU-deprioritized cases, DL NGMN traffic model.

A side effect of deprioritization is that heavy users are served only when typical users are momentarily absent or demand low resources. In other words, typical user traffic is able to capture the resource "gaps" that are created. Obviously, this is easy to achieve when the system is congested, and in conjunction with a PF scheduler, heavy low-SINR users are the ones that stay longer in the scheduler queue. If the cell is not congested, deprioritization will have less effect on the heavy user, since the PF priority of the heavy users will rapidly recover due to automatic resource gap filling. This self-optimizing feature allows deprioritization to run over long periods without any other external control, allowing control transparency and better upswing potential for both the heavy and typical users. Therefore, the benefit of deprioritization will depend heavily on the non-heavy user traffic distribution. Throttling, on the other hand, is effectively a reduction in the offered load into the queues of the scheduler and is therefore unable to recover quickly from the lack of typical user traffic.

Figures 6 and 7 compare heavy user performance between the throttling and deprioritization approaches in terms of per-user throughput and cell utilization, respectively, for varying levels of cell congestion. For these plots we varied the number of medium demand users (MUs) to reflect various levels of congestion and observed the behavior relative to the baseline (depicted as normal) for both the deprioritized and throttled cases.

CONCLUSION
The results of our investigation highlight the challenges operators face in safeguarding their networks against heavy users, and the two major approaches to doing so:
• Throttling acts at the gateway level and is unable to exploit the temporary traffic gaps of typical users; it continues to rate limit heavy users even if there is no reason to do so.
• Deprioritization, on the other hand, has the visibility offered by the scheduler at the (e)NodeB and is also self-organizing: heavy users are served only when typical users are momentarily absent or demand low resources.
Many issues in this critical field remain open, such as ensuring fairness across the population of users, which points to the design of a classification function that offers substantial room for differentiation between vendors and/or operators.

REFERENCES
[1] F. P. Kelly, A. K. Maulloo, and D. K. H. Tan, "Rate Control for Communication Networks: Shadow Prices, Proportional Fairness and Stability," J. Operational Research Society, vol. 49, 1998, pp. 237–52.
[2] H. Ekstrom, "QoS Control in the 3GPP Evolved Packet System," IEEE Commun. Mag., Feb. 2009.
[3] C. Bastian et al., "Comcast's Protocol-Agnostic Congestion Management System," IETF Internet draft, Sept. 2010.
[4] 3GPP TS 23.203, "Policy and Charging Control Architecture," v10.0.0, 2010.

Figure 6. Comparison of impact of deprioritization vs. throttling on HU UE throughput. (HU FTP x8 average throughput, 0 to 300 kb/s, under light, medium, and high congestion conditions, for the normal, deprioritized, and throttled cases.)

Figure 7. Comparison of impact of deprioritization vs. throttling on RAN resource utilization. (Utilization in percent, roughly 82 to 100, under light, medium, and high congestion conditions, for the deprioritized and throttled cases.)

BIOGRAPHIES

HAI ZHOU received his Ph.D. in electrical engineering from Louisiana State University, Baton Rouge. He is currently working for Alcatel-Lucent in Naperville, Illinois. Prior to joining Alcatel-Lucent, he served as instructor, principal engineer, and product manager for LCC International, working on CDMA network planning, engineering, and optimization. He won the RCR Silver Award for his presentation on CDMA system voice quality auditing.

PANTELIS MONOGIOUDIS (Pantelis.Monogioudis@alcatel-lucent.com) received his Ph.D. and M.Sc. from the University of Surrey, United Kingdom, in 1994 and 1991, respectively, and in 1989 his B.Eng. with highest honors from the Technological Educational Institute (TEI), Athens, Greece. From 1994 to 1996 he was with Intracom SA, Greece, and ETSI, Sophia Antipolis, France, where he participated in the first prototyping effort of WCDMA technology. In 1997 he joined Lucent Technologies, where he served as a member of technical staff initially in the United Kingdom and subsequently in Bell Laboratories, Holmdel, New Jersey; he was a key contributor to Lucent Technologies' HSDPA scheduler in the Forward Looking Work Department, and some of his ideas were adopted in the 3GPP HSDPA and 3GPP2 (3G1x EV-DV) standards suites. He is currently a director in the Wireless CTO Organization of Alcatel-Lucent at Murray Hill, New Jersey, working on modeling and system design for WCDMA and LTE. He has received two Bell Labs President awards and has co-authored 50 granted/pending patents in the area of wireless communications.

FRANCIS DOMINIQUE received his M.S. in electrical engineering from Virginia Tech in 1995 and his B.Tech. in electronics and communications engineering from Pondicherry Engineering College in 1990. Since 2000 he has been with the Wireless R&D division of Alcatel-Lucent, where his work has focused on the design and development of state-of-the-art algorithms and performance for 3G systems. He has received numerous awards, including the Bell Labs President Award in 2003 and the Bell Labs Technical Journal's Best Paper Award in 1998, and has been awarded 31 patents in the area of wireless communications.

PETER BUSSCHBACH is senior director in Alcatel-Lucent's LTE Solutions organization. He holds a Master of Science degree in mathematics from Eindhoven Technical University, The Netherlands. Before assuming this role, he held several senior positions in the corporate CTO office and in product units, where he was responsible for standardization and strategy in areas including policy management, end-to-end QoS, VoIP, IP, Ethernet, optical networking, and network management. His standardization work included active participation in the IETF, ITU-T, the MPLS Forum, and the TM Forum.

JAMES P. SEYMOUR received B.S., M.S., and Ph.D. degrees in electrical and computer engineering from Purdue University, West Lafayette, Indiana, in 1989, 1991, and 1994, respectively. In July of 1994 he joined AT&T Bell Laboratories in Whippany, New Jersey, as a member of technical staff in the network wireless Radio Performance and Analysis Group, where his work focused on improved equalization schemes for TDMA PCS systems. In 1999 he became technical manager of the Wireless Algorithms Development Group in the Advanced Technologies organization at Lucent Technologies, and he has since served initially as a Distinguished Member of Technical Staff and subsequently as director in the Wireless CTO organization. He is currently senior director of RAN strategy in the Wireless CTO organization, focused on technology strategy for Alcatel-Lucent's broadband wireless solutions. During his tenure at Alcatel-Lucent he has been recognized for the design and development of state-of-the-art algorithms for 3G and 4G systems and for significant impacts on Alcatel-Lucent's business through outstanding technical support of Alcatel-Lucent's customers, for which he received the Bell Labs President's Gold Award. He has numerous patents and publications in the above areas.

He received his B.E. in electrical and electronics engineering from Regional Engineering College, Tiruchirappalli, India. He was a manager for nine years leading the Channel Element ASIC Algorithm Group, developing various modem designs for various product releases, and has worked on wireless modem designs for 3G1X CDMA, WCDMA, and LTE, including work on ICIC.

He received an M.Sc. in 1987, and from 1987 to 1992 he was a research scientist at Queen Mary College, University of London. From 1992 to 1996 he was a senior lecturer at South Bank University. From 1991 to 1993 he was a research scientist at Philips Research Laboratories (PRL), Redhill, United Kingdom, where he worked on shaped antenna designs for satellite applications. In 1996 he joined Lucent Technologies, where he worked on adaptive antennas for GSM. At Alcatel-Lucent he was a senior LTE network architect with its Wireless CTO organization and a member of the Alcatel-Lucent Technical Academy. He also has a number of publications and patents.
In 2006 he received the Bell Labs Fellowship award for his outstanding and seminal contributions to wireless technology and standards spanning 2G. in 1983. He has also served as a Technical Reviewer for the IEEE and Wiley. United Kingdom. from Northwestern Polytechnic University. He began his 23-year career in telecommunications as a software developer in AT&T. He has published 9 journal and 7 conference papers. where he participated in the standardization of DECT. “Policy and Charging Control Architecture. HU FTPx8 (normal) 100. where he leads a department that is responsible for end-to-end architecture and systems Figure 6. N ANDU G OPALAKRISHNAN specializes in the areas of radio resource management. and solution development in a wide range of technologies.