UMTS Performance Optimization and Acceptance Guidelines

Contents

1. INTRODUCTION
1.1. Object
1.2. Scope of this document
1.3. Key Performance Indicator values and Definitions
1.4. Audience for this document
2. Related documents
2.1. Applicable documents
2.2. Reference documents
3. General Methodology
4. Assumptions
4.1. General Assumptions
4.2. Specific Assumptions
4.2.1. Design criteria
4.2.2. Performance criteria
4.3. Warnings on metric definitions
4.4. Notation
5. Test Areas
5.1. Routes
5.2. In-building Test Areas
5.3. Special Test Areas
5.4. Warranty Zone
6. Simulated Traffic Load (if applicable)
6.1. Uplink
6.2. Downlink
7. Test Setup
8. Performance Metrics
8.1. Combined Metric: Successful Call Rate
8.2. Detailed Metrics
9. System Optimization Process
9.1. Configuration Audit
9.2. Cell Shakedown
9.2.1. Objective
9.2.2. Entrance Criteria
9.2.3. RF Basic Checking
9.2.4. Drive Routes
9.2.5. Test Drives
9.2.6. Test Areas (Indoor Sites)
9.2.7. Exit Criteria
9.2.8. Issue List
9.3. Cluster Optimization
9.3.1. Objective
9.3.2. Entrance Criteria
9.3.3. Drive Routes
9.3.4. Test Drives
9.3.5. Data Analysis / Problem Resolution
9.3.6. Exit Criteria
9.3.7. Reports
9.3.8. Issue List
9.4. System Optimization
9.4.1. Objective
9.4.2. Entrance Criteria
9.4.3. Drive Routes
9.4.4. Drive Tests
9.4.5. Data Analysis / Problem Resolution
9.4.6. Exit Criteria
9.4.7. Reports
9.4.8. Issue List
10. Reports
10.1. Performance Result Report
10.1.1. Geodetic Information
10.1.2. Key Performance Indicators (KPIs) Reports
10.2. Performance Measurement Reports
10.3. Test Environment Report
10.4. Issue List
11. Abbreviations and definitions
11.1. Abbreviations
11.2. Definitions
12. Measurement Uncertainty
12.1. Tolerance Interval of One Data Population
12.2. Tolerance Interval of Several Data Populations
12.3. Table z-values vs. confidence levels
13. Appendix 1: Test Equipment List
14. Appendix 2: Performance Metrics
14.1. KPI Indicative values
14.1.1. Acceptance Metrics
14.1.2. Optimization Metrics
14.2. Detailed metrics for acceptance and optimization
14.3. OCNS unavailability
15. Appendix 3: Overall planning for performance acceptance
15.1. IS-95 Experience
15.1.1. Procedures
15.1.2. Optimization resources
15.2. Extrapolation to UMTS

1. INTRODUCTION

1.1. Object

This document is intended to describe a generic procedure for performing UMTS network performance optimization and acceptance tests, as well as the main performance metrics that require field assessment for the set of services provided through this network.

This document is a generic and internal document. It must not be communicated directly to the customer. It provides guidelines and a base for writing the actual Network Performance Acceptance Test Plan. Procedures, conditions and performance metrics for actual contracts may have variations compared to those referenced, presented and described in this document.

1.2. Scope of this document

This document is intended to define two key processes before launching a UMTS network: performance optimization and performance acceptance.

Performance Optimization
A detailed system optimization process is needed to ensure that each of the System Performance Metrics can be achieved. As such, the objectives of the performance optimization process are to:
§ set up a uniform optimization procedure
§ set up uniform measurement standards (test set-up, test area selection, route selection, teams, etc.)
§ set up clear methods and ways of measurement in agreement between the customer and Nortel Networks
§ set up a process of following up on issues and work yet to be completed (issue lists)
§ track performance improvement trends over time

Performance Acceptance
Performance Acceptance can be defined as the final performance test before commercial launch of a network. It shows compliance with the contractual commitments. The performance acceptance test validates the network design. The criteria and process for performance acceptance must be clearly defined in order to ensure consistency over the whole system. The detailed measurement processes for each metric will be defined, including all necessary sign-off requirements. The resulting Acceptance Test Plan is required to be signed off by both the Nortel Networks and the customer acceptance primes.

The sometimes so-called “in-service performance acceptance” (when the acceptance is actually done on a commercial network) is outside the scope of this document.

1.3. Key Performance Indicator values and Definitions

The Product-KPI values and default definitions may be found in [R1]. The Network-KPI values (including network engineering) must be evaluated on a case-by-case basis to include contract specificities. In any case, when writing the final and customized Acceptance Test Plan for the customer, values and definitions must be adapted according to customer requirements or network peculiarities.

However, the definitions have been recalled in this document since some have to be slightly adapted to the measurement methods and devices (availability of points of measurement, post-processing ability, etc.). Due to the unavailability of UMTS collection and analysis hardware and software equipment at the moment, the presented methods will be updated as the equipment becomes available.

1.4. Audience for this document

This document is internal and intended mainly for drafting generic Performance Acceptance Guidelines that may be used to customize each customer’s Acceptance Test Plan. Part of the audience should be found in teams dealing with:
· Network performance acceptance
· RF engineering
· Network monitoring (metric implementation)
· Network performance contract negotiations

2. Related documents

2.1. Applicable documents

All customized UMTS network performance acceptance test plans.

2.2. Reference documents

[R1] UMT/SYS/DD/72 v01.02 UMTS Key Performance Indicators (KPI) Design [Draft]
[R2] 3GPP TS 21.904 UE Capability Requirements
[R3] 3GPP TS 22.105 Service aspects; services and service capabilities
[R4] 3GPP TS 23.107 QoS Concept and Architecture
[R5] 3GPP TS 23.003 Numbering, addressing and identification
[R6] 3GPP TS 24.008 Mobile radio interface layer 3 – Core Network Protocols – Stage 3
[R7] 3GPP TS 25.101 UE Radio Transmission and Reception (FDD)
[R8] 3GPP TS 25.215 Physical layer – Measurements (FDD)
[R9] 3GPP TS 25.303 Interlayer Procedures in Connected Mode
[R10] 3GPP TS 25.331 RRC Protocol Specification (Release 1999)
[R11] 3GPP TS 25.413 UTRAN Iu Interface RANAP Signalling
[R12] ITU-T I.356 Integrated Services Digital Network – Overall network aspects and functions – Performance objectives
[R13] ITU-T P.800 Methods for objective and subjective assessment of quality – Methods for subjective determination of transmission quality
[R14] ITU-T P.861 Methods for objective and subjective assessment of quality – Objective quality measurement of telephone-band (300-3400 Hz) speech codecs
[R15] RFC-2330 Framework for IP Performance Metrics
[R16] RFC-2679 A One-way Delay Metric for IPPM
[R17] RFC-2680 A One-way Packet Loss Metric for IPPM
[R18] PE/IRC/APP/154 E2E Voice Quality User Manual
[R19] TBD 1xRTT RF Acceptance Guidelines
[-] TBD UMTS OCNS settings (not before the first UMTS commercial release or later)

3. General Methodology

UMTS systems introduce a wide range of applications. A set of representative applications and metrics therefore has to be mutually agreed between the customer and Nortel Networks. The suggested methodology tries to merge both efficiency and user experience.

Like the roads, streets or buildings that are selected and chosen by both parties to be geographically representative, a specific cluster is selected to test the targeted applications thoroughly. Results are then supposed to be extrapolated to the entire network, providing the same tunings are duplicated.

During the same tests, both CS and PS domains will be tested simultaneously. On the CS domain, voice will be used as an application, while a mixture of ping and file transfer by FTP will be used on the PS domain. Only one combined metric (successful call rate) will characterize the performance acceptance, based on a pass-or-fail criterion; if more in-depth tests are requested, detailed metrics may be added (see section 8.2).

All tests must be done in a controlled environment, excluding as far as possible any uncontrolled traffic: all mobile devices are strictly controlled by the acceptance team and only the test equipment is running in the tested areas. Traffic (voice or data) should be generated by tools for repeatability and controllability purposes.

As a reminder, the standards (please refer to [R3]) distinguish several types of information transfer according to three main categories:
· connection oriented vs. connectionless
· Traffic type:
▪ guaranteed bit rate vs. non guaranteed bit rate
▪ constant bit rate vs. dynamically variable bit rate
▪ real time vs. non real time
· Traffic characteristics:
▪ Point to point (uni-directional, bi-directional symmetric, bi-directional asymmetric)
▪ Point to multipoint (uni-directional multicast, uni-directional broadcast)

For optimization and acceptance purposes we will keep the focus on bi-directional, point-to-point, connection-oriented services. The same document [R3] defines the following attributes for information quality:
· Maximum transfer delay (end-to-end one-way delay)
· Delay variation
· Information loss (error ratio: bit error ratio, SDU error ratio or frame error ratio, depending on which is the most appropriate)
· Data Rate

As defined in the standards [R4], the end-to-end QoS is understood as including all parts of the network, including the external bearer and the TE/MT local bearer. Our goal, as a vendor, is to provide and commit on performances for the network part we are providing to our customer. Therefore, we will concentrate as much as we can on the UMTS Bearer. Here are the UMTS Bearer Attributes defined in the standards [R4]:
· Traffic Class
· Maximum Bit Rate
· Guaranteed Bit Rate
· Delivery Order
· Maximum SDU Size
· SDU Format Information
· SDU Error Ratio
· Residual Bit Error Ratio
· Delivery of erroneous SDUs
· Transfer Delay
· Traffic Handling Priority
· Allocation/Retention Priority

However, not all of these attributes apply to every traffic class. Here is the table listing the applicability of the attributes according to the traffic class ([R5] section 6.3.3, table 2, “UMTS bearer attributes defined for each bearer traffic class”):

                                Conversational  Streaming  Interactive  Background
 Maximum bit rate                     X             X           X           X
 Delivery order                       X             X           X           X
 Maximum SDU size                     X             X           X           X
 SDU format information               X             X
 SDU error ratio                      X             X           X           X
 Residual bit error ratio             X             X           X           X
 Delivery of erroneous SDUs           X             X           X           X
 Transfer delay                       X             X
 Guaranteed bit rate                  X             X
 Traffic handling priority                                      X
 Allocation/Retention priority        X             X           X           X

As far as optimization and acceptance metrics are concerned, we will focus on information quality measurements.

4. Assumptions

4.1. General Assumptions

This document is written with the following assumptions:
§ Logging (call trace family) is an RNC feature available and fully operational at optimization and acceptance time
§ Hardware equipment and corresponding software tools are available and fully operational
§ For loaded conditions (if applicable), OCNS is a feature available and fully operational at optimization and acceptance time[1]

4.2. Specific Assumptions

4.2.1. Design criteria

The list of the design criteria must be provided in the acceptance document. Some of the network performance metrics are influenced by these criteria (e.g. call setup access success, dropped call rate, blocking rate). The list should include at least (but is not limited to):
§ cell edge reliability
§ coverage targets (per environment, per associated quality of service)
§ traffic assumptions (in kbps, per environment, uplink and downlink)
§ services to be offered (various transmission rates, per environment)
§ link budgets (including at the very least quality of service thresholds such as Eb/N0 targets per service or BLER and FER targets, radio margin assumptions, antenna assumptions)

4.2.2. Performance criteria

A reference should be provided to a table including all performance values to be reached. A threshold shall be provided for each of the metrics and is to be met during the acceptance. These values have to be mutually agreed by the customer and Nortel Networks and must result from the design criteria and the known or planned performance of the network elements. Values can be divided into two main groups: mainly-equipment-driven values (such as call setup time, transfer delay) and mainly-design-driven values (such as dropped call rate, call setup success rate). Please refer to [R1] for the former and to the specific contract design assumptions for the latter.

4.3. Warnings on metric definitions

The metric definitions in section 14.2 (page 38) do not fully take into account the measurement methodology. Counters may not be available or may not have a granularity accurate enough to perform the desired calculations. Some of the messages may only be accessible through test mobiles or logging features. In many cases, a context of communication must be considered in order to link the different messages and to itemize the metrics according to causes that are notified in other signaling messages during the call. All metrics are intended to be calculated on a network-wide basis or, at the very least, on a per-cluster basis.
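As an illustration of this cause-itemization, the minimal sketch below counts the causes carried by one message type across a decoded trace, assuming a generic record format (call identifier, protocol, message name, cause) that is not the output format of any particular tool; the call identifier is what links all messages of one communication context.

    from collections import Counter

    def itemize_by_cause(messages, protocol, message_name):
        """messages: iterable of (call_id, protocol, message_name, cause) tuples from a
        decoded trace, where call_id links all messages of one call context.
        Counts the causes carried by the requested message,
        e.g. protocol="RRC", message_name="RRCConnectionRelease"."""
        counts = Counter()
        for _call_id, proto, name, cause in messages:
            if proto == protocol and name == message_name:
                counts[cause] += 1
        return counts

The per-cause counts can then be used to split a metric (for example the dropped call rate) into the cause groups referred to by the notation introduced in the next section.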

4.4. Notation

In the following formulas, the messages are generally written in italics and with the following notation:

ProtocolAbbreviation_MessageName(Cause)

The abbreviations are detailed in the “Abbreviations and definitions” chapter, page 32. The message name is the one used in the corresponding standard. The cause does not necessarily correspond to a cause described in the standards (explanations are given as far as possible to relate to the causes described in the standards); it is generally used to indicate a group of causes that should be used to filter the related messages.

5. Test Areas

The coverage area for tests consists of drive-test routes, in-building test areas and special test areas within the designed service area. The set of driven roads, in-building measurement areas and special test areas shall be mutually agreed between the customer and Nortel Networks.

5.1. Routes

The drive routes shall cover the areas of interest and include some primary as well as secondary roads. In addition, routes shall be chosen to ensure a sampling of the main environments encountered in the designed service area (such as dense urban, urban, suburban and rural morphologies). To ensure repeatable results, the test routes shall be driven at speeds representative of normal subscriber behavior. During data collection, speed shall be carefully checked all along the drive tests.

Should the acceptance be done with different performance thresholds (such as high requirements in dense urban areas, medium requirements in suburban areas and low requirements in rural areas), the routes will be carefully designed to keep each drive test in an area of one and only one type of coverage requirement (e.g. avoid a drive overlapping dense urban and suburban areas in the previous example).

5.2. In-building Test Areas

As far as possible, in-building coverage from external site infrastructure should not be tested. However, building penetration margins specified in the design may be verified at street level using attenuators at the mobile (only the RSSI would be checked, since the Ec/N0 remains unchanged, being a ratio). Specific buildings requiring indoor testing shall be clearly defined (building location, service areas and test areas within the building, interaction with external coverage). The customer shall provide Nortel Networks with access to all defined test areas within these buildings.

5.3. Special Test Areas

Special test areas (such as trains, subways, tunnels, specific bridges, water areas, etc.) may be included in the performance acceptance. A specific process will be designed for such areas in agreement between the customer and Nortel Networks.

5.4. Warranty Zone

Maps accurately defining the warranty zone have to be provided and appended to the Performance Acceptance Test Plan. The tests will be confined to this zone and any measurement coming from outside this warranty zone shall be discarded. As a definition, the warranty area is the design coverage area diminished by deployment restrictions, exclusions and specific non-warranty areas. Typically, the warranty zone will be defined from the P-CPICH Ec/I0 (downlink) and the uplink budget. The warranty zone shall be determined from predictions coming from the Nortel Networks prediction tool (iPlanner) and shall be based on real site coverage (this means that all sites shall be CW-drive tested)[2]. In case some of the sites included in the warranty area were not up and running at the beginning of the first drive test, the warranty zone would have to be reviewed to include only up-and-running sites.

6. Simulated Traffic Load (if applicable)[3]

The RF performance of the UMTS network should be evaluated with mobile units operating under nominal loading conditions. For performance acceptance, tests may therefore be conducted with a simulated traffic load in complement of the unloaded tests. This traffic load shall correspond to the number of Erlangs and kbps for which the system has been designed, as defined in the Final Cell Plan. The general method of loading shall be as follows.

. However. The exact value of the attenuator shall be consistent with the equivalent noise rise at the cell site receiver generated by the traffic load (at the busy hour) used in the design assumptions. During load simulation testing. It has to be noted that this setting is pessimistic. It shall not exceed 70 % in any case (equivalent noise rise of 5. Uplink Uplink load simulation shall be conducted by using a fixed attenuator (most likely 3 dB. This traffic load shall correspond to the number of Erlangs and kbps for which the system has been designed as defined in the Final Cell Plan.1.23 dB). The general method of loading shall be as follows. 6. The equivalent noise rise (interference margin) shall be determined by expressing the number of channels n associated with the traffic load as a fraction of the pole point P: n = number of channels P = sector pole point r = n / P = fractional loading with respect to the pole point The design value of r shall be 50 % by default (equivalent noise rise of 3 dB). the above settings should be corrected by a factor allowing taking into account a given amount of load balancing between cells. This feature should be part of Nortel Networks’ UMTS RNS[4] and will be used to broadcast downlink orthogonal interference of appropriate level. the noise floor rise should be measured to verify the 3 dB theoretical value (at r = 50 % load). 6.2. OCNS settings will be detailed in a referenced document [FFS]. Actually.load in complement of the unloaded tests. Downlink Downlink load simulation shall be conducted by using Orthogonal Channel Noise Simulation (OCNS). This interference shall be consistent with the traffic load (at the busy hour) defined in the Final Cell Plan. but not to exceed 5. the probability of several overlapping cells to be altogether at their maximum load is close to zero in normal conditions (unsaturated network). Therefore. level and channel activity of interfering links shall be defined in a mutual agreement between the customer and Nortel Networks. The number.2 dB) inserted in the transmit path of the mobile unit. the amount of this factor is FFS.

7. Test Setup

All tests shall be performed using characterized and calibrated test mobiles that meet or exceed the minimum performance specifications specified in [R2] and [R6]. Test mobiles shall have fixed attenuators connected between the transceiver and the antenna to compensate for penetration loss, additional vehicle height and performance of the test vehicle antenna as appropriate (it is assumed that duplexers are used to separate uplink transmission and downlink reception). Additional attenuation may also be added to simulate uplink system load (if applicable) and/or building penetration loss (indoor coverage from outdoor sites). This assumes adequate packaging: screened boxes providing radio isolation at the attenuators need to be used with such a test set. The required test equipment list is detailed in “Appendix 1: Test Equipment List”, page 36.

8. Performance Metrics

This section describes the test procedures to be used to determine the system performance metrics as defined between the customer and Nortel Networks for the performance acceptance. All metrics will be measured along the drive test routes and in the specific test areas. All tests should be completed in a totally controlled environment under unloaded conditions. Depending on feature availability and on the load and/or ramp-up load of the network, tests under loaded conditions might be performed according to the design-assumed load (by default 50 % loading relative to the pole capacity). The preferred way of simplifying acceptance tests is to focus on only one combined metric linked to a simple pass-or-fail criterion.

8.1. Combined Metric: Successful Call Rate

Objective
The suggested method for acceptance is stating a pass-or-fail criterion based on a unique successful call metric. The successful call metric actually combines several of the typical metrics (see section 8.2, page 16): the call setup success rate, the call setup time, the quality thresholds, the dropped call rate, the successful handover rate and the successful release rate. It has to be emphasized that this metric should be applied to the 3G coverage only; inter-RAT hard handover must not be part of this metric.

Definition
For a call to be successful, it must correctly set up within a given time, hold during the average holding time (AHT), satisfy all defined quality criteria, if necessary successfully hand over (within the 3G coverage), and correctly release the resources. A call passes if it is successful, as per this definition. The successful call rate is defined as the ratio between the number of successful calls and the total number of attempts. Some examples of target values and quality thresholds for this metric can be found in section 14.1, page 37.

Measurement Method
Calls (either voice calls or data calls) will be chained during the entire test. For each call, the post-processing tool should come up with a binary result, “pass” or “fail”. The final successful call rate is then computed by dividing the number of successful calls by the total number of call attempts.

To have enough samples, it is recommended to have more than one mobile per tested application (e.g. two mobiles for voice calls, two mobiles for data calls). Depending on the measurement tool capabilities, mobile-originated and mobile-terminated calls could be mixed during the test (with a proportion to be agreed between the customer and Nortel Networks, e.g. 50 % mobile-originated, 50 % mobile-terminated). To get closer to a “one-shot” test, mobile-terminated and mobile-originated calls could be passed on different physical sets (e.g. one mobile for voice mobile-originated calls, one mobile for voice mobile-terminated calls). In this case, as per the definition, the impact of paging (paging success rate) must be considered in the target values (either a longer setup time or a lower success rate).

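A minimal sketch of how the combined metric could be derived from per-call pass/fail records is given below. The record fields and the setup-time parameter are illustrative assumptions, not the format or thresholds of any specific post-processing tool; the actual thresholds are those agreed in the Acceptance Test Plan.

    from dataclasses import dataclass

    @dataclass
    class CallRecord:
        setup_ok: bool        # call set up correctly
        setup_time_s: float   # measured setup time
        quality_ok: bool      # all agreed quality thresholds satisfied during the AHT
        handovers_ok: bool    # all required intra-3G handovers succeeded
        release_ok: bool      # resources correctly released at the end of the holding time

    def call_passes(c: CallRecord, max_setup_time_s: float) -> bool:
        return (c.setup_ok and c.setup_time_s <= max_setup_time_s
                and c.quality_ok and c.handovers_ok and c.release_ok)

    def successful_call_rate(calls, max_setup_time_s):
        attempts = len(calls)
        passed = sum(call_passes(c, max_setup_time_s) for c in calls)
        return passed / attempts if attempts else 0.0

    # Pass-or-fail acceptance verdict against the agreed threshold, for example:
    # accepted = successful_call_rate(calls, max_setup_time_s=agreed_setup_time) >= agreed_threshold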
8.2. Detailed Metrics

On the customer’s request, detailed metrics may be preferred to the successful call rate. In that case, the metrics that can be measured for acceptance are:
· Dropped Call Rate
· Call Setup Success Rate (originations)
· Call Setup Time (originations and terminations)
· Paging Success Rate (it might be combined with the call setup success rate in a “call setup success rate (terminations)” metric)
· Bit Rate (minimum, maximum, average, measured on both ways)
· File Transfer Time (one-way, both ways)
· Voice Quality (typically MOS- or DMOS-like tests)

For tests under loaded conditions, the following metrics may be added to the previous ones:
· Blocking (radio)
· RAB Unavailability Rate

Please refer to section 14.2, page 38, for detailed descriptions of each metric, including metrics used for optimization but not for acceptance.

9. System Optimization Process

The network optimization consists of four different stages, as described hereafter.

Configuration Audit
Configuration audits are the preparatory work that has to be done before the sites are on air. This includes system-wide activities and tests performed on each base station (noise floor measurements, code allocation, installation inspections, etc.).

Shakedown
Shakedown tests are done to verify the basic functionality of the RNS, rather than performance. Transmit power, code allocation, RRM algorithm parameters and basic call processing functions are tested on a per-sector basis. Shakedown tests are completed when correct codes are set up, power levels are measured, and basic call setup and softer handoffs are successfully executed during drive tests around the site. Some or all of this work might have already been done during installation, commissioning and hardware acceptance.

Cluster Optimization
Cluster optimization provides the first pass of optimization and will adjust settings for parameters, antenna tilts and antenna orientations where required. It is estimated to take between 1 and 1.5 days per site for one drive-test team[5] as a total across all rounds, depending on the number of problems encountered, i.e. approximately 6 weeks per cluster of approximately 30 cells with one drive-test team (all rounds included). To the extent possible, the schedule for Node-B installation and integration should be based on the clusters defined in the RF plan.

System Optimization (Control Route Analysis)
System optimization along the control routes is intended to further tune the system in preparation for commercial clients. Tests will use routes over multiple clusters. In large markets, some of the system optimization activities will be performed in parallel with the optimization of the remaining clusters.

For more information about planning and resource estimation, please see Appendix 3, “Overall planning for performance acceptance”.

Hereafter is a chart describing the process:
[Chart: system optimization process flow, not reproduced]

9.1. Configuration Audit

Each of the following criteria must be met prior to advancing to the next phase of testing.

Cell Site Information
The plan must identify the date on which each cell site will be installed and integrated.

Core Network Elements and Access Network Elements
The NEs (Node-Bs, RNCs, MSCs, WGs, GGSNs and other core network elements) should have passed the functional acceptance (all features tested and accepted). Additionally, network testing and inter-working must have progressed to the point that basic call routing (voice and data) has been validated (i.e. local calls can be completed) for each of the NEs. Further testing might occur to the extent that it does not interfere with the performance optimization tests.

Spectrum Clearance
The spectrum should be cleared over the whole service area. Noise floor measurements over the whole channel bandwidth for all the used carriers should reach at least the minimum requirements (the expected value for the maximum noise power allowed in the channel bandwidth would typically be around –105 dBm). The measurement is done with a spectrum analyzer (on uplink and downlink bands) and does not require the Node-B equipment to be on site; this step is not correlated with Node-B installation. The noise floor can also be checked with a drive test receiver (such as the one from Agilent referenced in section 13.2, page 36), indicating the noise floor of the system. A plot from at least 30 % of all the sites in the system (evenly geographically distributed) should be provided.
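For orientation, the –105 dBm figure can be related to the thermal noise integrated over one UMTS channel. The sketch below assumes a 3.84 MHz noise bandwidth and a roughly 3 dB receiver allowance; both are assumptions made here for illustration, not product specifications.

    import math

    K_T_DBM_PER_HZ = -174.0      # thermal noise density at about 290 K
    CHANNEL_BW_HZ = 3.84e6       # UMTS chip rate, used as the noise bandwidth

    def noise_floor_dbm(allowance_db: float) -> float:
        """Integrated noise floor over one UMTS channel plus an allowance in dB."""
        return K_T_DBM_PER_HZ + 10.0 * math.log10(CHANNEL_BW_HZ) + allowance_db

    print(f"kTB over 3.84 MHz          : {noise_floor_dbm(0.0):.1f} dBm")  # about -108 dBm
    print(f"with an assumed 3 dB margin: {noise_floor_dbm(3.0):.1f} dBm")  # about -105 dBm

A measured channel power noticeably above this level therefore points to external interference rather than thermal noise.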

RF Design Information
Details of the RF design must be provided, including:
· geographical maps from the RF design tool with site positions
· predictive coverage plots (Ec/N0, uplink (mobile transmitted power) and number of pilots)
· predicted soft/softer handoff plots
· CW drive test data
· on-the-field RF data: as-built antenna heights (at the radiating center), as-built antenna azimuths, as-built antenna tilts
· datafill information: all parameter values from the in-service MIB

Directions to Each Site
Written directions to the location of each site in the system should be provided (maps are welcome to pinpoint the location). The information should be gathered and placed in a binder for each site identified in the system. For border cells (cells in more than one cluster) the information can be photocopied and placed in the appropriate binders.

Cell Database and Datafill
A complete information set (soft copy, DRF format) should be provided for each cell:
· database information with coordinates (longitude, latitude), azimuth, tilt, antenna type, antenna model, number of antennas (use of diversity or not), cable lengths, equipment configuration, construction drawing and scrambling code
This information should be itemized on a per-sector basis where needed (such as antenna height, antenna type, downlink-uplink balance, etc.).

Construction Issue List
An issue list should be provided for each site (issues such as bad antenna orientation, bad tilts, cable problems, wrong connections, etc.) as well as the attached plan to solve them.

Antenna Sweep Tests
Antenna sweep data shall be provided for each cell. The associated sweep test acceptance forms shall be supplied for the cell site.

All the appropriate information stated above is to be provided as a soft copy prior to the Shakedown phase of the project.

9.2. Cell Shakedown

9.2.1. Objective

The objective of the cell shakedown is to ensure proper cell site operation in the integrated system. Integration tests are used to show construction completion and basic network readiness. These tests will be useful in verifying that the cells are properly cabled and are ready to support RF. Proper operation is verified through drive tests around the site and evaluation of antenna orientation, cable interconnection and correct code assignments.

9.2.2. Entrance Criteria

Each of the following criteria must be met on each cell of the cluster prior to starting the cell verification tests:
1. Both customer and Nortel Networks representatives must have accepted the hardware and the connectivity.
2. Integration tests must be passed and accepted.
3. Antenna sweep tests must be successfully completed.
4. Cleared spectrum.
5. RNS, NSS and WG datafill parameters must be verified with accuracy.

9.2.3. RF Basic Checking

Receiving Antenna Cabling
The transmission and especially the reception cabling have to be checked. This should have been done during I&C (Installation and Commissioning); if not, or if unsure, the cabling check has to be part of these tests. A given power (TBD) shall be transmitted in front of the receiving antennas (main and diversity) and the received power shall be checked at the Node-B, compared to the noise level recorded when the reception is idle (no signal transmitted). This assumes that the power coming from each antenna can be read independently at the Node-B or at the OMC-B.

Effective Transmit Antenna Orientation
Ec/N0 by code is analyzed to determine whether all antennas are pointing in the right directions. If an antenna is found to be incorrectly installed, a work order will have to be issued as soon as possible to fix the problem.

9.2.4. Drive Routes

The drive routes for shakedown are defined as circles around the cell at approximately 30 percent of the expected cell coverage area.

9.2.5. Test Drives

These tests are performed by driving circular routes around the base station.
Purpose: to test call setup in each cell, handoffs (softer) between cells, and to verify antenna orientation, scrambling code allocation for each sector according to the RF design, and neighbor list coherence with the engineering requirements.
Call type: voice call, file transfer (FTP).
Load: unloaded.
Data collected: P-CPICH Ec/N0, UE transmitted power, DL UTRA carrier RSSI, DL transmitted code power and UL RTWP. UE transmitted power will be analyzed to double-check possible reception cabling problems.

9.2.6. Test Areas (Indoor Sites)

These tests are performed in selected indoor areas where coverage is provided by a specific indoor site.
Purpose: to test indoor handoffs (softer, possibly soft and hard) between cells and, if applicable, to test handoffs (soft and possibly hard) between indoor cells and outdoor cells.
Call type: voice call, FTP.
Load: unloaded.
Data collected: P-CPICH Ec/N0, UE transmitted power, DL UTRA carrier RSSI, DL transmitted code power and UL RTWP.

9.2.7. Exit Criteria

Cell testing is completed when correct codes and reasonable power levels are measured, antenna orientations and code assignments are verified according to the RF design, and softer handoffs are successfully executed. Calls on both domains (Circuit Switched and Packet Switched) must have been completed successfully on the site.

9.2.8. Issue List

There should be no issue list remaining after the shakedown tests: all problems must have been addressed and resolved before exiting this phase. However, if issues remain, the warranty area might be revised before starting the next phase by excluding the areas affected by the remaining issues (see the impact on the warranty area, page 12). The purpose is to have a warranty area compliant with the exit criteria.

9.3. Cluster Optimization

9.3.1. Objective

The objective of cluster optimization is to perform RF design verification, identify and categorize network or coverage problems, optimize for best performance according to the acceptance criteria, and serve as a first pass of problem resolution. Due to the high number of possible mixes of applications, it is not practically possible to test all clusters thoroughly with all applications. The system performance metrics need not be met to exit cluster optimization, but they should be close enough to the targets to ensure acceptance during the system optimization stage.

Satisfactory system performance is quantified by the successful call rate metric showing no more than 50 % more failures than needed to reach the agreed system-wide threshold. Example: if the system-wide threshold is 95 % successful calls, each cluster would be required to reach at least 92.5 % successful calls. If detailed metric acceptance must be done, satisfactory system performance is quantified by the agreed metric values being within their target thresholds. Areas not meeting system performance must be included in the issue list.
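The 50 % relaxation on the failure budget can be computed mechanically; the short sketch below simply restates the worked example above (95 % system-wide, giving 92.5 % per cluster) and is not a contractual value.

    def cluster_exit_threshold(system_threshold: float, relaxation: float = 0.5) -> float:
        """Minimum per-cluster successful call rate, allowing `relaxation`
        (50 % by default) more failures than the system-wide failure budget."""
        failure_budget = 1.0 - system_threshold
        return 1.0 - failure_budget * (1.0 + relaxation)

    print(cluster_exit_threshold(0.95))   # 0.925, i.e. 92.5 % successful calls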

9.3.2. Entrance Criteria

Each of the following criteria must be met prior to starting optimization on each cluster:
· All cell sites in the cluster must be integrated and accepted. If one or more sites are not ready at cluster optimization time, the cluster must be redefined in order to exclude these sites (see the impact on the warranty area, page 12).
· Shakedown tests must have been completed on all cell sites in the cluster.
· All cell sites must have cleared the shakedown test exit criteria.
· The cluster must not have any UMTS mobiles operating other than those from the optimization team (i.e. controlled environment).
· System performance acceptance criteria are clearly defined and agreed by both the customer and Nortel Networks.
· System performance acceptance deliverables are defined and agreed by both the customer and Nortel Networks.
· Predicted coverage maps per cell are available.
· The databases of the system design and propagation tool accurately reflect the system configuration.
· The warranty area is clearly defined and agreed by both the customer and Nortel Networks. Maps of the warranty area are issued.
· If applicable, OCNS settings have been set up and agreed by both the customer and Nortel Networks.
· Maps of drive test routes and specific data collection spots (including indoor sites) have been reviewed and mutually agreed by both the customer and Nortel Networks.
· The network is locked from any hardware or software changes, except for maintenance purposes (e.g. hardware failure, critical software bug...).
· All logistical hardware and software for performance acceptance processing are on site and ready to work (e.g. calibration hardware and software, coordination procedures...).

9.3.3. Drive Routes

Engineering judgment should be used to determine these routes. Marketing concerns and design issues should also be considered when defining the various drive test routes. Drive test routes must also be designed according to the threshold definitions: if several threshold levels have been defined, then each drive test must be contained in areas where only one threshold set applies. Example: three different threshold levels are defined depending on morphology, one set for dense urban, one for suburban and one for rural; drive test routes shall then be designed exclusively within dense urban areas, or within suburban areas, or within rural areas.

On average, clusters should comprise around 30 cells. Drive routes for each cluster will be mutually agreed by the customer and Nortel Networks and will be chosen as follows.

Optimization Routes
Optimization routes will be selected from primary (e.g. highways) and secondary roads (e.g. city streets of more difficult morphology) within the cluster. In some instances, the engineering team may determine that additional roads should be included in the cluster to ensure adequate performance of critical service areas that are not included in the primary and secondary road classifications.

These routes should encompass all land classifications (morphologies) and areas of engineering concern, and should pass through all inward-facing sectors in the cluster. These drive test routes must be mutually agreed between the customer and Nortel Networks. Drive test routes will not be defined through known bad coverage areas in the test area (i.e. areas with missing cells or areas not planned to be covered). It should take approximately five to ten hours to drive all optimization routes within a typical cluster.

Control Routes
Control routes will be used for demonstrating system performance on an individual cluster basis and on a system-wide basis. These routes are used primarily for performance progress monitoring and should reflect typical network performance. The control routes should be a subset of the optimization routes and should require no more than 3 hours of driving for a 30-cell cluster. These routes should also encompass all roads of primary concern to the engineering staff and cover all land classifications defined within the cluster. When possible, control routes should be chosen to be contiguous with those from neighboring clusters. Control routes are extended as each cluster is exited.
Note: drive times may vary, since they depend on the size of the cluster areas and on how closely the cells are spaced.

9.3.4. Test Drives

To simplify the process and be closer to the user experience, only one type of test setup will be used for all drive tests. However, iterations of these drive tests have to be planned to improve the network; these iterations are necessary mainly because of the dynamic aspect of the CDMA technology. Each iteration will have a different purpose. Each drive test (except the first one, the pilot optimization drive) will get at least four mobiles: two dedicated to voice calls and two dedicated to data calls. The tests shall follow the following process during each applicable drive:

· Voice calls (CS domain), mobile-originated and mobile-terminated (MO«MT): set up the call; hold the call during the holding time [MOS]; release the resources [when the holding time is off].
· PS data calls, mobile-originated only (MO only): set up the call; hold the call during the holding time [ping RTT, file transfer time]; release the resources [when the holding time is off].
· CS data calls, mobile-originated (MO): set up the call; hold the call during the holding time; release the resources [when the holding time is off].

PS calls will follow a test pattern composed of an initial ping, subsequent pings, file transfer by FTP (either uplink and/or downlink) and idle states (to activate the downsize/upsize feature, also known as “always-on”). Transferred files must be sized according to the RAB rates, the holding time and the test patterns (a sketch is given at the end of this overview).

The optimization engineers shall use their judgment in deciding whether or not all test drive iterations are required, and how often and in what order each drive test needs to be performed. In the same spirit, several of the drive tests proposed hereafter can be grouped in the same run (e.g. pilot optimization and radio optimization, grouping the scrambling code analyzer with the test mobiles).

Note: RNC logging must be set up for all drives except the pilot optimization drive.

It has to be emphasized that the following number of drive tests is higher than it should be with a mature technology using remote optimization techniques and tools as much as possible. This shall be updated and integrated in the current process when these techniques and tools are available.
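A PS test-pattern script along these lines could look like the sketch below. It uses only standard tooling (system ping and FTP from the Python standard library); the server address, RAB rate, timings and file name are hypothetical placeholders, and the real pattern, file sizes and logging hooks are defined by the agreed test plan and collection tool.

    import subprocess
    import time
    from ftplib import FTP
    from io import BytesIO

    FTP_HOST = "192.0.2.10"      # hypothetical test server
    RAB_RATE_KBPS = 384          # assumed downlink RAB used to size the file
    TRANSFER_TIME_S = 60         # portion of the holding time devoted to the FTP transfer
    IDLE_TIME_S = 30             # idle period to trigger downsize/upsize ("always-on")

    def ping(host: str, count: int = 1) -> None:
        # Unix-style ping; on other platforms the flag differs.
        subprocess.run(["ping", "-c", str(count), host], check=False)

    def ftp_download(host: str, remote_file: str) -> int:
        buf = BytesIO()
        with FTP(host) as ftp:
            ftp.login()                                   # anonymous login for the sketch
            ftp.retrbinary(f"RETR {remote_file}", buf.write)
        return buf.getbuffer().nbytes

    # File sized so the transfer roughly fills the allotted time at the nominal RAB rate.
    file_size_bytes = RAB_RATE_KBPS * 1000 // 8 * TRANSFER_TIME_S   # about 2.9 MB at 384 kbps
    print(f"target file size: {file_size_bytes} bytes")

    ping(FTP_HOST)                                        # initial ping
    for _ in range(3):
        ping(FTP_HOST)                                    # subsequent pings
    ftp_download(FTP_HOST, "test_2880kB.bin")             # hypothetical pre-sized file on the server
    time.sleep(IDLE_TIME_S)                               # idle state (always-on downsize/upsize)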

Pilot Optimization Drive
Route: optimization routes.
Load: unloaded.
Purpose: to determine the actual pilot coverage for each cluster and solve the main RF propagation issues (pilot shoot-up, pilot pollution, scrambling code overlap...).
Equipment: pilot scanner (e.g. Agilent E7476A).
Applications: none.
Data collected: scrambling code analysis (Ec/N0, Delay Spread, Time), top-N analysis with a window of 20.
Analyses: per-PCPICH coverage; best PCPICH coverage; 2nd, 3rd, 4th, 5th, 6th PCPICH coverage; number of PCPICH over a given Ec/N0 (typically –12 dB in IS-95 networks)[6].

Radio Optimization Drive
Routes: optimization routes.
Load: unloaded.
Purpose: mainly ensure RF coverage control and RF optimization (antenna azimuths, tilts, power settings, neighbor list tuning).
Process: each call is set up, held during the agreed holding time and released; mobile-originating and mobile-terminating for voice, mobile-originating only for data.
Applications: voice, combined ping and FTP.
Data collected: Ec/N0, Eb/N0, UE transmitted power, DL transmitted code power, DL UTRA carrier RSSI, UL RTWP, average number of radio links.
Metrics calculated: number of dropped calls, bit rates (FTP only), file transfer times, voice quality.

Loaded Uu Optimization Drive (if applicable)[7]
This test drive is performed on the optimization routes.
Load: simulated load as per page 13 (only if OCNS is available).
Purpose: RF optimization under load.
Process: each call is set up, held during the agreed holding time and released; mobile-originating and mobile-terminating for voice, mobile-originating only for data.
Applications: voice, combined ping and FTP.
Data collected: Ec/N0, Eb/N0, UE transmitted power, DL transmitted code power, DL UTRA carrier RSSI, UL RTWP, average number of radio links.
Metrics calculated: number of dropped calls, bit rates (FTP only), file transfer times, transfer delay (FTP only), voice quality.

Optimization Check and Fine Tuning Drive
This test drive is performed on the optimization routes.
Load: simulated load as per page 13 if the loaded Uu optimization drive has been done; unloaded otherwise, and only if substantial RF changes have been made.
Purpose: check and fine-tune the optimization results.
Process: each call is set up, held during the agreed holding time and released; mobile-originating and mobile-terminating for voice, mobile-originating only for data.
Applications: voice, combined ping and FTP.
Data collected: Ec/N0, Eb/N0, UE transmitted power, DL transmitted code power, DL UTRA carrier RSSI, UL RTWP, average number of radio links.
Metrics calculated: number of dropped calls, bit rates (FTP only), file transfer times, ping RTTs, voice quality.

Acceptance Data Collection and Final Cluster Check Drive
This test drive is performed on the control routes.
Load: unloaded; if drive #2 was done under loaded conditions, then simulated load as per page 13.
Purpose: check and collect the acceptance measurements for this cluster.
Process: each call is set up, held during the agreed holding time and released; mobile-originating and mobile-terminating for voice, mobile-originating only for data.
Applications: voice (originated and terminated), combined ping and FTP.
Data collected: Ec/N0, Eb/N0, UE transmitted power, DL transmitted code power, DL UTRA carrier RSSI, UL RTWP, average number of radio links.
Metrics calculated: successful call rate (pass-or-fail), number of dropped calls, dropped call rate, call setup successes or failures, call setup success rate, call setup times, paging success rate, bit rates (FTP only), file transfer times, ping RTTs, voice quality.

Supplementary drive tests might be needed if additional tests are required (such as IMSI attach (resp. detach) success rate, IMSI attach (resp. detach) time, location update / registration success rate, and so on).

9.3.5. Data Analysis / Problem Resolution

The following are the basic parameters adjusted during cluster optimization.
1. Neighbor list optimization (based on unloaded lists):
· correct neighbor list entries (minimize neighbor list length)
· ensure proper prioritization of entries
This can be done with a pilot scanner drive test system and the correct processing tool (see page 36, and the sketch after this list).
2. Coverage problems (can include holes, and failures due to multiple or rapidly rising pilots):
· adjust antenna azimuths, heights and types
· downtilt / uptilt antennas
· decrease Node-B power to create a dominant server and reduce pilot pollution[8]
Antenna type and azimuth changes must be approved by the local RF engineer.
3. Network timer checking and tuning.
4. Handoff parameters checking and tuning.
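Neighbor list clean-up from scanner data can be largely automated. The sketch below ranks the scrambling codes detected in bins where the cell under study is best server and trims the list to a maximum length; the –18 dB detection floor and the 32-entry cap are illustrative assumptions, not product limits.

    from collections import defaultdict

    def build_neighbor_list(scanner_samples, cell_sc, max_entries=32, min_ecno_db=-18.0):
        """scanner_samples: iterable of (best_server_sc, detected_sc, ecno_db) tuples,
        one per detected code and measurement bin along the drive route.
        Returns candidate neighbor SCs for cell_sc, ranked by best observed Ec/N0."""
        best = defaultdict(lambda: float("-inf"))
        for best_server, detected, ecno in scanner_samples:
            if best_server == cell_sc and detected != cell_sc:
                best[detected] = max(best[detected], ecno)
        ranked = sorted(best.items(), key=lambda kv: kv[1], reverse=True)
        return [sc for sc, ecno in ranked if ecno >= min_ecno_db][:max_entries]

The resulting ranking also gives a natural prioritization of the entries; the final list still has to be reviewed against the RF design before being applied.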

9.3.6. Exit Criteria

In order to exit each cluster, the successful call rate results[9] must be close enough[10] to the agreed thresholds (see examples in section 14.1, page 37), or the corresponding cluster must be put on the issue list. A sign-off form with the issue list must be created and agreed with the customer. Cluster optimization sign-off will be performed by the regional customer manager or representative.

9.3.7. Reports

The following is a list of reports available to the customer. The Performance Result Report and the issue list will be provided for cluster exit on the control routes of each cluster. The other data will be available for review during optimization and provided to the customer upon network performance acceptance.
1. Performance Result Report
· Drive data information
· Geographical information
· Successful call rate results
2. Performance Measurement Report (informative only)
· Maps of recorded values
3. Test Environment Report
· List of all cell parameters when exiting a cluster
· List of test equipment
· Load simulation (OCNS) details (one time and if applicable)
4. Issue list

9.3.8. Issue List

The issue list shall contain at a minimum:
· the complete list of problems
· the opening date for each item
· owners and dates of resolution
Depending on the project organization, one global issue list or one issue list per regional entity might be created.

9.4. System Optimization

9.4.1. Objective

The objective of system optimization (or control route analysis) is to optimize the sum of all clusters covering an entire region. The system performance optimization starts with two optimized clusters and ends with all the clusters in the region. The entire region is fine-tuned, with remaining and new issues resolved; in addition, action plans for future work are defined. The cluster optimizations should have brought the system to a level where most of the optimization problems are already solved. The system performance metrics are calculated for the entire region and serve as the warranted values as per the agreement between the customer and Nortel Networks. A region can be defined along administrative borders or as an economical entity, but in any case the meaning of “region” has to be clearly defined between the customer and Nortel Networks.

9.4.2. Entrance Criteria

All clusters to be optimized must have met the cluster exit criteria or have been signed off by the regional RF manager or designate with a mutually agreed issue list.

9.4.3. Drive Routes

System optimization is typically performed on the control routes. However, additional optimization drive test routes may be defined as required for diagnostic testing of new problem areas that are identified as a result of new clusters being switched on.

9.4.4. Drive Tests

Each of the five drive tests used in Cluster Optimization (page 23) can be used by the optimization engineers as required. RNC logging must be set up for all drives, except the pilot optimization drive.

Radio optimization checking
Hopefully, the first drive test (done by coupling test mobiles and pilot scanner, equivalent to the pilot optimization drive and radio optimization drive merged) will show very few optimization settings to change.

Optional optimization drives
After possible changes in parameter values for optimization, another drive might be necessary to capture the new tuning and check the resulting improvement. This is also a time that can be used to finish up the tuning of difficult spots.

Acceptance data collection
The next drive shall then be used to collect acceptance data to check whether the successful call rate metric meets the acceptance criteria.

9.4.5. Data Analysis / Problem Resolution

Since the system optimization is based on a network composed of already tuned clusters, most of the following parameters should have been optimized:
· total cell power and P-CPICH power (typically the ratio between both is kept constant)
· antenna azimuths, tilts, heights and types
Due to the cross-effects between clusters, neighbor list contents, neighbor prioritization and handoff parameters might need changes at the borders between clusters.

9.4.6. Exit Criteria

In order to exit each region, the system performance metrics must meet the threshold requirements. A sign-off form with the issue list must be created and agreed to by both parties. System optimization sign-off will be performed by the regional RF manager or representative.

9.4.7. Reports

Interim reports
During the system optimization phase, Nortel Networks will be preparing Control Routes Testing reports as permitted by time. These should include:
· updated issue lists
· successful call rate results
· list of test equipment
· final antenna tilts and azimuths

8. . the issue list might be global or there might be one issue list per market. Issue List The issue list shall contain as a minimum: · complete list of problems · owners and dates of resolution for issues in the process of being solved · new or remaining untreated issues Depending of the organization of the project. · List of test equipment · Load simulation (OCNS) details (if applicable) 4. The data collected for these reports will be from control and/or optimization routes. Test Environment Report · List of all cell parameters as final network datafill. 10.1. Performance Measurement Report · Maps of recorded values 3.Final deliverables At the end of the System Optimization tests.4. Performance Result Report · Drive data information · Geodetic information · Successful call rate results 2. Nortel Networks must submit the following documentation: 1. 10. Reports Following are the details of the various reports that are generated throughout the optimization process. Issue list 9. Performance Result Report The Performance Result Report will include measurements that can be used to determine the overall quality of system performance and whether the measured network meets. or is trending towards meeting the agreed system performance metric values.

1. · Test setups[11] when collecting the data. the total for launch in the region. Key Performance Indicators (KPIs) Reports The KPI graphs or maps will show both uplink and downlink results after discarding the (1-r)% of the bins having the worst measurement results (r % coverage reliability). handset problems or network failures.g. . e.1. Geodetic Information The Performance Result Report should include geographical information in the form of a map showing the control routes as well as areas of acceptable or not acceptable quality in addition to areas not evaluated. this information should be indicated. The number of cells currently undergoing optimization should also be included. 10. as well as a list of these cells with their references. · Number of cells included vs. · Version of the software loads and other equipment-related information. where cluster optimization is underway and where cluster optimization is complete · control routes · areas/places of coverage problems · any relevant information that helps interpreting the results 10. The objective of the geodetic information is to show the extent of the area for which data has been collected and to ensure that the coverage is maintained as the system is optimized.2.The first part of the Performance Result Report will provide the following information for the latest set of data collected and reported: · Date of report and name of person who completed the report. Where failures are believed to be caused by issues other than RF. · Outstanding actions that will improve or affect overall performance of the area with target dates.1. The report should also include information on the length and duration of drive routes used to collect the latest displayed results and any pertinent information on events. The report should also include information on the number of calls of each type made for the latest displayed results and any pertinent information on events that have influenced the measurements. The following information needs to be displayed on the map: · site locations · area tested. including drive test routes.

including test mobiles. Supplemental measurements are used for optimization and especially for troubleshooting. Test Environment Report This report is gathering all information on the test environment: · List of test equipment. · List of all cell parameters with their final values. The performance measurement reports are gathering these metrics used for optimization. · Antenna tilts and azimuths · If applicable. It will begin with the first cluster and grow as new clusters are added. Issue List One global Issue List will be kept by region.10.2.3. Drive test data may include the following measurements: · Dropped calls and unsuccessful call attempt plots. Some of the problems identified on the issue list may not be regional but rather national problems. 10. Performance Measurement Reports Measurement scope is wider than the ones done for System Performance Metric (successful call rate) calculations.4. They are mainly collected in areas where system performance objectives are not met. Each ·1= ·2= ·3= issue will be assigned a priority: currently impacting optimization work must be resolved prior to commercial launch other issues . load simulation details (including OCNS settings). counts and percentages · BLER plots · Ec/N0 plots · Eb/N0 plots · Mobile Transmit Power plots · Mobile Receive Power plots · Number of Radio Link plots (handoff state) 10. It may also include details on issues that affected the network during performance acceptance measurements. National problems can be tracked down on a project basis. · Equipment setting details.

Abbreviations and definitions 12. It is reasonable to do an acceptance under unloaded conditions and a close follow-up of the launched network to optimize the parameters as the traffic is ramping up. it is critical to react quickly to adjust the right parameters to avoid brutal service degradations.1. all the tests will be kept under unloaded conditions. Assumption: the acceptance without OCNS will be done on early networks (opening before the first General Availability version or soon after). 12. Abbreviations ABR AHT BBLER CCAIT CBR CC CPICH CRE CRT CS CSSR CST CW DCR DL DRF Available Bit Rate Average Holding Time Background (traffic class) BLock Error Rate Conversational Real Time (traffic class) CDMA Air Interface Tester (Qualcomm©) Constant Bit Rate Call Control (see [R7] for details) Common PIlot CHannel Cell Reference Event Conversational / Real Time Circuit Switched Call Setup Success Rate Call Setup Time Continuous Wave Dropped Call Rate DownLink Data Request Form . This means that there is no way to do the tests under loaded conditions. optimization services have to be included or proposed to the customer to assist its team in handling the increasing load during the first stage of the network.11. OCNS unavailability At the time this document is written. at the best. Depending on the contract. the UMTS OCNS feature is not scheduled to be implemented before the second commercial UTMS release. number of network performance acceptances will have to be done without this feature. due also to the tension on the mobile device market. Therefore. The traffic load of such networks when opening commercially will be probably very light. According to the experience on previous CDMA networks. In this case.

Dst FDD FER FFS FQI GGSN GoS HHO IIE IPPM ISO kbps KPI LA mCR MIB MO MP MT NAS NB NE NSS OCNS OSI OVSF PCPICH PCR PDU PS PSR QoS RA RAB RANAP RAT RFC RL RNC RNS RR RRC RRM RSSI RTT RTWP SSAP Destination (IP address of the destination source) Frequency Division Duplex Frame Erasure Rate For Further Study Frame Quality Indicator Gateway GPRS Support Node Grade of Service Hard HandOver Interactive (traffic class) Information Element Internet Protocol Performance Metrics International Standard Organization kilobit per second (1000 bits per second) Key Performance Indicator Location Area Minimum Cell Rate Management Information Base Mobile Originated Measurement Point Mobile Terminated Non Access Stratum Node-B Network Element Network Sub-System Orthogonal Channel Noise Simulation Open System Interconnect Orthogonal Variable Spreading Factor Primary Common PIlot CHannel Peak Cell Rate Protocol Data Unit Packet Switched Paging Success Rate Quality of Service Registration Area Radio Access Bearer Radio Access Network Application Part Radio Access Technology Request For Comments Radio Link Radio Network Controller Radio Network System Radio Resource Management Radio Resource Control Radio Resource Management Received Signal Strength Indicator Round Trip Time Received Total Wideband Power Streaming (traffic class) Service Access Point .

please look at RFC-2330 [R15] Unspecified Bit Rate User Equipment UpLink UTRAN Registration Area Universal Terrestrial Radio Access Network Variable Bit Rate Wireless Gateway 12.000 bits geographic bins are used to calculate area availability. Typical AHT is around 2 min. That is.000.. definitions Area availability area availability refers to the percentage of geographical bins over the specified coverage area that meets the system performance objective as agreed while compensating for the appropriate penetration margin. or left generic.024 bytes = 8192 bits b = bit kb = kilobit = 1. “with a payload of B octets”). if any geographical bin within the specified coverage area fails to meet the KPI objective while compensating for the appropriate penetration margin.e.g. kb. then a service problem exists..2. exactly what type of packet is meant). Mb Bin size . For more details. Average time during which the call is sustained. This duration is typically used to define the call duration for the acceptance tests. KB. partially defined (e. b. type P might be explicitely defined (i. Average Holding Time B. B = Byte = 8 bits KB = Kilobyte = 1.000 bits Mb = megabit = 1.SCR SDU SDUER SHO SM Src SSHO Str TBC TBD TBR TD TDD TE/MT TrCH Type-P packet UBR UE UL URA UTRAN VBR WG Sustainable Cell Rate Service Data Unit Service Data Unit Error Rate Soft HandOff Session Management (see [R3] for details) Source (IP address of the source host) Softer HandOff Streaming (traffic class) To Be Confirmed To Be Defined To Be Referenced Transfer Delay Time Division Duplex Terminal Equipment / Mobile Terminal (parts of the UE) Transport CHannel Allows defining metrics and methodologies generically. further on.

a responsible party. Typically around 30 sites that serve a contiguous geographic area The final set of data collected for a cluster that demonstrates the overall performance of the cluster along the specified drive routes Area with clear spectrum (verifying the requirements on noise floor level) and where all working mobiles are test mobiles (no other mobile should be able to roam in the area) A segment of the major roads within a cluster that will be targeted for optimization. Area defined to be tested (consists of all drive test . Key Performance Indicator: considered as a metric that is of particular importance for indicating the level of quality of service Quality of Service: referring to the UMTS QoS as provided by the UMTS Bearer (see [R5]) Composed of the active set and the monitored set (with a maximum of the six best received neighbors) Area defined to be covered (may vary according to the service) This expression should not be mistaken with the service area defined in the 3GPP TS.6: the received energy per chip on the primary pilot channel divided by the power density in the band Final issue of the RF design specifying the design coverage area A list of system deficiencies (e. etc. and the estimated resolution date. construction.) noted during optimization. operation.g. Drive routes will be classified as follows: Optimization Routes: drive routes selected to encompass all morphology classifications and areas of engineering concern defined in a cluster. During data collection. Control Routes: a subset of the optimization routes that include the roads of principle concern for the regional engineering staff and cover most land classifications within the cluster. a priority. coverage.Call Cluster Cluster exit data Controlled environment Drive Route Ec/N0 Final Cell Plan Issue list KPI QoS Reported set Service Area Test Area The bin size is 200m x 200m (or otherwise mutually agreed to) Successfully established instantiation of a service that is provided through the network to a given user. CPICH EC/N0 as defined in the standard [R8] section 5. concerning only a limited group of cells. the test routes shall be driven at speeds mutually agreed to as being representative of normal subscriber behavior.1. It will include a description of the deficiency.

this part should be much more detailed in the future. exclusions and specific non-warranty areas (e.. possibly with Qualcomm-CAIT or similar software (if available on WCDMA/UMTS) · Commercial mobiles (3rd-party vendor.) · UMTS RFO-DIVA (test mobile post-processing software) · Logging post-processing software (also dependant on logging availability in the equipment) · UMTS end-to-end tool (voice quality and data quality) · Radcom protocol analyzer with UTRAN-SPY (post-processing software) Test UE Circulator Circulator x dB attenuator for simulated load (if applicable) y dB attenuator for penetration losses Antenna Possible DL bypass . Appendix 1: Test Equipment List Most of the tools and equipment are being studied and developed as of this document is written. Garmin. all specific in-building test areas.) 13. Depending on the availability and the very first tests. · Agilent E7476A W-CDMW Drive Test System (currently available) · UMTS Diva-Agilent post-processing tool for SC scanner (currently available) · Test-mobiles (3rd-party vendor. Magellan. specific bridges. Lowrance... all special test areas) The design coverage area diminished by deployment restrictions.. tunnels. availaibility to be specified) · Test isolation / insertion Loss Kit (within a suitcase): AITS (Air Interface Test Suitcase) · Vehicle GPS receiver (Trimble. availability to be specified).Warranty Area roads.g.

50 % mobile-terminated if no other repartition is better suited to the client’s needs.1. Medium coverage: areas where most ressource-consuming services are expected to be less used allowing less density in the coverage 3. Let us assume that three levels of coverage are designed. Dense coverage: key marketing areas where coverage has been designed to support all services at high quality levels 2. Successful call rate metric[12] Per-cluster Region threshold Coverag Service threshol (no more Call success conditions e density d than 50 % vs.Sample test set up for Isolation and Losses (x and y shall be defined according to contract values) 14.5 MO call setup delay < 4s . The repartition for CS-voice call originations is supposed to be by default 50 % mobile-originated. The repartition for PS call is 100 % mobile-originated. Light coverage: areas where basic services are provided with very good quality but no high ressource-consuming services are expected. Please refer to [R1] for example of Product-KPI values. Hereafter is an example for successfull call rate. KPI Indicative values Most of the Network KPIs must be stated on a case by case basis to include all network design and contract specificities. according to the marketing targets: 1. Appendix 2: Performance Metrics 14. Region) CS-voice dense 95 % 92 % over the call MOS ³ 3.

Acceptance Metrics Warnings: for each metric. the total number of established calls. Please refer to the design assumptions and [R1] for values to be met within the warranty area.2. Dropped call rate Objective Measure the reliability of a call within the network coverage (which is strictly limited to the warranty zone).2.1.medium light 85 % 80 % 78 % 70 % PS (I/B dense 64k-UL / 64k-DL) [13] medium light 95 % 92 % 85 % 80 % 78 % 70 % MT call setup delay < 5s Holding time = 2 min (without call drop) same as dense same as dense Initial 500 B (4 kb) ping RTT (call setup) < 5. Detailed metrics for acceptance and optimization 14. Definition[14] The dropped call rate is the ratio of abnormally terminated calls vs. when it is specified Iub or Iu (for spying with protocol analyzers).5 s Continuous 500 B (4 kb) ping RTT < 0. a measurement source is given as far as possible. The measurement source. may vary depending on the tool or feature availability. it is not recommended at this time to include CSdata tests in acceptance criteria. which can be expressed by the following formulas: . Moreover.5 s 800 kb file transfer < 20 s 4 Mb file transfer < 85 s Holding time = 2 min (without call drop) same as dense same as dense Due to tool capabilities. 14. it does not mean that both interface spying are equivalent but that one or both at the same time may be required to get the enough information to compute the metric. as well as the measurement methodology.

Please refer to the design assumptions and [R1] for values to be met within the warranty area. Protocol analyzers on Iubs and post-processing tools Call setup Success rate (Mobile Originated) Objective Measure the service availability when user wants to activate it. During the same drive. congestion and re-establishment reject.[15] Measurement method To verify the coverage continuity. Trace mobile with post-processing 2.008 [R6]. The short call duration will be the Average Holding Time (AHT) as agreed between both parties (the customer and Nortel Networks). unspecified PS Domain: where Abnormal_Release_Cause refers to the following IE failure cause values appearing in the preceeding RRC_Connection_Release as defined in 3GPP TS 25. Definition The call setup success rate is the number of successfully established calls . the time and the geographical location when the failure occurs will be recorded along each control route. short calls will be also performed. Potential source of measurements: 1. Cause IE. long calls will be performed (call are started and maintained as long as possible). The number of dropped calls. the AHT is between 90s and 120s). different from: · Cause #16: Normal Call Clearing · Cause #31: Normal.CS Domain: where Abnormal_Release_Cause is any cause as defined in 3GPP TS 24. The AHT may differ from one service to another (typically for speech service.331 [R10]: unspecified. pre-emptive release.

Measurement method The test mobile tool will be configured to originate a sequence of calls to a known non-busy number or IP address (depending on the application). It is the time it takes for a mobile originated (respectively terminated) call to be set up across the network. This is therefore defined . Each call will be terminated when established. Note 1: this metric could be itemized by traffic class (assuming that the originating cause is correctly assigned when setting up the call) Note 2: the domain (CS or PS) information appears in the RRC_Initial_Direct_Transfer message following in the call setup message flow (please refer to [R1] for more details). which can be expressed by the following formulas: CS Domain: * the Establishment Cause IE is defined in 3GPP TS 25. PS Domain: * the Establishment Cause IE is defined in 3GPP TS 25.331 [R10].vs. Trace mobile with post-processing 2. Potential source of measurements: 1. Definition The call setup time is the time between the initial call establishment request and the call establishment confirmation. the total number of call setup attempts. Protocol analyzers on Iubs and post-processing tools Call Setup Time (Mobile Originated and Terminated) Objective Measure the responsiveness of the network when the user wants to access it. Note 1: this metric could be itemized by traffic class (assuming that the originating cause is correctly assigned when setting up the call) Note 2: the domain (CS or PS) information appears in the RRC_Initial_Direct_Transfer message following in the call setup message flow (please refer to [R1] for more details). Please refer to [R1] for values to be met within the warranty area.331 [R10].

For each call. the total number of paging attempts.331 [R10]. Protocol analyzers on Iu (mobile originated calls) and PSTN interface (CS mobile terminated calls with post-processing software Paging SUccess rate Objective Measure the network efficiency in finding the appropriate mobile. Definition The paging success rate is the ratio of number of paging responses vs. Please refer to the design assumptions and [R1] for values to be met within the warranty area. This can be defined by the following formula: [21] * the Establishment Cause IE is defined in 3GPP TS 25. ¤ the CN Domain Identity IE is defined in 3GPP TS 25. [20] PS Domain. all terminating causes must be grouped together. the call setup messages will be time-stamped and recorded and the time difference will be calculated as described in the definitions. only .by the following formulas: CS Domain. mobile terminated call: Non applicable as of this document was written. mobile terminated call: [18] PS Domain.331 [R10]. Potential source of measurements: 1. Trace mobile with post-processing 2. mobile originated call: [19]. [17] CS Domain. mobile originated call: [16]. Measurement Method This test is accomplished in conjunction with the call setup success rate (for origination) and with the paging success rate (for termination).

Definition The Mean Opinion Score is a subjective evaluation of voice quality [R14].pages from a given CN domain shall be considered for this metric (meaning there is a Paging success rate for CS and a paging success rate for PS) Measurement method The test mobile will be in idle mode. Mean Opinion Score (MOS) Objective The objective of this measurement is to provide a measurement of voice quality. A sequence of incoming call will be set up. Bit Rate (data only) . Potential source of measurements: An end to end tool shall be used to compare the received voice sample with the original one. Measurement Method Please refer to [R18]. It is the result of an absolute rating (without reference sample) on a scale going from 1 (bad) to 5 (excellent). Each call will be terminated when established. The coverage reliability r must be considered in the measurements and therefore the resulting metric must be computed on the r % of the best samples only (the worst (1-r) % of the samples must be discarded). Potential source of measurements: Trace mobile with post-processing Voice Quality. Please refer to [R1] for values to be met.

Please refer to [R1] for thresholds to be met. Bit rates for downlink and bit rates for uplink will be processed separately. Definition The bit rate is the number of bit received at the application SAP (usually brought down to the TCP level) over a given period of time (usually measured in kbps). The minimum. Potential source of measurements: Trace mobile with post-processing File Transfer Time (Data Only) Objective This metric will measure the transmission efficiency between the source application SAP (usually at TCP level) and the destination application SAP (usually at TCP level).r )% of the bins on the downlink and the worst (1. The best application for this type of test looks like a file transfer through FTP. If r is the coverage reliability. . maximum and average values will be calculated for each way.r )% of the bins on the uplink shall be excluded from analysis. Measurement Method The measurement is taken from long calls. If key or strategic marketing locations included in the planned designed coverage area fall within the r % of bins that are discarded. For performance acceptance. A big enough file will be chosen in order to keep transferring during the entire drive test. Minimum. the worst (lowest bit rate) (1. maximum and average shall be measured to get an idea of the transmission regularity.Objective This metric measures the throughput in kbps delivered to the application (usually at TCP level). these locations shall be put on the issue list. The requested data rate transfer will be chosen according to the design assumptions in the driven areas. Please refer to [R1] for the values to be met. the remaining r % of the bins of each of the downlink and uplink shall satisfy the bit rate requirements.

Optimization Metrics In complement of the performance metrics that will be used (partly or totally) for network performance acceptance. The data processing should follow the same methodology than the one used for bit rate processing. This definition has the great advantage not to require any synchronization between both ends for the measurement tool. assuming file sizes over 100 kb. except the first delay between the time of the request and the transfer time of the first SDU. Measurement Method The measurement can be done with the same method than the bit rate measurement and should be done during the same tests.2. However. Potential source for measurements: An end-to-end tool must be used to monitor the test at both ends (checking that the file was received without errors).2. .Definition The easiest way of measuring the transfer delay is to measure a so-called file transfer time defined as the time to receive without errors a file of a given size (between the first bit of the first received packet and the last bit of the last received packet completing the error-free transmitted file). 14. where delay for an SDU is defined as the time from a request to transfer an SDU at one SAP to its delivery at the other SAP. other metrics are measured and computed for optimization purpose without being formally linked to the acceptance process.” The file transfer time is then equivalent to adding all the SDU transfer delays for completing the file at destination. This section is supposed to give a list and details about these metrics. since the last SDUs of a burst may have long delay due to queuing. The list is probably non-exhaustive and will have to be adjusted after the first UMTS network performance optimizations. we can consider this time as negligible compared to the total file transfer time.107 [R4] the following definition applies for the transfer delay: “Transfer delay (ms) Definition: Indicates maximum delay for 95th percentile of the distribution of delay for all delivered SDUs during the lifetime of a bearer service. whereas the meaningful response delay perceived by the user is the delay of the first SDU of the burst. NOTE 3: Transfer delay of an arbitrary SDU is not meaningful for a bursty source. According to 3GPP TS 23.

Potential source of measurements: 1. Protocol Analyzers on Iub with post-processing software UE Tx Power Objective Troubleshoot for best server reception and uplink interference reduction.331 [R10]). Scrambling-code scanner (such as the Agilent E7476A) with a postprocessing software 2. Measurement Method The measurements shall be taken during any drive tests (all opportunities are welcome).331 [R10]). Definition As reported by the UE in the Intra-frequency measurement quantity IE (defined in TS 3GPP 25. RNC logging and post-processing software (measurement report analyses) 4.RF Performance Primary CPICH Ec/N0 Objective Troubleshoot for best server reception and interference reduction. . Definition As reported by the User Equipment in the UE Internal measured results IE (defined in TS 3GPP 25. Trace mobile and post-processing software 3.

Definition within a given cell. Measurement Method The measurements shall be taken during continuous drive tests. Protocol Analyzers on Iub with post-processing software Average Number of RLs in the active set Objective Neighbor list and SHO tuning. RNC logging and post-processing software (measurement report analyses) 2.Measurement Method The measurements shall be taken during any drive tests (all opportunities are welcome). Potential source of measurements: 1. Trace mobile and post-processing software 2. Trace mobile Average Number of PCPICHs in the Monitored set Objective Neighbor list and SHO tuning. Potential source for measurements: 1. Protocol Analyzers on Iub with post-processing software 3. Definition Average calculated from the number of monitored cell set measured by the mobile and transmitted to the UTRAN in the measurement report .

Definition within the test areas (either cluster or system) Measurement Method The measurements shall be taken during continuous drive tests. Measurement Method[22] The measurements shall be taken during continuous drive tests. Definition The Block Error Rate (BLER) is the Transport Channel Block Error Rate as .messages or by UTRAN counters. Trace mobile Average Number of RLs per call Objective Neighbor list and SHO tuning. Protocol Analyzers on Iub with post-processing software 3. RNC logging and post-processing software (measurement report analyses) 2. Protocol Analyzers on Iub with post-processing software BLER (UL and DL) Objective Check the radio link transmission quality. Potential source for measurements: 1. Potential source for measurements: 1. RNC logging and post-processing software (measurement report analyses) 2.

Protocol Analyzer spying the Iub with post-processing tool Network Access Performance RAB Unavailability Objective RAB resource shortage. Potential source of measurements (TBC): 1. Potential source of measurements (TBC): 1. RNC logging with post-processing software 2.215 [R8]) and as reported by the UE in downlink in the RRC_Measurement_Report(quality_measured_results_list). Definition [23] Measurement Method The measurements shall be taken from drive tests used to measure call setup success rate. This metric is classified as a Tier-2 metric. Measurement Method The measurements shall be taken from all continuous drive tests.measured by UTRAN in uplink (as specified in 3GPP TS 25. . Trace Mobile with post-processing software (DL-BLER only) 2. Protocol Analyzers on Iub with post-processing software Radio Blocking Rate Objective Radio resource shortage.

· “Requested Guaranteed Bit Rate Non Available” (#21). · “No Remaining RAB” (#31).Definition * where: RANAP_RAB_AssRsp is the RANAP “RAB Assignment Response” message.331 [R10]). · “Requested Traffic Delay Non Achievable” (#22). · “No resource available” (#114) Details may be found in [R11] Measurement Method This metric applies only on loaded networks. Iu spying with protocol analyzers and post-processing software Mobility Performance UTRAN ð2G HHO Success Rate Objective Inter-RAT HHO reliability. · “Requested Guaranteed Bit Rate Uplink Non Available” (#36). RNC logging with post-processing software 2. . · “Requested Maximum Bit Rate Downlink Non Available” (#33). Measurements shall be done during the same drive tests as the ones used for call setup success rate (short calls). · “Requested Maximum Bit Rate Uplink Non Available” (#34). · “Requested Maximum Bit Rate Non Available” (#20). the failure will not be counted. Congestion causes are the following failure causes[24]: · “Requested Traffic Class Non Available” (#18). RABs_failed_SetupOrModify is the “RABs failed to setup or modify” part of the previous message. Potential source of measurements (TBC): 1. · “Requested Guaranteed Bit Rate Downlink Non Available” (#35). Definition Limits: if the mobile cannot resume the connection to UTRAN (as specified in 3GPP TS 25.

Definition To be measured on the 2G (GSM) part as: Measurement Method Measurements should be made on previously selected areas of the network as agreed between both parties (the customer and Nortel Networks). Within this context. GSM Call Trace? ATM ATM Cell Rate . RNC logging with post-processing software 2. Sufficient drives must be done to get enough samples. Potential source of measurements: 1. it is highly recommended to avoid including such a metric in the network performance acceptance contracts. Dual-mode test mobile with post-processing (GSM messages) 2.Measurement Method Due to the cross-network specificity of this feature. It would be much better to isolate the test cases as a demonstration purpose of HHO reliability and correct tuning. Iub spying with protocol analyzers and post-processing software Average UTRAN ð2G HHO time Objective Inter-RAT HHO tuning. Potential source of measurements (TBC): 1. special drive-tests must be done in selected areas handing calls down to 2G.

Interface spying (either Iub or Iu) with protocol analyzers and postprocessing software ATM Cell Delay Variation (CDV) Objective Check the ATM transmission constancy. Measurement Method Measurement should be made on all drives.” The definition of this metric in ITU-T I. Passport counters.Objective Check the validity of the negotiated PCR for the ATM CoS CBR. It can be related to cell conformance at the MP. As described in ITU-T I. Definition The CDV we will consider in the document is the one defined as 1-point cell delay variation considering the Measurement Point (MP) at the end of the communication link (either a UE or the fixed extremity of the call).356 [R12]: “The 1-point CDV parameter describes variability in the pattern of cell arrival (entry or exit) event at an MP with reference to the negotiated peak cell rate 1/T (see Recommendation I. and to network queues. the couple (MCR.” . 2. Potential source of measurements (TBC): 1. it includes variability introduced (or removed) in all connection portions between the cell source and the specified MP. the SCR for VBR.356 [R12] is: “1-point cell delay variation is defined based on the observation of a sequence of consecutive cell arrivals at a single MP.371). Definition The ATM Cell Rate is defined in this document as being the effective cell rate in number of cells per second. PCR) for ABR.

Measurement Method Measurement should be made on all drives. Interface spying (either Iub or Iu) with protocol analyzers and postprocessing software IP IP One-Way Packet Loss Objective Check the IP transmission reliability. Interface spying (either Iub or Iu) with protocol analyzers and postprocessing software ATM Cell transfer Delay (CTD) Objective Measure the performance of the ATM transmission as far as the transmission delay through the transmission network is concerned. Potential source of measurements (TBC): 1. CRE1 at time t1 and CRE2 at time t2. Potential source of measurements (TBC): 1. Passport counters ? 2. t2 – t1. The value of Tmax is for further study. Definition As defined in the ITU-T I.356 [R12]: “Cell Transfer Delay (CTD) is the time. Passport counters. between the occurrence of two corresponding cell transfer events. 2. but should be larger than the largest practically conceivable cell transfer delay.” Measurement Method Measurement should be made on all drives. where t2 > t1 and t2 – t1 £ Tmax. .

the total number of transmitted packets which can be translated in the following formula: * where the One_Way_Packet_Loss is defined in RFC-2680 [R17] as: “>>The *Type-P-One-way-Packet-Loss* from Src to Dst at T is 0<< means that Src sent the first bit of a Type-P packet to Dst at wire-time* T and that Dst received that packet. End-to-end data quality tool IP Maximum One-Way Delay Objective Check the IP transmission efficiency. Potential source of measurements (TBC): 1. infinite)<< means that Src sent the first bit of a Type-P packet . >>The *Type-P-One-way-Packet-Loss* from Src to Dst at T is 1<< means that Src sent the first bit of a type-P packet to Dst at wire-time T and that Dst did not receive that packet. Definition The maximum value of the one-way delay metric for the 95th percentile of the distribution of delay for all delivered packets during the lifetime of the bearer service. >>The *Type-P-One-way-Delay* from Src to Dst at T is undefined (informally. The one-way delay is defined in RFC-2679 [R16] as: “For a real number dT. >>the *Type-P-One-way-Delay* from Src to Dst at T is dT<< means that Src sent the first bit of a Type-P packet to Dst at wire-time* T and that Dst received the last bit of that packet at wire-time T+dT.” Measurement Method Measurement should be made on all data drives.Definition This metric can be defined as the ratio of lost packets vs.

3. End-to-end data quality tool IP Maximum Jitter Objective Check the IP transmission constancy. Measurement Uncertainty This is mostly applicable when a set of different detailed metrics are chosen for acceptance criteria.3.) Let p be the metric to estimate from the data (for example p could be FER or .” Measurement Method Measurements should be made during long data call drive tests. (Please see [R19] for more details) 14. Potential source of measurements (TBC): 1.to Dst at wire-time T and that Dst did not receive that packet. End-to-end data quality tool 14. Potential source of measurements (TBC): 1. frames. etc. Measurement Method Measurement should be made on all data drives. Tolerance Interval of One Data Population All of the performance metrics that is used to assess the quality of a network is estimated using a sample of events (calls. Definition The IP Maximum Jitter is defined as the maximum variation of the one-way delay metric as defined here above.1.

The pass criterion is selected such that the estimate is less than or equal to the target value of the metric plus a Tolerance Interval. We assume that the measurements in all subsets (or data populations) are statistically independent. etc). The Tolerance Interval is the value expressed as a number of standard deviates that achieve a given Tolerance Interval. then the tolerance interval is given by . Tolerance Interval of Several Data Populations In some cases. number of events that meet a certain criteria such as frame erasures. and a standard deviation given by: The accuracy of the estimate is thus influenced by the sample size. the estimate converges to the target value as the sample size approaches infinity or when the target value is close to 1 (this indicates that the success events occur with high probability or certainty and the sample size doesn’t need to be large to estimate the metric). calls. For acceptance test we will use a Tolerance Interval of 99%. Let the sample sizes of the data populations be n1. dropped calls. the estimate must satisfy On the other hand. If the FER is estimated from 100 samples. That is. . the estimate should satisfy where p – is the target value for the metric (between 0 and 1). where N is number of data populations.e. the number of standard deviates z = 2. z – is the number of standard deviates that corresponds to a given Tolerance Interval.. Or. it may be desired to calculate the tolerance interval associated with the cluster.2. etc. this measurement error must be taken into account when selecting pass/fail criteria for qualifying the network. Example: Assume the metric under consideration is FER and the target/design value is 2%.sample size. In this case. n3. and k is the number of successes (i. For example. where Thus.dropped call rate. then and the estimate must satisfy 14.). Clearly.3. … nN.. Then p is estimated as the ratio: It can be shown that the distribution of this error is approximately normal (Gaussian) with zero mean.) in the sample. Let n be the total number of events (frames. etc. it may be desired to calculate the tolerance interval of the system-wide metric from the performance metrics of subsets of the systems (e. it may be desired to calculate the tolerance interval of more than one data population with different sample sizes. which is composed of several samples with different sizes.g.33. In this case. n . if the sample size is 1000. Therefore. This section provides a formula to calculate the effective tolerance interval. clusters). n2. with more samples giving a more accurate estimate (smaller deviation of the error).

0033. the standard deviation of all data populations combined equals The tolerance interval of the total population is given by (please see section 14.954 0.33 for 99% confidence.01.739 0. 10000 and 10000.036 1.643 0.0033.674 0. we note that the tolerance interval decreases by the square root of number of populations. 0.3. and 0.175 1. confidence levels Confidence 70% 71% 72% 73% 74% 75% 76% 77% 78% 79% 80% 81% 82% 83% 84% 85% 86% 87% 88% 89% 90% 91% 92% z 0. Let the target/design criterion be p = 2%. The composite tolerance interval is given by 14.524 0.915 0.994 1.842 0.772 0.080 1.613 0. We note that if the sample size in all data populations is the same. The tolerance intervals that are associated with the FER measurements in the 4 clusters are given by 0.01.806 0.1 for details) where z = 2. then (For equal sample size in all data populations) Comparing this result to that of a single data population case.583 0.227 1. 1000. Example: Assume the FER measurements are collected from 4 clusters. Table z-values vs. 0.405 .The standard deviation of data population i is given by: Since the data populations are statistically independent.553 0.126 1.341 1.282 1.3. Let the sample sizes of the clusters be 1000.878 0.3.706 0.

Brisbane. IS-95 Experience The following figures are coming from an IS-95 network performance acceptance project and are given for information only. Optimization resources: 15.881 2.1. 15. a technician to collect data and check the RSSI on the received path (main and diversity) at the Node B. teams Shakedown team A shakedown team is composed of three persons: 1.93% 94% 95% 96% 97% 98% 99% 1.5 Shakedown teams 2 (3 to 4 sites per team per day) Drive test teams 4 Analysis Computers 5 Weeks 9 Shifts 2 (7/17 and 16/2) (Example of CDMA-IS95 Telstra network performance acceptance. This is considered as one of the typical project but extrapolation to UMTS must be done carefully.751 1. Australia.326 For any other value. please refer to any table of the standard normal distribution.555 1.2.1.476 1. 1999) 15. . Cluster Size (in number of sites) 14-20 Sites (total number for the tested 225 network) Analyst Engineers 5.1.645 1.054 2. More accurate numbers will be provided with time after the first UMTS network acceptance projects. Appendix 3: Overall planning for performance acceptance 15.1. a driver 2.

this task may need a fully dedicated technician. This could be the on-duty switch (RNC) technician. The operator runs data collection and controls the drive test equipment for all calls (typically voice and data). either site or measurement equipment or network problems) This estimation includes: § pre-driving routes (feasibility. configuration audit.1.g.5 site / team / shift[25] . Extrapolation to UMTS This does NOT include: § driving time to & from site § re-do if maintenance problems (equipment failure. Depending on the workload. as well as managing the backups and the archives.3. clear description to insure all subsequent Shakedown 1 h / site / team Cluster optimization Data 1. In some cases. S/he analyses the data from each drive tests and makes change recommendations. 15. At-the-Switch (RNC) team A logging operator has to be scheduled as well to start logging. stop logging. possible realtime analyses with the on-site drive test team. this might need an extra operator to handle calls (e. A technician has also to be present to do datafill changes.Drive Test Team A drive test team is composed of one operator and one driver. The experienced engineer has also a coordination role: coordinate drive test teams (briefing before the drive tests. The operator is also in charge of downloading and post-processing the resulting data and managing the backups and the archives. Analysis Team The analyst team is composed of one experienced engineer. advise to drive test teams when they encounter problems. debriefing after the drive tests. specific data calls such as web browsing). He would be also in charge of downloading and post-processing the logged data.

h/w failure. this would lead to the following estimations: Cluster optimization Data collection 3 sites / team / Beside analysis time. This would lead to around 20 days for a cluster of 20 sites and 30 days for a cluster of 30 sites for one team (one shift a day). this includes: shift § generating change requests § reviewing datafill dumps .collection Cluster optimization Analyses System optimization drives will follow the same path) § time lost because of equipment problems (either drive test equipment failure or network equipment failure) § “sanity check” driving (check after s/w upgrade without much analysis needed) § driving to and from clusters Beside analysis time. one around 10 to 20 clusters. data collection problems such as bad configuration. each around shift a day) 20 to 30 sites (200 to 600 sites) These estimations are based according to the process described in the document. However. if time or money supersedes network optimization (and potentially network tuning accuracy). here is an order of importance of the drive tests: Drive test type (see process) Pilot Optimization Cluster optimization mandatory System optimization recommended (merged with next)[26] mandatory if OCNS available mandatory Radio Optimizaton mandatory Loaded Uu Optimization if OCNS available Acceptance data mandatory collection Taking the most risks on optimization and reducing at the minimum the drive tests.) 2 weeks estimation assuming a system of (for one team. this includes: § generating change requests § reviewing datafill dumps § advising drive test teams when they call in with a problem 1 site / engineer / § advising rigging crews shift § analysis abortion due to miscellaneous reasons (h/w failure at site. data missing...

.) 1 week estimation assuming a system of around 10 to 20 clusters. this would reduce the overall time frame to 3 months and with two shifts a day. data missing. we would have the following schedule: With a 7-days-a-week schedule. this includes: engineer / shift § generating change requests § reviewing datafill dumps § advising drive test teams when they call in with a problem § advising rigging crews § analysis abortion due to miscellaneous reasons (h/w failure at site. we reach the minimum time of a little bit less than two months. each around 20 to 30 sites (200 to 600 sites) This would lead to 10 to 15 days for clusters of 20 to 30 sites. data collection problems such as bad configuration. With all these assumptions. following the I&C and the network element acceptance steps § the cluster optimization can be started simultaneously with the stability period § the ending of the cluster optimization (last few clusters) allow to do the first drive test for system optimization. Warning: The above figures are only estimations and shall be updated according to the first on-field experiences... data missing.Cluster optimization Analyses System optimization § advising drive test teams when they call in with a problem § advising rigging crews § analysis abortion due to miscellaneous reasons (h/w failure at site.. data collection problems such as bad configuration. h/w failure. h/w failure.) 2 sites / Beside analysis time. An example of possible planning would be the following one: Let us assume the following: § the network area (system) contains 300 sites § this area is divided in 10 clusters § 4 clusters can be optimized simultaneously (meaning without impacting each other) § 4 analyst engineers and 4 drive test teams are on site and ready § the shakedowns have been performed on a site by site basis. .

™ End of DOCUMENT ˜ .

Sign up to vote on this title
UsefulNot useful