Contents
1. INTRODUCTION ... 6
1.1. OBJECT ... 6
1.2. Scope of this document ... 6
1.3. Key Performance Indicator values and Definitions ... 7
1.4. Audience for this document ... 7
2. Related documents ... 7
2.1. Applicable documents ... 7
2.2. Reference documents ... 7
3. General Methodology ... 9
4. Assumptions ... 10
4.1. General Assumptions ... 10
4.2. Specific Assumptions ... 11
4.2.1. Design criteria ... 11
4.2.2. Performance criteria ... 11
4.3. Warnings on metric definitions ... 11
4.4. Notation ... 11
5. Test Areas ... 12
5.1. Routes ... 12
5.2. In-building Test Areas ... 12
5.3. Special Test Areas ... 12
5.4. Warranty Zone ... 12
6. Simulated Traffic Load (if applicable) ... 13
6.1. Uplink ... 13
6.2. Downlink ... 14
7. Test Setup ... 14
8. Performance Metrics ... 14
8.1. Combined Metric: Successful Call Rate ... 15
8.2. Detailed Metrics ... 16
9. System Optimization process ... 16
9.1. Configuration Audit ... 19
9.2. Cell Shakedown ... 20
9.2.1. Objective ... 20
9.2.2. Entrance Criteria ... 20
9.2.3. RF basic checking ... 20
9.2.4. Drive Routes ... 21
9.2.5. Test Drives ... 21
9.2.6. Test Areas (indoor Site) ... 21
9.2.7. Exit Criteria ... 21
9.2.8. Issue List ... 21
9.3. Cluster Optimization ... 22
9.3.1. Objective ... 22
9.3.2. Entrance Criteria ... 22
9.3.3. Drive Routes ... 23
9.3.4. Test Drives ... 23
9.3.5. Data Analysis / Problem Resolution ... 26
9.3.6. Exit Criteria ... 26
9.3.7. Reports ... 27
9.3.8. Issue List ... 27
9.4. System Optimization ... 27
9.4.1. Objective ... 27
9.4.2. Entrance Criteria ... 28
9.4.3. Drive Routes ... 28
9.4.4. Drive Tests ... 28
9.4.5. Data Analysis / Problem Resolution ... 28
9.4.6. Exit Criteria ... 28
9.4.7. Reports ... 29
9.4.8. Issue List ... 29
10. Reports ... 30
10.1. Performance Result Report ... 30
10.1.1. Geodetic Information ... 30
10.1.2. Key Performance Indicators (KPIs) Reports ... 30
10.2. Performance Measurement Reports ... 31
10.3. Test Environment Report ... 31
10.4. Issue List ... 31
11. OCNS unavailability ... 32
12. Abbreviations and definitions ... 32
12.1. Abbreviations ... 32
12.2. Definitions ... 34
13. Appendix 1: Test Equipment List ... 36
14. Appendix 2: Performance Metrics ... 37
14.1. KPI Indicative values ... 37
14.2. Detailed metrics for acceptance and optimization ... 38
14.2.1. Acceptance Metrics ... 38
14.2.2. Optimization Metrics ... 44
14.3. Measurement Uncertainty ... 53
14.3.1. Tolerance Interval of One Data Population ... 53
14.3.2. Tolerance Interval of Several Data Populations ... 54
1. INTRODUCTION
1.1. OBJECT
This document is intended to describe a generic procedure for performing
UMTS network performance optimization and acceptance tests as well as
the main performance metrics that require field assessment for the set of
services provided through this network.
Procedures, conditions and performance metrics for actual contracts may vary from those referenced, presented and described in this document.
This document is a generic and internal document. It provides guidelines
and a base for writing the actual Network Performance Acceptance Test
Plan.
It must not be communicated directly to the customer.
1.2. Scope of this document
This document is intended to define two key processes before launching a
UMTS network: performance optimization and performance acceptance.
Performance Optimization
A detailed system optimization process is needed to ensure that each of
the System Performance Metrics can be achieved. As such, the objectives
of the performance optimization process are to:
define a uniform optimization procedure
define uniform measurement standards (test set-up, route selection, test area selection, etc.)
set up clear measurement methods agreed between the customer and Nortel Networks.
3. General Methodology
UMTS systems introduce a wide range of applications. Just as roads, streets or buildings are selected by both parties to be geographically representative, a set of representative applications and metrics has to be mutually agreed between the customer and Nortel Networks.
The suggested methodology aims to combine measurement efficiency with representativeness of the user experience.
[Table: representative applications and metrics per UMTS traffic class (Conversational, Streaming, Interactive, Background).]
4. Assumptions
4.1. General Assumptions
This document is written with the following assumptions:
Logging (call trace family) is an RNC feature available and fully
operational at optimization and acceptance time
Hardware equipment and corresponding software tools are available and
fully operational.
5. Test Areas
The coverage area for test consists of drive-test routes, in-building and
special test areas within the designed service area. The set of driven
roads, in-building measurement areas and special test areas shall be
mutually agreed between the customer and Nortel Networks.
5.1. Routes
The drive routes shall cover the areas of interest and include some
primary as well as secondary roads. In addition, routes shall be chosen to
ensure a sampling of the main encountered environment in the designed
service area (such as dense urban, urban, suburban and rural
morphologies).
During data collection, the test routes shall be driven at speeds representative of normal subscriber behavior. To ensure repeatable results, speed shall be carefully checked throughout the drive tests.
Should acceptance be performed against different performance thresholds (such as high requirements in dense urban areas, medium requirements in suburban areas and low requirements in rural areas), the routes shall be carefully designed to keep each drive test within an area subject to one and only one set of coverage requirements (e.g. in the previous example, a drive overlapping dense urban and suburban areas would not be allowed).
7. Test Setup
All tests shall be performed using characterized and calibrated test
mobiles that meet or exceed the minimum performance specifications as
specified in [R2] and [R6].
Test mobiles shall have fixed attenuators connected between the
transceiver and the antenna to compensate for penetration loss,
additional vehicle height, and performance of the test vehicle antenna as
appropriate (it assumes that duplexers are used to separate uplink
transmission and downlink reception). Additional attenuation may also be
added to simulate uplink system load (if applicable) and/or building
penetration loss (indoor coverage from outdoor sites).
This assumes adequate packaging: screened boxes providing radio isolation at the attenuators need to be used with such a test set.
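As an informal illustration (not part of the test procedure), the following sketch simply adds up the contributions mentioned above to size the fixed attenuator. The parameter names and default values are assumptions for the example, not values prescribed by this document.

```python
# Illustrative only: budget for the fixed attenuator placed between the test
# mobile transceiver and the vehicle antenna. All figures are example
# assumptions; actual values come from the link budget agreed for the contract.
def attenuator_value_db(building_penetration_db=0.0,
                        vehicle_antenna_gain_db=2.0,
                        vehicle_height_correction_db=1.5,
                        simulated_uplink_load_db=0.0):
    """Total fixed attenuation compensating for the test vehicle set-up."""
    return (building_penetration_db
            + vehicle_antenna_gain_db
            + vehicle_height_correction_db
            + simulated_uplink_load_db)

# Example: outdoor drive test simulating indoor coverage (12 dB penetration)
# plus a 3 dB margin emulating uplink system load.
print(attenuator_value_db(building_penetration_db=12.0,
                          simulated_uplink_load_db=3.0))
```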
The required test equipment list is detailed in Appendix 1: Test Equipment
List, page 36.
8. Performance Metrics
This section describes the test procedures to be used to determine the
system performance metrics as defined between the customer and Nortel
Networks for the performance acceptance.
All tests should be completed in a totally controlled environment under
unloaded conditions.
Depending on feature availability and on the load and/or ramp-up load of the network, tests under loaded conditions might be performed according to the design-assumed load (by default, 50% loading relative to the pole capacity).
All metrics will be measured along the drive test routes and in the specific
test areas.
The preferred way to simplify acceptance tests is to focus on only one combined metric linked to a simple pass-or-fail criterion.
8.1. Combined Metric: Successful Call Rate
This combined metric, the successful call rate, actually combines several of the typical metrics (see section 8.2, page 16).
Definition
For a call to be successful, it must be correctly set up within a given time, hold during the average holding time (AHT), successfully hand over if necessary (within the 3G coverage), satisfy all defined quality criteria and correctly release the resources.
The successful call rate is defined as the ratio of the number of successful calls to the total number of attempts.
As per this definition, the successful call rate metric combines the call setup success rate, the call setup time, the dropped call rate, the quality thresholds, the successful handover rate and the successful release rate.
It has to be emphasized that this metric should be applied to the 3G
coverage only and inter-RAT hard handover must not be part of this
metric.
Some examples for target values and quality thresholds for this metric can
be found in section 14.1, page 37.
Measurement Method
Calls (either voice calls or data calls) will be chained during the entire test.
To have enough samples, it is recommended to have more than one
mobile per tested application (e.g. two mobiles for voice calls, two mobiles
for data calls).
To get closer to a one-shot test, mobile-originated and mobile-terminated calls could be mixed during the test (with a proportion to be agreed between the customer and Nortel Networks, e.g. 50% mobile-originated, 50% mobile-terminated). In this case, the impact of paging (paging success rate) must be considered in the target values (either a longer setup time or a lower success rate).
Depending on the measurement tool capabilities, mobile-terminated and
mobile-originated calls could be physically passed on different physical
sets (e.g. one mobile for voice mobile-terminated calls, one mobile for
voice mobile-originated calls).
For each call, the post-processing tool should produce a binary result: pass or fail. A call passes if it is successful, as per the definition above.
The final successful call rate is then computed by dividing the number of successful calls by the total number of call attempts.
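As an informative illustration of the pass-or-fail classification described above, the following sketch derives the successful call rate from per-call results. The CallRecord fields and the setup-time threshold are assumptions for the example; the real criteria are those agreed between the customer and Nortel Networks.

```python
from dataclasses import dataclass

@dataclass
class CallRecord:
    setup_ok: bool          # call correctly set up
    setup_time_s: float     # measured call setup time
    held_full_aht: bool     # call held during the average holding time (AHT)
    handovers_ok: bool      # all handovers within 3G coverage succeeded
    quality_ok: bool        # all agreed quality criteria met (e.g. MOS, BLER)
    released_ok: bool       # resources correctly released

def call_passes(call: CallRecord, max_setup_time_s: float = 10.0) -> bool:
    """Binary pass/fail result for one call, as per the definition above."""
    return (call.setup_ok
            and call.setup_time_s <= max_setup_time_s
            and call.held_full_aht
            and call.handovers_ok
            and call.quality_ok
            and call.released_ok)

def successful_call_rate(calls: list[CallRecord]) -> float:
    """Ratio of successful calls to the total number of call attempts."""
    if not calls:
        return 0.0
    return sum(call_passes(c) for c in calls) / len(calls)
```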
Cluster Optimization
Cluster optimization provides the first pass of optimization and will adjust parameter settings, antenna tilts and antenna orientations where
required. It is estimated to take between 1 and 1.5 days per site for one
drive-test team[5] as a total across all rounds, depending on the number
of problems encountered (i.e. approximately 6 weeks per cluster of
approximately 30 cells with one drive-test team, all rounds included). For
more information about planning and resource estimation, please see
appendix 6, page 38.
System Optimization (Control Route Analysis)
System optimization along the control routes is intended to further tune
the system in preparation for commercial clients. In large markets, some
of the system optimization activities will be performed in parallel with the
optimization of remaining clusters. Tests will use routes over multiple
clusters.
Hereafter is a chart describing the process:
All cell sites must have cleared the shakedown test exit criteria.
The cluster must not have any UMTS mobiles operating other than those
from the optimization team (i.e. controlled environment).
System performance acceptance criteria are clearly defined and agreed
by both the customer and Nortel Networks.
System performance acceptance deliverables are defined and agreed by
both the customer and Nortel Networks.
Predicted coverage maps per cell are available.
The system design and propagation tool databases accurately reflect the system configuration.
The warranty area is clearly defined and agreed by both the customer
and Nortel Networks. Maps of the warranty area are issued.
If applicable, OCNS settings have been set up and agreed by both the
customer and Nortel Networks.
Maps of drive test routes and specific data collection spots (including
indoor sites) have been reviewed and mutually agreed by both the
customer and Nortel Networks.
The network is locked from any hardware or software changes, except for maintenance purposes (e.g. hardware failure, critical software bug).
All logistical hardware and software for performance acceptance processing are on site and ready to work (e.g. calibration hardware and software, coordination procedures).
9.3.3. Drive Routes
Engineering judgment should be used to determine these routes.
Marketing concerns and design issues should also be considered when
defining the various drive test routes.
Drive test routes must also be designed according to the threshold
definitions: if several threshold levels have been defined, then each drive
test must be contained in areas where only one threshold set applies.
Example: Three different threshold levels are defined depending on
morphologies: one set for dense urban, one for suburban and one for
rural. Then, drive test routes shall be designed exclusively within dense
urban areas or within suburban areas or within rural areas.
On average, clusters should comprise around 30 cells. Drive routes for
each cluster will be mutually agreed by the customer and Nortel Networks
and will be chosen as follows:
Optimization routes
Optimization routes will be selected from primary (e.g. highways) and secondary roads (e.g. city streets of more difficult morphology) within the cluster. In some instances, the engineering team may determine that additional roads should be included in the cluster to ensure adequate performance of critical service areas that are not included in the primary and secondary roads.
[Table: call scenarios per service (voice MO only, voice MO/MT, CS data calls, PS data calls): set up the call, hold the call during the holding time [MOS], release the resources when the holding time expires.]
Measurements collected: number of dropped calls, voice quality, file transfer times, ping RTTs, call setup successes or failures, call setup times.
Metrics calculated: successful call rate (pass-or-fail), dropped call rate, bit rates (FTP only), call setup success rate, paging success rate.
9.3.7. Reports
Following is a list of reports available to the customer. The Performance
Result Report and issue list will be provided for cluster exit on the control
routes of each cluster. The other data will be available for review during
optimization and provided to the customer upon network performance
acceptance.
1. Performance Result Report
Drive data information
Geographical information
Successful call rate results.
2. Performance Measurement Report (informative only)
Maps of recorded values
3. Test Environment Report
List of all cell parameters when exiting from a cluster
List of test equipment
Load simulation (OCNS) details (one time and if applicable)
4. Issue list
9.3.8. Issue List
The issue list shall contain at a minimum:
complete list of problems
opening date for each item
owners and dates of resolution
Depending on the project organization, one global issue list or one issue list per market may be maintained.
Final deliverables
At the end of the System Optimization tests, Nortel Networks must submit
the following documentation:
1. Performance Result Report
Drive data information
Geodetic information
Successful call rate results
2. Performance Measurement Report
Maps of recorded values
3. Test Environment Report
List of all cell parameters as final network datafill.
List of test equipment
Load simulation (OCNS) details (if applicable)
4. Issue list
9.4.8. Issue List
The issue list shall contain as a minimum:
complete list of problems
owners and dates of resolution for issues in the process of being solved
new or remaining untreated issues
Depending on the organization of the project, the issue list might be global or there might be one issue list per market.
10. Reports
Following are the details of the various reports that are generated
throughout the optimization process.
10.1. Performance Result Report
The Performance Result Report will include measurements that can be
used to determine the overall quality of system performance and whether
the measured network meets, or is trending towards meeting the agreed
system performance metric values. The data collected for these reports
will be from control and/or optimization routes.
The first part of the Performance Result Report will provide the following
information for the latest set of data collected and reported:
Date of report and name of person who completed the report.
Number of cells included vs. the total for launch in the region. The
number of cells currently undergoing optimization should also be included,
as well as a list of these cells with their references.
Test setups[11] when collecting the data.
Version of the software loads and other equipment-related information.
Outstanding actions that will improve or affect overall performance of
the area with target dates.
10.1.1. Geodetic Information
The Performance Result Report should include geographical information in the form of a map showing the control routes, areas of acceptable or unacceptable quality, and areas not evaluated.
The objective of the geodetic information is to show the extent of the area
for which data has been collected and to ensure that the coverage is
maintained as the system is optimized. The following information needs to
be displayed on the map:
site locations
area tested, including drive test routes, where cluster optimization is
underway and where cluster optimization is complete
control routes
areas/places of coverage problems
any relevant information that helps in interpreting the results
10.1.2. Key Performance Indicators (KPIs) Reports
The KPI graphs or maps will show both uplink and downlink results after discarding the (1-r)% of bins with the worst measurement results (r% coverage reliability).
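As an informative sketch of the bin-discarding step described above, assuming per-bin measurement values are available from post-processing (the data and the value of r below are illustrative):

```python
import numpy as np

def kpi_over_reliable_bins(bin_values: np.ndarray, r: float = 0.95,
                           higher_is_better: bool = True) -> np.ndarray:
    """Return the bin values kept after discarding the worst (1-r) fraction."""
    order = np.sort(bin_values)
    n_keep = int(np.ceil(r * len(order)))
    # Worst bins are the lowest values when higher is better (e.g. Ec/N0),
    # the highest values otherwise (e.g. BLER).
    return order[-n_keep:] if higher_is_better else order[:n_keep]

ec_n0 = np.random.normal(-9.0, 3.0, size=10_000)   # illustrative Ec/N0 bins (dB)
kept = kpi_over_reliable_bins(ec_n0, r=0.95)
print(f"worst reported Ec/N0 over 95% of bins: {kept.min():.1f} dB")
```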
The report should also include information on the length and duration of
drive routes used to collect the latest displayed results and any pertinent
information on events.
The report should also include information on the number of calls of each
type made for the latest displayed results and any pertinent information
on events that have influenced the measurements.
Where failures are believed to be caused by issues other than RF, e.g.
handset problems or network failures, this information should be
indicated.
10.2. Performance Measurement Reports
The measurement scope here is wider than that of the System Performance Metric (successful call rate) calculations. Supplemental measurements are used for optimization and especially for troubleshooting. They are mainly collected in areas where the system performance objectives are not met.
The performance measurement reports gather these metrics used for optimization.
Drive test data may include the following measurements:
Dropped calls and unsuccessful call attempt plots, counts and
percentages
BLER plots
Ec/N0 plots
Eb/N0 plots
Mobile Transmit Power plots
Mobile Receive Power plots
Number of Radio Link plots (handoff state)
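As an informative example of how one of these plots could be produced from post-processed drive data (the coordinates and Ec/N0 values below are placeholders, not real measurements):

```python
import matplotlib.pyplot as plt
import numpy as np

# Placeholder drive-test samples; in practice these come from the
# post-processed drive log (one row per geographical bin).
lat = np.random.uniform(48.80, 48.90, 500)
lon = np.random.uniform(2.25, 2.40, 500)
ec_n0 = np.random.normal(-9.0, 3.0, 500)

sc = plt.scatter(lon, lat, c=ec_n0, cmap="RdYlGn", s=8)
plt.colorbar(sc, label="Ec/N0 (dB)")
plt.xlabel("Longitude")
plt.ylabel("Latitude")
plt.title("Drive route Ec/N0 (illustrative data)")
plt.show()
```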
11. OCNS Unavailability
At the time this document is written, the UMTS OCNS feature is not scheduled to be implemented before the second commercial UMTS release, at best.
Therefore, a number of network performance acceptances will have to be performed without this feature. This means that there is no way to run the tests under loaded conditions; in this case, all tests will be kept under unloaded conditions.
Assumption: acceptance without OCNS will be performed on early networks (opening before the first General Availability version or soon after). The traffic load of such networks at commercial opening will probably be very light, due in part to the constrained mobile device market. It is reasonable to perform acceptance under unloaded conditions and to closely follow up the launched network, optimizing the parameters as the traffic ramps up.
Depending on the contract, optimization services have to be included or proposed to the customer to assist its team in handling the increasing load during the first stage of the network. According to experience on previous CDMA networks, it is critical to react quickly and adjust the right parameters to avoid sudden service degradation.
12. Abbreviations and definitions
12.1. Abbreviations
ABR
AHT
BBLER
CCAIT
CBR
CC
CPICH
CRE
CRT
CS
CSSR
CST
CW
DCR
DL
DRF
Dst
FDD
FER
FFS
FQI
GGSN
GoS
HHO
IIE
IPPM
ISO
kbps
KPI
LA
mCR
MIB
MO
MP
MT
NAS
NB
NE
NSS
OCNS
OSI
OVSF
PCPICH
PCR
PDU
PS
PSR
QoS
RA
RAB
RANAP
RAT
RFC
RL
RNC
RNS
RR
RRC
RRM
RSSI
RTT
RTWP
SSAP
SCR
SDU
SDUER
SHO
SM
Src
SSHO
Str
TBC
TBD
TBR
TD
TDD
TE/MT
TrCH
Type-P
packet
UBR
UE
UL
URA
UTRAN
VBR
WG
12.2. Definitions
Area availability
Average Holding
Time
B, KB, b, kb, Mb
Bin size
Call
Cluster
Cluster exit data
Controlled
environment
Drive Route
Ec/N0
KPI
QoS
Reported set
Service Area
Test Area
Warranty Area
[Table (section 14.1, KPI indicative values): target values per service and per morphology (dense, medium, light). Successful call rate targets range from 70% to 95% depending on the service and morphology. Quality thresholds shown for the PS interactive/background 64k-UL / 64k-DL service[13]: initial 500 B (4 kb) ping RTT (call setup) < 5.5 s; continuous 500 B (4 kb) ping RTT < 0.5 s; 800 kb file transfer < 20 s; 4 Mb file transfer < 85 s; holding time = 2 min (without call drop); medium and light morphologies: same as dense.]
Due to tool capabilities, it is not recommended at this time to include CS-data tests in the acceptance criteria.
CS Domain:
* the Establishment Cause IE is defined in 3GPP TS 25.331 [R10].
Note 1: this metric could be itemized by traffic class (assuming that the
originating cause is correctly assigned when setting up the call)
Note 2: the domain (CS or PS) information appears in the RRC_Initial_Direct_Transfer message that follows in the call setup message flow (please refer to [R1] for more details).
PS Domain:
* the Establishment Cause IE is defined in 3GPP TS 25.331 [R10].
Note 1: this metric could be itemized by traffic class (assuming that the
originating cause is correctly assigned when setting up the call)
Note 2: the domain (CS or PS) information appears in the RRC_Initial_Direct_Transfer message that follows in the call setup message flow (please refer to [R1] for more details).
Measurement method
The test mobile tool will be configured to originate a sequence of calls to a
known non-busy number or IP address (depending on the application).
Each call will be terminated when established.
Potential source of measurements:
1. Trace mobile with post-processing
2. Protocol analyzers on Iubs and post-processing tools
for PS)
Measurement method
The test mobile will be in idle mode. A sequence of incoming calls will be set up. Each call will be terminated when established.
Potential source of measurements:
Trace mobile with post-processing
file transfer time defined as the time to receive without errors a file of a
given size (between the first bit of the first received packet and the last bit
of the last received packet completing the error-free transmitted file).
This definition has the great advantage of not requiring any synchronization between the two ends for the measurement tool.
According to 3GPP TS 23.107 [R4] the following definition applies for the
transfer delay:
Transfer delay (ms)
Definition: Indicates maximum delay for 95th percentile of the distribution
of delay for all delivered SDUs during the lifetime of a bearer service,
where delay for an SDU is defined as the time from a request to transfer
an SDU at one SAP to its delivery at the other SAP.
NOTE 3: Transfer delay of an arbitrary SDU is not meaningful for a bursty
source, since the last SDUs of a burst may have long delay due to
queuing, whereas the meaningful response delay perceived by the user is
the delay of the first SDU of the burst.
The file transfer time is then equivalent to the sum of all the SDU transfer delays needed to complete the file at the destination, except for the first delay between the time of the request and the transfer time of the first SDU. However, this first delay can be considered negligible compared to the total file transfer time, assuming file sizes over 100 kb.
Measurement Method
The measurement can be done with the same method as the bit rate measurement and should be done during the same tests.
The data processing should follow the same methodology as the one used for bit rate processing.
Potential source for measurements:
An end-to-end tool must be used to monitor the test at both ends
(checking that the file was received without errors).
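For illustration only, a minimal receive-side sketch of such a measurement, assuming the file is pushed over a plain TCP connection and its size and checksum are known in advance (host, port, size and checksum are placeholders; a real end-to-end tool would also confirm the transfer at the sending side):

```python
import hashlib
import socket
import time

def measure_file_transfer(host: str, port: int, expected_size: int,
                          expected_sha256: str) -> float:
    """File transfer time from first received packet to last, error-checked."""
    data = bytearray()
    start = None
    with socket.create_connection((host, port)) as sock:
        while len(data) < expected_size:
            chunk = sock.recv(65536)
            if not chunk:
                break
            if start is None:
                start = time.monotonic()        # first bit of first packet
            data.extend(chunk)
    if start is None:
        raise ValueError("no data received")
    elapsed = time.monotonic() - start          # last bit of last packet
    if hashlib.sha256(data).hexdigest() != expected_sha256:
        raise ValueError("file received with errors; transfer does not count")
    return elapsed
```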
RF Performance
UE Tx Power
Objective
Troubleshoot for best server reception and uplink interference reduction.
Definition
As reported by the User Equipment in the UE Internal measured results IE
(defined in TS 3GPP 25.331 [R10]).
Measurement Method
The measurements shall be taken during any drive tests (all opportunities
are welcome).
Mobility Performance
UTRAN 2G HHO Success Rate
Objective
Inter-RAT HHO reliability.
Definition
Limits: if the mobile cannot resume the connection to UTRAN (as specified
in 3GPP TS 25.331 [R10]), the failure will not be counted.
Measurement Method
Due to the cross-network specificity of this feature, it is highly
recommended to avoid including such a metric in the network
performance acceptance contracts.
It would be much better to isolate these test cases as a demonstration of HHO reliability and correct tuning.
Within this context, special drive-tests must be done in selected areas
handing calls down to 2G. Sufficient drives must be done to get enough
samples.
Potential source of measurements:
1. RNC logging with post-processing software
2. Iub spying with protocol analyzers and post-processing software
Measurement Method
Measurements should be made on previously selected areas of the
network as agreed between both parties (the customer and Nortel
Networks).
Potential source of measurements (TBC):
1. Dual-mode test mobile with post-processing (GSM messages)
2. GSM Call Trace?
ATM
ATM Cell Rate
Objective
Check the validity of the negotiated PCR for the ATM CBR class of service, of the SCR for VBR, and of the (MCR, PCR) pair for ABR.
Definition
The ATM Cell Rate is defined in this document as being the effective cell
rate in number of cells per second.
Measurement Method
Measurement should be made on all drives.
Potential source of measurements (TBC):
1. Passport counters.
2. Interface spying (either Iub or Iu) with protocol analyzers and post-processing software
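As an informative sketch, the effective cell rate can be derived from two successive counter readings (the counter values and observation interval below are illustrative, e.g. values read from Passport counters):

```python
def atm_cell_rate(cells_start: int, cells_end: int, interval_s: float) -> float:
    """Effective cell rate in cells per second over the observation interval."""
    return (cells_end - cells_start) / interval_s

rate = atm_cell_rate(cells_start=1_000_000, cells_end=1_850_000, interval_s=10.0)
print(f"effective rate: {rate:.0f} cell/s (compare with the negotiated PCR/SCR)")
```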
IP
IP One-Way Packet Loss
Objective
Check the IP transmission reliability.
Definition
This metric can be defined as the ratio of lost packets to the total number of transmitted packets, which can be translated into the following formula:
Packet_Loss_Ratio = Number_of_Lost_Packets / Total_Number_of_Transmitted_Packets
* where the One_Way_Packet_Loss is defined in RFC 2680 [R17] as:
>>The *Type-P-One-way-Packet-Loss* from Src to Dst at T is 0<< means that Src sent the first bit of a Type-P packet to Dst at wire-time T and that Dst received that packet.
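As an informative sketch, assuming the measurement tool records the sequence numbers of sent and received probe packets (this bookkeeping is an assumption of the illustration; RFC 2680 defines loss per individual Type-P packet):

```python
def one_way_packet_loss_ratio(sent_seq: set[int], received_seq: set[int]) -> float:
    """Lost packets divided by the total number of transmitted packets."""
    if not sent_seq:
        return 0.0
    lost = len(sent_seq - received_seq)
    return lost / len(sent_seq)

print(one_way_packet_loss_ratio({0, 1, 2, 3, 4}, {0, 1, 3, 4}))  # 0.2 (20 %)
```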
IP Maximum Jitter
Objective
Check the IP transmission constancy.
Definition
The IP Maximum Jitter is defined as the maximum variation of the one-way delay metric defined above.
Measurement Method
Measurement should be made on all data drives.
Potential source of measurements (TBC):
1. End-to-end data quality tool
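As an informative sketch, taking the maximum variation as the spread between the largest and smallest measured one-way delays (one possible reading of the definition; per-packet delay variation between consecutive packets is an alternative reading). The delay values are placeholders; measuring them requires the end-to-end data quality tool listed above.

```python
def max_jitter_ms(one_way_delays_ms: list[float]) -> float:
    """Maximum variation of the one-way delay over the drive (max minus min)."""
    return max(one_way_delays_ms) - min(one_way_delays_ms)

print(max_jitter_ms([42.0, 45.5, 41.2, 60.3]))  # 19.1 ms
```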
The estimate of the metric is compared to the target value of the metric plus a Tolerance Interval. The Tolerance Interval is expressed as a number of standard deviates z that achieves a given confidence level. That is,
TI = z x sqrt( p(1 - p) / n )
Thus, the estimate should satisfy
estimate <= p + z x sqrt( p(1 - p) / n )
where
p is the target value for the metric (between 0 and 1),
z is the number of standard deviates that corresponds to a given Tolerance Interval,
n is the sample size.
For acceptance tests we will use a Tolerance Interval of 99%. In this case, the number of standard deviates is z = 2.33. Clearly, the estimate converges to the target value as the sample size approaches infinity or when the target value is close to 1 (this indicates that the success events occur with high probability or certainty, so the sample size does not need to be large to estimate the metric).
Example:
Assume the metric under consideration is FER and the target/design value is 2%. If the FER is estimated from 100 samples, then the tolerance interval is TI = 2.33 x sqrt(0.02 x 0.98 / 100), approximately 0.033, and the estimate must satisfy FER <= 2% + 3.3% = 5.3%. On the other hand, if the sample size is 1000, then TI is approximately 0.010 and the estimate must satisfy FER <= 3%.
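As an informative sketch of the computation above, assuming the binomial standard deviation sqrt(p(1-p)/n) reconstructed from the definitions of p, z and n:

```python
import math

def tolerance_interval(p: float, n: int, z: float = 2.33) -> float:
    """Half-width z*sqrt(p(1-p)/n) added to the target value p."""
    return z * math.sqrt(p * (1.0 - p) / n)

for n in (100, 1000):
    ti = tolerance_interval(0.02, n)            # FER target of 2 %
    print(f"n={n}: TI={ti:.3f}, estimate must satisfy FER <= {0.02 + ti:.3f}")
```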
Tolerance Interval   z
70%   0.524
71%   0.553
72%   0.583
73%   0.613
74%   0.643
75%   0.674
76%   0.706
77%   0.739
78%   0.772
79%   0.806
80%   0.842
81%   0.878
82%   0.915
83%   0.954
84%   0.994
85%   1.036
86%   1.080
87%   1.126
88%   1.175
89%   1.227
90%   1.282
91%   1.341
92%   1.405
93%   1.476
94%   1.555
95%   1.645
96%   1.751
97%   1.881
98%   2.054
99%   2.326
For any other value, please refer to any table of the standard normal
distribution.
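The table above can also be regenerated for any Tolerance Interval with the one-sided standard normal quantile, for example (scipy is an assumed tool here, not one required by this document):

```python
from scipy.stats import norm

for ti in (0.90, 0.95, 0.99):
    print(f"{ti:.0%}: z = {norm.ppf(ti):.3f}")   # 1.282, 1.645, 2.326
```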
15.1.2. teams
Shakedown team
A shakedown team is composed of three persons:
1. a driver
2. a technician to collect data and check the RSSI on the received path
(main and diversity) at the Node B.
Drive Test Team
A drive test team is composed of one operator and one driver.
The operator runs data collection and controls the drive test equipment
for all calls (typically voice and data). In some cases, this might need an
extra operator to handle calls (e.g. specific data calls such as web
browsing).
The operator is also in charge of downloading and post-processing the collected data.
[Table: effort estimates per activity. Shakedown: 1 h / site / team; Cluster optimization (data collection); Cluster optimization (analyses): 1 site / engineer / shift; System optimization.]
[Table: status of each optimization activity (Cluster optimization, Radio optimization, Loaded Uu optimization, Acceptance data collection, System optimization): mandatory, recommended (merged with next)[26], or performed only if OCNS is available.]
Taking the most risk on optimization and reducing the drive tests to the minimum would lead to the following estimates:
[Table: reduced effort estimates. Cluster optimization (data collection); Cluster optimization (analyses); System optimization: 1 week.]