
Cognitio Consultants is a global Telecommunications and IT Consulting Company. We help CSPs, Governments and Enterprises, with a wide range of associates working on Security, USO, Regulatory, Business Planning and Technology.

TE Modelling for better business outcomes
November 2021
Introduction
In the days of 2G, before vendors took over much of the network design and
dimensioning, mobile operators generally designed their networks and
purchased equipment, instead of adopting solutions from the equipment
manufacturers. Lately there has been much effort to in-source network strategy
and design as it has become an essential part of the value chain in ever more
complex networks. The four generic approaches that an operator can take are accounting methods, planning tool approaches, data analytics and techno-economic (TE) modelling.

Each has its merits, but in this paper, we strongly argue for the TE modelling
approach as it is firmly embedded in a detailed understanding of the technology.
It can turn iterations around quickly and offers the potential for greater accuracy and depth of insight.

The basic idea is to have a model that reflects the costs and, where useful,
potential revenues, to allow decisions to be made in an informed manner. Our
experience at Cognitio involves using such models to make critical and strategic
decisions. For example, they are often used when determining the price ceiling
for spectrum at auctions. In practice they have much broader application, as we discuss here.

A short review of the four approaches to dimensioning, CAPEX
formation and decision making
Overview
In this paper, our purpose is to briefly explore the different approaches that network operators can use
to establish the requisite level of investment for the future growth of their networks. Typically, this
involves the use of models which establish the capital investment in networks based on capacity and
coverage.

In this section we consider the four generic approaches to network design and dimensioning identified
earlier and consider how well each of them meets this requirement. A high-level view of their features,
strengths and weaknesses is indicated below:

Accounting methods
Traditional accounting methods tend to use simple ratios to estimate the volume of equipment and
levels of cost involved in networks. In these models, attempts are made to forecast capital
requirements, operating costs and the ROI this produces by considering all relevant forms of demand
and supply.
These methods often attempt to connect supply and demand through relationships such as price
elasticity and target rates-of-return which link prices with their cost of delivery. Additionally, accounting
models will often embed a range of important metrics such as the cost of capital, exchange rate
movements and forecast rates of inflation.

It is important to realise that, despite all of the sophistication surrounding modern accounting models (a good example of such a tool being CostPerform1), they rely for their results on processes for the allocation of established costs. These allocation ‘keys’ are chosen by the user of the tool and, to the extent that the tool assists with forecasting, this is generally based on simple ratios and trends established from the historic accounting data. The graphic below, taken from the tool provider’s web site, explains the use of CostPerform by several telecom operators:

A strong function of such a model is in mediating the relationship between operators and national regulators, and in slicing accounting information to understand the profitability of major accounts.
Using such tools to support pricing, for example, has the
advantage of being simple to understand. It is based on a
transparent source of information, making it appealing to
both the C-Suite of operators and external third-parties such
as regulators and major customers. The value of this should
not be underestimated. It allows rigorous debate based on
defendable and verifiable data. In our experience, many
models produced by technical staff are complex and
generate answers that are unclear. This then frustrates the
leadership team which may lose faith in these more complex
models. The answers they provide are frequently nuanced
and require a deep technical understanding. Although
accounting models have advantages, they can suffer from
being overly simple: they encourage the use of simple ratios,
such as cost per subscriber, which are then used to estimate network costs in a way that does not reflect
reality. Networks are often made-up of ‘lumpy’ components where capacity is not added incrementally.
These network elements can also have tiered pricing according to their capacity, which then results in
non-linear capacity-cost relationships. Simple ratios are even less well suited to determining spectrum and licensing costs, where the capacity-cost relationships require much greater sophistication and depth of analysis.
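To make the contrast concrete, the short sketch below compares a simple cost-per-subscriber ratio with the kind of lumpy, tiered capacity-cost behaviour described above. The tier sizes, unit prices and subscriber counts are purely illustrative assumptions.

```python
# Illustrative comparison of a simple accounting ratio with a 'lumpy',
# tiered capacity-cost model. All figures are hypothetical.

def ratio_cost(subscribers, cost_per_subscriber=25.0):
    """Accounting-style estimate: cost scales linearly with subscribers."""
    return subscribers * cost_per_subscriber

def lumpy_cost(subscribers, subs_per_unit=10_000,
               tiers=((5, 180_000), (20, 150_000), (None, 120_000))):
    """Capacity is added in discrete units with tiered unit pricing:
    the first 5 units cost 180k each, the next 20 cost 150k, the rest 120k."""
    units_needed = -(-subscribers // subs_per_unit)  # ceiling division
    total, priced = 0, 0
    for tier_size, unit_price in tiers:
        remaining = units_needed - priced
        take = remaining if tier_size is None else min(tier_size, remaining)
        total += take * unit_price
        priced += take
        if priced >= units_needed:
            break
    return total

for subs in (40_000, 120_000, 400_000):
    print(f"{subs:>8,} subs: ratio = {ratio_cost(subs):>12,.0f}   lumpy = {lumpy_cost(subs):>12,.0f}")
```

Even in this toy example, the linear ratio diverges substantially from the stepped, tiered cost as volumes grow.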

1. https://www.costperform.com

Usually, accounting-type methods are best suited to corporate-level decision making once the
network strategy has been determined.
Planning tool approaches
When a mobile operator is developing investment plans for its network, one approach is to use a radio
planning tool to determine the number and location of base station sites. This can then be used to
establish the volumes of network resources required to satisfy the plan. The levels of investment are
then simply estimated using a budgetary model which scales the volume of these resources with unit
costs. These unit costs are often held as a table of benchmarks for similar equipment, sites or
constituted from a price book.
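A minimal sketch of this budgetary step is shown below: planned resource volumes from the radio plan are scaled by unit costs held in a price book or benchmark table. The item names and prices are hypothetical.

```python
# Minimal sketch of the budgetary step: scale planned resource volumes by
# unit costs from a price book or benchmark table. Figures are hypothetical.

price_book = {                 # unit cost per item
    "macro_site_build": 120_000,
    "rru_2600MHz": 8_000,
    "baseband_unit": 15_000,
    "microwave_link": 10_000,
}

planned_volumes = {            # output of the radio planning exercise
    "macro_site_build": 150,
    "rru_2600MHz": 450,
    "baseband_unit": 150,
    "microwave_link": 120,
}

capex = sum(planned_volumes[item] * price_book[item] for item in planned_volumes)
print(f"Estimated CAPEX for the plan: {capex:,.0f}")
```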
Radio planning tools are essential for determining site locations, coverage areas and detailed network
planning. However, in many organisations they are regarded as a dark art and results from them are
often not given much credence. Part of the issue lies in their complexity and the predictive nature of the methods they use: a fairly extensive knowledge of statistics and statistical methods is required to interpret their results fully. This is often not suited to the hard choices confronting those
tasked with investment decisions.
Producing a plan takes considerable time and effort, which makes the approach slow and expensive when investigating scenarios. Each iteration of a plan can take hours or days to complete and
so where a range of such results are required it can typically take weeks to arrive at decisions. The same
scenarios can take minutes or hours using a TE model.
The other weakness of employing the planning tool approach is the restricted functionality of the tool.
As its purpose is to assist in radio network planning, it does not necessarily incorporate variables such as the operator's capacity triggers, or the most economic technology upgrade paths. Even less
likely is that such tools will establish investment over time whilst taking into account associated
operating costs over the life of capital assets.
For detailed network planning there is still a place for radio planning tools, and the need for this has
become even greater as operators grapple with mixed technologies and heterogeneous networks.
Despite the disadvantages described above, many organisations still resort to planning tools to establish
their network strategy and roadmap. This is often the closest that many organisations have to a
department for network strategy and in many cases the link between planning and strategy is regarded
as being obvious.
There is a further issue concerning
network planning and the process
of developing strategy: determining
the design authority.
Where an operator is purchasing a
solution instead of just equipment,
it is the vendor that is the design
authority. So, responsibility for
network dimensioning and the
technology roadmap sits with that
vendor. In this situation it can be
difficult for the operator to have
proper control over the economics and competitive performance of its network strategy unless a
separate and parallel planning activity is undertaken by that operator. This process is often made more difficult if the vendors refuse to reveal the details of their dimensioning rules. At the very least, this
situation calls for exercises which develop benchmarks for assessing plans and their associated BoQs (bills of quantities).
Data analytics
An emergent and popular approach is to use big data and
analytics approaches to process recent history and As-Is
data, and from this, to extrapolate trends and gain insight
into future network requirements.
The reduction in cost and widespread availability of data
analytics tools and cloud computing has made analytics a
very real prospect for even modest-sized telco operators. It
has also found favour with many non-traditional telecoms
consultants as it lends itself to generic data science
disciplines as opposed to an ‘expert systems’ approach
which demands deep industry-specific knowledge.
Data analytic tools and techniques are widely taught on
many courses nowadays, and the insights from such an
approach can be very powerful. The approach relies on the
processing of costs and performance data that already exist.
As such, it is not a deterministic process and is weak at anticipating future investment where systemic
forms of change are not represented in existing data sets.
Techno-economic modelling
A suitable techno-economic model for exploring investment scenarios is illustrated in the graphic,
below. This would contain a model for each site in the network, a description of all relevant features
relating to site-configuration (including technologies deployed and activated licenses, for instance), a
price book covering relevant hardware, software, services and licenses and information describing
various forcing functions, such as customer demand for services.
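As a concrete illustration of these building blocks, the sketch below lays out minimal data structures for a site-level configuration, a price book entry and a demand forcing function. The field names and example values are illustrative assumptions rather than a prescribed schema.

```python
# Minimal sketch of the data structures behind a site-level TE model.
# Field names and example values are illustrative, not a prescribed schema.
from dataclasses import dataclass

@dataclass
class SiteConfig:
    site_id: str
    sectors: int                               # e.g. a 3-sector macro site
    technologies: list[str]                    # e.g. ["3G", "4G"]
    spectrum_available_mhz: dict[str, float]   # band -> paired bandwidth
    spectrum_activated_mhz: dict[str, float]
    activated_licenses: list[str]              # software features switched on

@dataclass
class PriceBookEntry:
    item: str                                  # hardware, software, service or license
    unit_capex: float
    annual_opex: float = 0.0

@dataclass
class DemandForecast:
    site_id: str
    year: int
    busy_hour_gb: float                        # forcing function: demand per site

# Example instances
site = SiteConfig("S0001", 3, ["3G", "4G"],
                  {"1800": 20.0, "2600": 20.0}, {"1800": 20.0}, ["DL 2x2 MIMO"])
price_book = {"rru_2600": PriceBookEntry("rru_2600", 8_000, 400)}
demand = DemandForecast("S0001", 2022, 45.0)
```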

This data is then processed to derive the overall network required to fulfil those demands over time. In
a proper TE modelling exercise, various forms of ‘live’ data are taken from the network such as site-level
configurations, current demand per sector – later aggregated to the level of each site – and data from
the OSS that relates to the performance of the network. This is then parsed using extract, transform and
load (ETL) functions.
One of the biggest challenges is usually collecting up-to-date and accurate data. The ETL function allows
the data to be cleansed and transformed so that it is suitable for loading. Occasionally the cleansing
involves predictive and analytic techniques such as Monte-Carlo simulations to replace or forecast
missing data.
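A minimal sketch of such a cleansing step is shown below, filling gaps in a per-site demand series by drawing at random from the observed values, a simple Monte Carlo-style imputation. The data and the choice of sampling approach are assumptions for illustration only.

```python
# Sketch of an ETL cleansing step: fill missing per-site demand readings by
# sampling from the distribution of observed values (a simple Monte Carlo-style
# imputation). Values and column meaning are illustrative.
import random

def impute_missing(readings):
    """readings: list of busy-hour GB values for a site, with None for gaps."""
    observed = [r for r in readings if r is not None]
    if not observed:
        raise ValueError("no observed data to sample from")
    return [r if r is not None else random.choice(observed) for r in readings]

raw = [42.0, None, 38.5, 51.2, None, 47.0]
print(impute_missing(raw))
```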
The combination of all site-level models describes the totality of the current actual network which
depends on known parameters such as available and activated spectrum, the technology deployed, and
the approved strategy for capacity upgrades. However, the implications of strategy can also be an
outcome of the modelling process as this is adjusted to yield optimal results. Once the capacity upgrade
options of activating available spectrum and implementing technology upgrades are exhausted, the
model implements forms of site-densification according to approved policy. This covers such options as
increased sectorisation, addition of new sites or the deployment of small cells within the existing macro
cell coverage area.
The intermediate output of the process is a year-by-year forecast for all of the sites, covering resource
requirements for new hardware, software, services and licenses required for expanding the network.
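The sketch below captures this upgrade logic in a deliberately simplified form: for each year, a site's capacity shortfall is met by applying upgrade steps in priority order, spectrum activation first, then technology upgrades, then densification, and the associated CAPEX is accumulated into a year-by-year forecast. The capacity gains and costs attached to each step are hypothetical placeholders.

```python
# Simplified sketch of the site-level capacity upgrade logic: activate spectrum
# first, then apply technology upgrades, then densify. Capacity gains and costs
# per step are hypothetical placeholders.

UPGRADE_STEPS = [                  # (name, added capacity in GB/busy hour, CAPEX)
    ("activate_2600MHz", 30.0, 12_000),
    ("upgrade_4x4_MIMO", 20.0, 18_000),
    ("add_sector",       25.0, 40_000),
    ("add_small_cell",   15.0, 30_000),
]

def plan_site(capacity, demand_by_year):
    """Return a per-year list of (year, upgrades, capex) keeping supply >= demand."""
    remaining = list(UPGRADE_STEPS)
    plan = []
    for year, demand in demand_by_year:
        upgrades, capex = [], 0
        while capacity < demand and remaining:
            name, gain, cost = remaining.pop(0)   # steps applied in priority order
            capacity += gain
            capex += cost
            upgrades.append(name)
        plan.append((year, upgrades, capex))
    return plan

print(plan_site(capacity=40.0,
                demand_by_year=[(2022, 45.0), (2023, 70.0), (2024, 110.0)]))
```

Aggregating the per-site plans produced this way gives the year-by-year resource and CAPEX forecast described above.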

The advantages of a TE model


Once a model is built, a process for using the model can establish a baseline plan from which scenarios
and ‘what if’ questions can be explored to see how results change and to gain business insight. It is often
the case that the process of building and refining a model reveals a great deal of insight in much the
same way as a traditional research project. This is important to understand. The process of building the
model yields extensive learning and knowledge of the network and organisation. The value of the
process should not be underestimated. It facilitates a dialogue within and around the CTO office that
bridges both marketing and finance. The effect has most impact in an organisation whose functions are ‘stove-piped’ or ‘siloed’, a bridging value that goes beyond what the planning tool approach described previously can achieve.
Absolute CAPEX and OPEX
A very practical use of a TE model is to estimate the year-by-year CAPEX and OPEX for the network. This
approach is much more realistic than the use of an accounting method and it is much quicker to develop
results for various scenarios than using a planning tool. An example from a TE modelling exercise on a real network is shown below, revealing the kind of CAPEX profiles that can emerge for different scenarios of forecast network payload.
The baseline for this CAPEX profile is a network comprising sixty thousand sites. This exercise allowed a
dialogue to take place concerning capacity building options where demand grew faster or slower than
planned and allowed the CTO to develop contingency arrangements to avoid either an overbuild or an
underbuild of the network. The exercise also allowed the operator to independently verify the level of
CAPEX being planned within a vendor offer to a high degree of certainty.
In addition to preparing meaningful CAPEX forecasts, the TE modelling exercise provided insight into the
incremental OPEX that was to be expected as a result of expanding the network. Categories of such
additional sources of OPEX were site rentals and utility costs arising from small cell deployments (alternatively, additional macro sites, where that option is chosen for increased densification), license
fees associated with site-level activation of spectrum, increased scope within Managed Services
contracts and additional fees incurred for international IP-transit.

Spectrum Valuations
There are several methods of valuing new spectrum including price benchmarking, alternate use and
cost of production methods. The cost of production approach allows a very tangible evaluation of
spectrum value, and the site-level TE modelling approach accurately shows how additional spectrum can
result in network CAPEX avoidance by substituting other means of adding capacity, such as technology
upgrades or forms of site densification.
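A much simplified sketch of this cost-of-production logic appears below: the value ceiling for new spectrum is taken as the present value of the network CAPEX it allows the operator to avoid. The CAPEX profiles and discount rate are hypothetical, not taken from any client engagement.

```python
# Sketch of spectrum valuation by CAPEX avoidance: value the spectrum at the
# present value of the capacity CAPEX it displaces. All figures are hypothetical.

def npv(cashflows, rate):
    """Present value of a list of annual cashflows, first entry one year out."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cashflows))

capex_without_spectrum = [80e6, 95e6, 110e6]   # densification-heavy build, per year
capex_with_spectrum    = [60e6, 65e6, 70e6]    # new spectrum absorbs much of the growth

avoided = [a - b for a, b in zip(capex_without_spectrum, capex_with_spectrum)]
spectrum_value_ceiling = npv(avoided, rate=0.10)
print(f"Indicative value ceiling for the new spectrum: {spectrum_value_ceiling/1e6:.1f} M")
```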
An illustration of the effect of this is shown in the graphic, below. In this case, a TE modelling exercise
was undertaken for a network operator that was already well established and operating with 55 MHz of
paired spectrum.

What you can see on the left-hand side is the availability of 20 MHz of new spectrum introduced in one
month during the three-year forecast period. On the right-hand side you can see the gradual activation
of this new spectrum within the population of base stations (light green bars at the bottom of the chart).
Notice that only sites which trigger the need for additional capacity after the introduction of the new
spectrum benefit from it, and that only about half of all sites have this new spectrum activated by the
end of the three-year modelling period. Interestingly, the modelling exercise revealed that a portion of
sites never require any form of capacity upgrade, and indeed, the initial configuration of some sites
showed that the costs of activating some spectrum had been incurred but that this had then never been
used. This has implications for budgeting and cost forecasting, and provides a very different answer from that given by simple budgeting methods.
The insights developed from this exercise went well beyond its initial objective of showing the value of
new spectrum. In developing site-level representations of how sites were initially configured, the quality of the configuration data had to be challenged. A baseline of reasonable capacity was established
and compared to the actual configuration. This could then be compared to the most economic upgrade
path established by the model. Most importantly, once the site-level models had been developed and
populated with good quality data it was then possible for them to serve as a means of tracking the
actual network elements against development plans over the 3 years of the modelling period. This was
accomplished by periodically updating site-level configuration data and contrasting this against the
initial plan, which helped to validate and improve the planning process itself. Starting with a current
detailed design, the model can be used to track how a network is actually built and performs –
pinpointing issues to individual sites.
Technology Plans
Once a good quality TE model has been established, various scenarios can be explored relating to the
operator technology plan and demand forecasts. For instance, re-farming of spectrum, licensing
arrangements and different deployment strategies can be evaluated with the aim of determining a
‘sweet spot’. Returning to the theme of capacity management, it is possible to establish the most
economic approach to capacity growth and enhancement. These steps will typically prioritise the
activation of spectrum, implement technology upgrades and then consider various methods of site
densification. The goal is to economically optimise the match between network supply and demand.
In contrast to transmission systems and Core network nodes, the economic relationship between supply
and demand is at its most complex in the Radio Access Network (RAN). This complexity has been greatly
increased as progressive generations of radio technology have sought to achieve better spectral
efficiency through advanced forms of modulation and coding, advanced software features such as Multi-
User MIMO and Coordinated Multipoint, and hardware upgrades for higher-order MIMO.
The graphic, below, shows an example of the economic sequencing of technology upgrades – mostly for
4G but also featuring a 5G macro cell upgrade with massive MIMO at the end of the sequence. Each
step in this upgrade sequence incurs costs, but also delivers an increase in the expected spectral
efficiency. This sequencing was developed early in the process of building the TE model and it used
benchmarked costs from a price book and local spectrum prices to inform the analysis.
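As a simplified illustration of how such a sequence might be derived, the sketch below ranks candidate upgrade steps by incremental cost per unit of added capacity, approximating the capacity gain as the improvement in spectral efficiency multiplied by the bandwidth it applies to. The upgrade names, costs and efficiency gains are invented for illustration and are not benchmarks from the study described here.

```python
# Illustrative ranking of upgrade steps by cost per unit of added capacity,
# approximating the capacity gain as (spectral efficiency gain) x (bandwidth).
# Costs and efficiency figures are hypothetical, not taken from the study above.

candidates = [
    # (name, capex, spectral efficiency gain in bit/s/Hz, bandwidth in MHz)
    ("DL 256QAM (software)",      5_000, 0.30, 20),
    ("DL MU-MIMO (software)",     8_000, 0.40, 20),
    ("4x4 MIMO (hardware)",      18_000, 0.60, 20),
    ("5G massive MIMO upgrade", 120_000, 2.00, 40),
]

def cost_per_mbps(capex, se_gain, bandwidth_mhz):
    added_mbps = se_gain * bandwidth_mhz   # MHz x bit/s/Hz = Mbit/s
    return capex / added_mbps

ranked = sorted(candidates, key=lambda c: cost_per_mbps(c[1], c[2], c[3]))
for name, capex, se, bw in ranked:
    print(f"{name:<28} {cost_per_mbps(capex, se, bw):8.0f} per Mbit/s added")
```

Steps with the lowest cost per unit of added capacity are scheduled first, which is broadly how the sequencing shown in the graphic is arrived at.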
The TE modelling exercise again provided more insight than was initially intended. For instance, the
software feature upgrades of downlink CoMP and downlink MU-MIMO were not supported by one of the major manufacturers (features that are fully standardised by 3GPP), which meant that the spectral efficiency limits, and therefore the capacity, of that vendor's 4G technology were reached far earlier than those of competitor manufacturers. The effect was to force the operator to modernise with 5G technology far earlier than it might otherwise have done.

Scenarios
Often the greatest insights for decision makers come not from absolute answers but from a
combination of sensitivities, scenarios and ‘what if’ testing of a plan. A TE model can facilitate this
dialogue much more quickly than using a planning tool and with a greater degree of certainty than
accounting methods. The following are typical questions that such models have been used to explore:
Question 1: What happens if marketing radically reduces the price per GB within allowance
packages and customers react by consuming far more data? What would be the impact
of this on CAPEX and OPEX? What might be the effect on total absorption cost per GB
and on marginal costs of production? (A worked sketch of these two cost measures follows this list.)
Question 2: What is the impact of increased penetration of newer and more advanced handsets in
the network? One effect might be to increase levels of data consumption, but another
would be to effectively raise levels of network capacity through the higher spectral
efficiency they support from features such as higher-order MIMO. To what extent
would it be worthwhile to stimulate the adoption of advanced devices through direct
handset subsidies or more attractive allowance packages?
Question 3: What would be the effect of driving network optimisation harder to achieve better
levels of spectral efficiency? (this potentially places a tangible value on optimisation)
Question 4: What are the implications of faster migration of circuit-switched voice traffic to VoLTE?
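Taking Question 1 as an example, the sketch below contrasts total absorption cost per GB (all annual network costs spread over all traffic carried) with the marginal cost per GB of an incremental demand uplift. All of the figures are invented for illustration.

```python
# Worked sketch for Question 1: total absorption cost per GB versus marginal
# cost per GB when a price cut drives up consumption. All figures are hypothetical.

annual_capex_depreciation = 120e6   # network capital cost recognised this year
annual_opex               = 180e6
traffic_gb                = 400e6   # total traffic carried this year

absorption_cost_per_gb = (annual_capex_depreciation + annual_opex) / traffic_gb

# Scenario: consumption rises by 150m GB and triggers 20m of extra annualised
# capacity cost (upgrades, energy, transit) identified by the TE model.
extra_traffic_gb  = 150e6
extra_annual_cost = 20e6
marginal_cost_per_gb = extra_annual_cost / extra_traffic_gb

print(f"Absorption cost per GB: {absorption_cost_per_gb:.3f}")
print(f"Marginal cost per GB:   {marginal_cost_per_gb:.3f}")
```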

Questions such as these can be answered in a convincing manner and these different effects can be
combined to describe the most likely outlook for guiding decision makers in their task of choosing the
best development path that the operator should take.

Case study 1 – Gulf Operator pre-IPO
Original Brief
The client operator was running out of network capacity and was considering the deployment of small
cells. They asked the modelling team to investigate the implications of this architectural shift.
Method
All network data, contracts and the vendor offer were collected and analysed. A TE model was built to
reflect the expected levels of growth that needed to be supported by the network. The results produced
a family of tables showing the effect of various scenarios over time:

Outcome
A key insight from the exercise was that the vendor had assumed an ever-increasing availability of new
spectrum in this lightly regulated administration. However, it was clear from changes being introduced
into the regulatory environment that this assumption was unlikely to be valid over the coming years.
Meanwhile, the management team were in dialogue with securities analysts as a precursor to a
potential IPO, so plans were being subjected to extra levels of scrutiny. In support of the management, we
invited the vendor to re-assess their plans with more realistic assumptions for the future acquisition of
spectrum and our TE modelling allowed the resulting vendor estimates to be independently verified.
The value of the exercise was to reveal some very risky assumptions underpinning the development
plans of the operator and to assist dialogue around strategies for moving the business forward.
Case Study 2 – Gulf Operator – ever-increasing CAPEX
Original brief
At this operator the finance department were unconvinced of the need for the levels of CAPEX being
requested by the networks department. During the budget preparation phases over several years they
were given different explanations of why the CAPEX was needed and the lack of any consistency in these
explanations raised questions about whether everything possible was being done to get the best value from the investments that had already been made in the network.

Method
Baseline site-level data was collected and all contracts were analysed. A comprehensive TE model was
built using software algorithms across more than one language platform – a situation that is different
from most instances where models are realised exclusively in MS Excel. The model was very complete and drew on many technology parameters that are rarely used, giving it a wide-ranging capability.
Outcome

What the analysis revealed was that much of the network was working below a realistic level of
efficiency. Most KPIs were fine: dropped calls were low and customer experience was good, but the
underlying efficiency was low and this in turn meant that the level of investment was artificially inflated.
The exercise led to a dialogue with their Managed Services provider to improve overall levels of
efficiency and established the achieved level of spectral efficiency as a key metric for their future
operational relationship.

Conclusion
This paper has attempted to establish some of the key advantages of professionally implemented TE
modelling in contrast to other methods, and to illustrate its use and value. A summary comparison
between TE modelling and these other methods is shown below.
Several decades ago, a key capability of an operator was the ability to design and cost its network,
providing it with a great deal of strategic flexibility. Since then, network technology has become ever
more complex and many elements of the value chain formerly performed by operators have been outsourced to the major equipment vendors. Increasingly, many operators have come to depend on
these vendors for their CAPEX planning and for the dimensioning of their networks.
Many operators employ strategy consultants, whose background is often rooted in business training as
opposed to engineering, and the outcome has been the widespread use of accounting methods to
assess CAPEX and OPEX. The emergent use of big data and analytics has led to the development of
trends and insights from data extrapolation and heat mapping. This approach can lead to valuable
insights but lacks a grounding in technology fundamentals.

Although accounting methods and big data analytics have a valuable business role to play within
network operators, we strongly argue for the adoption of TE modelling as a key method for establishing
a bridge between network planning and the rest of the business. We have considerable experience of
the valuable insights it can reveal and have demonstrated to many clients its ability to generate strong and rational answers, independent of those of the major equipment vendors.

Contact Cognitio
