
Operational risk management with real-time business intelligence

B Azvine, Z Cui, B Majeed and M Spott

Operational risk is a function of the complexity of the business and the environment that the business operates in. Such
complexities increase as the business or the environment become more dynamic, i.e. where change is a permanent feature and
a factor to build into the management of the business. The key question that arises is how businesses respond to such
changes today and, as the nature of the business and the environment becomes more and more dynamic, what actions
businesses can take to predict and prepare for change. Viewed in this manner, operational risk becomes very closely related to the
operational performance of the enterprise because it can be considered as dealing with changes that have a negative impact
on the operational objectives. It is vital for enterprises to understand in real time how they are performing, and where on the
spectrum of operational risk they are positioned. To accomplish this, it is essential to have a system for establishing the status
of a business at any moment in time in relation to its performance objectives. This is the role of real-time business intelligence
(RTBI), without which operational risk management could be out of date, or in some cases out of synchronisation with the
business cycle, with serious consequences. This paper discusses the cornerstones of RTBI and demonstrates how these are also
essential elements of an effective operational risk management framework.

1. Introduction

Enterprise risk management can be considered as a number of, possibly overlapping, components such as strategic, operational, financial, technology-oriented, etc. The focus of this paper is operational risk management (ORM), which is defined by the Basel Committee on Banking Supervision as: 'The risk of direct or indirect losses due to failures in systems, processes and people, or from external factors' [1]. Although this definition originated in the banking environment, it has been accepted as a generic definition by other enterprise sectors. This definition, however, does not establish a link between the performance of the enterprise and its ORM.

A more intuitive view of ORM, which establishes a clear link with operational performance, is presented by the enterprise risk management framework developed by The Committee of Sponsoring Organisations of the Treadway Commission (COSO) [2]. This framework has been established as one of the most comprehensive roadmaps that helps enterprises in building their ORM processes. This framework states that:

'Enterprise risk management over operations focuses primarily on: developing consistency of objectives and goals throughout the organisation; identifying key success factors and risks; assessing the risks and making informed responses; implementing appropriate risk responses; establishing needed controls; and timely reporting of performance and expectations. For these objectives, enterprise risk management can provide reasonable assurance that management and, in its oversight role, the board are made aware, in a timely manner, of the extent to which the entity is moving toward these objectives' [3].

The above statement encapsulates the important elements of ORM and the inter-relationship between these elements, and also indicates that the starting point of ORM should be the definition of performance objectives and the ability to measure the success or failure in achieving these objectives. This means that in order for ORM to be successful and up to date with what is actually happening within the enterprise, a framework for measuring the performance of the enterprise must be available and must be capable of tracking the performance objectives, whether at the strategic level or at the business process level of the organisation. Such performance tracking and management capabilities have been provided by business intelligence (BI) systems and more recently by the real-time business intelligence (RTBI) framework developed by BT Research and Venturing [4].


In order to clarify the role of BI and RTBI within ORM, we dedicate the rest of this section to briefly discussing BI and RTBI, highlighting the differences between them, and how the deficiencies of BI have led to the development of the RTBI vision. This is then followed by an analysis of the relationship between RTBI and ORM. The rest of the paper will be dedicated to describing the components of RTBI, the achievements and technical challenges of these components, and how they fit within the overall ORM capability.

1.1 The evolution of real-time business intelligence
As with many generic concepts, BI is not a well-defined term. Some consider BI as data reporting and visualisation, while others include business performance management. Database vendors highlight data extraction, transformation and integration. Analysis tools vendors emphasise statistical analysis and data mining. These different views make it very clear that BI has many facets. To capture them, we globally define BI as the framework for accessing, understanding and analysing one of the most valuable assets of an enterprise — raw data — and turning it into actionable information in order to improve business performance.

Current BI systems suffer from a number of obstacles that prevent the realisation of their envisaged potential:

• firstly, the transition from data into information is hindered by the shortage of analysts and experts who are required to configure and run analytical software,

• the second issue is the bottle-neck in the transition from information into action, which has traditionally been of a manual nature because of the lack of automatic links back into the business process layer that facilitate rapid modification of process parameters to improve performance,

• the third issue is related to the ability to fuse and relate the huge amount of data from the different sources into a timely and meaningful source of information, including the ability to validate the data and deal with quality issues.

The deficiencies of traditional BI mentioned above can be addressed by providing capabilities for the seamless transition from data into information into action, which we refer to as RTBI [4]. This means that RTBI must provide the same functionality as traditional business intelligence, but operate on data that is extracted from operational data sources with adequate speed, and provide a means to propagate actions back into business processes in an adequate time-frame. Specifically, RTBI should provide three critical components:

• real-time information delivery,

• real-time business performance analysis,

• real-time action on the business processes.

It must be emphasised here that the concept of real time does not necessarily equate to zero latency in the operation of these three components. The concept of real time indicates the timeliness of the 'information-decision-action' cycle that is relevant to the specific business environment.

Figure 1 illustrates the situation of current BI systems. The information flow between operational, tactical and strategic layers is broken by manual intervention. The challenge is to use intelligent technologies to model the manual intervention present in current systems and automate both the flow of information from operational to tactical to strategic layer, representing the 'data to information' stage of RTBI, and the actions necessary to translate strategic objectives back to operational drivers to influence strategic decisions in real time, as shown in Fig 2.

[Figure omitted: a layered pyramid with strategic objectives (SO) at the top, key performance indicators (KPI) in the middle and operational performance measures (OPM) at the base, mapped to the strategic, tactical and operational layers and split across retail, regional and product views.]

Fig 1 Traditional BI implementation.


1.2 The relationship between ORM and RTBI
A close comparison of available implementations of ORM, particularly if carried out according to the COSO framework highlighted in the previous section, with the vision and developments of RTBI, unveils very strong links in terms of requirements, goals and methodologies.

Firstly, in order to have any meaningful ORM process in place, the enterprise has to have a clear and consistent set of objectives, which according to the COSO framework [3] can be categorised as:

• strategic — these are the high-level objectives, aligned with and supporting the entity's mission/vision,

• operations — these are the operational layer objectives, which are related to effectiveness and efficiency of the entity's operations, including performance and profitability goals,

• reporting — these relate to the effectiveness of the internal and external reporting processes, including financial or non-financial information,

• compliance — relating to the entity's compliance with applicable laws and regulations.

In a similar manner, RTBI is based on building a hierarchy of enterprise performance measures, starting from the top at the strategic business objectives and linking these to finer and lower-order key performance indicators and operational measures that emanate from the business process layer, as shown in Fig 2.

The second point of similarity between ORM and RTBI is evident from looking at the role of key risk indicators (KRIs) in ORM. A KRI is a metric representing one or more critical success factors [5]. For example, the age of the IT systems or the number of server failures per unit time are KRIs for a major system failure event. It is sometimes possible to measure KRIs directly from available data. However, this is not always the case. Typically, risk is calculated by analysing and modelling relationships hidden in data. Such analysis is often performed by experts off line, leading to significant delays and high costs. One of the key features of RTBI is real-time discovery of relationships between operational performance measures, which can also be applied to the discovery of key risk indicators. RTBI enables real-time analysis of operational data through continuous and automated/semi-automated learning, resulting in models that can be used for what-if analysis, target setting and forecasting of future operational risks.

[Figure omitted: the RTBI pyramid, a fully joined-up, automated BI stack linking the vision and strategic objectives (SO) through key performance indicators (KPI) to operational performance measures (OPM) across the strategic, tactical and operational layers, with performance targets flowing back down; the stack rests on a common data model spanning human resources, finance, operations, vertical applications, SCM and CRM, and an on-demand infrastructure of network, storage and processing.]

Fig 2 The vision of RTBI.


Another important issue is that in order for ORM to be successful, it requires accurate and timely information about the internal operations of the business and its external environment. Without such information, the impact of risk events cannot be accurately quantified, and risk mitigation and control measures will not be able to respond adequately to threats. With today's technical advances in IT, and the emergence of highly dynamic service-oriented architecture (SOA)-based enterprise models, the ORM framework has to deal with huge amounts of data that change rapidly and that vary in nature from quantitative to qualitative, and from accurate to lacking in quality. The answer to such a challenge is in adopting the RTBI data fusion and modelling methodologies that establish an information systems infrastructure capable of the timely capture, aggregation and sourcing of the relevant data.

The link between the ORM framework and operational process levels of the RTBI pyramid exhibits itself in two aspects.

• Risk mitigation — once risk is identified and its impact is quantified, some action should be taken to reduce or eliminate its impact. In a process-based enterprise the majority of these actions need to be taken at the process level, either requiring long-term changes or with immediate effect. To do this, a comprehensive business process management (BPM) framework is needed to model, simulate and execute business processes, and to monitor changes at the process level. A robust and well-designed execution environment provides for the necessary compliance with regulatory requirements. This means that once risk mitigation action is needed, it can be put into operation with minimum delay at the business process level.

• Risk analysis — the second dimension of the ORM/BPM relationship is a result of the needs for ORM at the business process level. In BPM, process designers and/or owners should consider carefully the risk environment affecting the performance of each individual process. This is done in terms of analysing the effect of risk events on the achievement of the process performance indicators and on the compliance of the process actors and procedures with the regulatory environment. Modelling and simulation allow a process risk analysis to be carried out at an early stage in order to plan suitable mitigating actions.

The above discussion makes it clear that ORM is a natural partner for RTBI. The following sections will go into the details of this relationship, highlighting the components of RTBI and current achievements, and focusing on the technical challenges that should be addressed in order for this partnership to achieve its goals. Section 2 describes the real-time analytics and performance framework, and how it can contribute to ORM. Section 3 discusses the challenges associated with real-time data fusion and data quality management and its impact on ORM. Section 4 focuses on business process management, its role within ORM and how real-time technology could reduce change implementation time and decrease operational risk.

2. RTBI analytics framework
The role of the analytics part within the overall RTBI/ORM framework is concerned with building performance models of the organisation, allowing evaluation of the performance parameters given external and internal risks. We will first introduce the concept of performance modelling, and then add threats and risks as a natural extension.

The main building blocks of a performance framework are business entities (BEs), each of which represents exactly one performance quantity of a part of the organisation. Examples are strategic quantities such as customer satisfaction or profit, and tactical or operational quantities such as 'average time to clear a fault' or 'number of abandoned calls in a call centre'. Furthermore, we distinguish between:

• internal performance quantities, such as the ones just mentioned,

• business levers, which can be changed in order to improve the performance, e.g. the number of call centre staff,

• external influences stemming from the business environment, i.e. anything related to customers, competitors or other factors, e.g. weather, which influence the business.

2.1 Defining performance frameworks
The first step of building a performance framework is to identify relevant performance quantities. The approach is very similar to the ideas formulated by Kaplan and Norton [6] for balanced scorecards. The search for the right quantities is usually driven by the strategy of the organisation, since we are only interested in those quantities that influence the performance at the strategic level. Typically, answering questions like the following helps to identify a set of relevant quantities.


• How can we express our strategic goals in terms of measurable quantities?

• What are the influences of strategic quantities at tactical levels, and which operational quantities influence tactical ones?

• What can we control in our business in order to influence the performance?

• What are the external influences we have to take into account?

Once all the relevant quantities have been identified, the following step consists of producing a framework that shows how each quantity affects the rest. As the questions above already suggest, we select quantities such that they influence others in the performance hierarchy. Business levers and external influences are at the bottom of the hierarchy, linking into operational quantities above them. These in turn are linked to tactical ones and finally into strategic quantities at the topmost level. Figure 3 shows an example framework describing a call centre scenario.

At this point of defining the performance framework, everything has been done at the qualitative level. We have defined what quantities we want to measure, but not how. Therefore, the third step is about defining measures for the quantities. For the quantity customer satisfaction, for example, we could compute the relative number of very happy customers according to surveys. An alternative could be to measure the average happiness of customers. The decision depends on which definition of measure is more relevant for the strategy. In the case of the first one, we completely ignore the distribution of customers who are not very happy, i.e. we do not measure if most of them are still quite happy, or if they are utterly unhappy. The second one takes this into account, but still does not tell us anything about the variation of happiness among customers.

All measures are based on data, e.g. we require survey data to measure customer satisfaction. Therefore, for each performance measure a data source needs to be specified. In that context, the role of the data fusion layer described in section 3 is of great importance, since the required data is typically distributed between a number of data sources that can be easily described as 'disjoint and heterogeneous'. This is particularly the case in large organisations. In order to obtain the correct measurement of a performance quantity, it is possible that a combination of data sources needs to be accessed in order to assemble the required value. The capabilities of the data fusion layer in terms of understanding the data model and relating the contents of different sources, in addition to the management of data quality, are crucial for ensuring the validity of the collected measurements within the performance framework.

Finally, the relationships between the connected quantities need to be quantified. If the relationship is known, an equation expressing the relationship can be defined. Many relationships, however, are unknown in advance or are of a dynamic nature, i.e. changing over time as the business environment changes. An example of this is the relationship between operational quantities and customer satisfaction. Such relationships can be learnt from historic data, as described in section 2.3.

Fig 3 Call centre performance framework.
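To make the structure concrete, the sketch below shows one plausible way such a framework might be represented in code. It is a minimal Java rendering of business entities wired into a hierarchy; the class, names, values and relationship equations are invented for illustration and stand in for the call centre framework of Fig 3, not for the actual RTBI platform.

import java.util.*;
import java.util.function.ToDoubleFunction;

/** A business entity: one named performance quantity whose value is either
 *  measured directly (a leaf: a business lever or an external influence)
 *  or derived from the entities below it in the hierarchy. */
class BusinessEntity {
    final String name;
    final List<BusinessEntity> inputs = new ArrayList<>();
    ToDoubleFunction<double[]> relationship; // known equation or learnt model
    double measuredValue;                    // used when the entity is a leaf

    BusinessEntity(String name) { this.name = name; }

    double evaluate() {
        if (inputs.isEmpty()) return measuredValue;
        double[] args = inputs.stream().mapToDouble(BusinessEntity::evaluate).toArray();
        return relationship.applyAsDouble(args);
    }
}

public class CallCentreFramework {
    public static void main(String[] args) {
        // leaves: a business lever and an external influence
        BusinessEntity staff = new BusinessEntity("call centre staff");
        staff.measuredValue = 120;
        BusinessEntity calls = new BusinessEntity("incoming calls per hour");
        calls.measuredValue = 900;

        // operational quantity with an assumed, illustrative relationship
        BusinessEntity waiting = new BusinessEntity("average waiting time");
        waiting.inputs.addAll(List.of(staff, calls));
        waiting.relationship = in -> in[1] / (in[0] * 10.0); // toy equation

        // strategic quantity derived from the operational one
        BusinessEntity satisfaction = new BusinessEntity("customer satisfaction");
        satisfaction.inputs.add(waiting);
        satisfaction.relationship = in -> Math.max(0, 100 - 40 * in[0]);

        System.out.printf("%s = %.1f%n", satisfaction.name, satisfaction.evaluate());
    }
}

Leaves of the hierarchy hold measured values; every other entity derives its value from the entities below it, which is exactly the bottom-up propagation the following sections rely on.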


2.2 Monitoring performance
An important function of the RTBI system is to monitor the performance of an organisation in real time. Data is collected from operational systems or other internal or external data sources via the data fusion engine, and fed into the analytics module. Whenever new data arrives, the performance measures are evaluated. The RTBI platform provides configurable dashboards to display the resulting performance figures (and optionally a relevant historical view) of each of the quantities within the framework. This approach is considerably different from common reporting, where performance is only published on a regular basis, e.g. monthly, weekly or daily. The dashboards can also be set to provide alarms or traffic light type monitors, such that warnings can be issued in case of a quantity deviating from its required region of normality.
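As an illustrative sketch of the dashboard behaviour just described (the thresholds and class names are invented, not the RTBI platform's API), a traffic light monitor reduces to a simple mapping from the latest measurement to a status:

/** A traffic-light monitor: maps the latest value of a performance quantity
 *  to green/amber/red against its required region of normality. */
public class TrafficLightMonitor {
    enum Status { GREEN, AMBER, RED }

    private final double warnLimit;   // leaving the normal region
    private final double alarmLimit;  // clearly abnormal

    TrafficLightMonitor(double warnLimit, double alarmLimit) {
        this.warnLimit = warnLimit;
        this.alarmLimit = alarmLimit;
    }

    Status check(double value) {
        if (value >= alarmLimit) return Status.RED;
        if (value >= warnLimit)  return Status.AMBER;
        return Status.GREEN;
    }

    public static void main(String[] args) {
        // e.g. average waiting time in minutes, re-checked whenever new data arrives
        TrafficLightMonitor waitingTime = new TrafficLightMonitor(2.0, 5.0);
        for (double v : new double[]{1.2, 3.7, 6.4}) {
            System.out.println(v + " min -> " + waitingTime.check(v));
        }
    }
}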
2.3 Learning relationships
As was mentioned previously, not all relationships between entities are known in advance, and in most cases only qualitative knowledge is available about the nature of the relationship. On the other hand, each performance quantity is measured regularly and the values are collected over time. In particular, we can assume that the system collects data of related entities. Therefore, we can employ intelligent data analysis (IDA) and data mining techniques to learn the relationships from data. But learning on its own is not enough for RTBI, because of what we mentioned earlier regarding the changeable nature of some (if not most) of the relationships typically found in the business environment. The learning and data analysis must be carried out automatically and repeatedly in a timely manner in order for the results to be relevant to the situation developing at the current time.

Learning relationships from data is not trivial, since it consists of a number of steps, each of which requires expert data analysis knowledge. For each relationship, an appropriate data mining technique has to be chosen and configured, and the data has to be pre-processed accordingly, among other things. Realistically, an expert would set all these steps up based on an initial analysis. Relationships can then be re-learned or adapted at any time using the given set-up. In the future, tools like SPIDA [7] could be plugged into the RTBI system, which would automate a great deal of the set-up procedure. Business users could then trigger the learning capabilities of the RTBI platform without the need to understand the learning techniques behind it.
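As a toy illustration of what such repeated learning might look like at its very simplest (the observations and the linear model are invented; a real deployment would use the IDA and data mining techniques mentioned above), a relationship can be re-fitted from collected data whenever fresh observations arrive:

import java.util.Arrays;

/** Re-learning a simple relationship from data: an ordinary least squares fit
 *  of y = a*x + b, re-run on a schedule or when new data arrives so that the
 *  model tracks a drifting business environment. */
public class RelationshipLearner {
    static double[] fit(double[] x, double[] y) {
        double mx = Arrays.stream(x).average().orElse(0);
        double my = Arrays.stream(y).average().orElse(0);
        double sxy = 0, sxx = 0;
        for (int i = 0; i < x.length; i++) {
            sxy += (x[i] - mx) * (y[i] - my);
            sxx += (x[i] - mx) * (x[i] - mx);
        }
        double a = sxy / sxx;
        return new double[]{a, my - a * mx}; // slope and intercept
    }

    public static void main(String[] args) {
        // waiting time (min) vs. satisfaction score, e.g. from surveys
        double[] wait = {0.5, 1.0, 2.0, 4.0, 6.0};
        double[] sat  = {92, 88, 80, 61, 45};
        double[] model = fit(wait, sat);
        System.out.printf("satisfaction ~ %.1f * wait + %.1f%n", model[0], model[1]);
        // in an RTBI setting fit(...) would be re-invoked automatically,
        // replacing the relationship used by the performance framework
    }
}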
2.4 What-if scenarios, target optimisation and prediction
Apart from monitoring, two main functions of the RTBI system are:

• running what-if scenarios,

• target optimisation (optimise business levers and targets for BEs given strategic targets).

What-if analysis answers the question of how business levers and the business environment influence operational, tactical and, finally, the strategic performance. We could, for instance, determine how customer satisfaction would change if we were to increase the number of call centre staff and if the number of incoming calls were to change.

Another type of analysis is the scenario in which a higher-level target value is set (typically by managers), and the requirement is to determine how this translates into lower-level targets. Following the example in Fig 3, targets for customer satisfaction and costs have to be translated into targets for response time, transfer time, server time-outs, etc. This target translation is not trivial, since the relations between the business entities we established can generally not be inverted. The problem of finding targets therefore turns into an optimisation problem. For instance, we might look for targets for all measures such that we achieve a target for customer satisfaction at minimal cost.

In order to be proactive, managers have to predict future developments to make the right decisions at the right time. This can be achieved by predicting the development of external influences and propagating predictions through the performance framework, similar to what-if analysis. In that way, the future performance of the business can be predicted given the chosen settings of the business levers.
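A minimal sketch of this optimisation, using a brute-force search over one business lever and the toy call centre model from earlier (all costs and formulas are invented), illustrates why non-invertible relationships turn target setting into a search problem:

/** Target optimisation as search: scan candidate lever settings and keep the
 *  cheapest one that meets the strategic target. */
public class TargetOptimiser {
    public static void main(String[] args) {
        double satisfactionTarget = 75.0;
        double bestCost = Double.MAX_VALUE;
        int bestStaff = -1;

        for (int staff = 50; staff <= 300; staff++) {        // business lever
            double waiting = 900.0 / (staff * 10.0);         // operational model
            double satisfaction = Math.max(0, 100 - 40 * waiting);
            double cost = staff * 30_000.0;                  // assumed yearly staff cost
            if (satisfaction >= satisfactionTarget && cost < bestCost) {
                bestCost = cost;
                bestStaff = staff;
            }
        }
        System.out.println("staffing target: " + bestStaff + ", cost: " + bestCost);
    }
}

A real platform would of course search many levers at once with a proper optimisation method; the point here is only the shape of the problem, namely forward evaluation inside a search loop rather than inversion of the relationships.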

2.5 Threats and risks — extending the performance platform
The performance framework described above measures how well the organisation is meeting its objectives at various levels. It also provides capabilities for analysing what is going on in the business using what-if analysis and prediction, as well as for planning and control using target optimisation. ORM, as explained in section 1, is also concerned with the business objectives of the organisation in that it considers the effects of threats. Threats are situations or entities (e.g. operational conditions, organisations, persons) that might inflict harm on the organisation. Thereby we assume that a threat might cause an event which influences the organisation. For instance, a group of hackers (the threat) might attack the network of an organisation (the event). As a final result, a high-level performance measure such as customer satisfaction might drop considerably, because customers might not be able to access their services any more.


It follows naturally from the above discussion that the performance platform can be considered as an ORM environment through which one can carry out what-if analysis to predict the effect of risk events, generated by the threats, on the different performance measures modelled in the performance hierarchy, provided a link/relationship can be established between the risk event and one of the performance quantities in the graph. It is important to note here that this differs from the traditional way of carrying out operational risk analysis, where the temptation is to link the influence of the risk event directly to the higher-level objectives, which is mainly done by ORM consultants who use their expertise in quantifying such relationships. In our opinion such an approach is not suitable for today's rapidly changing and service-oriented business environment, because it requires a constant modification of the functions describing the links between the risks and the objectives, and requires the expensive services of ORM consultants each time a new threat is discovered that requires evaluation.

A much better approach is to link the events to the performance quantities they immediately influence — this can be done easily by the business expert and does not need an ORM expert — and propagate the effects up the performance hierarchy using the capabilities of the framework to evaluate the effect on high-level measures.

However, up to this point, all the framework relationships are expressed by quantitative functions that assume a deterministic world. In risk analysis, probability distributions are attached to risk events to reflect the fact that these events are uncertain, and their impact on the performance is similarly uncertain. In other words, rather than a single performance value, the result of what-if analysis would be a probability distribution on performance values. The business user can then derive the most likely outcomes, but can also investigate other ones which might be less likely but still possible. Uncertainty in the relationships between the framework entities must also be considered and included in the framework, although this is a more difficult issue and needs further evaluation.
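One plausible way to realise this, sketched below with invented probabilities and impact figures, is Monte Carlo sampling: draw the risk event from its distribution, propagate each sample through the toy performance model, and report the resulting distribution rather than a single value. The paper does not prescribe a particular sampling scheme; this is an assumption for illustration.

import java.util.Random;

/** What-if analysis under uncertainty: sample an uncertain risk event and
 *  propagate each sample through a (toy) performance model. */
public class RiskWhatIf {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        int samples = 10_000, harmed = 0;
        double total = 0;

        for (int i = 0; i < samples; i++) {
            // risk event: a server outage occurs with assumed probability 0.1
            boolean outage = rnd.nextDouble() < 0.1;
            // entry point: the outage inflates the average waiting time
            double waiting = 0.75 + (outage ? 2.0 + rnd.nextGaussian() * 0.5 : 0.0);
            double satisfaction = Math.max(0, 100 - 40 * Math.max(0, waiting));
            total += satisfaction;
            if (satisfaction < 50) harmed++;
        }
        System.out.printf("expected satisfaction: %.1f%n", total / samples);
        System.out.printf("P(satisfaction < 50): %.3f%n", (double) harmed / samples);
    }
}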
Risk countermeasures can also be incorporated in the framework by inserting their influence on the relationship between the risk event and its entry point to the framework, or in some cases in the relationship between two framework entities. To find the best set of countermeasures, what-if analysis or optimisation can be used.

3. Real-time data support for ORM
The quality of ORM depends on its data. Good data often leads to visionary and profitable decision making. Poor data quality is often the cause of bad strategic decisions and inaccurate financial and management reporting. Because of this, most current BI and ORM systems draw data only from a fixed number of data sources, and it is very difficult and costly to use data from any new data sources after the systems have been built. In the following sections, we present a system being developed within the BT Research and Venturing programme which meets the ORM data management requirements.

3.1 Key ORM data requirements
As discussed in the earlier sections, KRIs are developed by analysing the business requirements according to data available from underlying business processes. In the age of the Internet, the problem is not lack of data, but rather in identifying good data. Since the data required by ORM is produced by many operational applications and is stored in a number of data repositories, careful analysis must be carried out to determine its suitability. The implicit context and semantics of data must be made explicit to ORM designers, as well as to business users, to avoid data misuse.

There are many existing tools for handling so-called dirty data. These tools can adequately tackle syntactical errors of data, missing data and incorrect data, such as non-telephone numbers in telephone number columns and non-numerical data in numerical columns. However, these tools are often unable to handle semantic issues associated with data. This is a serious shortcoming. As mentioned above, data is produced with implicit semantics in specific contexts. For example, the percentage churn of broadband customers cannot be generalised to the percentage churn of all the company's customers. Data has to be taken in the right context, so that all users have the same interpretation (i.e. semantics) of a set of data no matter where and how it is used.

Although data management communities have talked about the importance of data semantics for a long time, current vendor solutions have not made data semantics explicitly available to end users. For example, data warehouse solutions focus on target schema definitions and ETL (extract, transform and load). There is hardly any formal documentation, i.e. documentation that could be processed by machines. Even if some informal documentation of target schemas and transformation specifications exists, it is rarely available to the end user, because much of the semantics is still hidden in transformation code. As these are not made explicit, KRIs are often defined by dedicated teams who understand the business and the data. The high cost associated with this means only a few KRIs can be defined. However, there are many occasions on which KRIs should be defined dynamically by end users who may not know the implicit semantics. This means that data semantics as well as contexts have to be available to these users. A true ORM needs to have the capabilities for business users to choose data and data context to compose or define new KRIs, and to get unified data support from any available data sources.
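As a sketch of what composing a KRI in business terms might look like (the vocabulary terms and figures are invented, and the map below is only a stand-in for the unified data layer described in section 3.2), note how the broadband churn example above stays tied to its context:

import java.util.Map;
import java.util.function.ToDoubleFunction;

/** A KRI composed by a business user from vocabulary terms rather than from
 *  physical columns: the data layer resolves each term, in the right context,
 *  to a value. */
public class KriDefinition {
    public static void main(String[] args) {
        // stand-in for the unified data layer: vocabulary term -> current value
        Map<String, Double> dataLayer = Map.of(
            "BroadbandCustomer.churnedThisMonth", 1_250.0,
            "BroadbandCustomer.total", 95_000.0);

        // KRI defined in business terms: broadband churn rate (not all customers!)
        ToDoubleFunction<Map<String, Double>> broadbandChurn =
            d -> d.get("BroadbandCustomer.churnedThisMonth")
                 / d.get("BroadbandCustomer.total");

        System.out.printf("broadband churn: %.2f%%%n",
                100 * broadbandChurn.applyAsDouble(dataLayer));
    }
}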


The key requirement to support business users in defining KRIs and KPIs on the fly is to relieve them from knowing the details of low-level data integration. Data should be presented to the measure builders in terms they understand. This would address the usual gap between IT departments and business users, who often blame each other for project failure. IT personnel are often unable to understand business requirements, while business users are unable to articulate their requirements exactly. Thus there is a need to supply the data in context for business users to define new measures, i.e. KRIs or KPIs, which in turn would lead to the broad adoption of ORM and BI.

The widespread use of ORM or BI requires a data layer that allows dynamic integration of new data sources, because enterprises cannot afford to build data warehouses for every BI application. Thus technologies must be developed to provide the following:

• unified data layer — a common metadata structure unifies data access by creating a virtual warehouse view of enterprise data, so that all users, regardless of their departments or analytical prowess, have access to the same values, field names and sources,

• streamlined development cycle — a step-by-step guide to creating machine-processable metadata repositories and a mapping between metadata and concept-based data access,

• automated data mismatch reconciliation — a way of combining data while removing any mismatches between the different data sets.

This type of data layer empowers business users to select data sources suitable for their applications from a pool of silo data sources without the risk of misusing them. They can safely and dynamically define any KRIs based on the latest data, including data from external data sources. As the data layer provides a unified view of selected data sources, it shortens application development.

3.2 Real-time data fusion and data quality platform
This section overviews the real-time data fusion and data quality platform in the light of ORM requirements. The system was originally developed to support real-time business intelligence [4, 8, 9].

System architecture
Figure 4 shows the overall architecture of the system.

[Figure omitted: business users and software applications sit on top of a business vocabulary, an ontology of concepts such as Thing, Customer and BroadbandCustomer with attributes like customerName, productName and address; beneath it, the semantic repository, automatic fusion and data quality components mediate access to the underlying data sources.]

Fig 4 RTBI data fusion and data quality system.


The semantic data repository includes metadata of all available data sources. These metadata take the form of a centralised ontology which forms the business vocabulary. Currently the system supports both relational and XML data sources. For other data sources, adapters are needed.

When a data source is plugged into the semantic repository, IT users use a GUI mapping editor, as shown in Fig 5, to publish its metadata in terms of concepts defined in the centralised ontology.

This is different from data warehouse building, in that ETL needs to resolve heterogeneities of data sources, while our system only requires descriptions of the metadata of one data source at a time. It does not require IT users to resolve any mismatches with other data sources. The mismatches between data sources are resolved by the system at run time by analysing the published metadata, removing the need for the repeated ETL processes required by the data warehouse approach. Moreover, the mappings could be done in a distributed fashion. In addition, they can be changed or updated easily, as they are in declarative form and do not involve other data sources. One caveat of this is that the changes made will affect all applications. This could be a good feature or a bad one, depending on the applications.

For each data source, the semantic repository includes a semantic description of its contents, and a profile of its data quality, i.e. how good its data is according to certain measures. The profiling is done through business rules. We have used machine learning techniques to try to learn these rules from training data. The system can also use third party data quality software through APIs.
The fusion engine provides the unified view of all selected data sources. Through this view, users can compose their queries. Upon receiving these queries, the fusion engine decomposes them, through the ontology and the available mappings, into sub-queries which can be processed by each data source. Using each data source description, it computes a query graph using any possible semantic-based joins, i.e. joins derived through ontology definitions and mappings. Each sub-query is then sent to the data source to retrieve partial results. Once all sub-queries are computed, the fusion takes place using the semantic join graph. Finally, the fused results are provided to the end users or end applications through Web Services.

The user can select any data sources from the semantic repository through GUI interfaces. There are two kinds of information available to assist end users — a semantic view of the data for each data source, and a data quality profile. After viewing these details, users click to select or deselect data sources, and save the selection. Then a unified view of the selected data is computed and presented to them for querying.

Data adapters are software wrappers which provide translation between ontology queries and native data source queries. Pre-built adapters are available for relational databases and XML data sources. For other data sources, adapters have to be built. This includes retrieving data source metadata, and mapping ontology queries to native data source queries.

The mapping editor provides a set of standard transformation tools. Currently these include pass-through transformation, filter transformation, expression transformation, concatenation and join transformation. Other transformations are still in development.

Fig 5 Mapping editor for publishing data source metadata using ontology.
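A hypothetical rendering of the adapter contract described above might look as follows in Java. The method names and the fixed translation are purely illustrative, not the actual BT interfaces:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Sketch of a data adapter: it exposes the source's metadata for publication
 *  and translates ontology-level queries into native ones. */
interface DataAdapter {
    /** Metadata used to map the source to concepts in the shared ontology. */
    Map<String, List<String>> describeSource();   // e.g. table -> column names

    /** Translate a query over ontology concepts into the native query language. */
    String toNativeQuery(String ontologyQuery);

    /** Run the native query; rows come back keyed by ontology concept names. */
    List<Map<String, Object>> execute(String nativeQuery);
}

/** Example: an adapter for a relational source would render SQL. */
class RelationalAdapter implements DataAdapter {
    public Map<String, List<String>> describeSource() {
        return Map.of("CUSTOMER", List.of("CUST_NAME", "ADDRESS"));
    }
    public String toNativeQuery(String ontologyQuery) {
        // a real implementation would use the published mappings;
        // one fixed translation stands in for it here
        return "SELECT CUST_NAME, ADDRESS FROM CUSTOMER";
    }
    public List<Map<String, Object>> execute(String nativeQuery) {
        Map<String, Object> row = new HashMap<>();
        row.put("customerName", "A. Smith");
        row.put("address", "Ipswich");
        return List.of(row);
    }
}

public class AdapterDemo {
    public static void main(String[] args) {
        DataAdapter adapter = new RelationalAdapter();
        System.out.println(adapter.execute(
                adapter.toNativeQuery("customerName and address of every Customer")));
    }
}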


Ontology as data models
The system uses ontologies as data models. Actually, there are several different ontologies; Figure 6 shows the relationships between them.

A shared ontology is a common, agreed vocabulary [10] of both domain users and developers. We treat this ontology as a superset of vocabulary which is rich enough to describe any enterprise data source. Tools have been developed to update this shared vocabulary.

Source ontologies define the data semantics of their associated data sources. The terms in source ontologies are often taken from a shared ontology, but their definitions could be further constrained. For example, certain attributes may be sub-typed or have fixed values. New terms may be defined, over and above those in the shared ontology.

Shared, source, user and application ontologies are the critical components for the RTBI data layer to perform context-based mismatch reconciliation at run time. As an example, products could be priced in different currencies. The system could automatically convert all product prices into pounds sterling for UK users, and US dollars for US users. Of course, the user could force the system not to resolve any mismatch, if they prefer.

[Figure omitted: applications sit on user and application ontologies, which draw on a shared ontology; source ontologies, one per data source (databases, files, the Web), are likewise derived from the shared ontology.]

Fig 6 Ontology models for the RTBI data fusion and data quality system.
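The currency example can be sketched as a context-based reconciliation step. The exchange rates, class and method names below are invented for illustration:

import java.util.Map;

/** Context-based mismatch reconciliation: values carry their source context
 *  (here, a currency) and are converted to the user's context before fusion. */
public class CurrencyReconciler {
    static final Map<String, Double> TO_GBP =
            Map.of("GBP", 1.0, "USD", 0.79, "EUR", 0.86); // assumed rates

    static double inContext(double amount, String from, String userCurrency) {
        double gbp = amount * TO_GBP.get(from);
        return gbp / TO_GBP.get(userCurrency);
    }

    public static void main(String[] args) {
        // two sources price the same product in different currencies
        System.out.println("UK user sees: " +
            inContext(120.0, "USD", "GBP") + " and " + inContext(95.0, "GBP", "GBP"));
        System.out.println("US user sees: " +
            inContext(120.0, "USD", "USD") + " and " + inContext(95.0, "GBP", "USD"));
    }
}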
Implementation
The data layer has been implemented using J2EE/Java, and deployed on a BEA WebLogic server. All the tools are available through the Web, so that end users do not need to install any software. The system is currently being trialled by BT Openreach, using several BT data sources.

4. Business process management
A key feature of the RTBI framework is that once targets have been set for operational parameters (as a result of target optimisation to achieve strategic objectives), the operational layer is modified accordingly in order to achieve the targets. This is done either by changing the parameters of the business processes, or by modifying the process and deploying the new version in the operational environment. This means that there is continuous feedback between the performance framework (section 2) and the business process management environment, in which information and decisions are fed back and forth in a timely manner. The following sections will discuss our BPM environment and how it supports RTBI, then take a look at how BPM is related to ORM.

4.1 The BPM environment within RTBI
The business process management environment is based on the open source JBPM [11] as the core workflow system where the processes are executed and/or simulated. JBPM defines processes through JPDL, a workflow language that allows parallel execution, selective branching and process composition. We have extended the JBPM system to store the audit trail data in a database in a structured way, so that it can be used in posterior analysis. A process task is defined as a black box, whose function is performed by an actor (human or system) and whose results are expressed in terms of output attributes. These attributes represent any important information about the tasks, such as timing, cost, or any other attributes that can contribute directly or indirectly to the performance analysis and measurement of the process. Processes are defined with input and output attributes, making it easy to use process composition. Task output attributes can be mapped into process output attributes, connecting the low-level implementation of the process with the external high-level view.
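The black-box view of a task can be sketched in plain Java as follows. The class and attribute names are illustrative; they do not reflect the actual extended JBPM classes:

import java.util.HashMap;
import java.util.Map;

/** A process task as a black box: an actor performs it, and the result is a
 *  set of output attributes (timing, cost, ...) that feed performance analysis. */
abstract class Task {
    final String actor; // human or system
    Task(String actor) { this.actor = actor; }

    /** Perform the work and report outputs; the caller never sees the internals. */
    abstract Map<String, Object> perform(Map<String, Object> inputs);
}

public class ClearFaultTask extends Task {
    ClearFaultTask() { super("field engineer"); }

    Map<String, Object> perform(Map<String, Object> inputs) {
        Map<String, Object> out = new HashMap<>();
        out.put("durationMinutes", 85);  // in reality measured, not hard-coded
        out.put("cost", 64.0);
        out.put("resolved", true);
        return out;
    }

    public static void main(String[] args) {
        // these task outputs can then be mapped onto process output attributes
        System.out.println(new ClearFaultTask()
                .perform(Map.<String, Object>of("faultId", "F-1027")));
    }
}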

The BPM environment is composed of the following elements:

• process designer — a visual application used to create the process model, based on JBPM's process designer,

• execution server — a server where the real process can be executed according to the model,

• process simulator — an environment that simulates the process execution using the model created by the designer, and the knowledge extracted from the trail of the execution server,


• KPI tool — a tool that is used to define metrics for the process and to calculate them against the execution or simulation data — these metrics link the business process executions with the high-level goals of the company,

• business rules — this engine allows the definition of business rules, and manages the triggering of the actions associated with them during the execution (real or simulated) of the process — these rules represent the business constraints set to complement the process model created by the designer.

As shown in Fig 7, the process simulator and the execution server constitute the two parallel pillars of the BPM system. They are fed with the processes created with the process designer, and connected to the business goals and constraints through the KPIs and business rules respectively.

[Figure omitted: the process designer and process definition feed both the execution server (with its resource policy, execution history database, action handlers and event listeners) and the simulation system (with its own resource policy, simulation history database, task estimators and process sources); KPI tool, business rules, optimisation and process event analysis components sit over both.]

Fig 7 The BPM environment.

The process simulator follows an event-based architecture integrated into the workflow system, with the following main components:

• event scheduler — used to order and fire the events within the simulation time-scale,

• workflow system — ensures that the simulation is executed as prescribed by the process model,

• process sources — generate new process instances that constitute the start point of the workflow executions,

• task execution estimators — responsible for estimating the task output attributes and duration,

• resource management system — used to assign tasks to actors in the system.

All the process-generated data is captured and stored in the database, whether in real execution or in simulation. In addition, real-time execution data is available through event listener interfaces that allow integration into an enterprise data bus for transporting to other systems, or for data transformation using the data fusion layer. This allows the monitoring of the process simulation/execution in order to evaluate any change to the process, be it through a re-engineering effort (e.g. a new resource assignment policy) or through an unforeseen problem (e.g. failure of suppliers to deliver on time).

In order to carry out local process performance analysis, two tools are provided which allow monitoring of process performance. The first is a KPI tool, which consists of a visual editor for KPI definition in terms of task and process attributes, and an engine that evaluates the current and historic value of a KPI and displays the result as customised charts, as shown in Fig 8, where the queue time is monitored as part of a client order process.

[Figure omitted: a customised chart of queue time for the client order process.]

Fig 8 Process KPI monitoring.
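As a sketch of the kind of metric the KPI tool evaluates (the record type and figures below are invented stand-ins for the audit trail stored by the execution server), the queue-time KPI of Fig 8 reduces to an aggregation over task attributes:

import java.util.List;

/** A KPI defined over task attributes: the average queue time across
 *  finished instances of a client order process. */
public class QueueTimeKpi {
    record TaskRecord(String processId, double queueMinutes) {}

    static double averageQueueTime(List<TaskRecord> trail) {
        return trail.stream().mapToDouble(TaskRecord::queueMinutes).average().orElse(0);
    }

    public static void main(String[] args) {
        List<TaskRecord> trail = List.of(
            new TaskRecord("order-1", 12.0),
            new TaskRecord("order-2", 30.5),
            new TaskRecord("order-3", 8.5));
        System.out.printf("average queue time: %.1f min%n", averageQueueTime(trail));
    }
}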

The second performance analysis tool is the business rules engine. These rules can be defined and checked against the process instances to find exceptional situations in process behaviour.

Three different scopes have been defined, according to which the rules can be checked:

• node — the rule is checked when leaving a certain node,


• process — when the process instance has finished, the rule is checked,

• global — rule checking is carried out periodically. In this case, the rule can check constraints relevant to the group of processes finished in that period, rather than just one single instance.

The consequence of a rule is defined as an interface. The system provides the user with some basic implementations (e.g. stop the process, send an e-mail); any other action can be carried out by implementing a new class (written in Java) that extends that interface.
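A hypothetical shape for that interface and its implementations, in the spirit of the description above (the names are illustrative, not the actual BT classes), might be:

/** The consequence of a business rule: the system ships basic implementations
 *  and users add their own by writing a new Java class. */
interface RuleConsequence {
    void apply(String processInstanceId);
}

class StopProcess implements RuleConsequence {
    public void apply(String id) { System.out.println("stopping instance " + id); }
}

class SendEmail implements RuleConsequence {
    private final String recipient;
    SendEmail(String recipient) { this.recipient = recipient; }
    public void apply(String id) {
        System.out.println("mailing " + recipient + " about instance " + id);
    }
}

/** A user-defined consequence: escalate to the process owner's dashboard. */
class EscalateToOwner implements RuleConsequence {
    public void apply(String id) { System.out.println("escalating instance " + id); }
}

public class RuleDemo {
    public static void main(String[] args) {
        // e.g. fired when a global-scope rule finds too many late orders
        for (RuleConsequence c : new RuleConsequence[]{
                new StopProcess(), new SendEmail("owner@example.com"), new EscalateToOwner()}) {
            c.apply("order-42");
        }
    }
}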
4.2 BPM for ORM
The role of BPM in an ORM implementation was mentioned in section 1, where two aspects were identified. The first aspect is the local ORM within an individual process (process-oriented ORM), the responsibility for which falls on the process owner. The process owner is accountable not only for the performance of the process (i.e. meeting the KPIs), but also for making sure that potential threats to the process performance, and their related risk events and situations, are identified and analysed, and that measures are put in place to mitigate their effects. It is important that the local risk analysis is done in terms of the operational objectives of the process, in order to focus the effort of risk analysis on the most relevant issues. In a similar manner to the risk analysis within the performance framework, discussed in section 2, threats should be classified as internal or external, and their relevant KRIs identified and measured.

Another important issue in process-oriented ORM is compliance, both with legal requirements and with local regulations for carrying out process tasks, especially in processes which involve considerable human interaction. Legal requirements are continuously increasing, leaving companies facing huge financial penalties as a result of infringement. On the other hand, people taking short cuts and working 'around the process' in carrying out their tasks is one of the major reasons for process failure, leading to missed targets and increased customer dissatisfaction.

The second aspect of the BPM/ORM connection is that mitigation of operational risks identified within the overall performance framework (section 2) usually requires actions to be taken at the process level. Examples of such actions are complete process re-design, monitoring of process execution in terms of performance, and ensuring process actors comply with regulations.

The requirements of these two aspects are quite interrelated, and can be serviced with a similar set of tools to provide the required functionality. The BPM environment, discussed in section 4.1 above, provides many of the functions needed to carry out the local process risk analysis and the risk mitigation requests filtering down from the performance framework. As an important first tool, the process simulator allows process owners to evaluate what-if scenarios in the execution of the process, changing process parameters and including risk events. For example, call centre process owners can evaluate the risk of losing a percentage of the staff, e.g. as a result of illness or strike action, by simulating different scenarios for resources and incoming calls, thus enabling them to plan the necessary actions in case such a risk should materialise.


Another important tool is the business rules engine, which allows flexible configuration of compliance rules and monitoring of any infringement of these rules in real time, enabling the process owner to take immediate action to rectify the situation and to prevent the results of the infringement from propagating further and causing serious damage. Another capability that facilitates the deployment of mitigating actions is the flexibility of process design and the rapid deployment of a new version of the process in the operational environment, in order to minimise the damaging effects of a developing risk situation. Finally, the KPI editing and monitoring tools can also be used for building and monitoring process KRIs in a systematic way, so that process managers can easily and quickly manage performance and risk in an integrated manner.

5. Conclusions
This paper has discussed the different components of RTBI and how these can work towards achieving a successful implementation of ORM that is based on a continuum of objective setting, monitoring and optimisation. We have followed broadly the COSO framework for enterprise risk management, which has established itself as the de facto standard for ORM implementation.

The business performance framework is the first component within RTBI, and is responsible for building performance models of the organisation, allowing the evaluation of the performance parameters of the business, as well as providing the ability to carry out what-if analysis and target optimisation. These facilities allow business managers to have real-time visibility of the status of their targets, and to assess the effects of any actions they want to take on performance. The same tools can be used to assess performance given external and internal risks, thus providing an integrated view of performance and risk.

RTBI and ORM cannot succeed without the availability of information which is clean, timely and relevant. Without it, operational risk management could be out of date or, in some cases, out of synchronisation with the business cycle, with serious consequences. However, the diversity and typically ad hoc implementation of data sources within large enterprises makes the task of making the data available very difficult. This is where the second component of RTBI comes into play — the data fusion layer. Based on a common data model, this layer empowers business users with the ability to integrate data from any available data sources based on conceptual views of the underlying data sources. This is achieved through metadata, abstraction and the separation of low-level data from their semantics. Thus IT users can focus on publishing their data through ontologies, i.e. conceptual views. As the system retrieves data from the data source directly, this guarantees the freshness of the data.

This was followed by discussion of how the third component, BPM, fits into the scheme of RTBI and ORM. We presented the tools that assist process owners in carrying out process-oriented ORM, and discussed the role of BPM in risk mitigation.

Despite the development of advanced statistical techniques for ORM, it is currently the view of some experts that, for a large business, the most meaningful analysis must be qualitative in nature, because of the difficulty in establishing accurate quantitative relationships between risk events and strategic business goals. Although we acknowledge the fact that qualitative analysis has an important role to play, we are of the view that the source of the problem is that quantitative risk analysis is not carried out with the help of a complete performance framework such as RTBI. RTBI allows detailed modelling of the business, based on solid information provision and a two-way link to the business process layer. That said, some of the relationships within the framework can be qualitative, and are best described by qualitative modelling methods such as those provided by the field of soft computing.

References

1 'International Convergence of Capital Measurement and Capital Standards: A Revised Framework', Basel Committee on Banking Supervision (June 2003) — www.bis.org/publ/bcbs107.pdf

2 The Committee of Sponsoring Organisations of the Treadway Commission (COSO) — www.coso.org/

3 'Enterprise Risk Management — Integrated Framework', published by the COSO organisation (2004).

4 Azvine B, Cui Z, Nauck D and Majeed B: 'Real Time Business Intelligence for the Adaptive Enterprise', in Proceedings of the IEEE Joint Conference on E-Commerce Technology (CEC'06) and Enterprise Computing, E-Commerce and E-Services (EEE'06), San Francisco, pp 222—229 (June 2006).

5 Vinella P and Jin J: 'A Foundation for KPI and KRI', in Davis E (Ed): 'Operational Risk — Practical Approaches to Implementation', Risk Books (2005).

6 Kaplan R S and Norton D P: 'Balanced Scorecard: Translating Strategy into Action', Harvard Business School Press (1996).

7 Nauck D, Spott M and Azvine B: 'SPIDA — a novel data analysis tool', BT Technol J, 21, No 4, pp 104—112 (October 2003).

8 Cui Z, Jones D and O'Brien P: 'Semantic B2B Integration: Issues in Ontology-based Approaches', ACM SIGMOD Record, Special Issue on Data Management Issues in E-commerce (March 2002).

9 Cui Z, Tamma V and Bellifemine F: 'Ontology management in enterprises', BT Technol J, 17, No 4, pp 98—107 (October 1999).

10 Cui Z, Shepherdson J W and Li Y: 'An ontology-based approach to eCatalogue management', BT Technol J, 21, No 4, pp 76—83 (October 2003).

11 Open source JBPM — www.jboss.com/products/jbpm/


Ben Azvine holds a BSc in mechanical engineering, an MSc in control engineering, a PhD in intelligent control systems from Manchester University and an MBA from Imperial College, London. Having held research fellowship and lectureship posts in several universities, he joined BT to set up a research programme to develop and exploit soft computing and computational intelligence techniques within BT. Since then he has held senior, principal and chief research scientist posts at Adastral Park, where he currently leads the business intelligence research programme. He is the author of two books, has published more than 100 scientific papers and is an inventor on more than 20 patents. He has won four British Computer Society gold medals for IT innovation and more than ten BT innovation awards, holds visiting professorships at Bristol and Bournemouth Universities, and a visiting fellowship at Cranfield University. His current research interests include real-time business intelligence, predictive CRM and next generation smart services.

Zhan Cui received a BSc (1981) and an MSc (1985) in Computer Science from Jilin University of China, and a PhD in Artificial Intelligence from Academia Sinica in 1988. Between 1989 and 1996, he worked as a research fellow in the areas of artificial intelligence and databases, not only for the Universities of Edinburgh and Leeds, but also for the Imperial Cancer Research Fund (now Cancer Research UK), and as a lecturer for the Universities of Swansea and Liverpool. He joined BT in October 1996. Since then he has been working on R&D projects in software agents, ontology, knowledge management and the Semantic Web. He is a recognised expert in ontology-based approaches to semantic integration of disparate information sources. He has authored more than 50 technical papers and is an inventor on more than 12 patents. He is currently working on automatic taxonomy generation from text, text categorisation and fusing data from unstructured data sources such as Web pages and documents.

Dr Basim Majeed is a Principal Research Professional at the Intelligent Systems Research Centre within BT Research and Venturing at Adastral Park. He holds a Masters degree (1987) and a PhD degree (1992) in Intelligent Control Systems from the University of Manchester. He is part of a team working in the area of real-time business intelligence and business process management. He is a Member of the IET and IEEE, and a Chartered Engineer.

Martin Spott received a Diploma (MSc) in Mathematics in 1995 from the University of Karlsruhe, Germany. He continued working in Karlsruhe until 2000 as a research assistant in the Innovative Computing Group of Prof G Goos. He completed his PhD in Computer Science in November 2000 with a dissertation on 'Reasoning with Fuzzy Terms', and joined BT in January 2001, where he works as a Principal Researcher in the computational intelligence research group. He has published numerous papers in his research area, and is also a regular member of programme committees for related conferences and a reviewer for scientific journals. His current research interests include soft computing, machine learning and data mining. Since joining BT, he has worked on several intelligent data analysis projects such as travel time prediction, real-time business intelligence tools and a platform for automating data analysis, for which he received a BCS medal in the category 'IT Developer of the Year — Infrastructure' in 2005.
