Operational risk is a function of the complexity of the business and the environment that the business operates in. Such complexity increases as the business or the environment becomes more dynamic, i.e. where change is a permanent feature and a factor to build into the management of the business. The key questions that arise are how businesses respond to such changes today and, as the nature of the business and its environment becomes ever more dynamic, what actions businesses can take to predict and prepare for change. Viewed in this manner, operational risk becomes very closely related to the operational performance of the enterprise, because it can be considered as dealing with changes that have a negative impact on the operational objectives. It is vital for enterprises to understand in real time how they are performing, and where on the spectrum of operational risk they are positioned. To accomplish this, it is essential to have a system for establishing the status of a business at any moment in time in relation to its performance objectives. This is the role of real-time business intelligence (RTBI), without which operational risk management could be out of date, or in some cases out of synchronisation with the business cycle, with serious consequences. This paper discusses the cornerstones of RTBI and demonstrates how these are also essential elements of an effective operational risk management framework.
the deficiencies of BI have led to the development of the RTBI vision. This is then followed by an analysis of the relationship between RTBI and ORM. The rest of the paper will be dedicated to describing the components of RTBI, the achievements and technical challenges of these components, and how they fit within the overall ORM capability.

1.1 The evolution of real-time business intelligence
As with many generic concepts, BI is not a well-defined term. Some consider BI to be data reporting and visualisation, while others include business performance management. Database vendors highlight data extraction, transformation and integration. Analysis tools vendors emphasise statistical analysis and data mining. These different views make it very clear that BI has many facets. To capture them, we globally define BI as the framework for accessing, understanding and analysing one of the most valuable assets of an enterprise — raw data — and turning it into actionable information in order to improve business performance.

Current BI systems suffer from a number of obstacles that prevent the realisation of their envisaged potential:

• firstly, the transition from data into information is hindered by the shortage of analysts and experts who are required to configure and run analytical software,

• the second issue is the bottleneck in the transition from information into action, which has traditionally been of a manual nature because of the lack of automatic links back into the business process layer that facilitate rapid modification of process parameters to improve performance,

• the third issue is related to the ability to fuse and relate the huge amount of data from the different sources into a timely and meaningful source of information, including the ability to validate the data and deal with quality issues.

The deficiencies of traditional BI mentioned above can be addressed by providing capabilities for the seamless transition from data into information into action, which we refer to as RTBI [4]. This means that RTBI must provide the same functionality as traditional business intelligence, but operate on data that is extracted from operational data sources with adequate speed, and provide a means to propagate actions back into business processes in an adequate time-frame. Specifically, RTBI should provide three critical components:

• real-time information delivery,

• real-time business performance analysis,

• real-time action on the business processes.

It must be emphasised here that the concept of real time does not necessarily equate to zero latency in the operation of these three components. Rather, it indicates the timeliness of the 'information-decision-action' cycle that is relevant to the specific business environment.

Figure 1 illustrates the situation of current BI systems. The information flow between operational, tactical and strategic layers is broken by manual intervention. The challenge is to use intelligent technologies to model the manual intervention present in current systems and automate both the flow of information from operational to tactical to strategic layer, representing the data-to-information stage of RTBI, and the actions necessary to translate strategic objectives back to operational drivers to influence strategic decisions in real time, as shown in Fig 2.

[Fig 1: the strategic, tactical and operational layers of current BI systems (SO: strategic objective; KPI: key performance indicator; OPM: operational performance measure).]

1.2 The relationship between ORM and RTBI
A close comparison of available implementations of ORM, particularly if carried out according to the COSO framework highlighted in the previous section, with the vision and developments of RTBI, unveils very strong links in terms of requirements, goals and methodologies.

Firstly, in order to have any meaningful ORM process in place, the enterprise has to have a clear and consistent set of objectives, which according to the COSO framework [3] can be categorised as:

• strategic — these are the high-level objectives, aligned with and supporting the entity's mission/vision,

• operations — these are the operational layer objectives, which are related to the effectiveness and efficiency of the entity's operations, including performance and profitability goals,

• reporting — these relate to the effectiveness of the internal and external reporting processes, including financial or non-financial information,

• compliance — relating to the entity's compliance with applicable laws and regulations.

In a similar manner, RTBI is based on building a hierarchy of enterprise performance measures, starting from the strategic business objectives at the top and linking these to finer and lower-order key performance indicators and operational measures that emanate from the business process layer, as shown in Fig 2.

[Fig 2: the RTBI performance hierarchy, with performance targets cascading through KPIs and OPMs.]

The second point of similarity between ORM and RTBI is evident from looking at the role of key risk indicators (KRIs) in ORM. A KRI is a metric representing one or more critical success factors [5]. For example, the age of the IT systems or the number of server failures per unit time are KRIs for a major system failure event. It is sometimes possible to measure KRIs directly from available data. However, this is not always the case. Typically, risk is calculated by analysing and modelling relationships hidden in data. Such analysis is often performed by experts off line, leading to significant delays and high costs. One of the key features of RTBI is real-time discovery of relationships between operational performance measures, which can also be applied to the discovery of key risk indicators. RTBI enables real-time analysis of operational data through continuous and automated/semi-automated learning, resulting in models that can be used for what-if analysis, target setting and forecasting of future operational risks.

Another important issue is that in order for ORM to be successful, it requires accurate and timely information about the internal operations of the business and its external environment. Without such information, the impact of risk events cannot be accurately quantified, and risk mitigation and control measures will not be able to respond adequately to threats. With today's technical advances in IT, and the emergence of highly dynamic service-oriented architecture (SOA)-based enterprise models, the ORM framework has to deal with huge amounts of data that change rapidly and that vary in nature from quantitative to qualitative, and from accurate to lacking in quality. The answer to such a challenge is in adopting the RTBI data fusion and modelling methodologies that establish an information systems infrastructure capable of the timely capture, aggregation and sourcing of the relevant data.

The link between the ORM framework and the operational process levels of the RTBI pyramid exhibits itself in two aspects.

• Risk mitigation — The first is related to risk mitigation. Once risk is identified and its impact is quantified, then some action should be taken to reduce or eliminate its impact. In a process-based enterprise the majority of these actions need to be taken at the process level, either requiring long-term changes or with immediate effect. To do this, a comprehensive business process management (BPM) framework is needed to model, simulate and execute business processes, and to monitor changes at the process level. A robust and well-designed execution environment provides for the necessary compliance with regulatory requirements. This means that once risk mitigation action is needed, it can be put into operation with minimum delay at the business process level.

• Risk analysis — The second dimension of the ORM/BPM relationship is a result of the needs for ORM at the business process level. In BPM, process designers and/or owners should consider carefully the risk environment affecting the performance of each individual process. This is done in terms of analysing the effect of risk events on the achievement of the process performance indicators and on the compliance of the process actors and procedures with the regulatory environment. Modelling and simulation allow a process risk analysis to be carried out at an early stage in order to plan suitable mitigating actions.

The above discussion makes it clear that ORM is a natural partner for RTBI. The following sections will go into the details of this relationship, highlighting the components of RTBI and current achievements, and focusing on the technical challenges that should be addressed in order for this partnership to achieve its goals. Section 2 describes the real-time analytics and performance framework, and how it can contribute to ORM. Section 3 discusses the challenges associated with real-time data fusion and data quality management and their impact on ORM. Section 4 focuses on business process management, its role within ORM and how real-time technology could reduce change implementation time and decrease operational risk.

2. RTBI analytics framework
The role of the analytics part within the overall RTBI/ORM framework is concerned with building performance models of the organisation, allowing evaluation of the performance parameters given external and internal risks. We will first introduce the concept of performance modelling, and then add threats and risks as a natural extension.

The main building blocks of a performance framework are business entities (BEs), each of which represents exactly one performance quantity of a part of the organisation. Examples are strategic quantities such as customer satisfaction or profit, and tactical or operational quantities such as 'average time to clear a fault' or 'number of abandoned calls in the call centre'. Furthermore, we distinguish between:

• internal performance quantities such as the ones just mentioned,

• business levers, which can be changed in order to improve the performance, e.g. the number of call centre staff,

• external influences stemming from the business environment, i.e. anything related to customers, competitors or other factors, e.g. weather, which influence the business.

2.1 Defining performance frameworks
The first step of building a performance framework is to identify relevant performance quantities. The approach is very similar to the ideas formulated by Kaplan and Norton [6] for balanced scorecards. The search for the right quantities is usually driven by the strategy of the organisation, since we are only interested in those quantities that influence performance at the strategic level. Typically, answering questions like the following helps to identify a set of relevant quantities.
• How can we express our strategic goals in terms of measurable quantities?

• What are the influences of strategic quantities at tactical levels, and which operational quantities influence tactical ones?

• What can we control in our business in order to influence the performance?

• What are the external influences we have to take into account?

Once all the relevant quantities have been identified, the following step consists of producing a framework that shows how each quantity affects the rest. As the questions above already suggest, we select quantities such that they influence others in the performance hierarchy. Business levers and external influences are at the bottom of the hierarchy, linking into operational quantities above them. These in turn are linked to tactical ones and finally into strategic quantities at the topmost level. Figure 3 shows an example framework describing a call centre scenario.

At this point of defining the performance framework, everything has been done at the qualitative level. We have defined what quantities we want to measure, but not how. Therefore, the third step is about defining measures for the quantities. For the quantity customer satisfaction, for example, we could compute the relative number of very happy customers according to surveys. An alternative could be to measure the average happiness of customers. The decision depends on which definition of measure is more relevant for the strategy. In the case of the first one, we completely ignore the distribution of customers who are not very happy, i.e. we do not measure if most of them are still quite happy, or if they are utterly unhappy. The second one takes this into account, but still does not tell us anything about the variation of happiness among customers.

All measures are based on data, e.g. we require survey data to measure customer satisfaction. Therefore, for each performance measure a data source needs to be specified. In that context, the role of the data fusion layer described in section 3 is of great importance, since the required data is typically distributed between a number of data sources that can easily be described as 'disjoint and heterogeneous'. This is particularly the case in large organisations. In order to obtain the correct measurement of a performance quantity, it is possible that a combination of data sources needs to be accessed in order to assemble the required value. The capabilities of the data fusion layer in terms of understanding the data model and relating the contents of different sources, in addition to the management of data quality, are crucial for ensuring the validity of the collected measurements within the performance framework.

Finally, the relationships between the connected quantities need to be quantified. If the relationship is known, an equation expressing the relationship can be defined. Many relationships, however, are unknown in advance or are of a dynamic nature, i.e. changing over time as the business environment changes. An example of this is the relationship between operational quantities and customer satisfaction. Such relationships can be learnt from historic data as described in section 2.3.
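To make the three steps above concrete, the sketch below wires a few call-centre quantities into a small hierarchy and fits one unknown relationship from historic data by least squares. This is a minimal illustration of the idea, not the paper's actual platform; all class names, quantity names and figures are invented for the example.

```python
# A performance framework sketch: business entities (quantities) arranged
# in a hierarchy, with one relationship learnt from historic data.

class Quantity:
    def __init__(self, name, kind="internal", relation=None, inputs=()):
        self.name = name          # e.g. 'abandoned_calls'
        self.kind = kind          # 'lever', 'external' or 'internal'
        self.relation = relation  # function of the input values, if known
        self.inputs = list(inputs)

    def value(self, given):
        """Evaluate bottom-up: levers and externals are read from 'given'."""
        if self.relation is None:
            return given[self.name]
        return self.relation([q.value(given) for q in self.inputs])

def fit_linear(xs, ys):
    """Least-squares fit y = a*x + b, for relationships learnt from data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Bottom of the hierarchy: a business lever and an external influence.
staff = Quantity("call_centre_staff", kind="lever")
demand = Quantity("incoming_calls", kind="external")

# An operational quantity with a known (toy) relationship.
abandoned = Quantity("abandoned_calls", inputs=[staff, demand],
                     relation=lambda v: max(0.0, v[1] - 40.0 * v[0]))

# A strategic quantity whose link to operations is learnt from history:
# invented pairs of (abandoned calls, measured customer satisfaction).
history = [(0, 0.95), (100, 0.80), (200, 0.66), (300, 0.50)]
a, b = fit_linear([h[0] for h in history], [h[1] for h in history])
satisfaction = Quantity("customer_satisfaction", inputs=[abandoned],
                        relation=lambda v: a * v[0] + b)

# What-if: change the lever and re-evaluate the strategic quantity.
for n_staff in (10, 12):
    print(n_staff, round(satisfaction.value(
        {"call_centre_staff": n_staff, "incoming_calls": 500}), 3))
```

The same bottom-up evaluation is what allows a risk event, attached to one low-level quantity, to be propagated up to the strategic measures, as discussed in section 2.4 of the paper.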
2.4 What-if scenarios, target optimisation and prediction
Apart from monitoring, two main functions of the RTBI system are:

• running what-if scenarios,

It follows naturally from the above discussion that the performance platform can be considered as an ORM environment through which one can carry out what-if analysis to predict the effect of risk events, generated by the threats, on the different performance measures modelled in the performance hierarchy, provided a link/relationship can be established between the risk event and one of the performance quantities in the graph. It is important to note here that this differs from the traditional way of carrying out operational risk analysis, where the temptation is to link the influence of the risk event directly to the higher-level objectives, which is mainly done by ORM consultants who use their expertise in quantifying such relationships. In our opinion such an approach is not suitable for today's rapidly changing and service-oriented business environment, because it requires constant modification of the functions describing the links between the risks and the objectives, and requires the expensive services of ORM consultants each time a new threat is discovered that requires evaluation.

A much better approach is to link the events to the performance quantities they immediately influence — this can be done easily by the business expert and does not need an ORM expert — and propagate the effects up the performance hierarchy using the capabilities of the framework to evaluate the effect on high-level measures.

However, up to this point, all the framework relationships are expressed by quantitative functions that assume a deterministic world. In risk analysis, probability distributions are attached to risk events to reflect the fact that these events are uncertain, and their impact on the performance is similarly uncertain. In other words, rather than a single performance value, the result of what-if analysis would be a probability distribution on performance values. The business user can then derive the most likely outcomes, but can also investigate other ones which might be less likely but still possible. Uncertainty in the relationships between the framework entities must also be considered and included in the framework, although this is a more difficult issue and needs further evaluation.

Risk countermeasures can also be incorporated in the framework by inserting their influence on the relationships between the risk event and its entry point to the framework, or in some cases in the relationship between two framework entities. To find the best set of countermeasures, what-if analysis or optimisation can be used.

3. Real-time data support for ORM
The quality of ORM depends on its data. Good data often leads to visionary and profitable decision making. Poor data quality is often the cause of bad strategic decisions and inaccurate financial and management reporting. Because of this, most current BI and ORM systems draw data only from a fixed number of data sources, and it is very difficult and costly to use data from any new data sources after the systems have been built. In the following sections, we present a system being developed within the BT Research and Venturing programme, which meets the ORM data management requirements.

3.1 Key ORM data requirements
As discussed in the earlier sections, KRIs are developed by analysing the business requirements according to data available from underlying business processes. In the age of the Internet, the problem is not lack of data, but rather identifying good data. Since the data required by ORM is produced by many operational applications and is stored in a number of data repositories, careful analysis must be carried out to determine its suitability. The implicit context and semantics of data must be made explicit to ORM designers, as well as to business users, to avoid data misuse.

There are many existing tools for handling so-called dirty data. These tools can adequately tackle syntactical errors of data, missing data and incorrect data, such as non-telephone numbers in telephone number columns and non-numerical data in numerical columns. However, these tools are often unable to handle semantic issues associated with data. This is a serious shortcoming. As mentioned above, data is produced with implicit semantics in specific contexts. For example, the percentage churn of broadband customers cannot be generalised to the percentage churn of all the company's customers. Data has to be taken in the right context so that all users would have the same interpretation (i.e. semantics) of a set of data no matter where and how it is used.

Although data management communities have talked about the importance of data semantics for a long time, current vendor solutions have not made data semantics explicitly available to end users. For example, data warehouse solutions focus on target schema definitions and ETL (extract, transform and load). There is hardly any formal documentation, i.e. documentation that could be processed by machines. Even if some informal documentation exists of target schemas and transformation specifications, it is rarely available to the end user, because much of the semantics is still hidden in transformation code. As these are not made explicit, KRIs are often defined by dedicated teams who understand the business and the data. The high cost associated with this means only a few KRIs can be defined. However, there are many occasions when KRIs should be defined dynamically by end users who may not know the implicit semantics. This means that data semantics as well as contexts have to be available to these users. A true ORM needs to have the capabilities for business users to choose data and data context to compose or define new KRIs, and to get unified data support from any available data sources.

The key requirement to support business users in defining KRIs and KPIs on-the-fly is to relieve them from knowing the details of low-level data integration. Data should be presented to the measure builders in terms they understand. This would address the usual gap between IT departments and business users, who often blame each other for project failure. IT personnel are often unable to understand business requirements, while business users
are unable to articulate their requirements exactly. Thus there is a need to supply the data in context for business users to define new measures, i.e. KRIs or KPIs, which in turn would lead to the broad adoption of ORM and BI.

The widespread use of ORM or BI requires a data layer that allows dynamic integration of new data sources, because enterprises cannot afford to build data warehouses for every BI application. Thus technologies must be developed to provide the following:

• unified data layer — a common metadata structure unifies data access by creating a virtual warehouse view of enterprise data so that all users, regardless of their departments or analytical prowess, have access to the same values, field names and sources,

• streamlined development cycle — this is a step-by-step guide to creating machine-processable metadata repositories and a mapping between metadata and concept-based data access,

• automated data mismatch reconciliation — this is a way of combining data while removing any mismatches between the different data sets.

This type of data layer empowers business users to select data sources suitable for their applications from a pool of silo data sources without the risk of misusing them. They can safely and dynamically define any KRIs based on the latest data, including data from external data sources. As the data layer provides a unified view of selected data sources, it shortens application development.

3.2 Real-time data fusion and data quality platform
This section overviews the real-time data fusion and data quality platform in the light of ORM requirements. The system was originally developed to support real-time business intelligence [4, 8, 9].

System architecture
Figure 4 shows the overall architecture of the system. The semantic data repository includes metadata of all available data sources. These metadata take the form of a centralised ontology which forms the business vocabulary. Currently the system supports both relational and XML data sources. For other data sources, adapters are needed.

[Fig 4: overall architecture of the system; business users and software applications query the automatic fusion engine over the semantic repository, whose ontology defines concepts such as Customer with attributes such as customerName and productName.]

When a data source is plugged into the semantic repository, IT users use a GUI mapping editor, as shown in Fig 5, to publish its metadata in terms of concepts defined in the centralised ontology.

This is different from data warehouse building in that ETL needs to resolve the heterogeneities of data sources, while our system only requires descriptions of the metadata of one data source at a time. It does not require IT users to resolve any mismatches with other data sources. The mismatches between data sources are resolved by the system at run time by analysing the published metadata, removing the need for the repeated ETL processes required by the data warehouse approach. Moreover, the mappings can be done in a distributed fashion. In addition, they can be changed or updated easily, as they are in declarative form and do not involve other data sources. One caveat of this is that the changes made will affect all applications. This could be a good feature or a bad one depending on the applications.

For each data source, the semantic repository includes a semantic description of its contents, and a profile of its data quality, i.e. how good its data is according to certain measures. The profiling is done through business rules. We have used machine learning techniques to try to learn these rules from training data. The system can also use third-party data quality software through APIs.

The fusion engine provides the unified view of all selected data sources. Through this view, users can compose their queries. Upon receiving these queries, the fusion engine decomposes them through the ontology and available mappings into sub-queries, which can be processed by each data source. Using each data source description, it computes a query graph using any possible semantic-based joins, i.e. joins derived through ontology definitions and mappings. Each sub-query is then sent to the data source to retrieve partial results. Once all sub-queries are computed, the fusion takes place using the semantic join graph. Finally, the fused results are provided to the end users or end applications through Web Services.

The user can select any data sources from the semantic repository through GUI interfaces. There are two kinds of information available to assist end users — a semantic view of the data for each data source, and a data quality profile. After viewing these details, users click to select or deselect data sources, and save the selection. Then a unified view of the selected data is computed and presented to them for querying.

Data adapters are software wrappers which provide translation between ontology queries and native data source queries. Pre-built adapters are available for relational databases and XML data sources. For other data sources, adapters have to be built. This includes retrieving data source metadata, and mapping ontology queries to native data source queries.

The mapping editor provides a set of standard transformation tools. Currently these include pass-through trans-
Fig 5 Mapping editor for publishing data source metadata using ontology.
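The decompose-retrieve-fuse cycle described in the system architecture above can be sketched in miniature as follows. This is an illustrative toy, not the actual BT platform: the two in-memory sources, their native column names, the mappings and the join key are all invented for the example.

```python
# Toy semantic fusion: an ontology-level query is decomposed via
# declarative mappings into per-source sub-queries, partial results are
# retrieved, then fused by joining on a semantically equivalent key.

# Two 'silo' data sources with different native column names.
crm = [{"cust_nm": "Ann", "cust_id": 1}, {"cust_nm": "Bob", "cust_id": 2}]
orders = [{"client": 1, "prod": "broadband"}, {"client": 2, "prod": "tv"}]

# Declarative mappings: ontology attribute -> (source, native column).
mappings = {
    "Customer.customerName": ("crm", "cust_nm"),
    "Customer.customerId":   ("crm", "cust_id"),
    "Order.customerId":      ("orders", "client"),
    "Order.productName":     ("orders", "prod"),
}
sources = {"crm": crm, "orders": orders}

# The ontology declares customerId semantically equivalent across the two
# concepts, which yields the join edge in the query graph.
join_key = "customerId"

def fuse(requested):
    """Decompose an ontology-level attribute list into sub-queries,
    run them against each source, and fuse on the semantic join key."""
    # 1. Decompose: group requested attributes (plus join key) per source.
    per_source = {}
    for attr in requested:
        concept = attr.split(".")[0]
        src, col = mappings[attr]
        per_source.setdefault(src, {})[attr] = col
        per_source[src][f"{concept}.{join_key}"] = \
            mappings[f"{concept}.{join_key}"][1]
    # 2. Sub-queries: project the mapped columns from each source.
    partial = {
        src: [{a: row[c] for a, c in cols.items()} for row in sources[src]]
        for src, cols in per_source.items()
    }
    # 3. Fuse the partial results on the semantic key (two-source example).
    (s1, r1), (s2, r2) = list(partial.items())
    k1 = next(a for a in r1[0] if a.endswith(join_key))
    k2 = next(a for a in r2[0] if a.endswith(join_key))
    return [dict(x, **y) for x in r1 for y in r2 if x[k1] == y[k2]]

result = fuse(["Customer.customerName", "Order.productName"])
```

Each fused row carries attributes from both silos, without the user ever seeing the native column names; in the real system this is where data quality profiles and mismatch reconciliation would also be applied.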
Shared, source, user and application ontologies are the critical components for the RTBI data layer to perform context-based mismatch reconciliation at run time. As an example, products could be priced in different currencies. The system could automatically convert all product prices into pounds sterling for UK users, and US dollars for US users. Of course, the user could force the system not to resolve any mismatch, if they prefer.

[Figure: applications and users access data through user and application ontologies built on the shared ontology.]

4.1 The BPM environment within RTBI
The business process management environment is based on the open source JBPM [11] as the core workflow system where the processes are executed and/or simulated. JBPM defines processes through JPDL, a workflow language that allows parallel execution, selective branching and process composition. We have extended the JBPM system to store the audit data trail in a database in a structured way so that it can be used in posterior analysis. A process task is defined as a black box, whose function is performed by an actor (human or system) and whose results are expressed in terms of output attributes. These attributes represent any important information about the tasks, such as timing, cost, or any other attributes that can contribute directly or indirectly to the performance analysis and measurement of the process. Processes are defined with input and output attributes, making it easy to use process composition. Task output attributes can be mapped into process output attributes, connecting the low-level implementation of the process with the external high-level view.
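The black-box task model above can be sketched as follows. This is a simplified illustration of the idea rather than the actual JBPM extension; the task names, attribute names and mapping API are invented for the example.

```python
# Toy sketch: tasks as black boxes with output attributes, mapped up to
# process-level attributes and recorded in a structured audit trail.

class Task:
    """A black box executed by an actor; results are output attributes."""
    def __init__(self, name, actor, run):
        self.name, self.actor, self.run = name, actor, run

class Process:
    def __init__(self, name, tasks, output_map):
        self.name = name
        self.tasks = tasks
        # (task name, task attribute) -> process output attribute
        self.output_map = output_map
        self.audit = []  # structured audit trail, one record per task

    def execute(self, case):
        outputs = {}
        for task in self.tasks:
            attrs = task.run(case)           # e.g. {'duration_min': 15}
            self.audit.append((task.name, task.actor, attrs))
            for (tname, tattr), pattr in self.output_map.items():
                if tname == task.name and tattr in attrs:
                    outputs[pattr] = attrs[tattr]
        return outputs

clear_fault = Process(
    "clear_fault",
    tasks=[
        Task("diagnose", "engineer",
             run=lambda c: {"duration_min": 15, "cost": 40}),
        Task("repair", "field_team",
             run=lambda c: {"duration_min": 95, "cost": 120}),
    ],
    output_map={("diagnose", "duration_min"): "diagnosis_time",
                ("repair", "duration_min"): "repair_time"},
)

result = clear_fault.execute(case={"fault_id": "F-1"})
# A KPI such as 'average time to clear a fault' can then be computed
# from the process outputs or from the stored audit trail.
total_time = result["diagnosis_time"] + result["repair_time"]
```

Because tasks only expose output attributes, a composed process can itself be used as a task in a larger process, which is the composition property the text relies on.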
KPI tool — a tool that is used to define metrics for the resource management system — used to assign tasks to
process and to calculate them against the execution or actors in the system.
simulation data — these metrics link the business
process executions with the high-level goals of the All the process-generated data is captured and stored in
company, the database whether in real execution or in simulation. In
addition, real-time execution data is available through event
business rules — this engine allows the definition of listener interfaces that allow integration into an enterprise
business rules and manages the triggering of the data bus for transporting to other systems or for data
actions associated with them during the execution (real transformation using the data fusion layer. This allows the
or simulated) of the process — these rules represent the monitoring of the process simulation/execution in order to
business constraints set to complement the process evaluate any change to the process, be it through a re-
model created by the designer. engineering effort (e.g. a new resource assignment policy)
or through an unforeseen problem (e.g. failure of suppliers
As shown in Fig 7, the process simulator and the to deliver on time).
execution server constitute the two parallel pillars of the
BPM system. They are fed with the processes created with In order to carry out local process performance analysis,
the process designer and connected to the business goals two tools are provided which allow monitoring of process
and constraints through the KPIs and business rules performance. The first is a KPI tool, which consists of a visual
respectively. editor for KPI definition in terms of task and process
attributes, and an engine that evaluates the current and
The process simulator follows an event-based historic value of a KPI and displays the result as customised
architecture integrated into the workflow system with the charts as shown in Fig 8, where the queue time is monitored
following main components: as part of a client order process.
event scheduler — used to order and fire the events The second performance analysis tool is the business
within the simulation time-scale, rules engine. These rules can be defined and checked against
the process instances to find exceptional situations in
workflow system — ensures that the simulation is
process behaviour.
executed as prescribed by the process model,
process sources — generate new process instances that Three different scopes have been defined according to
constitute the start point of the workflow executions, which the rules can be checked:
task execution estimators — responsible for estimating node — the rule is checked when leaving a certain
the task output attributes and duration, node,
[Figure — architecture of the event-based simulation environment: the workflow system (process database, execution history server, resource policy, action handlers) alongside the simulation system (simulation history database, simulation resource policy, task estimator, process source).]

Three different scopes have been defined according to which the rules can be checked:

• node — the rule is checked when leaving a certain node,
• process — the rule is checked when the process instance has finished,
• global — rule checking is carried out periodically; in this case, the rule can check constraints relevant to the group of processes finished in that period, rather than just one single instance.

The consequence of a rule is defined as an interface. The system provides the user with some basic implementations (e.g. stop the process, send an e-mail); any other action can be carried out by implementing a new class (written in Java) that extends that interface.

4.2 BPM for ORM
The role of BPM in an ORM implementation was mentioned in section 1, where two aspects were identified. The first aspect is the local ORM within an individual process (process-oriented ORM), responsibility for which falls on the process owner. The process owner is accountable not only for the performance of the process (i.e. meeting the KPIs), but also for making sure that potential threats to the process performance, and their related risk events and situations, are identified and analysed, and that measures are put in place to mitigate their effects. It is important that the local risk analysis is done in terms of the operational objectives of the process, in order to focus the effort of risk analysis on the most relevant issues. In a similar manner to the risk analysis within the performance framework, discussed in section 2, threats should be classified as internal or external, and their relevant KRIs identified and measured.

Another important issue in process-oriented ORM is compliance with both legal requirements and local regulations for carrying out process tasks, especially in processes which involve considerable human interaction. Legal requirements are continuously increasing, leaving companies facing huge financial penalties as a result of infringement. On the other hand, people taking short cuts and working ‘around the process’ in carrying out their tasks is one of the major causes of process failure, leading to missed targets and increased customer dissatisfaction.

The second aspect of the BPM/ORM connection is that mitigation of operational risks identified within the overall performance framework (section 2) usually requires actions to be taken at the process level. Examples of such actions are complete process re-design, monitoring of process execution in terms of performance, and ensuring that process actors comply with regulations.

The requirements of these two aspects are quite interrelated and can be serviced with a similar set of tools to provide the required functionality. The BPM environment, discussed in section 4.1 above, provides many of the functions needed to carry out the local process risk analysis and to act on the risk mitigation requests filtering down from the performance framework. As an important first tool, the process simulator allows process owners to evaluate what-if scenarios in the execution of the process, changing process parameters and including risk events. For example, call centre process owners can evaluate the risk of losing a percentage of their staff, e.g. as a result of illness or strike action, by simulating different scenarios for resources and incoming calls, thus enabling them to plan the necessary actions in case such a risk should materialise. Another important tool is the business rules engine, which allows flexible configuration of compliance rules and monitoring of any infringement of these rules in real time, enabling the process owner to take immediate action to rectify the situation and to prevent the results of the infringement from propagating further and causing serious damage. A further capability that facilitates the deployment of mitigating actions is the flexibility of process design and the rapid deployment of a new version of a process into the operational environment, in order to minimise the damaging effects of a developing risk situation. Finally, the KPI editing and monitoring tools can also be used for building and monitoring process KRIs in a systematic way, so that process managers can easily and quickly manage performance and risk in an integrated manner.
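To make the rule-consequence mechanism from section 4.1 concrete, here is a minimal sketch in Java. The interface and class names are assumptions for illustration only, not the actual API of the system described in this paper.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the rule-consequence mechanism: the consequence of a
// rule is an interface, the system ships basic implementations (stop the
// process, send an e-mail), and users add behaviour by implementing the same
// interface themselves.
class RuleConsequences {
    // Minimal stand-in for a running process instance.
    static class ProcessInstance {
        private boolean stopped = false;
        void stop() { stopped = true; }
        boolean isStopped() { return stopped; }
    }

    // Invoked when a rule detects an infringement on a process instance.
    interface RuleConsequence {
        void apply(ProcessInstance instance);
    }

    // Basic implementation: stop the offending process instance.
    static class StopProcess implements RuleConsequence {
        public void apply(ProcessInstance instance) {
            instance.stop();
        }
    }

    // User-defined consequence: queue an alert message (standing in for the
    // e-mail notification mentioned in the text).
    static class EmailAlert implements RuleConsequence {
        final List<String> outbox = new ArrayList<>();
        public void apply(ProcessInstance instance) {
            outbox.add("Compliance rule infringed; immediate action required");
        }
    }
}
```

Under this design, when the rules engine detects an infringement at node, process or global scope, it would simply call `apply` on every consequence configured for the infringed rule.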
5. Conclusions
This paper has discussed the different components of RTBI and how these can work towards achieving a successful implementation of ORM based on a continuum of objective setting, monitoring and optimisation. We have broadly followed the COSO framework for enterprise risk management, which has established itself as the de facto standard for ORM implementation.

The business performance framework is the first component within RTBI, and is responsible for building performance models of the organisation, allowing the evaluation of the performance parameters of the business, as well as providing the ability to carry out what-if analysis and target optimisation. These facilities allow business managers to have real-time visibility of the status of their targets, and to assess the effects of any actions they want to take on performance. The same tools can be used to assess performance given external and internal risks, thus providing an integrated view of performance and risk.

RTBI and ORM cannot succeed without the availability of information which is clean, timely and relevant. Without it, operational risk management could be out of date or, in some cases, out of synchronisation with the business cycle, with serious consequences. However, the diversity and typically ad hoc implementation of data sources within large enterprises make the task of making the data available very difficult. This is where the second component of RTBI comes into play — the data fusion layer. Based on a common data model, this layer empowers business users with the ability to integrate data from any available data source through conceptual views of the underlying sources. This is achieved through metadata, abstraction and the separation of low-level data from their semantics. IT users can thus focus on publishing their data through ontologies, i.e. conceptual views. As the system retrieves data from the data sources directly, the freshness of the data is guaranteed.

This was followed by a discussion of how the third component, BPM, fits into the scheme of RTBI and ORM. We presented the tools that assist process owners in carrying out process-oriented ORM, and discussed the role of BPM in risk mitigation.

Despite the development of advanced statistical techniques for ORM, it is currently the view of some experts that, for a large business, the most meaningful analysis must be qualitative in nature, because of the difficulty of establishing accurate quantitative relationships between risk events and strategic business goals. Although we acknowledge that qualitative analysis has an important role to play, we are of the view that the source of the problem is that quantitative risk analysis is not carried out with the help of a complete performance framework such as RTBI. RTBI allows detailed modelling of the business, based on solid information provision and a two-way link to the business process layer. That said, some of the relationships within the framework can be qualitative, and are best described by qualitative modelling methods such as those provided by the field of soft computing.

References
1 ‘International Convergence of Capital Measurement and Capital Standards: A Revised Framework’, Basel Committee on Banking Supervision (June 2003) — www.bis.org/publ/bcbs107.pdf
2 The Committee of Sponsoring Organisations of the Treadway Commission (COSO) — www.coso.org/
3 ‘Enterprise Risk Management — Integrated Framework’, published by the COSO organisation (2004).
4 Azvine B, Cui Z, Nauck D and Majeed B: ‘Real Time Business Intelligence for the Adaptive Enterprise’, in Proceedings of the IEEE Joint Conference on E-Commerce Technology (CEC’06) and Enterprise Computing, E-Commerce and E-Services (EEE’06), San Francisco, pp 222—229 (June 2006).
5 Vinella P and Jin J: ‘A Foundation for KPI and KRI’, in Davis E (Ed): ‘Operational Risk — Practical Approaches to Implementation’, Risk Books (2005).
6 Kaplan R S and Norton D P: ‘Balanced Scorecard: Translating Strategy into Action’, Harvard Business School Press (1996).
7 Nauck D, Spott M and Azvine B: ‘SPIDA — a novel data analysis tool’, BT Technol J, 21, No 4, pp 104—112 (October 2003).
8 Cui Z, Jones D and O’Brien P: ‘Semantic B2B Integration: Issues in Ontology-based Approaches’, ACM SIGMOD Record, Special Issue on ‘Data Management Issues in E-commerce’ (March 2002).
9 Cui Z, Tamma V and Bellifemine F: ‘Ontology management in enterprises’, BT Technol J, 17, No 4, pp 98—107 (October 1999).
10 Cui Z, Shepherdson J W and Li Y: ‘An ontology-based approach to eCatalogue management’, BT Technol J, 21, No 4, pp 76—83 (October 2003).
11 Open source jBPM — www.jboss.com/products/jbpm/
Zhan Cui received a BSc (1981) and an MSc (1985) in Computer Science from Jilin University of China, and a PhD in Artificial Intelligence from Academia Sinica in 1988. Between 1989 and 1996, he worked as a research fellow in the areas of artificial intelligence and databases, not only for the Universities of Edinburgh and Leeds, but also for the Imperial Cancer Research Fund (now Cancer Research UK), and as a lecturer for the Universities of Swansea and Liverpool. He joined BT in October 1996. Since then he has been working on R&D projects in software agents, ontology, knowledge management and the Semantic Web. He is a recognised expert in ontology-based approaches to semantic integration of disparate information sources. He has authored more than 50 technical papers and is an inventor on more than 12 patents. He is currently working on automatic taxonomy generation from text, text categorisation, and fusing data from unstructured data sources such as Web pages and documents.

He continued working in Karlsruhe until 2000 as a research assistant in the Innovative Computing Group of Prof G Goos. He completed his PhD in Computer Science in November 2000 with a dissertation on ‘Reasoning with Fuzzy Terms’, and joined BT in January 2001, where he works as a Principal Researcher in the computational intelligence research group. He has published numerous papers in his research area, and is also a regular member of programme committees for related conferences and a reviewer for scientific journals. His current research interests include soft computing, machine learning and data mining. Since joining BT, he has worked on several intelligent data analysis projects such as travel time prediction, real-time business intelligence tools and a platform for automating data analysis, for which he received a BCS medal in the category ‘IT Developer of the Year — Infrastructure’ in 2005.