Article in European Journal of Operational Research, May 2016. DOI: 10.1016/j.ejor.2016.05.041


A Multi-Criteria Approach for Hospital Capacity Analysis
R.L. Burdett, E. Kozan
School of Mathematical Sciences, Queensland University of Technology, Australia

Abstract: Hospitals are critical elements of health care systems, and analysing their capacity and
productivity is an important topic. To perform a system-wide analysis of public hospital resources
and capacity, a multi-objective optimization (MOO) approach is proposed. This approach identifies
the theoretical capacity of the entire hospital and facilitates sensitivity analyses, for example of the
patient case mix (PCM). Such an analysis is necessary because the competition for hospital
resources, for example between different patient types and hospital units, strongly influences a
hospital's productivity. The MOO approach has been extensively tested on a real-life case study and
its worth is demonstrated. Within the MOO approach, the epsilon constraint method (ECM) has been
utilized. To solve real-life applications with a large number of competing objectives, however, it
was necessary to devise new and improved algorithms. In addition, a separable programming
approach was developed to identify the best solution. Multiple optimal solutions are also obtained via
the iterative refinement and re-solution of the model.

Keywords: capacity analysis, epsilon constraint methods, health care, hospitals, hospital resource
planning, multi-criteria, theoretical capacity

1. Introduction

Hospitals are critical elements of the health care system. In recent years the demand for their
services has increased greatly and in response they have become larger and more sophisticated.
Access to hospitals and to health care services is very competitive worldwide. Public hospitals for
instance are rarely constructed for specific services and typically must treat many different types of
patients. There are a variety of different competitions that may be characterised. How this
competition is regulated or otherwise decided, greatly affects the capacity of a hospital and the
outcomes of any analysis of hospital capacity. This article focusses upon that aspect and investigates
whether a multi-objective capacity analysis (MOCA) can be used to identify the theoretical capacity
of a hospital when there are competing capacity metrics. Theoretical capacity is an upper bound and
describes the best possible performance of the hospital in terms of productivity. Public hospitals are
the main focus of this work; private hospitals, however, are equally relevant and are not excluded.
Given the increased pressures and challenges placed upon hospitals worldwide, this article is
believed to be timely.
There are many ways to regulate competition and a multi-objective approach is believed to be
the best way to perform a sensitivity analysis of hospital capacity. That hypothesis is tested in this
article. The significance of a multi-objective approach is that a variety of competing capacity metrics
can be incorporated. In contrast, an approach involving a single objective, for instance the total
number of patient cases, gives no separate emphasis or meaning to patients or services of
different types, and is avoided here. As few if any hospitals operate with a single patient type or
service, and those patients are not of equivalent worth, a multi-objective approach is evidently superior.
The format of this article is as follows. In Section 2 a brief review of the literature is presented.
In Section 3 the multi-objective framework is introduced and appropriate solution techniques are
then developed in Section 4. A numerical investigation has been provided in Section 5 and
demonstrates the application of the proposed MOCA to real life. A summary of this article's
contributions and the conclusions are provided in Section 6.

2. Literature Review

In this section, research concerning hospital planning is first discussed, and then approaches for
performing multi-objective optimization are reviewed.

2.1. Hospital Planning

Our review of the literature indicates that approaches for identifying hospital capacity from a multi-
objective viewpoint are limited. In past research, a variety of different hospital capacity planning
problems have been proposed. These differ greatly. Evidently there is no single “standard” hospital
capacity planning problem. Those planning problems have been addressed in a variety of different
ways as discussed in Rechel et al (2010). For example, some approaches have been purely analytical,
and others have been empirical or simulation based. Recent articles that are notable, insomuch as
they are relevant to the focus of this article, include Abdelaziz and Masmoudi (2012), Ma and
Demeulemeester (2013), Vanberkel et al (2014) and Dellaert et al (2015). For example, Abdelaziz and
Masmoudi (2012) developed a multi-objective stochastic mathematical programming model to
determine how many beds should be assigned to hospital departments in order to satisfy
random demand. In their bed capacity management approach, three objectives were considered,
namely the cost of creating a new bed, and the number of physicians and nurses working in each
hospital. A recourse approach and a goal programming approach were used to transform the multi-
objective stochastic program into a certainty equivalent program. Ma and Demeulemeester (2013)
developed an integrated and iterative multi-level approach for hospital planning. Their approach
consists of three phases. In the first phase (i.e. case mix planning) an optimal patient mix and volume
are selected that bring the maximum profit. Then bed capacity is reallocated and a master surgery
schedule is created. In the third phase simulation is performed to evaluate operational policies.
Optimization models are developed to facilitate the first two phases. Vanberkel et al (2014)
considered how to choose patient case mixes in hospitals in order to achieve the greatest benefit,
and to achieve a specified DRG mix. Hospital capacity and case mix decisions are jointly considered
to facilitate joint decision making over a long term planning horizon. Hospitals are modelled as a
queueing system and an integer linear programming model is formulated. The model is solved using
a time discretization and an approximate solution approach (i.e. a heuristic). Dellaert et al (2015)
considered the creation of tactical plans of elective patient surgeries and the utilization of hospital
resources, in order to increase hospital efficiency. The tactical plan is a description of the number of
patients in each category to be operated on for each day of the horizon. They developed methods to
determine the operational performance of tactical plans in hospitals. For example, their method
computes exact waiting time distributions for patients. To reduce waiting times, slack planning and
smoothing have been proposed. Four resources were considered, namely operating theatres, beds
and nurses in the ICU, and beds in a medium care unit. Hence this approach is not holistic and only
focusses upon one part of the entire hospital system.

2.2. Multi-Criteria Optimization

The epsilon constraint method (ECM) is one of the most popular methods for solving multi-objective
optimization problems (MOOP) and for generating the set of non-dominated solutions. In this article it
is used as the basis of the techniques we have developed to perform our multi-objective hospital
capacity analysis. In recent years, a number of articles have applied it to real life applications and
have considered ways to improve it. Laumanns et al (2006) developed an adaptive scheme to
approximate the Pareto set. In their approach the (m−1)-dimensional hypergrid is generated
dynamically and is stored as a matrix of vectors. The set of searched regions and infeasible regions is
updated as the search progresses. Ehrgott and Ruzika (2008) considered weaknesses of the epsilon
constraint method. In response they introduced slack variables in the formulation and elasticized
the constraints. Mavrotas (2009) proposed several augmented versions to reduce redundant iterations,
and to accelerate the search. The production of weakly optimal Pareto solutions is avoided in their
approach. Berube et al (2009) applied an epsilon constraint method to a bi-objective traveling
salesman problem. Özlen and Azizoglu (2009) developed an algorithm to generate all non-
dominated points for MIPs based on the epsilon constraint method. Their method identifies
individual objective efficiency ranges. These are used to improve the search for non-dominated
solutions. Aghaei et al (2011) applied multi-objective techniques to an electricity market clearing
problem. A lexicographic optimization and augmented epsilon constraint method was applied. That
approach was compared with the traditional epsilon constraint method and found to be greatly
superior. Kirlik and Sayin (2014) introduced an algorithm that involves a new partitioning
mechanism. There is no limit on the number of objectives that can be handled by their approach;
however, they conclude that as the problem size increases the computational requirements are
unrealistically high. Klamroth et al (2015) investigated how improved local upper bounds can be
obtained for epsilon constraint like methods in order to improve the search for non-dominated
solutions. Two incremental approaches were presented.
Other approaches for solving MOOP exist. For instance Lokman and Köksalan (2013)
presented two algorithms for multi-objective integer programming. Their search procedure is an
extension of a previous approach by Sylva and Crema (2007). They introduce binary variables and
additional constraints to exclude regions dominated by previously generated points.

3. Multi-Criteria Hospital Capacity Analysis (MOHCA)

A multi-objective capacity analysis (MOCA) is presented here for hospitals (i.e. a MOHCA). This
approach builds upon the research in Burdett and Kozan (2006, 2008) and Burdett (2015). In those
articles, optimization approaches were also formulated for the identification of theoretical
capacity in several other domains. The underlying mathematical model that is used as the basis of
our MOHCA is now reviewed.

3.1. The Hospital Capacity Model (HCM)

This section’s HCM is a mixed integer linear programming (MILP) formulation. This model is holistic
as it includes the main hospital elements, such as the recovery wards, operating theatres, intensive
care units, and the emergency department. The model’s purpose is to determine the maximum
number of patient treatments that can be performed over a specified period of time T, subject to a
variety of technical constraints. The solution of this model provides a plan that describes how the
hospital’s resources are used. The plan specifies the number of patients that can be processed of
each type 𝛾 ∈ 𝛤. It also determines where those patients are treated within the hospital. In other
words, it describes all resource assignments and resource utilisations.
To apply the HCM, detailed information concerning the types of activities 𝜙 ∈ 𝛷 and their
respective processing times are required for different patient types 𝛾 ∈ 𝛤. Every patient that visits
the hospital receives some type of treatment or care or else participates in some type of diagnostic
or assessment activity. These activities all utilize hospital capacity and are performed by hospital
units. Each hospital unit 𝑢 ∈ 𝑈 is associated with a particular medical or surgical specialty 𝑠 ∈ 𝑆. For
patients of type γ, a variety of patient care plans (PCP) eventuate. They are denoted by Ψ_γ. Each PCP
ψ ∈ Ψ_γ is defined in the following way: ψ = {(φ, u, t, r) | φ ∈ Φ, u ∈ U, t ∈ ℝ}. Each tuple
(φ, u, t, r) describes the activity type, the hospital unit performing the activity, the time to perform
the activity, and the set of resources required. A PCP task is denoted by o_{γ,ψ,k} and the activity, unit,
and time required are denoted by φ_{γ,ψ,k}, u_{γ,ψ,k}, t_{γ,ψ,k}. The sets of treatment areas and spaces are
denoted by w ∈ W and π ∈ Π respectively. Unit-activity tuples (u, φ) are used to describe the places
(i.e. areas) where the activity can be performed. It is assumed that each hospital unit has specific
areas to perform specific types of activities. Hence the following mappings must be defined:
(u, φ) → w and (φ, w) → π.
An explicit mathematical description is now given by equations (1)-(10).

Maximize ℂ = Σ_{∀(γ,ψ)∈℘} n^2_{γ,ψ}   [Total number of patients treated]   (1)
Subject to:
n^1_γ = Σ_{ψ∈Ψ_γ} n^2_{γ,ψ}   ∀γ ∈ Γ   [Patients treated by type]   (2)
n^2_{γ,ψ} = Σ_{w∈W} α_{γ,ψ,k,w}   ∀(γ,ψ,k) ∈ ℘^1   [Patients treated by PCP]   (3)
CMV = Σ_{γ∈Γ} Σ_{∀(ψ,ρ)∈μ^2_γ} |n^2_{γ,ψ} − ρ·n^1_γ| + Σ_{(γ,ρ)∈μ^1} |n^1_γ − ρ·𝔸| ≤ ℧   [Case mix violation]   (4)
Σ_{∀(γ,ψ,k)∈℘^1 | π∈Π^2_{γ,ψ,k}} β_{γ,ψ,k,π}·t_{γ,ψ,k,π} ≤ T_π   ∀π ∈ Π   [Space utilization restrictions]   (5)
α_{γ,ψ,k,w} = Σ_{π∈Π^3_w} β_{γ,ψ,k,π}   ∀(γ,ψ,k) ∈ ℘^1, ∀w ∈ W   [Comparative relationship]   (6)
α_{γ,ψ,k,w} = 0   ∀(γ,ψ,k) ∈ ℘^1, ∀w ∈ W | w ∉ W^2_{γ,ψ,k}   [No assignment to ward]   (7)
β_{γ,ψ,k,π} = 0   ∀(γ,ψ,k) ∈ ℘^1, ∀π ∈ Π | π ∉ Π^2_{γ,ψ,k}   [No assignment to space]   (8)
α_{γ,ψ,k,w} ≥ 0   ∀(γ,ψ,k) ∈ ℘^1, ∀w ∈ W   [Positivity requirement]   (9)
β_{γ,ψ,k,π} ≥ 0   ∀(γ,ψ,k) ∈ ℘^1, ∀π ∈ Π   [Positivity requirement]   (10)

In this model n^1_γ and n^2_{γ,ψ} are respectively the number of patients treated of type γ, and with patient
care plan (PCP) ψ. The sets of wards and spaces respectively that may be used for o_{γ,ψ,k} are denoted by
W^2_{γ,ψ,k} and Π^2_{γ,ψ,k}. The set of treatment spaces within ward w is denoted by Π^3_w. The number of
patients of type γ assigned to space π and ward w respectively for task k of PCP ψ is β_{γ,ψ,k,π} and
α_{γ,ψ,k,w}. The time availability of treatment space π is given by T_π. The proportion of patients of type
γ who have PCP ψ is denoted by μ^2_γ = {(ψ, ρ)}. The proportions ρ must be chosen so that
Σ_{∀(ψ,ρ)∈μ^2_γ} ρ = 1 ∀γ ∈ Γ. Similarly μ^1 = {(γ, ρ)} gives the proportion of patients that are of type γ,
such that Σ_{(γ,ρ)∈μ^1} ρ = 1. In this formulation it is necessary to define the following sets:

℘ = {(γ, ψ) | ∀γ ∈ Γ, ∀ψ ∈ Ψ_γ};
℘^1 = {(γ, ψ, k) | (γ, ψ) ∈ ℘, ∀k ∈ [1, K_{γ,ψ}]};
℘^2 = {(γ, ψ, k, w) | (γ, ψ, k) ∈ ℘^1, w ∈ W^2_{γ,ψ,k}};
℘^3 = {(γ, ψ, k, π) | (γ, ψ, k) ∈ ℘^1, π ∈ Π^2_{γ,ψ,k}};   (11)

After the model is solved, the following outputs are useful and may also be computed:

n^3_u = Σ_{∀(γ,ψ)∈℘ | u∈U^1_{γ,ψ}} n^2_{γ,ψ}   ∀u ∈ U, where U^1_{γ,ψ} = {u_{γ,ψ,k} | k ∈ [1, K_{γ,ψ}]}   (12)
n^4_{s′} = Σ_{∀(γ,ψ)∈℘, u∈U^1_{γ,ψ} | s_u = s′} n^2_{γ,ψ}   ∀s′ ∈ S   (13)
n^5_{φ′} = Σ_{∀(γ,ψ,k)∈℘^1 | φ_{γ,ψ,k} = φ′} n^2_{γ,ψ}   ∀φ′ ∈ Φ   (14)

These equations describe the number of patients treated by different units, of different specialties,
and of different activities.
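For illustration, the aggregations in equations (12)-(14) can be sketched as a simple roll-up of the solved PCP counts. The data below are entirely hypothetical (the patient types, PCPs, units and specialties are invented for the example), and a plain dictionary-based computation stands in for the model output:

```python
from collections import defaultdict

# A sketch (hypothetical data) of the post-solution aggregations in equations
# (12)-(14): the solved PCP counts n2[(gamma, psi)] are rolled up to hospital
# units (n3), specialties (n4) and activities (n5).

# Solved PCP counts n^2_{gamma,psi} -- assumed output of the capacity model.
n2 = {("surgical", "pcp_a"): 120,
      ("surgical", "pcp_b"): 80,
      ("medical", "pcp_c"): 200}

# Tasks of each PCP: one (activity, unit) pair per stage k.
pcp_tasks = {("surgical", "pcp_a"): [("surgery", "theatres"), ("recovery", "ward1")],
             ("surgical", "pcp_b"): [("surgery", "theatres"), ("icu_stay", "icu")],
             ("medical", "pcp_c"): [("assessment", "ward2")]}

specialty_of_unit = {"theatres": "surgery", "ward1": "surgery",
                     "icu": "critical_care", "ward2": "general_medicine"}

n3 = defaultdict(int)  # patients treated by each unit, cf. (12)
n4 = defaultdict(int)  # patients treated by each specialty, cf. (13)
n5 = defaultdict(int)  # activities performed, by type, cf. (14)
for key, count in n2.items():
    units = {u for (_, u) in pcp_tasks[key]}           # U^1_{gamma,psi}
    for u in units:
        n3[u] += count
    for s in {specialty_of_unit[u] for u in units}:    # count each patient once per specialty
        n4[s] += count
    for phi, _ in pcp_tasks[key]:                      # every task contributes to (14)
        n5[phi] += count
```
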
Several model variants have been developed. The first assigns PCP tasks to wards and other
treatment areas. The primary decision variable hence describes the ward to which each stage of the PCP
is assigned and how many patients have that treatment. The second model is more detailed and
assigns work to individual treatment spaces within treatment areas. Hence the decision variable of
that model describes the treatment space to which each stage of the PCP is assigned and how many
patients have that treatment. The third model builds upon the second and includes staff and other
medical equipment. The models have a variety of constraints. There are constraints to regulate the
patient case mix and to ensure correct assignments are made. The most important constraints

however are those that restrict resource use to be less than or equal to the time available for each
resource.
In summary this HCM assigns work to resources such as beds and treatment spaces subject to
given time availabilities. As it attempts to fully utilize (i.e. saturate) each hospital resource, it is a
form of bottleneck analysis. Resources that are fully utilized can be identified and are deemed
bottlenecks; they restrict the system and limit further system output.
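The saturation idea can be illustrated on a miniature instance. The sketch below uses hypothetical processing times and availabilities, with brute-force enumeration standing in for the MILP solver: it maximizes the number of treated patients and then flags the fully utilized resources as bottlenecks.

```python
# A minimal sketch of the saturation / bottleneck idea on a hypothetical
# two-resource, two-patient-type instance. Brute-force enumeration stands in
# for the MILP solver: maximize the number of treated patients subject to
# resource time availabilities T_pi, then flag fully utilized resources.

avail = {"theatre": 480.0, "ward": 960.0}            # minutes available (T_pi)
use = {"elective": {"theatre": 60, "ward": 120},     # minutes needed per patient
       "emergency": {"theatre": 90, "ward": 60}}

best, best_mix, best_load = -1, None, None
for a in range(9):                                   # elective patients
    for b in range(9):                               # emergency patients
        load = {r: use["elective"][r] * a + use["emergency"][r] * b for r in avail}
        if all(load[r] <= avail[r] for r in avail) and a + b > best:
            best, best_mix, best_load = a + b, (a, b), load

# Resources loaded to their full availability restrict any further output.
bottlenecks = [r for r in avail if best_load[r] == avail[r]]
```

In this toy instance the best mix saturates both resources, so both are bottlenecks; changing the per-patient times shifts which resource binds first.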
It is worth mentioning that multi-commodity or transportation flow models are often used to
perform capacity analysis. In theory those types of models could conceivably be developed here. The
HCM however is not that type. There are a variety of reasons for this. First, this problem appears to
be more complex. There are a variety of advanced features that need to be added that are not easily
incorporated into traditional flow models and motivate a bespoke approach. Second, past research
on railway capacity analysis indicates that flow models require a greater number of decision
variables than the type of model described in this article. In those models, there are decision
variables to describe the flow across every arc, and conservation of flow constraints to ensure that
flows do not vanish. Often, the flow on each arc is dictated by the flow chosen for predefined paths
and corridors. Hence across those paths and corridors, the flow is the same on each arc and some
decision variables are redundant. In hospitals, patient care plans are equivalent to paths and
corridors. Lastly, it should be noted that in flow models the network has capacitated arcs. Those
capacities are input and are typically static. That is not the case here: the capacities depend on the
mix of patient treatments performed.

3.2. Objectives to Regulate Competition

Hospitals need to treat many different types of patients with many different types of treatments.
Access to hospital resources is competitive as demand is high, and resource levels are low in
comparison. There are a variety of different competitions that may be characterised, and how this
competition is regulated greatly affects the outcome of any analysis of hospital capacity. One
approach to regulate competition is to define a proportional patient case mix (PCM) and to add
constraints to the model that enforce these proportions within the actual case mix. That
approach is workable but imperfect, because the hospital has a different level of capacity for every
possible proportional PCM. In addition some proportional PCM can result in very low capacity levels
due to the presence of bottlenecks, i.e. resources with limited availability and capability. Hence
dominated solutions can be identified. In response a multi-objective approach is advocated here to
regulate competition and to perform a complete analysis of the criteria space. In this approach the
PCM constraints are removed from the capacity model. Otherwise the model’s constraints are
unaltered. Without loss of generality the set of objectives to be optimized is {Z_m ∀m ∈ [1, M]}.
For hospitals, some specific examples of Z_m are given in Table 1. Only one of these sets of
objectives is selected at a time.

Table 1
Maximization objectives for hospitals

Competition            Objectives                   Equivalent                                                Label
Type v type            {n^1_γ ∀γ ∈ Γ}               {Z_m = n^1_m ∀m ∈ [1, |Γ|]}                               (O1)
PCP v PCP              {n^2_{γ,ψ} ∀(γ,ψ) ∈ ℘}      {Z_m = n^2_{F(m)} ∀m ∈ [1, |℘|]}, where F(m): m → (γ,ψ)   (O2)
Unit v unit            {n^3_u ∀u ∈ U}               {Z_m = n^3_m ∀m ∈ [1, |U|]}                               (O3)
Specialty v specialty  {n^4_s ∀s ∈ S}               {Z_m = n^4_m ∀m ∈ [1, |S|]}                               (O4)
Activity v activity    {n^5_φ ∀φ ∈ Φ}               {Z_m = n^5_m ∀m ∈ [1, |Φ|]}                               (O5)

The first set regulates the competition between different types of patients. For instance one
possibility is the competition between elective surgical patients and emergency surgical patients.
Another example is surgical inpatients versus medical outpatients. The second set of objectives
regulates competition between all the different PCP. In (O2): F = [B_1 | B_2 | … | B_{|Γ|}] where B_γ =
[(γ, 1), (γ, 2), …, (γ, |Ψ_γ|)]. The third and fourth are similar. They regulate competition between
hospital units or between specialties. However it should be noted that in most hospitals each unit
generally focusses upon a specific specialty. Patient type characterizations may also be made
according to medical or surgical speciality; hence (O1) would be equivalent to (O4) in that situation. The
fifth set of objectives (i.e. given by O5) regulates the competition between different health care
activities.
Upon reflection, it is just as important to maximize the overall number of treatments as it is to
maximize the number of treatments performed for instance of each specialty or of each patient
type. In that event, an additional objective could be added, which is the maximization of the total
number of treatments (i.e. ℂ). The addition of this objective (i.e. in this way) however implies that it
is of equal value to the other objectives. This additional objective can be added to (O1), (O2), (O3)
and (O4). For this scenario an upper bound on the capacity of the hospital is required. It is denoted
by ℂ̄ and is computed by ℂ̄ = Σ_γ n̄^1_γ, where n̄^1_γ is an upper bound on the number of patients treated
of type γ. Numerical testing has shown that the introduction of this additional objective does have
an effect on the MOHCA and results in a different set of non-dominated solutions.
The multi-objective scenario described in (O2) may be problematic as the total number of
objectives may be excessive. This may also be true of (O1) if many patient types are defined.
Evidently objectives (O3) and (O4) may be most applicable in real life.
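The notion of non-dominance that underpins all of these objective sets can be sketched compactly. The filter below uses hypothetical candidate case mixes, each a vector of maximization objectives Z_m, and retains only the Pareto optimal points:

```python
# Sketch: filtering candidate case mixes down to the non-dominated (Pareto
# optimal) ones. Each point is a vector of maximization objectives Z_m, e.g.
# patients treated per specialty as in (O4). The candidates are hypothetical.

def dominates(p, q):
    """True if p is at least as good as q everywhere and strictly better somewhere."""
    return all(a >= b for a, b in zip(p, q)) and any(a > b for a, b in zip(p, q))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

candidates = [(10, 4), (8, 8), (4, 10), (7, 7), (10, 3)]
front = pareto_front(candidates)   # (7, 7) and (10, 3) are dominated
```
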

3.3. Hierarchies of Multi-objectives

It is possible for secondary “sub competitions” to occur. In theory these could also be regulated. For
example consider the competition between patient types at one level and the secondary
competitions between the PCP within each patient type categorization at another lower level:

Level 1: Maximize {n^1_γ ∀γ ∈ Γ}   [Maximize patients treated of each type]   (15)

Level 2: Maximize {n^2_{γ,ψ} ∀ψ ∈ Ψ_γ} ∀γ ∈ Γ   [Maximize PCP cases of each patient type]   (16)

Each Pareto optimal solution obtained at Level 1 would have its own set of Level 2 Pareto optimal
solutions. The collection of all those solutions would constitute the set of Pareto optimal solutions for
the combined hierarchical problem. The situation where a second criterion is to maximize the total
number of treatments is also important and should be considered:

Level 1: Maximize {n^1_γ ∀γ ∈ Γ}   [Maximize patients treated of each type]   (17)

Level 2: Maximize ℂ   [Maximize total number of patients]   (18)

The additional criterion here is assumed not to be of equal value and hence it should not be
added at Level 1. In this situation, solutions of minimal Euclidean distance can be extracted and then
simply sorted in terms of ℂ.

4. Solving the Multi-Objective Model

This section focusses upon the solution of the multi-objective models formulated in Section 3.2.
Without loss of generality the goal of multi-objective optimization (MOO) is to identify the set of
non-dominated “Pareto” optimal solutions (i.e. the Pareto frontier). Pareto optimality is a well-
established concept: it is a state in which it is impossible to make any one objective better without
making at least one objective worse. The Pareto frontier can be
pre-computed in order to facilitate the evaluation of different preferences at a later time. Once the
Pareto frontier has been obtained, solutions can be selected by decision makers in a variety of ways.
The particular shape of the frontier can influence that choice. An important component of multi-
objective optimization is the utopia point. This is an ideal solution where each objective attains its
highest value. How close Pareto optimal solutions are to that ideal point is important when choosing
a solution. An alternative to determining the Pareto frontier is hence to solve for the best solution.
In other words, the solution that is closest to the utopia point (i.e. of minimum distance) can be
identified.
A variety of techniques can be used to generate the Pareto frontier and to solve the multi-
objective models. In this article variants of the standard epsilon constraint method (ECM) are
utilized. The ECM is a good approach for approximating the Pareto frontier and that is why it has
been chosen (Marler and Arora, 2004). An adaptive version however is required, as the traditional
ECM performs poorly when the number of competing objectives is large. For example it is very easy
for infeasible mesh points to be created, and the number of mesh points required quickly becomes
prohibitive. For the aforementioned scenarios in Table 1 it is evident that there may be a large
number of competing objectives. This makes the identification of the Pareto frontier
computationally challenging for health care applications.
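The mechanics of the standard ECM can be sketched on a toy bi-objective instance. Below, a small hypothetical list of feasible (Z_1, Z_2) case mixes stands in for the HCM, which in a real application would be re-solved as a MILP at each value of epsilon:

```python
# Sketch of the standard epsilon constraint method (ECM) on a toy bi-objective
# problem: maximize Z1 while Z2 is bounded below by a sweeping epsilon. The
# feasible set is a small hypothetical list of (Z1, Z2) case mixes.

feasible = [(0, 10), (2, 9), (4, 7), (6, 4), (8, 0), (5, 5)]

def solve(eps):
    """Maximize Z1 subject to Z2 >= eps; return None if infeasible."""
    ok = [p for p in feasible if p[1] >= eps]
    return max(ok, key=lambda p: p[0]) if ok else None

frontier = []
for eps in range(0, 11):              # sweep epsilon over the range of Z2
    p = solve(eps)
    if p is not None and p not in frontier:
        frontier.append(p)
```

With a fine enough epsilon grid every point of this small frontier is recovered; the difficulty noted above is that the grid grows exponentially with the number of objectives.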

4.1. Bound Analysis

The ECM traditionally involves the systematic solution of the multi-objective model for different
values of the parameter ε_m. Constraints of the following form are added: Z_m ≥ ε_m for m = 2, …, M. In
order to properly apply the epsilon constraint method, the ranges of at least M − 1 objective
functions are required. To obtain these values a payoff table P is typically constructed. Lexicographic
approaches such as the one described in Mavrotas (2009) have been shown to be superior for this
bound analysis as they eliminate weakly optimal points and provide a smaller, denser region for
searching. That approach involves the repeated solution of the multi-objective model with each
objective and the inclusion of additional epsilon-type constraints. The complexity of the
lexicographic bound analysis is O(M^2). A summary of the lexicographic approach is as follows:

∀m ∈ [1, M]: P_{m,m} = Maximize Z_m(x) such that x ∈ X

∀m, m′ ∈ [1, M] | m′ ≠ m: P_{m,m′} = Maximize Z_{m′}(x) such that x ∈ X and Z_m(x) ≥ P_{m,m}
(LB_m, UB_m) = (min_{m′∈[1,M]} P_{m′,m}, P_{m,m})

For our MOHCA, the lower bounds should be usable as a starting point for the type of adaptive
search strategy previously utilized in Burdett et al (2015). However preliminary numerical testing has
shown that approach to be deficient. For example we have found that a feasible starting point is not
obtained. Hence the criteria space cannot be truncated in this way. It is necessary for some objective
function values to be zero in order to obtain a feasible patient case mix.
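The payoff-table computation above can be sketched on a tiny discrete feasible set of hypothetical objective vectors, with enumeration standing in for the optimization (for brevity the lexicographic chain is truncated to a single follow-up step per objective):

```python
# Sketch of the lexicographic payoff table of Section 4.1 on a tiny discrete
# feasible set. P[m][m2] is the value of Z_{m2} once Z_m has been fixed at its
# individual optimum P[m][m]. All objective vectors are hypothetical.

feasible = [(6, 1, 3), (4, 5, 2), (2, 3, 6), (5, 4, 4)]
M = 3

P = [[0] * M for _ in range(M)]
for m in range(M):
    P[m][m] = max(p[m] for p in feasible)              # optimize Z_m on its own
    tied = [p for p in feasible if p[m] >= P[m][m]]    # enforce Z_m(x) >= P_{m,m}
    for m2 in range(M):
        if m2 != m:
            P[m][m2] = max(p[m2] for p in tied)        # lexicographic follow-up step

LB = [min(P[m2][m] for m2 in range(M)) for m in range(M)]  # column minima
UB = [P[m][m] for m in range(M)]                           # diagonal entries
```
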

4.2. Identifying Solutions of Minimum Euclidean Distance

To identify the Pareto solution that is closest to the utopia point, the capacity model with the
following Euclidean distance metric can be solved:

Minimize D^2 = Σ_m (1 − Z̃_m)^2, where Z̃_m = (Z_m − LB_m)/(UB_m − LB_m) ∈ [0, 1]   (19)

This objective replaces the M separate ones previously described in Table 1. The solution with the
smallest Euclidean distance is called an equitable patient case mix (EQPCM). As this objective
function is non-linear, two separable programming approaches can be proposed, one that minimizes
the squared Euclidean distance and one that minimizes the Euclidean distance. The former is
sufficient as the optimization of both functions is equivalent (see Appendix A). The idea behind this
separable programming approach is to approximate the non-linear function by a piecewise linear
function as shown in Fig. 1. The proposed optimization model is as follows:

Minimize D^2 = Σ_m Y_m = Σ_m Σ_{i∈[1,I]} δ_{m,i}·ẏ_{m,i}   (20)

Subject to:
HCM constraints
Σ_{i∈[1,I]} δ_{m,i} = 1   ∀m ∈ [1, M]   [Choose an interval]   (21)
δ_{m,i} ∈ {0, 1}   ∀m ∈ [1, M], ∀i ∈ [1, I]   [Binary variables for interval selection]   (22)
X_m = 1 − Z̃_m = Σ_{i∈[1,I]} δ_{m,i}·ẋ_{m,i}   ∀m ∈ [1, M]   [Compute the difference]   (23)
δ_{m,i}·y_{m,i−1} ≤ ẏ_{m,i} ≤ δ_{m,i}·y_{m,i}   ∀m ∈ [1, M], ∀i ∈ [1, I]   [Bound on ẏ_{m,i}]   (24)
δ_{m,i}·x_{m,i−1} ≤ ẋ_{m,i} ≤ δ_{m,i}·x_{m,i}   ∀m ∈ [1, M], ∀i ∈ [1, I]   [Bound on ẋ_{m,i}]   (25)
ẋ_{m,i} ≤ x_{m,i} + (ẏ_{m,i} − y_{m,i})/g_{m,i} + (1 − δ_{m,i})·BIG   ∀m ∈ [1, M], ∀i ∈ [1, I]   [Link ẋ_{m,i}, ẏ_{m,i}]   (26)
ẋ_{m,i} ≥ x_{m,i} + (ẏ_{m,i} − y_{m,i})/g_{m,i} − (1 − δ_{m,i})·BIG   ∀m ∈ [1, M], ∀i ∈ [1, I]   [Link ẋ_{m,i}, ẏ_{m,i}]   (27)
Z̃_m = (Z_m − LB_m)/(UB_m − LB_m)   ∀m ∈ [1, M]   [Normalization]   (28)

[Figure: a convex piecewise linear approximation of ẏ_m = (1 − Z̃_m)^2 over the breakpoints
(x_{m,1}, y_{m,1}), (x_{m,2}, y_{m,2}), (x_{m,3}, y_{m,3}), with the interval selection δ_{m,3} = 1
and the selected point ẋ_{m,3} = 1 − Z̃_m.]

Fig. 1. Summary of piecewise linear function and the variables

The purpose of this model is to identify a Pareto optimal solution closest to the utopia point, as given
by (UB_1, UB_2, …, UB_M). The sum of the squared differences Σ_{m∈[1,M]} (1 − Z̃_m)^2 is minimized,
where Z_m has been normalized as Z̃_m. In the model, Y_m is introduced to represent the value
(1 − Z̃_m)^2. This squared term is well approximated by a piecewise linear function. The domain of
the difference is hence divided for each objective into intervals. Each linear segment has a gradient
g_{m,i} and is bounded by the breakpoints (x_{m,i}, y_{m,i}). In this model it is necessary to choose the
interval where the value occurs. This is given by δ_{m,i}. The constraints ensure the selected value
(ẋ_{m,i}, ẏ_{m,i}) lies within the correct range of values. Further constraints are added to ensure that
ẏ_{m,i} = g_{m,i}·ẋ_{m,i} + c_{m,i}. In other words there is a strict linear relationship between ẏ_{m,i}
and ẋ_{m,i}. If an interval is not selected then ẏ_{m,i} = 0 and ẋ_{m,i} = 0. The data requirements for
this separable programming approach are as follows:
Δ_m = 1/I;  x_{m,i} = i·Δ_m;  y_{m,i} = x_{m,i}^2;  g_{m,i} = (y_{m,i} − y_{m,i−1})/(x_{m,i} − x_{m,i−1})   ∀m ∈ [1, M]   (29)
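The breakpoint data in equation (29) can be sketched directly. The fragment below builds the breakpoints and gradients for one objective with I = 4 and checks the error of the resulting piecewise linear function; since x^2 is convex, the chords lie above the curve and the error is at most Δ^2/4:

```python
# Sketch of the breakpoint data in equation (29) for one objective with I = 4:
# y = x^2 on [0, 1] is replaced by chords between the breakpoints. For a convex
# function the chords lie above the curve, so the error is at most delta^2 / 4.

I = 4
delta = 1.0 / I
x = [i * delta for i in range(I + 1)]       # breakpoints x_{m,i}
y = [xi ** 2 for xi in x]                   # y_{m,i} = x_{m,i}^2
g = [(y[i] - y[i - 1]) / (x[i] - x[i - 1]) for i in range(1, I + 1)]  # gradients

def approx_sq(v):
    """Piecewise linear estimate of v**2 on the enclosing interval (delta_{m,i} = 1)."""
    i = min(int(v / delta) + 1, I)
    return y[i - 1] + g[i - 1] * (v - x[i - 1])

err = max(approx_sq(v / 100) - (v / 100) ** 2 for v in range(101))
```

Increasing I shrinks the worst-case error quadratically, at the cost of more binary variables δ_{m,i} in the MILP.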

To find solutions of a given Euclidean distance, a constraint of the form D^2 ≥ D_target can
be added.
A goal programming approach could also be used. In that approach the normalized objective
function values are chosen to be as close to one as possible. As Z̃_m ≤ 1 ∀m, a slack
variable λ_m is introduced to facilitate that choice:

Minimize Σ_m λ_m   such that Z̃_m + λ_m = 1 ∀m and λ_m ≥ 0 ∀m   (30)

As this approach is equivalent to a weighted sum method, it may not be able to identify non convex
areas of the Pareto frontier and the best Pareto optimal solution (Marler and Arora, 2004).
In Section 3.2 we reported that solutions of the original capacity model may be dominated. This
is because the enforcement of the proportional case mix is strict. To obtain a non-dominated
solution, the process shown in Fig. 2 is advocated, which involves the solution of the
aforementioned separable programming model.

1. Solve the HCM with proportional case mix constraints, i.e. obtain n^1_γ. Set tar_m = n^1_γ ∀γ ∈ Γ, m = γ.
2. Apply the MOHCA with the Euclidean distance metric and demand (i.e. target) constraints Z_m ≥ tar_m.

[Figure: the Proportional Case Mix Capacity Model produces a (possibly dominated) case mix, which is
passed to the Multi Objective Capacity Model to produce a non-dominated case mix.]

Fig. 2. Steps involved in determining a non-dominated case mix

This approach ensures that minimum requirements are met via the demand constraints, and free
capacity elsewhere is assigned equitably.

4.3. Finding All Solutions of Minimum Euclidean Distance

There may be a large number of alternative Pareto optimal solutions of minimum Euclidean
distance. For planning purposes there is merit in identifying some or all of these. An iterative
approach is therefore proposed here to identify a subset of these solutions. It uses the
aforementioned model in Section 4.2. This approach involves the solution of the model 𝐾 times,
where 𝐾 is the number of alternate optimal solutions to be obtained. Each time the model is solved
the solution is recorded and then avoided by adding an additional set of constraints. Let Ź^k_m
represent the mth objective function value in the kth solution. Let lt_{k,m} = 1 if Z_m < Ź^k_m,
eq_{k,m} = 1 if Z_m = Ź^k_m, and gt_{k,m} = 1 if Z_m > Ź^k_m. Let k* be the current iteration number
(i.e. solve). Let ℧ be a sufficiently large value. The following constraints are added to the
mathematical model:

∑𝑚 𝑒𝑞𝑘,𝑚 ≤ 𝑀 − 1 ∀𝑘 ∈ [1, 𝐾] [Solution similarity] (31)
𝑙𝑡𝑘,𝑚 + 𝑒𝑞𝑘,𝑚 + 𝑔𝑡𝑘,𝑚 = 1 ∀𝑘 ∈ [1, 𝐾]|𝑘 < 𝑘∗, ∀𝑚 ∈ [1, 𝑀] [Comparison] (32)
𝑍𝑚 − 𝑍́𝑚ᵏ ≤ (1 − 𝑒𝑞𝑘,𝑚)℧ ∀𝑘 ∈ [1, 𝐾]|𝑘 < 𝑘∗, ∀𝑚 ∈ [1, 𝑀] [Relation “=”] (33)
𝑍𝑚 − 𝑍́𝑚ᵏ ≥ (𝑒𝑞𝑘,𝑚 − 1)℧ ∀𝑘 ∈ [1, 𝐾]|𝑘 < 𝑘∗, ∀𝑚 ∈ [1, 𝑀] [Relation “=”] (34)
𝑍𝑚 − 𝑍́𝑚ᵏ > (𝑔𝑡𝑘,𝑚 − 1)℧ ∀𝑘 ∈ [1, 𝐾]|𝑘 < 𝑘∗, ∀𝑚 ∈ [1, 𝑀] [Relation “>”] (35)
𝑍𝑚 − 𝑍́𝑚ᵏ < (1 − 𝑙𝑡𝑘,𝑚)℧ ∀𝑘 ∈ [1, 𝐾]|𝑘 < 𝑘∗, ∀𝑚 ∈ [1, 𝑀] [Relation “<”] (36)

Proof: If 𝑒𝑞𝑘,𝑚 = 1 in constraints (33) and (34) then 𝑍𝑚 = 𝑍́𝑚ᵏ. Otherwise, if 𝑒𝑞𝑘,𝑚 = 0 then −℧ < 𝑍𝑚 − 𝑍́𝑚ᵏ < ℧. Unfortunately −℧ < 𝑍𝑚 − 𝑍́𝑚ᵏ < ℧ is also valid for the case where 𝑍𝑚 = 𝑍́𝑚ᵏ. Hence (35) and (36) must be introduced. They ensure that if 𝑍𝑚 ≠ 𝑍́𝑚ᵏ (i.e. 𝑒𝑞𝑘,𝑚 = 0) then |𝑍𝑚 − 𝑍́𝑚ᵏ| > 0. For example, if 𝑔𝑡𝑘,𝑚 = 1 then 𝑍𝑚 > 𝑍́𝑚ᵏ; if 𝑔𝑡𝑘,𝑚 = 0 then 𝑍𝑚 − 𝑍́𝑚ᵏ > −℧. If 𝑙𝑡𝑘,𝑚 = 1 then 𝑍𝑚 < 𝑍́𝑚ᵏ; if 𝑙𝑡𝑘,𝑚 = 0 then 𝑍𝑚 − 𝑍́𝑚ᵏ < ℧.

As most optimization software does not facilitate constraints with strict “<” and “>” operators, the following reformulations of (35) and (36) are required; they involve a small parameter 𝜔.

𝑍𝑚 − 𝑍́𝑚ᵏ ≥ 𝜔 + (𝑔𝑡𝑘,𝑚 − 1)℧ ∀𝑘 ∈ [1, 𝐾]|𝑘 < 𝑘∗, ∀𝑚 ∈ [1, 𝑀] [Relation “>”] (37)
𝑍𝑚 − 𝑍́𝑚ᵏ ≤ −𝜔 + (1 − 𝑙𝑡𝑘,𝑚)℧ ∀𝑘 ∈ [1, 𝐾]|𝑘 < 𝑘∗, ∀𝑚 ∈ [1, 𝑀] [Relation “<”] (38)

The choice of 𝜔 is important, as a value that is too large may cause a solution to be obtained that
does not have the minimum Euclidean distance. This approach requires three binary decision
variables per objective, for each previous solution, i.e. 3𝑀𝑘 ∗ variables in total.
Alternative approaches are possible that involve fewer binary variables. One example
involving absolute values is as follows:

|𝑍𝑚 − 𝑍́𝑚ᵏ| ≥ 𝜔 − (𝑒𝑞𝑘,𝑚)℧ and |𝑍𝑚 − 𝑍́𝑚ᵏ| ≤ (1 − 𝑒𝑞𝑘,𝑚)℧ (39)

Proof: If 𝑒𝑞𝑘,𝑚 = 1, then −℧ ≤ |𝑍𝑚 − 𝑍́𝑚ᵏ| ≤ 0, i.e. 𝑍𝑚 = 𝑍́𝑚ᵏ. If 𝑒𝑞𝑘,𝑚 = 0, then 𝜔 ≤ |𝑍𝑚 − 𝑍́𝑚ᵏ| ≤ ℧.

As constraint (39) is non-linear, the following linearization is needed in conjunction with the application of constraints (33) and (34):

𝑍𝑚 − 𝑍́𝑚ᵏ + 𝜔(1 − 𝑒𝑞𝑘,𝑚) ≤ (1 − 𝜏𝑘,𝑚)℧ (40)
𝑍́𝑚ᵏ − 𝑍𝑚 + 𝜔(1 − 𝑒𝑞𝑘,𝑚) ≤ (𝜏𝑘,𝑚)℧ (41)

Proof: If 𝑒𝑞𝑘,𝑚 = 0 then 𝑍𝑚 − 𝑍́𝑚ᵏ ≤ −𝜔 + (1 − 𝜏𝑘,𝑚)℧ and 𝑍́𝑚ᵏ − 𝑍𝑚 ≤ −𝜔 + (𝜏𝑘,𝑚)℧. If 𝜏𝑘,𝑚 = 1 then 𝜔 ≤ 𝑍́𝑚ᵏ − 𝑍𝑚 ≤ ℧, i.e. 𝑍𝑚 < 𝑍́𝑚ᵏ as required. If 𝜏𝑘,𝑚 = 0 then 𝜔 ≤ 𝑍𝑚 − 𝑍́𝑚ᵏ ≤ ℧, i.e. 𝑍𝑚 > 𝑍́𝑚ᵏ as required. If 𝑒𝑞𝑘,𝑚 = 1 then 𝑍𝑚 − 𝑍́𝑚ᵏ ≤ (1 − 𝜏𝑘,𝑚)℧ and 𝑍́𝑚ᵏ − 𝑍𝑚 ≤ (𝜏𝑘,𝑚)℧; in other words, −(𝜏𝑘,𝑚)℧ ≤ 𝑍𝑚 − 𝑍́𝑚ᵏ ≤ (1 − 𝜏𝑘,𝑚)℧. If 𝜏𝑘,𝑚 = 1 then −℧ ≤ 𝑍𝑚 − 𝑍́𝑚ᵏ ≤ 0, which includes 𝑍𝑚 = 𝑍́𝑚ᵏ as required. If 𝜏𝑘,𝑚 = 0 then 0 ≤ 𝑍𝑚 − 𝑍́𝑚ᵏ ≤ ℧, which also includes 𝑍𝑚 = 𝑍́𝑚ᵏ as required.

This approach requires 2𝑀𝑘∗ binary variables in total, i.e. 𝑀𝑘∗ fewer than the former approach. Interestingly, implementation and numerical testing in ILOG CPLEX has shown this approach to be inferior in terms of computation time, even though fewer binary decision variables are present. Hence the first approach is advocated and utilized in the remainder of this article.
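The logic of constraints (32), (33), (34), (37) and (38) can be sanity-checked without a solver. The sketch below (pure Python; the values of 𝜔 and ℧ are assumptions) enumerates the binary assignments and confirms that exactly one of 𝑙𝑡, 𝑒𝑞, 𝑔𝑡 is feasible for each relation between 𝑍𝑚 and a recorded value.

```python
# Sanity check of the solution-exclusion constraints (no solver required).
OMEGA = 1e-3   # tolerance omega (assumed value)
BIG = 1e6      # sufficiently large constant, the mho symbol in the text (assumed value)

def feasible(Z, Zk, lt, eq, gt):
    """True if (lt, eq, gt) satisfies constraints (32), (33), (34), (37), (38)."""
    d = Z - Zk
    return (lt + eq + gt == 1                  # (32): exactly one relation holds
            and d <= (1 - eq) * BIG            # (33): eq = 1 forces Z <= Zk
            and d >= (eq - 1) * BIG            # (34): eq = 1 forces Z >= Zk
            and d >= OMEGA + (gt - 1) * BIG    # (37): gt = 1 forces Z >= Zk + omega
            and d <= -OMEGA + (1 - lt) * BIG)  # (38): lt = 1 forces Z <= Zk - omega

def encode(Z, Zk):
    """Enumerate binaries; the unique feasible triple encodes the relation.
    Note: if 0 < |Z - Zk| < OMEGA no triple is feasible, which is one reason
    the choice of omega matters."""
    sols = [(lt, eq, gt) for lt in (0, 1) for eq in (0, 1) for gt in (0, 1)
            if feasible(Z, Zk, lt, eq, gt)]
    assert len(sols) == 1
    return sols[0]
```

For example, `encode(7.0, 5.0)` returns (0, 0, 1), i.e. the “greater than” case.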

4.4. Generating Non Dominated Solutions

In this section adaptive versions of the ECM are used to approximate the Pareto frontier. This is necessary because the objective space is vast and many infeasible grid points may be generated when there are many objectives (Burdett, 2015). The approaches of Laumanns et al (2006) and Kirklik and Sayin (2014) were first investigated as possible means of solving this article’s multi-objective hospital capacity models. They are quite similar, and their ratio of model solves to non-dominated solutions is highly competitive if not superior to other strategies. The algorithm of Kirklik and Sayin (2014) involves a clever partitioning of the (m-1) dimensional objective space. It explicitly records unsearched regions, as opposed to Laumanns et al (2006), which records searched and infeasible regions. Preliminary testing however has illuminated a few weaknesses and has shown that both approaches struggle when the number of objectives is large. The Kirklik and Sayin (2014) approach results in 2^(𝑚−1) regions after the first non-dominated solution has been obtained. If 𝑚 > 10 then this number becomes very large and all of those regions have to be recorded. Further steps just continue to escalate the number of regions. Laumanns et al’s (2006) approach similarly has to search through a rapidly increasing number of intervals as the search progresses.
The adaptive approach of Burdett (2015) was then tested. That approach expands outwards from a lower bound starting point to identify non-dominated solutions. It is based upon the observation that infeasible points are those that occur closest to the utopia point. Points that are further away are more likely to be feasible, for instance at each of the boundary (i.e. corner) points of the objective space, where only a single metric is considered. The algorithm’s expansion is stopped at points that have been identified as infeasible. The proof of this is given in Appendix B, for

this hospital capacity scenario. On large problems it seems to be favourable in the sense that it does
not explicitly need to store searched regions or unsearched regions. Preliminary testing has shown
that this approach works reasonably well. However, the time requirements are very large on our
hospital capacity problem. Another approach is therefore warranted to generate non dominated
solutions for practical usage.

4.4.1. A Random Corrective ECM Approach (RCECM)

In preceding sections, techniques like the ECM and its variants have been discussed. Those techniques rely upon the enumeration of the objective space in a detailed, methodical way; but this is a large and time-consuming job. Their quality is highly dependent upon the generation of suitable grid points. In summary, the greatest downside of those approaches is that the same Pareto solution may be obtained at different grid points. Furthermore, certain grid points may be infeasible and hence many solves may be performed for little benefit.
As the aforementioned approaches are theoretically but not practically relevant to larger real life applications, an alternative approach is proposed. The idea behind it is shown in Fig. 3 and the exact details are given in Algorithm 1. It is best described as a random corrective approach, as a number of points (say 𝑁) are randomly generated and then converted into Pareto optimal solutions. It should be noted that each point is a vector (𝜀2, 𝜀3, …, 𝜀𝑀) where 𝜀𝑚 ∼ 𝑈(𝑍̲𝑚, 𝑍̅𝑚), i.e. each epsilon value is sampled uniformly between the lower and upper bound of its objective. Each point is also an input to the model used by the ECM (hereby denoted as Model A). During this algorithm, if Model A fails to solve, for example because the point is infeasible, then a second model (Model B) is introduced and applied to find the Pareto optimal solution closest to this infeasible point. In other words, the purpose of the second model is to determine the 𝑍𝑚 that minimize ∑𝑚∈[2,𝑀](𝜀𝑚 − 𝑍𝑚)² subject to 𝑍𝑚 ≤ 𝜀𝑚 ∀𝑚 ∈ [2, 𝑀] and 𝑍𝑚 ≥ 𝑍̲𝑚 ∀𝑚. Model B constitutes a corrective mechanism. The RCECM is promising because the user selects how many solutions they want and each stage (i.e. iteration) provides a guaranteed Pareto optimal solution.

[Diagram omitted: two-dimensional objective space showing the range of each objective, the Pareto frontier, and feasible points inside it versus infeasible points beyond it]

Fig. 3. Idea of the RCECM with respect to a 2 dimensional problem

Alg. 1. Random Corrective ECM [Version 1]

Begin
    ℧ = ∅;                            // Initialise set of non-dominated solutions
    for (𝑖 ∈ [1, 𝑁])                  // Perform N iterations
    begin
        𝜀𝑚ⁱ ∼ 𝑈(𝑍̲𝑚, 𝑍̅𝑚) ∀𝑚 ∈ [2, 𝑀]; // Generate a grid point
        𝑍 = setup_solve_A(𝜀ⁱ);        // Solve Model A to find a Pareto optimal point
        if 𝑍 does not exist:          // The point is infeasible
            𝑍 = setup_solve_B(𝜀ⁱ);    // Find the nearest Pareto optimal solution
        if 𝑍 exists:                  // A new Pareto optimal solution has been found
            ℧ = ℧ ∪ 𝑍;                // Add solution to the set if not present already
    end
End

To facilitate Model B, the aforementioned separable programming approach of Section 4.2 is again used. In this situation however ∆𝑚 = (𝜀𝑚 − 𝑍̲𝑚)/𝐼 and the following constraints are required:

𝑋𝑚 = 𝜀𝑚 − 𝑍𝑚 = ∑𝑖∈[1,𝐼] 𝛿𝑚,𝑖 𝑥̇𝑚,𝑖 ∀𝑚 ∈ [2, 𝑀] [Compute the difference] (42)
𝑍̲𝑚 ≤ 𝑍𝑚 ≤ 𝜀𝑚 ∀𝑚 ∈ [2, 𝑀] (43)
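The separable device replaces each quadratic term with linear interpolation between breakpoints spaced ∆ apart. A minimal sketch of the resulting approximation (pure Python; the function and interval count are illustrative, not the ILOG model):

```python
import bisect

def pwl_square(x, lo, hi, n):
    """Approximate f(X) = X**2 on [lo, hi] by the chord between adjacent
    breakpoints, using n intervals of width delta = (hi - lo) / n, as in
    the separable programming device."""
    pts = [lo + i * (hi - lo) / n for i in range(n + 1)]
    j = min(max(bisect.bisect_right(pts, x) - 1, 0), n - 1)  # interval containing x
    x0, x1 = pts[j], pts[j + 1]
    t = (x - x0) / (x1 - x0)                                 # position within interval
    return (1 - t) * x0 * x0 + t * x1 * x1                   # chord value
```

The worst-case error of each chord is delta squared over four, which explains why a moderate number of intervals already gives an accurate distance value.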

Algorithm 1 can be simplified for larger problems. As it is highly unlikely that any randomly generated grid point will be feasible, the call to Model A can be removed. This property has been observed during numerical testing. In this revised approach the grid point is (𝜀1, 𝜀2, …, 𝜀𝑀) and the range of 𝑚 is [1, 𝑀], for example in equations (42) and (43). The grid point includes the first objective because the separable programming approach as previously described does not include or maximize 𝑍1, and it should. In this revised algorithm, the idea is hence to identify a Pareto optimal solution closest to the randomly generated point.
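The corrective idea can be sketched on a toy two-objective model where the projection has a closed form (all data assumed; the real Model B is a separable program solved in CPLEX):

```python
import random

CAP = 10.0   # toy shared resource: feasible when Z1 + Z2 <= CAP (assumed)

def solve_model_B(eps):
    """Nearest feasible point to eps with Z_m <= eps_m.  Feasible points are
    returned unchanged (and are therefore dominated, motivating the later
    post-processing); infeasible points are projected onto Z1 + Z2 = CAP."""
    e1, e2 = eps
    if e1 + e2 <= CAP:
        return (e1, e2)
    t = (e1 + e2 - CAP) / 2.0           # equal shift minimises squared distance
    z1, z2 = e1 - t, e2 - t
    if z1 < 0.0:                        # projection leaves the quadrant:
        z1, z2 = 0.0, min(e2, CAP)      # clamp to the nearest corner
    elif z2 < 0.0:
        z2, z1 = 0.0, min(e1, CAP)
    return (z1, z2)

def rcecm(n, seed=0):
    """Random corrective loop (Model B only): sample epsilon points uniformly
    in the objective ranges and correct each one toward the frontier."""
    rng = random.Random(seed)
    return {solve_model_B((rng.uniform(0.0, CAP), rng.uniform(0.0, CAP)))
            for _ in range(n)}
```

For instance `solve_model_B((8.0, 8.0))` yields (5.0, 5.0) on the frontier, while the feasible point (3.0, 4.0) is returned uncorrected.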
If a randomly generated point by some small chance is feasible, then a solution will be obtained that is not Pareto optimal. These can be identified in a post-processing stage; see Algorithm 2. For instance, the sorting algorithm of Mishra and Haritt (2010) can be used to partition the population of solutions into dominated and non-dominated solutions. Having identified the dominated solutions, Model A can then be applied to identify further Pareto optimal solutions.

Alg. 2. Post processing

Begin
    Partition(℧, ℧D);          // Remove dominated solutions from ℧. Store these in ℧D.
    foreach (𝜀 ∈ ℧D)
    begin
        𝑍 = setup_solve_A(𝜀);  // Solve Model A to find a new Pareto optimal point
        if 𝑍 exists:           // A new Pareto optimal solution has been found
            ℧ = ℧ ∪ 𝑍;         // Add solution if not present already
    end
End
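The Partition step can be sketched with a simple pairwise dominance check (the cited sorting algorithm is faster, but the outcome is the same; all objectives are maximised):

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def partition(solutions):
    """Split a solution list into (non_dominated, dominated), i.e. the role
    of Partition in Algorithm 2.  O(n^2) pairwise comparison."""
    non_dom, dom = [], []
    for s in solutions:
        if any(dominates(o, s) for o in solutions if o is not s):
            dom.append(s)
        else:
            non_dom.append(s)
    return non_dom, dom
```

For example, `partition([(1, 2), (2, 1), (2, 2)])` returns ([(2, 2)], [(1, 2), (2, 1)]).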

The RCECM can be further adapted to analyse specific objectives in more detail. For example, a specific objective may be uniformly searched while the other objectives are generated randomly as previously described. The exact details of that approach are shown in Algorithm 3. The benefit of Algorithm 3 is that it provides more detailed information about the influence and effect of a single objective on the system’s performance and work.

Alg. 3. Random Corrective ECM [Version 2]

Input: 𝑚̂ ∈ [1, 𝑀]
Begin
    𝕄 = {𝑚 | 𝑚 ∈ [1, 𝑀], 𝑚 ≠ 𝑚̂};     // Define set of objectives that are to be randomly sampled
    for (𝑖 ∈ [1, 𝐼])
    begin
        𝜀𝑚̂ⁱ = 𝑍̲𝑚̂ + (𝑖 − 1)∆𝑚̂;        // Update the epsilon value of objective 𝑚̂
        for (𝑗 ∈ [1, 𝐽])
        begin
            𝜀𝑚ⁱ ∼ 𝑈(𝑍̲𝑚, 𝑍̅𝑚) ∀𝑚 ∈ 𝕄; // Generate the other epsilon values randomly
            𝑍 = setup_solve_B(𝜀ⁱ);    // Find the nearest Pareto optimal solution
            if 𝑍 exists:              // A new Pareto optimal solution has been found
                ℧ = ℧ ∪ 𝑍;            // Add solution if not present already
        end
    end
End

It should be noted that in Model B, constraints (42) and (43) should be altered so the condition is 𝑚 ∈ [1, 𝑀]|𝑚 ≠ 𝑚̂.

5. Numerical Analysis

To demonstrate the application of the different methods, a real hospital is analysed. Our case study
is based upon de-identified data from the Princess Alexandra Hospital, Brisbane, Qld, Australia. It is a
metropolitan tertiary referral and university and college affiliated teaching hospital. The hospital is
spread over a number of different floors. It provides acute and elective adult medical and surgical
care and emergency department services. There are 21 competing surgical specialities (i.e. surgical
units). A MOHCA with 𝑀 = 21 objectives was therefore performed. The objectives are defined as 𝑍𝑚 = 𝑛¹𝛾 ∀𝛾 ∈ 𝛤|𝑚 = 𝛾.
ILOG OplStudio 12.6 was used to solve the different models on a quad core Dell personal computer (PC) with a 2.6 GHz processor and 16 GB memory under Windows 7. The proposed algorithms however have been encoded in C++. Calls to ILOG are facilitated using ILOG’s Concert Technology.

5.1. Bound Analysis

The lexicographic bound analysis was first applied to identify the range of the objective functions. This requires 441 solves (i.e. 21²) as opposed to 42 (i.e. 2×21) traditionally. The CPU time required to perform those solves was 25 minutes. The results are shown in Table 2 and Fig. 4.
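The bookkeeping behind a lexicographic bound analysis can be sketched on a discretised toy model (all data assumed): one lexicographic “solve” per ordered pair of objectives gives M² solves in total, with the UB taken from the diagonal of the payoff table and the LB as the worst value seen for each objective.

```python
from itertools import product

# Discretised toy feasible set (assumed): Z1 + Z2 <= 10, Z1 <= 6, integer points.
FEAS = [(z1, z2) for z1, z2 in product(range(11), range(11))
        if z1 + z2 <= 10 and z1 <= 6]

def lex_max(primary, secondary):
    """One lexicographic solve: maximise `primary`, break ties on `secondary`."""
    best_p = max(z[primary] for z in FEAS)
    return max((z for z in FEAS if z[primary] == best_p),
               key=lambda z: z[secondary])

def bounds(M=2):
    """M*M lexicographic solves (cf. the 21**2 = 441 above): UB_m is the
    individual maximum of objective m; LB_m is the worst value objective m
    takes across the lexicographic optima."""
    table = [[lex_max(p, s) for s in range(M)] for p in range(M)]
    UB = [table[m][m][m] for m in range(M)]
    LB = [min(table[p][m][m] for p in range(M)) for m in range(M)]
    return LB, UB
```

On this toy set, `bounds()` returns ([0, 4], [6, 10]): objective 2 can never fall below 4 at a lexicographic optimum, while objective 1 can be driven to zero.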

Table 2
Lower and upper bounds produced by lexicographic bound analysis
s ASU BE COL CSU DENT ENT FMAX GAS GYN HPB
LB 3852.20 0.00 5155.30 6580.50 7121.60 6226.60 0.00 4563.50 0.00 0.00
UB 14190.00 12782.00 5155.30 7231.10 15532.00 16345.00 14994.00 7548.90 19374.00 5455.50
Diff 10337.80 12782.00 0.00 650.60 8410.40 10118.40 14994.00 2985.40 19374.00 5455.50

s LTPT NSUR OPHT ORTH PLAS RESP RTPT TRMA UGI UROL VASC
LB 0.00 3210.20 2986.90 2164.70 3922.60 3577.80 0.00 1058.00 0.00 7921.90 3433.10
UB 3036.40 7706.20 22507.00 10108.00 15937.00 6942.50 4757.60 1058.00 8132.60 7921.90 4376.40
Diff 3036.40 4496.00 19520.10 7943.30 12014.40 3364.70 4757.60 0.00 8132.60 0.00 943.30

[Bar chart omitted: lower and upper bounds on the number of treatments for each surgical specialty]

Fig. 4. Lower and upper bound on the number of treatments achievable for each surgical specialty

These results demonstrate that BE, FMAX, GYN, HPB, LTPT, RTPT and UGI are heavily restricted by the activities of other units. In contrast, units like DENT, ENT and GAS are less restricted. In addition, units like COLO, TRMA and UROL are not restricted at all, as they share no common treatment areas, and hence their LB equals their UB. The units with the greatest difference are GYN, FMAX and OPHT. Upon closer inspection, the solution defined by the lower bounds is not feasible and cannot be used as a starting point for the ECM. It constitutes a total of 61774 treatments.

5.2. Identifying Minimum Distance Solutions

Before applying the ECM to obtain the Pareto optimal solutions, the most equitable PCM (i.e. EQPCM) can be identified. To find that solution the model was solved with a minimum Euclidean distance metric and the aforementioned separable programming approach. Almost no CPU time was required to solve the model with 100 intervals (i.e. divisions). Other numbers of divisions were also tested to ensure that this choice is accurate enough; it is. The total number of treatments was 63843 and the Euclidean distance was 2.98. The results are shown in Fig. 5, in comparison to the activities historically performed. In summary, these charts show how the competition between different specialties has been regulated. The first chart in Fig. 5 shows the number of treatments of each specialty that can be achieved. Three of the specialties, namely (OPHT, ORTH, PLAS), have fewer suggested treatments than actually occurred. Otherwise the suggested number of treatments is larger, and in some cases very much larger (i.e. for GAS, GYN, RESP, CSU, UROL). The second chart shows the normalized treatment numbers and demonstrates where the values lie with respect to the upper bound. Five specialties are allowed to use about 70% or more of their capacity to treat patients. A further seven use between 30 and 70%. The remaining specialties use much less capacity. On average across all 21 specialties, the norm is 0.422. The third chart shows the proportional breakup of specialties across all treatments. It indicates that OPHT, ORTH and PLAS were highly represented historically. However, in the case mix obtained here, this is not so. It should be noted that there may be many different case mixes with that Euclidean distance. The attributes of those case mixes may be very different to the attributes shown in Fig. 5.

[Charts omitted: (1) number of treatments per specialty, (2) normalised treatment numbers, and (3) percentage of treatments, for the Equitable Case Mix (63843.3 treatments), the Historical Case Mix (19174) and the Corrected Case Mix (63955.76)]

Fig. 5. Separable programming results

The goal programming approach was also applied and a comparison is shown in Fig. 6.
[Chart omitted: number of treatments per specialty for the minimum Euclidean distance and goal programming solutions]

Fig. 6. Comparison with goal programming approach

An increased number of treatments eventuates (i.e. 70953.4); however, the Euclidean distance is larger (i.e. 3.295). The goal programming approach has also zeroed the treatment numbers of 10 specialities, as opposed to three via the other approach.
To see whether the historical case mix is a dominated solution, the capacity model was solved with additional target constraints. A solution was obtained with 63956 treatments and a Euclidean distance of 3.02. These results are also shown in the preceding figures, labelled as “Corrected Case Mix”. They demonstrate that the historical case mix is indeed a dominated solution and that a superior case mix can be obtained.
During numerical testing the capacity model identified that with the historical PCM the theoretical capacity is 27351 patients. To see whether this case mix is a dominated solution, target constraints were added and the model was re-solved to obtain a non-dominated patient case mix. The total number of treatments obtained was 64098 with a Euclidean distance of 3.08. The difference between the two is shown in Fig. 7. Hence it is evident that the previous solution was a dominated solution.
[Chart omitted: number of treatments per specialty for the original case mix and the non-dominated case mix]

Fig. 7. Adding target constraints to obtain a superior case mix

The effect of introducing a 22nd objective, the total number of treatments, was investigated next. A difference of 776.9 treatments was observed, i.e. ℂ = 64620.2 as opposed to 63843.3 when there are 21 objectives. The exact difference between the patient case mixes is shown in Fig. 8. The Euclidean distance for the 22 objective problem was 3.06, a slight increase over that of the 21 objective problem.
[Chart omitted: difference in treatment numbers per specialty between the 21 and 22 objective case mixes]

Fig. 8. Difference in patient case mix

In summary, small changes to the PCM can result in a reasonably large increase in the total number of treatments. There is some evidence to suggest that it is better to add the 22nd objective as the total number of treatments. Several specialties, namely (ASU, ENT, FMAX, LTPT, UROL), had reduced patient numbers, but some large increases were possible for (B&E, GYN, NSUR, ORTH).

5.3. Identifying Alternate Pareto Optimal Solutions

The approach to generate Pareto solutions of minimal distance was applied. A variety of 𝜔 values were tested to see what effect that parameter has, in particular 𝜔 ∈ {1, 10, 100, 200, 500, 1000}. Fifty solutions were first extracted for each value of 𝜔. The CPU requirements are shown in Table 3. The CPU time decreases as 𝜔 is increased.

Table 3
CPU times in minutes to obtain 50 alternate solutions [𝐼 = 20]
Approach 𝝎 = 1 10 100 200 500 1000
3 binary 19.916 15.616 18.35 17.433 7.1833 5.416
For each of the 21 objectives, the range of values and the standard deviation are shown in Fig. 9.

[Chart omitted: change in treatment numbers per specialty for 𝜔 = 1, 10, 100, 200]

Fig. 9. Observed change in treatment numbers across the 50 obtained solutions

These charts show which specialities are most changed. For small values of 𝜔, the treatment numbers of only a small number of specialities are changed. This changes when 𝜔 is larger. The total number of treatments for each of the alternate solutions is shown in Fig. 10. Again, the range of those values increases as 𝜔 is increased (see Fig. 11). It should be noted that when 𝜔 is 500 and 1000, a significant number of solutions are obtained with a slightly larger Euclidean distance. In other words, they do not have the minimum Euclidean distance. The reason for this is not completely understood.
[Chart omitted: total number of treatments for each of the 50 alternate optima, for 𝜔 = 1, 10, 100, 200]

Fig. 10. A summary of the 50 alternate solutions for each value of 𝜔

[Chart omitted: range in the total number of treatments for each value of 𝜔]

Fig. 11. Range in the total number of treatments

To further test the applicability of the proposed iterative approach, a larger number of alternate solutions were sought. For 𝜔 = 10 and 𝐾 = 150, the memory requirement was just under 15 GB. The CPU time was 52 minutes. Some important results are shown in Fig. 12.

[Charts omitted: (1) total treatments and (2) squared Euclidean distance for each of the 150 alternate optima]

Fig. 12. A summary of the 150 solutions

The first chart shows that the majority of the solutions differ by about 100 patient treatments from each other, i.e. they lie between 63550 and 63650. The second chart shows that as the search progresses, the solutions obtained have larger Euclidean distances; however, the difference is only 0.001. In conclusion, these numerical investigations indicate that there are many Pareto optimal solutions of minimum Euclidean distance. There is little possibility of identifying all of them. Besides, there seems to be no practical reason to have all of them, as no special structure or properties have been observed for our case study.

5.4. Generating Non Dominated Solutions

In this section the application of algorithms to determine Pareto optimal patient case mixes (POPCM), otherwise described as non-dominated patient case mixes (NDPCM), is reported. For this purpose the adaptive ECM (AECM) was first applied. Its application resulted in a vast number of solutions. After 47 hours the algorithm was halted, as the CPU requirements were becoming too large. At that point 16690 non-dominated solutions had been obtained and the queue still had 74000 grid points yet to evaluate. The RCECM was applied next for 10000 iterations. The CPU time was 7.77 hours (i.e. 27999 seconds). The results of the AECM and the RCECM are individually shown in Fig. 13 and jointly in Fig. 14.

[Charts omitted: Euclidean distance versus total number of treatments for (a) AECM and (b) RCECM]

Fig. 13. Pareto optimal solutions: Euclidean distance versus # treatments

[Chart omitted: combined AECM and RCECM results]

Fig. 14. AECM and RCECM results

These charts show the distribution of the obtained Pareto solutions in terms of the total number of treatments and their distance from the utopia point. The minimum distance Pareto optimal solution has a value of 2.98. That solution is not shown, nor yet identified, in Fig. 13. In comparison, the AECM is able to identify solutions further away from the utopia point. This occurs because that approach starts from the furthest point in the objective space before expanding outwards. The RCECM however has identified more of the solutions with a higher total number of treatments and more solutions with a smaller Euclidean distance. Considered together, these algorithms have provided a good summary of the multi-objective space. How comprehensive this summary is cannot be commented upon at this stage and is open to debate.
The histograms in Fig. 15 and Fig. 16 show the distribution of the non-dominated solutions obtained by both approaches. For the AECM most of the non-dominated solutions comprise 40000 to 47000 treatments in total. This histogram however is quite bimodal. If it were unimodal then we would anticipate that the highest frequency would occur in the middle, at approximately 43000 total treatments. The RCECM results are different. Roughly speaking, the frequency is similar for all treatment numbers between 51000 and 65000.

[Histograms omitted: distribution of total treatments and of Euclidean distance for the AECM solutions]

Fig. 15. AECM histograms

[Histograms omitted: distribution of total treatments and of Euclidean distance for the RCECM solutions]

Fig. 16. RCECM histograms

The maximum proportion of each of the surgical specialties that was observed in the list of non-dominated solutions is summarised in Fig. 17. For example, ORTH patients constituted anywhere from zero to 13.9% of the total case mix. TRMA however constituted no more than 2.3% of the total case mix across all of the non-dominated solutions. The shape of this chart is very similar to Fig. 4; for example, the peaks and troughs align quite well with the capacity of each surgical specialty. This figure implies that in some respects there are limits on the amount of work that should be done by the different specialties. Those limits perhaps should not be violated if the hospital is to be effectively and equitably used by the different specialities.

[Chart omitted: maximum observed proportion of each surgical specialty across the non-dominated solutions]

Fig. 17. Range of the proportional mix of surgical specialties

An analysis of each objective has been performed. This is shown in Table 4 and Fig. 18.

[Boxplot omitted: treatment numbers for each specialty]

Fig. 18. Boxplot of treatment numbers for each specialty

Table 4
Statistical properties of the 21 objectives
UNIT Min Q1 Median Q3 Max UB UB/2
ASU 0.186 1735.005 3429.145 5389.41 12939.3 14190 7095
BE 0.102 3196.378 6314.975 9493.588 12781.7 12782 6391
COLO 0 0 0 1074.77 5153.54 5155.3 2577.65
CSU 0 0 0 0 7221.97 7231.1 3615.55
DENT 0 0 0 0 3995.31 15532 7766
ENT 1.551 2342.518 4458.8 6850.36 16243.3 16345 8172.5
FMAX 0 0 0 244.008 14914.2 14994 7497
GAS 2.389 1911.138 3785.455 5697.863 7548.07 7548.9 3774.45
GYN 0.736 4891.588 9751.695 14517.68 19373.9 19374 9687
HPB 0 0 0 0 5286.92 5455.5 2727.75
LTPT 0 0 0 0 2453.81 3036.4 1518.2
NSUR 0 980.748 2978.26 5177.89 7699.74 7706.2 3853.1
OPHT 0 5315.36 9568.835 13530.78 22445 22507 11253.5
ORTH 0 0 0 0 8586.14 10108 5054
PLAS 0 0 0 0 10153.5 15937 7968.5
RESP 1.161 1791.545 3510.565 5278.943 6940.56 6942.5 3471.25
RTPT 0.056 1165.478 2374.28 3548.515 4756.95 4757.6 2378.8
TRMA 0 0 0 0 1027.21 1058 529
UGI 0 1102.218 2366.75 4080.828 8104.23 8132.6 4066.3
UROL 2.575 1862.063 3867.055 5901.463 7919.14 7921.9 3960.95
VASC 0.371 1105.67 2184.175 3294.615 4375.96 4376.4 2188.2

These results indicate that the values of many of the objectives are quite uniformly distributed. In other words, values between zero treatments and the upper bound have occurred equally often within the set of non-dominated solutions obtained by the RCECM. Nine of the 21 specialities have symmetric ranges with medians close to half the upper bound. All of this evidence suggests that the objective space has been searched sufficiently well. This has occurred in contrast to the standard ECM approach, which needs a multi-dimensional “hypergrid” to be explicitly generated and evaluated.
The RCECM variant described in Algorithm 3 was also applied. Objective 14 (i.e. ORTH) was
chosen for demonstrative purposes. For a choice of 100 divisions and 100 random solutions per
division (i.e. 10000 iterations in total), the CPU time required was 7.96 hrs (i.e. 28652 seconds). The
results are shown in Fig. 19 in comparison to the non-dominated solutions obtained earlier.

[Chart omitted: non-dominated solutions obtained by the two RCECM strategies]

Fig. 19. Comparison of the two RCECM strategies

This chart shows that the results are quite similar; however, more solutions of minimum Euclidean distance have been identified. If anything, Algorithm 3 has produced a larger region. Upon reflection this is not surprising, as by design Algorithm 3 is forced to search more comprehensively through a chosen dimension. Hence, for a modest increase in computational effort, there is evidence to suggest that it is a better approach. The idea behind that approach should be further investigated as a means to develop a superior algorithm.

5.5. Managerial Implications

In this section we discuss the practical implications of this research for managers and other decision makers in hospitals. First let us recall that the purpose of the MOHCA is to determine what level of capacity can be achieved when there are competing capacity metrics. In this article this amounts to analysing the effect of treating different PCM. The outcome of the MOHCA is therefore a set of POPCM that are good alternatives. These POPCM regulate the competition between different specialties. The relative time and resource requirements of the different specialties have been weighed up to obtain these solutions.
From a practical point of view it is our belief that the MOHCA could be used to support
strategic decision making concerning how the hospital should be operated in the future. For instance
the MOHCA could be used to help strategically select a PCM to adhere to, and to base future
operations upon this mix of patient treatments. In theory the PCM selected using our MOHCA could
dictate the order in which patients are selected from the waiting lists. In addition the MOHCA could
be used to identify whether an actual case mix, perhaps obtained from the elective surgical waiting
list, results in an equitable usage of hospital resources or not. In the event that it is not, this analysis
could initiate the creation of a revised case mix that is more equitable.
The selected PCM directly affects the creation of the master surgical schedule for the
operating theatres. The number of blocks, for instance assigned to each specialty, and the duration
of these blocks, will reflect the number of patients treated of each specialty. The selected PCM will
also affect the amount of resources to be assigned to the blocks.
To further understand how hospitals can best use a MOHCA such as this, let us review what it
provides.

i. The MOHCA identifies an upper bound on the amount of work that each surgical specialty should
be able to perform over a defined period of time, for instance when it does not have to share
hospital resources and facilities with other specialties.
ii. The MOHCA determines a set of equitable case mixes (EQCM). These case mixes regulate the competition between the surgical specialties in the fairest way possible.
iii. The MOHCA determines a set of Pareto optimal “non-dominated” case mixes. These case mixes may bypass the assignment of resources to certain specialties in order to obtain a higher number of overall treatments.
In regard to i), it is worth mentioning that most hospital units must share resources with other units. A typical example is the shared usage of operating theatres. Hence these values are, at first thought, only of academic interest. Upon reflection however, this viewpoint is a little hasty. There are
units like the emergency department (ED) which have their own resources. It is theoretically possible
that other specialties will be given their own resources in the future, for example if the need is great
enough, costs are low enough, or additional space is made available. In those circumstances this
information is very useful. This information is most useful however to compare how fair a given PCM
is. For example, each hospital unit can individually compare “what productivity they are permitted to
achieve” with “what productivity they could theoretically have achieved”, if for instance competition
had been regulated completely in their favour. If this comparison is not favourable then a review of
the PCM may be requested. A review of the PCM could result in an alteration of the PCM and a re-
assignment of resources to some specialties. The affected specialty perhaps would receive additional
support in order to increase its outputs.
In regard to ii., it is worth mentioning that the EQPCM describe what work level can be
achieved if no specialty is unduly biased or prioritized. This is not to say that all specialties have an
equal share of the resources; in fact, some specialties will be assigned more and some less. It just
means that, over the specified time period, resources will have been distributed as fairly as possible
and each specialty will be sufficiently productive. Our numerical investigations have shown that the
EQPCM are quite similar: the differences in individual treatment numbers are relatively small, and
the overall number of treatments varies only slightly.
In regard to iii., it is worth mentioning that the POPCM are highly productive in specific ways,
but some solutions are not very equitable. An analysis of the POPCM is needed to choose a single
PCM that will be the "model" the hospital will adhere to and emulate when performing its day to day
operations. In this analysis managers are required to choose case mixes and prioritize certain
specialties. They will, however, have better knowledge and understanding of the hospital's goals and
agendas with which to make this selection. The EQPCM will be favoured highly in such an analysis,
as treatment numbers are high overall and individual specialties are also quite productive.

6. Conclusions

This article introduces a multi-objective approach for analysing the capacity of hospitals. This is
required because, within hospitals, there are many competing service priorities, and these require
the regulation of services and treatments. This article demonstrates how to analyse the effect of
different regulatory possibilities, incorporating the competition between different patient types and
specialties, and alternative capacity metrics. This multi-objective hospital capacity analysis (MOHCA)
is also necessary because the mix of patients that use the services provided in hospitals is variable,
and planning activities should not be based purely upon the analysis of a single patient case mix. As
this approach performs a sensitivity analysis over all patient case mixes, it is very valuable and
applicable to real hospitals.
The hospital capacity model (HCM) used within the MOHCA identifies the theoretical capacity
of a hospital and at the same time provides a "plan" of how the hospital is best utilized. The plan
describes how many patients are treated of each type, and where those patients should be treated.
These plans can be translated into detailed schedules or else used for other planning purposes, like
infrastructure expansion planning. The HCM can also be used to measure the effectiveness of
hospital scheduling techniques. For example, a modified version of the HCM can approximate the
time required to process a particular PCM; that time is a benchmark for a schedule's horizon (i.e.
makespan).
The proposed multi-objective capacity model (MOHCM) may be solved to provide a set of
candidate non-dominated solutions. These solutions regulate the hospital's capacity to treat patients
of different types. The solutions of the MOHCM are classified as Pareto optimal case mixes (POPCM),
otherwise described as non-dominated case mixes (NDPCM).
In summary, we advocate that our MOHCA is used for the following purposes:
• The MOHCA can be used to correct the case mix provided by the standard HCM. The solution of
the HCM is not Pareto optimal and is dominated by other case mixes.
• The MOHCA can be used to identify a single solution that optimizes all objectives as best it can.
In other words, it determines which case mix is most equitable among all the possibilities.
Identifying a case mix that regulates the competition between specialties is a very difficult task.
• The MOHCA performs an analysis of the hospital's capacity for all possible patient case mixes.
The work done by the hospital varies with the patient case mix, and the number of possible case
mixes is vast, if not infinite. No other approach can identify these effects.
• The set of non-dominated case mixes can be stored for future reference. Different preferences
can be evaluated easily by scaling and sorting these solutions.
• The MOHCA can transform any infeasible PCM into a Pareto optimal PCM.
• The effect of not treating (i.e. cancelling) patients from different surgical specialties can be
identified.
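The scaling-and-sorting idea mentioned in the list above can be sketched as follows. The stored case mixes and the preference weights are hypothetical: each objective is normalized to [0, 1] across the archive, and a weighted score then re-ranks the stored solutions without re-solving the model.

```python
# Re-rank an archive of stored non-dominated case mixes under different
# preference weights. Case-mix values and weight vectors are hypothetical.

solutions = [(40, 10, 30), (35, 20, 30), (25, 25, 28)]

def scaled(sols):
    """Scale each objective to [0, 1] across the archive (min-max normalization)."""
    los = [min(s[m] for s in sols) for m in range(len(sols[0]))]
    his = [max(s[m] for s in sols) for m in range(len(sols[0]))]
    return [tuple((v - lo) / (hi - lo) if hi > lo else 1.0
                  for v, lo, hi in zip(s, los, his)) for s in sols]

def rank(sols, weights):
    """Sort solutions by a weighted sum of their scaled objective values."""
    scores = [sum(w * v for w, v in zip(weights, z)) for z in scaled(sols)]
    return [s for _, s in sorted(zip(scores, sols), reverse=True)]

print(rank(solutions, (0.6, 0.2, 0.2)))  # weights favour specialty 1
print(rank(solutions, (0.2, 0.6, 0.2)))  # weights favour specialty 2
```

Changing the weight vector re-orders the archive instantly, which is why storing the non-dominated set is valuable: different managerial preferences can be evaluated without any further optimization.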

This article's numerical investigation has shown that the aforementioned multi-objective
approach is well suited to performing a sensitivity analysis of hospital capacity. It also
demonstrates that a lexicographic bound analysis is not appropriate for multi-objective hospital
capacity problems and cannot be used to reduce the size of the objective space for epsilon
constraint type methods. We have also demonstrated that it is perhaps better to add the total
number of treatments as an additional objective; jointly maximizing the number of treatments in
each specialty type is not an equivalent situation.
This article's accomplishments include a method for determining alternative Pareto optimal
solutions at a particular Euclidean distance. Several new algorithms have been proposed to generate
non-dominated solutions. These robust techniques have been specifically developed for problems
with a large number of objectives and for real life applications. The numerical investigations
have demonstrated that they are effective.

Acknowledgements

This research was funded by the Australian Research Council (ARC) Linkage Grant LP 140100394 and
supported by the Princess Alexandra Hospital, Brisbane, Australia. We would like to thank Dr Andy
Wong for his assistance in obtaining data for our case study. We would also like to thank Michael
Sinnot, David Cook, Sean Birgan and other staff at the PAH for their considerable feedback and their
time.


Appendix A.

Theorem 1: Minimizing $D$ is equivalent to minimizing $D^2$.

Proof: Let $D(\tilde{Z}_1, \tilde{Z}_2, \ldots, \tilde{Z}_M) = \sqrt{\sum_m \left(1 - 2\tilde{Z}_m + \tilde{Z}_m^2\right)}$ and $D'(\tilde{Z}_1, \tilde{Z}_2, \ldots, \tilde{Z}_M) = \sum_m \left(1 - 2\tilde{Z}_m + \tilde{Z}_m^2\right)$. The partial derivatives are zero at the optimal solution. As the partial derivatives of $D$ and $D'$ are zero at the same point, the two functions have the same optimal solution:

$\dfrac{\partial D}{\partial \tilde{Z}_m} = \dfrac{1}{2} \cdot \dfrac{-2 + 2\tilde{Z}_m}{\sqrt{\sum_m \left(1 - 2\tilde{Z}_m + \tilde{Z}_m^2\right)}}$. Hence $\dfrac{\partial D}{\partial \tilde{Z}_m} = 0 \Rightarrow -2 + 2\tilde{Z}_m = 0$, i.e. $\tilde{Z}_m = 1$.

$\dfrac{\partial D'}{\partial \tilde{Z}_m} = -2 + 2\tilde{Z}_m$. Hence $\dfrac{\partial D'}{\partial \tilde{Z}_m} = 0 \Rightarrow -2 + 2\tilde{Z}_m = 0$, i.e. $\tilde{Z}_m = 1$.

If $\tilde{Z}^*$ is the optimal solution of $D'$, then $\tilde{Z}^*$ is an optimal solution of $D$ iff $D(\tilde{Z}^*) \le D(\tilde{Z})\ \forall \tilde{Z}$. Hence it is necessary to prove that $\sqrt{\sum_m \left(1 - 2\tilde{Z}^*_m + (\tilde{Z}^*_m)^2\right)} \le \sqrt{\sum_m \left(1 - 2\tilde{Z}_m + \tilde{Z}_m^2\right)}\ \forall \tilde{Z}$. If both sides are squared then $\sum_m \left(1 - 2\tilde{Z}^*_m + (\tilde{Z}^*_m)^2\right) \le \sum_m \left(1 - 2\tilde{Z}_m + \tilde{Z}_m^2\right)$. As $\tilde{Z}^*$ is the optimal solution of $D'$, this inequality holds by definition. Hence the proof is complete. ∎
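Theorem 1 can also be checked numerically. For any collection of normalized objective vectors (random values here), the vector minimizing $D$ is the same one minimizing $D^2$, since the square root is monotone:

```python
# Numeric check of Theorem 1: the argmin of D equals the argmin of D^2
# over a set of randomly generated candidate objective vectors.
import math
import random

random.seed(0)
M = 4  # number of objectives
candidates = [tuple(random.random() for _ in range(M)) for _ in range(100)]

def d2(z):
    """Squared Euclidean distance to the ideal point (1, ..., 1)."""
    return sum((1 - zm) ** 2 for zm in z)

best_d = min(candidates, key=lambda z: math.sqrt(d2(z)))   # minimizes D
best_d2 = min(candidates, key=d2)                          # minimizes D^2
print(best_d == best_d2)  # True
```

Note that $\sum_m (1 - 2\tilde{Z}_m + \tilde{Z}_m^2) = \sum_m (1 - \tilde{Z}_m)^2$, so the code's distance-to-ideal form matches the expanded expression used in the proof.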

Appendix B

For the HCM, Theorem 2 is proposed. It demonstrates that further expansion from infeasible points
is not needed. Before those details are presented, let us define a vector $\tilde{\beta}$ of all $\beta_{\gamma,\psi,k,\pi}$ values and
recall that $\beta_{\gamma,\psi,k,\pi}$ is the primary decision for the number of patients of type $\gamma$ assigned to space $\pi$
for task $k$ of PCP $\psi$.

Theorem 2: If $\tilde{\beta}$ is an infeasible assignment (i.e. solution), then $\tilde{\beta}' = \tilde{\beta} + \tilde{\Delta}$ is also infeasible
regardless of how $\tilde{\Delta}$ is defined, where $\tilde{\Delta}$ is a vector of values $\Delta_{\gamma,\psi,k,\pi} \ge 0$.

Proof: Let $LHS_\pi = \sum_{\forall(\gamma,\psi,k,\pi)\in\wp_3} \left(\beta_{\gamma,\psi,k,\pi} \cdot t_{\gamma,\psi,k,\pi}\right)$. As $\tilde{\beta}$ is infeasible, it can be concluded that
$LHS_\pi > T_\pi$ for at least one space $\pi \in \Pi$. The difference between the left and right hand sides is the
level of the infeasibility. The LHS for the new solution $\tilde{\beta}'$ is as follows:

$LHS'_\pi = \sum_{\forall(\gamma,\psi,k,\pi)\in\wp_3} \left(\beta'_{\gamma,\psi,k,\pi} \cdot t_{\gamma,\psi,k,\pi}\right) = \sum_{\forall(\gamma,\psi,k,\pi)\in\wp_3} \left(\left(\beta_{\gamma,\psi,k,\pi} + \Delta_{\gamma,\psi,k,\pi}\right) \cdot t_{\gamma,\psi,k,\pi}\right)$

$= LHS_\pi + \sum_{\forall(\gamma,\psi,k,\pi)\in\wp_3} \left(\Delta_{\gamma,\psi,k,\pi} \cdot t_{\gamma,\psi,k,\pi}\right) \ge LHS_\pi$

Hence $LHS'_\pi = LHS_\pi$ if and only if $\Delta_{\gamma,\psi,k,\pi} = 0\ \forall(\gamma,\psi,k,\pi) \in \wp_3$. If a single additional patient of
type $(\gamma,\psi,k)$ were added to space $\pi$ then $\Delta_{\gamma,\psi,k,\pi} = 1$ and $LHS'_\pi \ge LHS_\pi > T_\pi$. Hence $\tilde{\beta}'$ is
infeasible. ∎
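Theorem 2 can be illustrated with a small numeric sketch. The per-patient durations and the capacity below are hypothetical: once the time budget of a space is exceeded, adding patients anywhere can only increase the load on that space.

```python
# Illustration of Theorem 2: expanding an infeasible assignment stays infeasible.
# The durations t and the capacity T are hypothetical illustration values.

t = [2.0, 3.0, 1.5]   # time per patient for three patient types in one space
T = 20.0              # total time available in that space
beta = [4, 3, 4]      # assignment: 8 + 9 + 6 = 23 hours > 20, so infeasible

def load(b):
    """Total time consumed in the space by assignment b."""
    return sum(bi * ti for bi, ti in zip(b, t))

assert load(beta) > T          # the starting point violates the capacity

delta = [1, 0, 2]              # add more patients (all deltas are >= 0)
beta2 = [b + d for b, d in zip(beta, delta)]
print(load(beta2) > T)         # the expanded assignment is still infeasible
```

Because every $\Delta \ge 0$ and every duration is non-negative, the load is non-decreasing under expansion, which is exactly the monotonicity argument of the proof.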
