To cite this article: Wenjie Chen, Hainan Guo & Kwok-Leung Tsui (2020) A new medical staff
allocation via simulation optimisation for an emergency department in Hong Kong, International
Journal of Production Research, 58:19, 6004-6023, DOI: 10.1080/00207543.2019.1665201
A new medical staff allocation via simulation optimisation for an emergency department in
Hong Kong
Wenjie Chen^a, Hainan Guo^b* and Kwok-Leung Tsui^c
^a Department of Systems Engineering and Engineering Management, City University of Hong Kong, Hong Kong, China; ^b Research Institute of Business Analytics & Supply Chain Management, College of Management, Shenzhen University, Shenzhen, China; ^c School of Data Science, City University of Hong Kong, Hong Kong, China
(Received 26 September 2018; accepted 2 September 2019)
Whether triage targets can be achieved is a key measure of service quality for an emergency department in healthcare management. In this research, we focus on triage targets and aim to fully meet the target of fast emergency response for critical patients, subject to the triage requirements for patients in other categories, by optimising the medical staff allocation in the emergency department. The main challenges stem from multiple stochastic constraints and the time-consuming simulation. To solve this stochastically constrained discrete optimisation via simulation problem, we develop a discrete-event simulation model and propose a simulated-annealing-based algorithm called ConSA that adopts a special searching mechanism and an efficient simulation budget allocation rule to find a high-quality configuration of medical staff. A case study based on data from a public hospital in Hong Kong is carried out. Numerical experiments demonstrate that our algorithm leads to a 38.28% improvement in the main performance measure compared to the current staff allocation and dominates other algorithms in terms of computational efficiency and output accuracy. This indicates that our method is a good decision tool for hospital managers.
Keywords: healthcare management; medical resource allocation; discrete optimisation via simulation; stochastic constraint;
simulated annealing
1. Introduction
An emergency department (ED) plays a unique role of the gatekeeper in healthcare systems. It opens the door to unscheduled
patients with diverse degrees of illness or injury, offers urgent treatment, and guides patient admissions to the hospital.
Unfortunately, this gatekeeper has been pushed to 'breaking point' by the problem of overcrowding (Di Somma et al. 2015).
There are many measures of overcrowding in the ED, such as the length of stay (LOS), waiting time for physicians (WT), and throughput (Hwang et al. 2011). Apart from focussing on measurements for the patient population as a whole,
more and more hospital authorities build up a triage system and propose corresponding triage targets which provide more
specific service indicators for patients in different acuity levels. Valid triage targets can not only prioritise those who are in
need of immediate care but also reduce the delay of treatment and the waste of resources (Tanabe et al. 2004). However,
achieving all the triage targets is still a herculean challenge for EDs. In the United States, only 31% of EDs reached triage
targets for over 90% of patients (Horwitz, Green, and Bradley 2010). The Hong Kong healthcare system also applies a
5-level triage system and sets up a series of requirements to lessen the congestion and manage patient flows (Hong Kong
Hospital Authority 2018). Category I (critical) patients should get immediate treatment. 95% of category II (emergency)
patients are treated within 15 min. 90% of patients sorted as category III (urgent) are treated within 30 min. For category
IV (semi-urgent) patients, there is an unofficial waiting time target that 75% of them should get medical resources within
120 min. There is no target set for category V (non-urgent) patients because non-acute patients are suggested to seek medical
care in public or private clinics.
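For reference, the targets just listed can be captured in a small lookup table; the sketch below is our own illustration (the dictionary layout and function names are not from the paper):

```python
# Waiting-time triage targets stated in the text (Hong Kong Hospital
# Authority 2018). Category I requires immediate treatment and category V
# has no target, so both are represented as None; the 75% / 120 min entry
# for category IV is the unofficial target mentioned above.
TRIAGE_TARGETS = {
    "I": None,        # immediate treatment, no percentile target
    "II": (95, 15),   # 95% within 15 min
    "III": (90, 30),  # 90% within 30 min
    "IV": (75, 120),  # unofficial: 75% within 120 min
    "V": None,        # directed to public or private clinics
}

def meets_target(category, pct_seen_within_limit):
    """True if the observed percentage of patients seen within the
    category's time limit satisfies its triage target."""
    target = TRIAGE_TARGETS[category]
    if target is None:
        return True  # no percentile target defined for this category
    required_pct, _limit_min = target
    return pct_seen_within_limit >= required_pct
```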
When the ED serves different categories of patients, the primary duty of the ED is to deal with the most urgent cases. It
should give the highest priority to critical patients and allocate all the necessary medical resources to them as soon as possible
because every minute counts for the survival of critical patients. However, the emergency response time for critical patients
is always influenced by some factors like limited medical resources and a mass influx of non-critical patients. Therefore,
achieving the triage targets, especially the target of rapid emergency response for critical patients, seems imperative for ED
operations.
Motivated by this issue, we formulate a simulation optimisation problem based on the ED in Hong Kong that minimises
the proportion of critical patients whose waiting time exceeds the threshold subject to other category patients’ waiting time
targets by reconfiguring medical staff. To tackle the discrete optimisation via simulation (DOvS) problem with stochastic
constraints, we propose a simulated-annealing-based random search heuristic called ConSA to find a high-quality solution
in a reasonable amount of time.
The main contributions of this study are threefold.
• According to the triage targets from the Hong Kong Hospital Authority, we present a novel problem which has a
significant value in practice. To the best of our knowledge, this is the first paper that focuses on all the triage targets in
practical ED operations and that fully improves the emergency response capability for critical patients.
• We extend simulated annealing (SA) for the stochastically constrained DOvS problem and propose the ConSA
method that can handle the multiple stochastic constraints and the time-consuming simulation effectively and
efficiently.
• An empirical study based on the ED at the Princess Margaret Hospital in Hong Kong is carried out. The results
indicate that our algorithm reaches a better solution than existing approaches and reveal inherent managerial insights for hospital stakeholders.
The paper is organised as follows. Section 2 reviews the related research on ED operations from an operations research
point of view and the mainstream approaches to tackling the DOvS problem with stochastic constraints. In Section 3,
we describe the discrete-event simulation (DES) model based on a public ED in Hong Kong and analyse the collected
data. Section 4 presents the problem formulation and the ConSA algorithm. An empirical evaluation to compare different
algorithms is provided in Section 5. Remarks and further research are discussed in Section 6.
2. Literature review
We review pertinent literature from two aspects: the practical aspect of ED operations and the methodological aspect of
simulation optimisation. The originalities and distinctions of our work are further stated.
EDs are suffering from considerable crowding which cannot be well handled by empirical management. Thus, innovative
system engineering methods to enhance the service capability of the ED have drawn a broad interest from operations
researchers (Xie and Lawley 2015). The existing research mainly focuses on (but not limited to) the length of stay (Feng,
Wu, and Chen 2017), waiting time (Sinreich, Jabali, and Dellaert 2012), the leave without being seen (LWBS) (Green
et al. 2006), throughput (Ahmed and Alkhamis 2009), and boarding time (Shi et al. 2016). Hwang et al. (2011) and Hoot and
Aronsky (2008) gave more detailed overviews on outcome measurements of the ED in operations research and applications.
Unlike overall measurements such as LOS and WT, triage targets provide more specific service indicators for different categories of patients and are increasingly popular in healthcare systems. However, most related studies neglect triage targets. Only a few works (Ahmed and Alkhamis 2009; Guo et al. 2017) paid attention to triage targets, and they concentrated on only part of the targets or overlooked the target of fast emergency response for critical patients. In this paper, we fill this gap, formulate a stochastically constrained DOvS problem, and present the ConSA method to find a high-quality medical staff configuration that improves the emergency response capability for critical patients under constraints on other patients' service qualities.
There have been several studies in the past decades dealing with stochastically constrained simulation optimisation problems. These methods can be roughly classified into two categories: (i) methods for relatively small systems (Andradóttir
and Kim 2010; Pujowidianto, Lee, and Chen 2013; Gao et al. 2018) and (ii) methods for large-scale systems. To tackle a
complex practical problem, we focus on the techniques for large-scale systems. One of the mainstream approaches for large-
scale systems is the metamodel simulation optimisation technique. Kleijnen, Van Beers, and Van Nieuwenhuyse (2010)
and Mohammad Nezhad and Mahlooji (2014) combined the surrogate model with mathematical programming to solve
stochastically constrained optimisation problems. Although their methods are efficient for a single simulation model, they cannot scale to more complex problems (for example, when the dimension of the decision variables is large) because a large number of design points is needed to support each metamodel. Another approach is based on random search heuristics. The
proposed ConSA algorithm selects the heuristic SA to guide the search process because of its good flexibility in industrial
problems and its special mechanism to jump out of local optima.
In our problem, the main difficulties lie in multiple stochastic constraints and the time-consuming simulation. To
tackle the stochastic constraints, some researchers develop advanced ranking-and-selection-based methods. Andradóttir
and Kim (2010) used indifference zones to identify and remove infeasible and poor feasible solutions. Gao et al. (2018) proposed a new ranking-and-selection (R&S) method using regression metamodels to find the best feasible solution efficiently under stochastic constraints. Lee et al. (2012) also focussed on the R&S problem to maximise the probability of
correct selection of the best feasible solution for stochastically constrained DOvS problems. Batur and Kim (2005) and Gao
and Chen (2016) concentrated on feasibility determination for the problems with the deterministic objective and stochas-
tic constraints. The approaches mentioned above are suitable for a finite number of systems (usually less than 500). For
large-scale systems, Tsai and Sheng (2014) incorporated two genetic-algorithm-based searching mechanisms with ranking
and selection procedures to address the problem with the stochastic objective function and one stochastic constraint. Apart
from these, some researchers transform a constrained problem into an unconstrained one to exploit mature techniques for unconstrained simulation optimisation problems. Luo and Lim (2013) utilised the Lagrangian method for stochastically
constrained problems. However, their method needs to estimate the partial derivatives of the Lagrangian function which
is difficult for stochastic problems. Park et al. (2014) combined a search algorithm with a novel approach called penalty
function with memory (PFM) (Park and Kim 2015). Nevertheless, the PFM method is more appropriate for active con-
straints. Li, Sava, and Xie (2009) integrated COMPASS (Hong and Nelson 2006; Xu, Nelson, and Hong 2010), a moving
observation area, and a penalty function to solve the DOvS problem which only had one stochastic constraint with strict
inequality. For the multiple stochastic constraints in our problem, we construct a new fitness function tailored for SA using
a dynamic penalty function with increasing penalty factors. The dynamic penalty function ensures that the penalty factor
goes to infinity as the iteration count increases. This function guarantees that the output converges to a feasible solution and that the probability of accepting infeasible solutions converges to zero.
Considering the time-consuming simulation, Pujowidianto et al. (2016) integrated Nested Partition (NP) (Shi and Ólafs-
son 2000) with the optimal computing budget allocation for constrained optimisation rule called OCBA-CO (Lee et al. 2012)
to tackle a practical DOvS problem with stochastic constraints. The OCBA-CO rule in Lee et al. (2012) is derived from an
optimisation problem that maximises the probability of correct selection (PCS) subject to a fixed total simulation
budgets. However, in practical applications, directly using the OCBA-CO rule in Lee et al. (2012) is not an optimal choice
because we cannot guarantee that the PCS equals one in finite time, and it is hard to decide how large the total simulation budget should be. To make the OCBA-CO rule more applicable to practical problems, we propose a pragmatic
simulation budget allocation problem which minimises the total simulation replications with the constraint on the PCS. Our
formulation is more grounded in reality because an acceptable PCS and minimal simulation budgets seem efficient enough
for practical problems.
Our ConSA algorithm is composed of two phases. Phase I uses SA to guide the search direction while considering
stochastic constraints. It first iteratively allocates simulation budgets to the current solution and neighbour solution accord-
ing to the efficient OCBA-CO rule to distinguish the better one. Then, we construct a new fitness function using a dynamic
penalty function with increasing penalty factors. Based on the new fitness, we determine the acceptance probability accord-
ing to the Metropolis criterion (Metropolis et al. 1953). This process continues until the termination condition of Phase I is
met. Phase II invokes a final selection procedure using the efficient OCBA-CO rule to correctly estimate the best one from
all the searched solutions with limited simulation budgets.
Figure 1. Patient arrival rate. (a) category III patients, (b) category IV patients, and (c) category V patients.
proportions of patients. Table 1 presents the patient category distribution. The curves in Figure 1 show the actual daily
patterns of patient arrivals for category III, IV, and V patients. Category III and IV patients have similar arrival patterns: their daily arrival rates peak around 10 a.m. and slowly drop from 3 p.m. Category V patients surge from 8 a.m.
because they usually go to the ED for sick notes in the early morning. For category I and II patients, the patterns of the
arrival rate remain relatively stable. To model the patient arrival, we impose a non-homogeneous Poisson distribution and
assign a constant arrival rate to each hourly time slot empirically according to the real data.
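An arrival stream with a constant rate per hourly slot, as described above, can be generated exactly by the standard thinning method for non-homogeneous Poisson processes. The sketch below is our own illustration of that technique (function name and rate list are ours), not the authors' simulation code:

```python
import random

def arrival_times(hourly_rates, seed=None):
    """Generate one day of arrival times (in hours, over [0, 24)) from a
    non-homogeneous Poisson process with a constant rate in each hourly
    slot, using thinning: draw candidates from a homogeneous process at
    the maximum rate, then keep each with probability rate(t) / rate_max.
    hourly_rates is a list of 24 rates (patients per hour)."""
    rng = random.Random(seed)
    rate_max = max(hourly_rates)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate_max)   # candidate inter-arrival time
        if t >= 24.0:
            return times
        if rng.random() < hourly_rates[int(t)] / rate_max:  # thin
            times.append(t)
```

Thinning is exact for piecewise-constant rates, which avoids the boundary errors of naively drawing exponential gaps with the rate of the slot where the previous arrival fell.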
• Registration and triage: The ED allocates registration nurses to help patients record their information. After the
registration, patients undergo the triage process operated by triage nurses. These two processes are skipped for all critical patients and for around 80% of emergency patients, who need immediate access to all possible medical resources.
• Resuscitation: Critical patients have the highest priority to occupy all kinds of resources to get rescued. There
are three resuscitation rooms. Senior doctors and senior nurses are in charge of emergency treatment. Detailed operation procedures such as anaesthesia and incision closure are beyond the scope of our study.
• Consultation: The remaining emergency patients and urgent patients are treated by senior doctors in cubicles while
semi-urgent patients and non-urgent patients are diagnosed by junior doctors in consultation rooms. This area
consists of fifteen cubicles, ten consultation rooms, and two waiting halls. For each queue, the patients with the
same category follow the First In First Out (FIFO) rule, and the patients with different categories are treated
according to their priorities. During the treatment, physicians decide whether patients need extra laboratory tests
or imaging investigations according to the patient's vital signs. When patients get their test results, they return to the same physician for a reassessment or a third assessment. The consultation-test process repeats
until enough treatments are completed. We add the revisiting procedure into our model to simulate the complex
diagnosis process as realistically as possible. After the diagnosis, nurses are allocated to conduct a brief follow-up
operation to patients if necessary. Finally, physicians arrange patients in observation rooms, hospitals, or to be
discharged.
• Laboratory test and imaging investigation: Some patients need further laboratory tests and imaging investigations. Senior and junior nurses assist with the testing process. The lab test and imaging investigation take no more than thirty minutes, while it takes more than two hours to get the results. Afterwards, patients return to the same physician for further diagnosis. We regard the testing process as a whole module and omit details such as X-ray and ultrasound. This simplification has little effect on our study.
• Observation: Further observation is required by 4.9% of category I patients, 5.3% of category II patients, 10.9% of category III patients, 4.4% of category IV patients, and 1.3% of category V patients. To occupy the beds
in observation rooms, patients follow the same rules as those in consultation rooms that the FIFO rule is carried
out among the same category patients and the priority principle is obeyed by the patients with different categories.
We build a simple model of the observation unit because our partner hospital provided only rough data and little information about this unit.
To model the service process, a key component is the duration of the service. However, the data is not available from
the ED. After consulting with hospital managers, we decide to use the parameters and distributions in Table 2 as inputs to
the simulation model.
Table 4. Output comparison between real data and the simulated result.

                                 CAT 1   CAT 2   CAT 3   CAT 4   CAT 5
WTT (h)   Real data                -      0.04    0.09    0.11    0.11
          Simulated result         -      0.05    0.08    0.11    0.11
WTP (h)   Real data                -      0.11    0.31    1.65    2.07
          Simulated result         -      0.09    0.28    1.60    2.12
LOS (h)   Real data              1.30     1.23    3.53    4.15    3.13
          Simulated result       1.32     1.20    3.54    4.14    3.15

Discharge destination (%)
          Discharged home/others:  real data 67.47, simulated result 73.85
          Admitted to hospitals:   real data 32.53, simulated result 26.15
4. Optimisation algorithm
4.1. Problem formulation
Our goal is to find a high-quality configuration of the medical staff in the ED in order to minimise the proportion of critical
patients whose waiting time exceeds the specified value subject to the constraints on other service qualities. The model is
described as follows:
$$\begin{aligned}
\min\quad & f_1(x)=\mathbb{E}[F_1(x,\omega)] \\
\text{s.t.}\quad & f_k(x)=\mathbb{E}[F_k(x,\omega)]\le\alpha_k, && k=2,\dots,m, \\
& g(x)=\sum_{i=1}^{n}c_i x_i\le c, \\
& l_i\le x_i\le u_i, && i=1,2,\dots,n, \\
& x_i\in\mathbb{N}, && i=1,2,\dots,n,
\end{aligned}\qquad(1)$$
where the decision variable xi (i = 1, 2, . . . , n) represents the number of type i medical staff with the lower bound li and
upper bound ui , x = [x1 , x2 , . . . , xn ] denotes the vector of the decision variable xi (i = 1, 2, . . . , n), and n is the total number
of medical staff types.
Fk (x, ω) (k = 1, 2, . . . , m) represents an estimate of the proportion of category k patients whose waiting time exceeds
the threshold (we denote the threshold wtk ) given solution x and random noise ω. Function fk (x) (k = 1, 2, . . . , m) is the
expectation of Fk (x, ω) (k = 1, 2, . . . , m). The objective is to minimise f1 (x), and stochastic constraints restrict fk (x) (k =
2, . . . , m) by the upper bound αk (k = 2, . . . , m). Here, αk (k = 2, . . . , m) and wtk (k = 2, . . . , m) are specified according to
the requirements of the Hong Kong Hospital Authority as shown in Table 5. Because wt1 is not explicitly defined by ED
managers, we set 30 s as the threshold of the emergency response time for category I patients. g(x) represents the total
labour cost of medical staff which is bounded by the maximum cost c, and ci (i = 1, 2, . . . , n) is the labour cost for type i
medical staff. In this problem, the objective function and constraints comprise stochastic elements that have no analytical
forms and can only be estimated via the simulation.
In our cooperative hospital, ED managers impose three 8 h-shift staffing plans for a day and set the minimal possible
staffing level at one admission nurse (AN), one triage nurse (TN), two senior nurses (SN), one junior nurse (JN), two senior
doctors (SD), and two junior doctors (JD), and the maximal possible staffing level at two ANs, two TNs, six SNs, six JNs, six
SDs, and six JDs. The solution space contains (2 × 2 × 5 × 6 × 5 × 5)³ = 2.7 × 10¹⁰ configurations. Suppose that we naively enumerate the whole solution space, providing 50 simulation replications for each solution at 0.6 s per replication. The total simulation time would be around 25,685 years, which is unacceptable. Therefore, an effective and efficient simulation optimisation method is needed to tackle this kind of problem.
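The enumeration estimate quoted above is easy to reproduce with a back-of-envelope calculation (the variable names are ours):

```python
# 6 staff types per shift with the stated lower/upper bounds
# (AN: 1-2, TN: 1-2, SN: 2-6, JN: 1-6, SD: 2-6, JD: 2-6),
# and three independent 8-h shifts per day.
per_shift = 2 * 2 * 5 * 6 * 5 * 5   # choices within one shift
space = per_shift ** 3              # full solution space over 3 shifts
seconds = space * 50 * 0.6          # 50 replications, 0.6 s each
years = seconds / (365 * 24 * 3600)
print(per_shift, space, round(years))  # about 25,685 years of simulation
```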
Simulated annealing is widely used in solving combinatorial optimisation problems. It first generates a neighbour solu-
tion xn from a neighbourhood structure which assigns a set of neighbour solutions N(xc ) to the current solution xc . Then,
based on the energy difference and temperature, a special acceptance rule, the Metropolis criterion (Metropolis et al. 1953),
is applied to determine the acceptance probability. Namely, if the fitness $f(x_n)$ is better than $f(x_c)$, the neighbour solution replaces the current solution; otherwise, the neighbour solution is accepted with probability $e^{-[f(x_n)-f(x_c)]^{+}/T}$. The
Metropolis criterion enables a jump move from the better solution to the worse solution with a certain probability to avoid
falling into a local optimum. At high temperatures, more unknown states are explored; at low temperatures, exploitation
happens to search around the current best state. As the temperature goes down, more states gather in the global optimal
region. Although such a search process is easy to operate, SA is susceptible to noisy function values and even performs
like a random search, as is the case in simulation optimisation. Thus, we propose a novel SA-based heuristic to enhance the
search efficiency and output accuracy for DOvS problems with stochastic constraints.
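The Metropolis acceptance rule described above can be sketched in a few lines; this is a minimal illustration for a minimisation problem (function names are ours), not the ConSA implementation:

```python
import math
import random

def metropolis_accept(f_current, f_neighbour, temperature, rng=random):
    """Metropolis criterion for minimisation: always accept an improving
    neighbour; accept a worse one with probability
    exp(-(f_neighbour - f_current)^+ / T)."""
    delta = f_neighbour - f_current
    if delta <= 0:
        return True  # improving (or equal) move is always accepted
    return rng.random() < math.exp(-delta / temperature)
```

At a high temperature a worse neighbour is accepted almost surely, enabling exploration; as the temperature cools, worse moves are rejected and the search exploits the current region.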
Table 6. Notations for the asymptotically optimal computing budget allocation rule for paired solutions.

b: the better solution
w: the worse solution
s: total number of simulation replications (budgets)
λ_x: proportion of the total simulation replications allocated to solution x
s_x: number of simulation replications allocated to solution x, i.e. s_x = λ_x s
F_k(x, ω_j): the jth simulation sample of the kth performance for solution x ∈ Θ, j = 1, 2, …, s_x, k = 1, 2, …, m
f_k(x): mean of F_k(x, ω)
σ_k²(x): variance of F_k(x, ω)
F̄_k(x): sample mean of the kth performance for solution x, i.e. F̄_k(x) = (1/s_x) Σ_{j=1}^{s_x} F_k(x, ω_j)
S_k²(x): sample variance of the kth performance for solution x, i.e. S_k²(x) = (1/(s_x − 1)) Σ_{j=1}^{s_x} (F_k(x, ω_j) − F̄_k(x))²
Metropolis criterion. Phase I terminates when a limit on the total number of iterations is reached. Table 6 defines the notation used to describe the asymptotically optimal computing budget allocation rule for paired solutions.
We assume that simulation output samples are independent across different solutions and replications and that outputs are independently and identically normally distributed, i.e. F_k(x, ω) ∼ N(f_k(x), σ_k²(x)) (k = 1, 2, …, m). Here, f_1(x) represents the objective and f_k(x) (k = 2, …, m) denotes the performance in the stochastic constraints. We use the sample mean and the sample variance to estimate the mean and variance of each performance. Note that F̄_k(x) ∼ N(f_k(x), σ_k²(x)/s_x) (k = 1, 2, …, m). We also assume that the constraint measures do not equal the constraint limits, i.e. f_k(x) ≠ α_k (k = 2, …, m), so that the probability of correct selection (PCS) approaches one as the simulation budget increases.
In our problem, there are two cases when comparing the current solution and neighbour solution. One is that both
solutions are infeasible, and the other is that at least one solution is feasible. In the former case, the better solution cannot be
clearly defined, so we allocate the same simulation replications to the paired solutions. In the latter case, the better solution
should remain feasible and better than any other feasible solution based on the sample mean performance. Therefore, we
aim to minimise the overall computing budgets to achieve the desired probability of correct selection (PCS) of the better
solution b from the current solution and neighbour solution.
$$\min_{s_b,\,s_w}\ s_b+s_w \qquad \text{s.t.}\quad \text{PCS}\ge\beta \qquad(2)$$
Directly solving problem (2) is intractable because the PCS has no closed form. Thus, we adopt the Bonferroni inequality, a fast and inexpensive approximation, to derive a lower bound for the PCS. That is,
$$\begin{aligned}
\text{PCS} &\ge \sum_{k=2}^{m} P\big(\bar F_k(b)\le\alpha_k\big)-P\Big(\bigcap_{k=2}^{m}\big\{\bar F_k(w)\le\alpha_k\big\}\cap\big\{\bar F_1(b)>\bar F_1(w)\big\}\Big)+(2-m) \\
&\ge \sum_{k=2}^{m} P\big(\bar F_k(b)\le\alpha_k\big)-\min\Big\{\min_{k=2,\dots,m} P\big(\bar F_k(w)\le\alpha_k\big),\ P\big(\bar F_1(b)>\bar F_1(w)\big)\Big\}+(2-m) \\
&= \text{APCS}
\end{aligned}$$
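Under the normality assumption stated earlier, the APCS bound can be evaluated directly from normal CDFs once sample means and variances are available. The sketch below is our own illustration of the bound (names and argument layout are ours; index 0 holds the objective, indices 1 and up the constraint performances):

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def apcs(means_b, sds_b, means_w, sds_w, n_b, n_w, alphas):
    """Bonferroni lower bound (APCS) on the probability of correctly
    selecting b over w, treating sample means as normal with variance
    sigma^2 / n. alphas[k] is the constraint limit for performance k >= 1."""
    m = len(means_b)  # 1 objective + (m - 1) constraints
    # P(bar F_k(b) <= alpha_k): b looks feasible on constraint k
    p_b_feas = [norm_cdf((alphas[k] - means_b[k]) / (sds_b[k] / sqrt(n_b)))
                for k in range(1, m)]
    # P(bar F_k(w) <= alpha_k): w also looks feasible on constraint k
    p_w_feas = [norm_cdf((alphas[k] - means_w[k]) / (sds_w[k] / sqrt(n_w)))
                for k in range(1, m)]
    # P(bar F_1(b) > bar F_1(w)): probability of the wrong ordering
    s_bw = sqrt(sds_b[0] ** 2 / n_b + sds_w[0] ** 2 / n_w)
    p_wrong_order = 1.0 - norm_cdf((means_w[0] - means_b[0]) / s_bw)
    return sum(p_b_feas) - min(min(p_w_feas), p_wrong_order) + (2 - m)
```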
$$\min_{s_b,\,s_w}\ s_b+s_w \qquad \text{s.t.}\quad \text{APCS}\ge\beta \qquad(3)$$
Applying the Karush–Kuhn–Tucker (KKT) conditions to problem (3), with Lagrange multiplier ν, gives the stationarity condition

$$1+\nu\left[\frac{1}{2}\sum_{k=2}^{m}\varphi\!\left(\frac{-\delta_k(b)\sqrt{\lambda_b s}}{\sigma_k(b)}\right)\frac{\delta_k(b)}{\sigma_k(b)}(\lambda_b s)^{-1/2}-\frac{1}{2}\,\varphi\!\left(\frac{-\delta_{b,w}}{\sigma_{b,w}}\right)\frac{\delta_{b,w}}{(\sigma_{b,w}^{2})^{3/2}}\cdot\frac{\sigma_1^{2}(b)}{\lambda_b^{2}s^{2}}\right]=0\qquad\text{if }w\in\Theta_O.$$
Note that our problem (minimizing the simulation cost given a constraint on APCS) and the problem in the work of
Lee et al. (2012) (maximising APCS given a fixed simulation cost) share the same KKT conditions. Then, following a similar derivation, we obtain the identical allocation strategy to asymptotically minimise the overall simulation budgets
to achieve the desired approximate probability of correct selection for two solutions.
$$\lambda_w=\begin{cases}
\dfrac{1}{1+\left(\dfrac{\sigma_1(b)}{\sigma_1(w)}\right)^{2}}, & \text{if } w\in\Theta_O \text{ and } \dfrac{\sigma_1(b)}{\sigma_1(w)}\ge\dfrac{\sigma_{q_b}(b)}{\delta_{q_b}(b)}\cdot\dfrac{\delta_{b,w}}{\sigma_1(w)},\\[2ex]
\dfrac{1}{1+\left(\dfrac{\sigma_{q_b}(b)}{\delta_{q_b}(b)}\cdot\dfrac{\delta_{b,w}}{\sigma_1(w)}\right)^{2}}, & \text{if } w\in\Theta_O \text{ and } \dfrac{\sigma_1(b)}{\sigma_1(w)}<\dfrac{\sigma_{q_b}(b)}{\delta_{q_b}(b)}\cdot\dfrac{\delta_{b,w}}{\sigma_1(w)},\\[2ex]
\dfrac{1}{1+\left(\dfrac{\sigma_{q_b}(b)}{\delta_{q_b}(b)}\cdot\dfrac{\delta_{q_w}(w)}{\sigma_{q_w}(w)}\right)^{2}}, & \text{if } w\in\Theta_F,
\end{cases}$$

$$\lambda_b=1-\lambda_w.$$
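The piecewise rule above reduces to a single ratio computation, since the two Θ_O branches simply pick whichever candidate ratio is larger. A compact sketch of this split (function and parameter names are our own illustration):

```python
def budget_split(sigma1_b, sigma1_w, sigma_qb_b, delta_qb_b,
                 delta_bw, sigma_qw_w=None, delta_qw_w=None,
                 w_feasible=False):
    """Asymptotic budget proportions (lambda_b, lambda_w) for the better
    solution b and worse solution w. w_feasible=True selects the Theta_F
    branch; otherwise w is in Theta_O. In both Theta_O cases
    lambda_w = 1 / (1 + ratio^2) with the larger candidate ratio, so a
    max() encodes the two branches."""
    eta_b = sigma_qb_b / delta_qb_b  # noise-to-slack ratio at b
    if w_feasible:
        ratio = eta_b * delta_qw_w / sigma_qw_w
    else:
        ratio = max(sigma1_b / sigma1_w, eta_b * delta_bw / sigma1_w)
    lam_w = 1.0 / (1.0 + ratio ** 2)
    return 1.0 - lam_w, lam_w
```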
In practice, means and variances can be estimated by sample means and sample variances which are consistent estimates,
i.e. F k (x) → fk (x) and Sk2 (x) → σk2 (x) as sx → ∞. We iteratively provide the paired solutions with incremental simulation
replications and use the efficient OCBA-CO rule to allocate simulation replications. This process continues until the
desired level of APCS is reached or the number of simulation replications allocated to the paired solutions exceeds a threshold.
After correctly selecting the better solution from the paired solutions, we calculate the new fitness considering stochastic
constraints. One of the approaches to handling stochastic constraints is to convert the constrained optimisation problem into
an unconstrained optimisation problem via penalty functions, in which the choice of penalty coefficients is critical. For an SA-based algorithm, improper penalty parameters may produce a malformed acceptance probability, which seriously degrades the effectiveness of the search and the validity of the output. We must strike a balance between the objective and the penalised constraints to guarantee that the final output converges to a true optimal solution. Motivated by this issue, we tailor a novel
fitness function:

$$h(x)=\mathbb{E}[F_1(x,\omega)]+\sum_{k=2}^{m}w_k\,v_{\text{iter}}\cdot\max\{0,\ \mathbb{E}[F_k(x,\omega)]-\alpha_k\}$$

where w_k (k = 2, …, m) is the penalty coefficient and v_iter denotes the penalty factor: v_iter = v_{iter−1}/γ if l ≥ l_max, otherwise v_iter = v_{iter−1}, with the initial penalty factor v_0 = 1/γ. (Here iter is the iteration counter, γ is the cooling ratio, l is the epoch length counter, and l_max is the maximum epoch length in SA.)
This penalty function captures the features of SA. When the solution is feasible, the goal remains to minimise the
original objective function E[F1 (x, ω)]. For infeasible solutions, the fitness consists of penalty terms which become larger
as the iteration count increases. The penalty factor v_iter guarantees that the probability of accepting an infeasible solution converges to zero as iter → ∞. Here, we use the sample means of E[F_k(x, ω)] (k = 1, 2, …, m) to estimate h(x), i.e.

$$H(x)=\bar F_1(x)+\sum_{k=2}^{m}w_k\,v_{\text{iter}}\cdot\max\{0,\ \bar F_k(x)-\alpha_k\}.$$
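The sample-based penalised fitness and the penalty factor update can be sketched directly from the formulas above (an illustration with our own function names, not the ConSA code):

```python
def penalised_fitness(F1_bar, Fk_bars, alphas, weights, v_iter):
    """H(x): objective estimate plus dynamic penalties on violated
    stochastic constraints. Fk_bars, alphas and weights correspond to
    constraints k = 2..m."""
    penalty = sum(w * v_iter * max(0.0, fk - a)
                  for w, fk, a in zip(weights, Fk_bars, alphas))
    return F1_bar + penalty

def update_penalty_factor(v, gamma, epoch_len, max_epoch_len):
    """v_iter = v_{iter-1} / gamma whenever the epoch length limit is hit;
    with 0 < gamma < 1 the factor grows without bound over iterations."""
    return v / gamma if epoch_len >= max_epoch_len else v
```

Because v_iter diverges, any fixed constraint violation eventually dominates the fitness, which is what drives the acceptance probability of infeasible solutions to zero.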
In Phase II, each searched solution is loaded into a candidate set. We continually allocate incremental simulation replications to select the estimated best solution from the candidate set until the specified approximate probability of correct selection β is reached. Phase II is necessary for exactly estimating the best solution among all the searched solutions. In other words, without Phase II, we cannot ensure that the feasible solution with the minimum $\bar F_1(x)$ is the final best solution among all the searched solutions, even though the paired solutions can be distinguished correctly in Phase I. Let $\eta_b=\sigma_{q_b}(b)/\delta_{q_b}(b)$, $p=(\sigma_1^2(b)/\lambda_b)/(\sigma_1^2(x)/\lambda_x)$, and $\eta_x=\sigma_1(x)\sqrt{p+1}/\delta_{b,x}$ if $x\in\Theta_O$ or $\eta_x=\sigma_{q_x}(x)/\delta_{q_x}(x)$ if $x\in\Theta_F$. We further assume that $\lambda_b\gg\lambda_x$ for $x\in\Theta_O$. Then $p\to 0$ and $\eta_x\approx\sigma_1(x)/\delta_{b,x}$ if $x\in\Theta_O$. The simulation cost can be asymptotically minimised by the following allocation strategy (Lee et al. 2012) for multiple solutions.
$$\frac{\lambda_{x_1}}{\lambda_{x_2}}=\left(\frac{\eta_{x_1}}{\eta_{x_2}}\right)^{2}\quad\forall\ x_1\neq x_2\neq b,$$

$$\lambda_b=\max\!\left(\lambda_b^{\Theta_O},\ \lambda_b^{\Theta_F}\right)$$

where $\lambda_b^{\Theta_O}=\sigma_1(b)\sqrt{\sum_{x\in\Theta_O}\lambda_x^{2}/\sigma_1^{2}(x)}$ and $\lambda_b^{\Theta_F}/\lambda_x=(\eta_b/\eta_x)^{2}\ \forall\ x\neq b$.
This completes the description of the ConSA algorithm, which we summarise in Algorithm 2 and Figure 3.
5. Experiment results
In this section, we design numerical experiments to assess the effectiveness and efficiency of ConSA. These experiments are conducted with Arena version 14.0 via VBA code on a desktop computer with an Intel Core i5-4590 processor, 3.30 GHz,
and 8.00 GB RAM.
5.1. Comparison between the current and the ConSA staffing levels
In this experiment, we evaluate the improvement in ED service qualities using the solution found by the proposed algorithm.
The ED in our problem employs three shifts a day, each lasting 8 h. The current medical staff comprises the admission nurse
(AN), triage nurse (TN), senior nurse (SN), junior nurse (JN), senior doctor (SD), and junior doctor (JD). Their configuration
Table 7. Comparison between the current and the ConSA staffing levels.

                        Resource configuration              Service quality
         Shift      x1  x2  x3  x4  x5  x6   F̄1(x)(%)  F̄2(x)(%)  F̄3(x)(%)  F̄4(x)(%)  Cost (LC)  WT (h)
Current  Morning     1   2   4   2   3   2     28.37      2.87      3.57     12.25       8650      1.05
         Evening     1   2   4   2   3   2
         Night       1   1   2   1   2   2
ConSA    Morning     1   1   3   3   4   2     17.51      2.30      6.51     13.30       8650      1.09
         Evening     1   1   4   1   4   2
         Night       1   1   2   1   2   2
Service quality improvement (%)                38.28     19.86    −82.35     −8.57          0     −3.81

Notes: 'Current' represents the staffing level that the ED is adopting. 'WT' is the average waiting time for physicians over all patients. x1, x2, x3, x4, x5, and x6 represent the numbers of AN, TN, SN, JN, SD, and JD, respectively.
is shown in Table 7. We use the labour cost unit (LC) to describe the staffing cost. The labour cost for each type of staff is
100 LC for AN, 150 LC for TN, 300 LC for SN, 200 LC for JN, 300 LC for SD, and 200 LC for JD.
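As a quick sanity check, the total labour cost of a staffing configuration follows directly from these unit costs. The sketch below (illustrative Python names, not part of the paper's Arena/VBA model) reproduces the 8650 LC figure for the current configuration in Table 7.

```python
UNIT_COST = {'AN': 100, 'TN': 150, 'SN': 300, 'JN': 200, 'SD': 300, 'JD': 200}

def total_cost(shifts):
    """Sum labour cost (in LC) over all shifts; each shift maps
    staff type to headcount."""
    return sum(UNIT_COST[k] * n for shift in shifts for k, n in shift.items())

# Current staffing level from Table 7 (morning, evening, night).
current = [
    {'AN': 1, 'TN': 2, 'SN': 4, 'JN': 2, 'SD': 3, 'JD': 2},
    {'AN': 1, 'TN': 2, 'SN': 4, 'JN': 2, 'SD': 3, 'JD': 2},
    {'AN': 1, 'TN': 1, 'SN': 2, 'JN': 1, 'SD': 2, 'JD': 2},
]
print(total_cost(current))  # 8650, matching the Cost column of Table 7
```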
We simulate the original model with 500 replications and obtain current service performance with F 1 (x) = 28.37%,
F 2 (x) = 2.87%, F 3 (x) = 3.57%, F 4 (x) = 12.25%, and the total cost 8650 LC. Under the current resource configuration,
28.37% of critical patients receive late access to medical treatment. This result implies room for improvement in service quality through resource reconfiguration under the fixed funding of the ED.
To apply the ConSA approach, we set the maximum labour cost to 8650 LC and the upper bounds on the proportion of patients whose waiting time exceeds the threshold for category II, III, and IV patients to 5%, 10%, and 25%, respectively. The results in Table 7 indicate the strong performance of our method. F 1 (x) decreases sharply from 28.37% to 17.51%, a 38.28% reduction, and F 2 (x) decreases by 19.86%. Although F 3 (x) and F 4 (x) increase by 82.35% and 8.57%, their service performance still satisfies the requirements. In addition, the Cost is unchanged. We also
compare the overall service quality WT for all the patients under the current and recommended configurations. The overall
service quality is composed of the waiting time of category I patients (included in the objective function) and the waiting
time of other categories of patients (included in the stochastic constraints). Although there is a slight increase in WT, the
recommended solution can make full use of medical resources to cut down the waiting time of the most critical patients
and control the service qualities of other patients within the requirements. These changes in the service qualities of different
patients are also what the hospital managers expect (Hong Kong Hospital Authority 2018). In summary, the medical staff reconfiguration can balance the service qualities of patients with different acuity levels given a fixed labour cost and ensure that there is no significant change in the overall service quality across all patients.
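The relative improvements quoted above follow directly from the before-and-after values in Table 7. A small checking sketch (the function name is our own illustration):

```python
def improvement(before, after):
    """Relative reduction in percent; negative values indicate
    deterioration of the measure."""
    return round((before - after) / before * 100, 2)

# Service-quality measures under the current vs. ConSA staffing (Table 7).
print(improvement(28.37, 17.51))  # F1: 38.28
print(improvement(2.87, 2.30))    # F2: 19.86
print(improvement(3.57, 6.51))    # F3: -82.35 (deterioration)
print(improvement(12.25, 13.30))  # F4: -8.57 (deterioration)
print(improvement(1.05, 1.09))    # WT: -3.81 (deterioration)
```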
5.3. Comparison among ConSA, PFM, and NP + OCBA-CO algorithms for different test problems
As stated in Section 2, the methods to DOvS problems with multiple stochastic constraints and large-scale systems are
not fully developed. Among those methods, PFM (Park and Kim 2015) and NP + OCBA-CO (Pujowidianto et al. 2016)
are two typical approaches which focus on tackling stochastic constraints and improving computational efficiency,
respectively. In this subsection, we intend to explore the robustness and effectiveness of ConSA compared with these
two methods.
Figure 4. Relationship between the iteration and the best performance. (a) Tmax = 0.75 and γ = 0.9. (b) Tmax = 0.75 and γ = 0.95.
(c) Tmax = 0.85 and γ = 0.95.
The parameter settings are problem-dependent. For the ConSA approach, we set the initial number of simulation replications s0 to 10, the incremental simulation replications in Phase I to 20, and the incremental simulation replications in Phase II to 100. When the current solution and the neighbour solution are both infeasible, we allocate 100 simulation replications to each solution. The key parameter, the specified approximate probability of correct selection β, influences both simulation time and selection reliability; we make a tradeoff and set β = 95%. The ConSA method uses an initial temperature of 0.75 and a cooling factor of 0.95. Moreover, based on the nature of our problem, we set the penalty coefficients w2 , w3 , and w4 to 18.09, 17.18, and 4.52, respectively.
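For readers less familiar with simulated annealing, the roles of the initial temperature and cooling factor can be sketched with the standard Metropolis acceptance rule (Kirkpatrick, Gelatt, and Vecchi 1983). This is background only, not ConSA's exact acceptance step, which also accounts for stochastic constraints.

```python
import math
import random

def accept(delta, temperature):
    """Metropolis rule: always accept an improving move; accept a worsening
    move (delta > 0 for minimisation) with probability exp(-delta / T)."""
    return delta <= 0 or random.random() < math.exp(-delta / temperature)

T, gamma = 0.75, 0.95  # initial temperature and cooling factor from the text
for _ in range(10):
    # ... evaluate a neighbour, compute the penalised objective change delta,
    # and call accept(delta, T) to decide whether to move ...
    T *= gamma  # geometric cooling after each iteration
```

With geometric cooling, the temperature after k iterations is 0.75 × 0.95^k, so worsening moves become progressively harder to accept as the search proceeds.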
PFM is a penalty-type method constructed from a measure of violation and a new penalty parameter called the penalty sequence. For a minimisation problem, the penalty sequence converges to zero (in probability or almost surely) for feasible solutions and diverges to infinity almost surely for infeasible solutions. Because PFM by itself only handles multiple stochastic constraints, it must be combined with a search algorithm to find the optimum of a DOvS problem with stochastic constraints. We choose the genetic algorithm (GA) as the search method and denote the combined method GA + PFM. For the GA part, the selection mechanism across generations is the standard roulette wheel.
The evolutionary crossover follows one-point intersection with the crossover probability set to 0.5. We also choose one-point mutation and set the mutation probability to 0.5. The population size is 40, and the number of simulation replications for each design is 100. For the PFM part, we choose the appreciation factor θa = √1.5 and the depreciation factor θb = 0.1√1.5 to guarantee the convergence and efficiency of the algorithm. This parameter setting has been shown to work well in industrial problems (Park et al. 2014).
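Conceptually, the appreciation and depreciation factors drive a multiplicative update of the penalty sequence. The sketch below is a simplified reading of that behaviour; the update rule and names are our illustration, not the exact procedure of Park and Kim (2015).

```python
import math

THETA_A = math.sqrt(1.5)        # appreciation factor, applied when infeasible
THETA_B = 0.1 * math.sqrt(1.5)  # depreciation factor, applied when feasible

def update_penalty(penalty, appears_feasible):
    """Multiplicative update: since THETA_A > 1 and THETA_B < 1, the
    penalty grows without bound for solutions that keep appearing
    infeasible and shrinks towards zero for those that keep appearing
    feasible (simplified sketch of the penalty-sequence idea)."""
    return penalty * (THETA_B if appears_feasible else THETA_A)
```

This mirrors the convergence property stated above: the sequence vanishes on feasible solutions and diverges on infeasible ones.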
The NP + OCBA-CO method uses NP as the search algorithm and OCBA-CO to allocate a fixed simulation budget to the designs in each iteration. For the NP part, we randomly choose three axes and partition each axis of the most promising region into two parts, yielding 8 subregions; one sample is then taken from each subregion at random. For the OCBA-CO part, we run 10 initial simulation replications for each selected design and afterwards allocate 90 simulation replications to these designs repeatedly until the budget of 900 replications is exhausted in each iteration. For all three methods, we use the same termination condition: the estimated best feasible solution remains unchanged for 200 consecutive iterations.
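The per-iteration budget accounting implied by these settings can be tallied as follows (a simple sketch; the actual OCBA-CO weighting of each increment across designs is omitted):

```python
N_DESIGNS = 8     # one sample from each of the 8 NP subregions
INITIAL = 10      # initial replications per selected design
INCREMENT = 90    # replications distributed per OCBA-CO step
BUDGET = 900      # computing budget available in one iteration

spent = N_DESIGNS * INITIAL  # 80 replications consumed by initialisation
steps = 0
while spent + INCREMENT <= BUDGET:
    spent += INCREMENT       # OCBA-CO decides how this increment is split
    steps += 1
print(spent, steps)          # 890 replications spent after 9 incremental steps
```

Under these numbers, initialisation takes 80 replications and nine increments of 90 consume a further 810, so 890 of the 900-replication budget is used each iteration.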
Experiment 1: We design three test problems with different values of the constraint on Cost. The cases with Cost constraints of 8650 LC, 8950 LC, and 9250 LC correspond to scenarios with low, medium, and high percentages of feasible solutions in the whole solution space, respectively. Table 8 summarises the performance of ConSA, GA + PFM, and NP + OCBA-CO in these three scenarios. The ConSA approach compares quite favourably with GA + PFM and NP + OCBA-CO. When the Cost constraint is 8650 LC, ConSA reduces the simulation replications by 46.17% and 65.09% and the objective values by 23.44% and 18.52% relative to GA + PFM and NP + OCBA-CO, respectively. For the Cost constraint of 8950 LC, there are 63.51% and 23.27% reductions on simulation replications and 34.37% and 40.51% reductions on objective values when we compare ConSA with GA + PFM and NP + OCBA-CO. The advantage of ConSA remains when the Cost constraint is 9250 LC: ConSA achieves reductions
of 61.68% and 1.53% on simulation replications and reductions of 19.48% and 44.59% on objective values compared with
GA + PFM and NP + OCBA-CO, respectively.
Experiment 2: In this experiment, we design three test problems with different numbers of stochastic constraints to
demonstrate the efficiency of the dynamic penalty function in ConSA. These three problems have the same objective function. For the first problem, there is one stochastic constraint on WT. The second problem is the same as the original problem,
and the last problem contains five stochastic constraints on the service qualities of category II, III, and IV patients, WT, and
LOS. We use the current values (with the current staffing level) of WT (1.05 h) and LOS (3.81 h) as the upper bounds of
WT and LOS because there are no specific restrictions on them. Table 9 shows the performance of ConSA, GA + PFM,
and NP + OCBA-CO methods for three test problems. The result reveals that ConSA performs better than GA + PFM and
NP + OCBA-CO. When there is only one stochastic constraint, ConSA needs fewer simulation replications to obtain a better result than the GA + PFM and NP + OCBA-CO methods, with reductions of 12.44% and 44.28% on simulation budgets and
30.47% and 15.31% on objective values. For the problem with three stochastic constraints, ConSA leads to reductions of
46.17% and 65.09% on simulation replications and reductions of 23.44% and 18.52% on objective values compared with
GA + PFM and NP + OCBA-CO, respectively. For more stochastic constraints, there are 74.04% and 37.83% reductions on
simulation replications and 15.88% and 36.18% reductions on objective values when we compare ConSA with GA + PFM
and NP + OCBA-CO. Interestingly, fewer stochastic constraints can make it more difficult to search for a good solution within a limited amount of time, because the solution space becomes larger and more complex. Nevertheless, the ConSA algorithm remains more robust to different numbers of stochastic constraints than the GA + PFM and NP + OCBA-CO methods.
The advantages of ConSA stem mainly from two aspects. First, the performance of candidate solutions can be correctly distinguished under the stochastic constraints, which significantly influences the search process; ConSA can largely resist this disturbance and avoid being trapped in a local optimum. Second, the estimated-best solution is robust to the random noise of the simulation because ConSA properly allocates simulation replications to accurately identify the best solution among all the searched solutions. To better demonstrate the second advantage, we define another performance measure, APCSfinal, in Tables 8 and 9. APCSfinal is the approximate probability of correctly selecting the best feasible solution from all the searched solutions. The results indicate that the ConSA method keeps APCSfinal above 95%; that is, the final estimated best solution found by ConSA is the overall optimum with APCSfinal over 95%. APCSfinal of GA + PFM is always below 95% because PFM combined with GA focuses more on the stochastic constraints and has no efficient simulation-budget allocation mechanism. For NP + OCBA-CO, APCSfinal can exceed 99% for most of the
Table 9. Comparison among the ConSA, GA + PFM, and NP + OCBA-CO optimal staffing
levels with different numbers of stochastic constraints.
Method
Number of stochastic
constraints Performance measure ConSA GA + PFM NP + OCBA-CO
1 F 1 (x) (%) 19.58 28.16 23.12
WT (h) 1.03 1.04 0.75
Cost (LC) 8650 8600 8650
APCSfinal (%) 96.11 38.68 1.00
Simulation replications 36,863 42,100 66,153
3 F 1 (x) (%) 17.51 22.87 21.49
F 2 (x) (%) 2.30 3.17 2.60
F 3 (x) (%) 6.51 8.07 6.41
F 4 (x) (%) 13.30 13.84 0.81
Cost (LC) 8650 8650 8650
APCSfinal (%) 95.09 70.95 99.98
Simulation replications 25,893 48,100 74,173
5 F 1 (x) (%) 17.32 20.59 27.14
F 2 (x) (%) 2.40 1.15 4.35
F 3 (x) (%) 7.29 6.44 6.31
F 4 (x) (%) 3.26 10.47 10.55
WT (h) 0.85 1.02 1.02
LOS (h) 3.30 3.71 3.68
Cost (LC) 8650 8650 8550
APCSfinal (%) 97.60 68.36 1.00
Simulation replications 30,135 116,100 48,470
Note: a ‘WT’ means the average waiting time for physicians of all the patients. b ‘LOS’ means the length of stay of all the patients.
test problems. However, NP + OCBA-CO spends more simulation replications than ConSA. When the Cost constraint is 9250 LC, a high APCSfinal can no longer be guaranteed even though NP + OCBA-CO spends relatively few simulation replications to obtain an estimated-best solution. This result illustrates the importance of the new computing-budget allocation problem, which minimises the total simulation replications subject to a constraint on the PCS. Our formulation is more applicable to practical problems because users who are unfamiliar with simulation optimisation need not decide how large a simulation budget to spend.
Figures 5 and 6 report the relationship between the total simulation replications and the performance of the best solutions found by ConSA, GA + PFM, and NP + OCBA-CO with different Cost constraints and different numbers of stochastic constraints, respectively. In these experiments, ConSA quickly converges to a high-quality solution, and its computational advantage tends to increase with the percentage of feasible solutions in the whole solution space. Given the same number of simulation replications, ConSA almost always finds a better solution than GA + PFM and NP + OCBA-CO. Therefore, although we design only one termination condition, namely that the best solution remains unchanged for 200 consecutive iterations, we expect that under the alternative termination condition of a fixed total number of simulation replications, ConSA would still dominate GA + PFM and NP + OCBA-CO.
Figure 5. Relationship between simulation replications and the best performance of ConSA, GA + PFM, and NP + OCBA-CO. (a) The Cost constraint is 8650 LC. (b) The Cost constraint is 8950 LC. (c) The Cost constraint is 9250 LC.
Figure 6. Relationship between simulation replications and the best performance of ConSA, GA + PFM, and NP + OCBA-CO. (a) The number of stochastic constraints is one. (b) The number of stochastic constraints is three. (c) The number of stochastic constraints is five.
5.4. Recommended staffing levels with multiple arrival rates and cost limits
In EDs, the arrival rate fluctuates within a certain range owing to factors such as day-of-week and seasonal effects. In this subsection, we test the flexibility of our method under multiple arrival rates and cost limits and offer suggestions to help hospital managers allocate medical staff rationally, within a limited expenditure, in response to varying arrival rates.
As shown in Table 10, for a constant arrival rate, a higher labour cost yields better service quality. In addition, at the same labour cost, ED managers can use multiple medical staff allocation strategies to maximise service quality under the daily changing arrival rate. However, in some situations, a fixed human-resource investment cannot handle a surge in patients. For example, with 8350 LC, there is no feasible solution when the arrival rate is 1.2λ(t). One measure
Table 10. Recommended staffing levels with various arrival rates and costs.
Arrival rate
0.8λ(t) λ(t)
Cost Shift x1 x2 x3 x4 x5 x6 F 1 (x) (%) x1 x2 x3 x4 x5 x6 F 1 (x) (%)
8350 LC Morning 1 1 3 2 4 2 13.05 1 1 3 2 4 2 23.97
Evening 1 1 3 2 3 2 1 1 2 3 4 2
Night 2 1 2 2 2 2 1 1 2 1 2 2
8650 LC Morning 1 1 2 5 4 2 11.23 1 1 3 3 4 2 17.51
Evening 2 1 3 2 3 2 1 1 4 1 4 2
Night 1 1 2 2 2 2 1 1 2 1 2 2
8950 LC Morning 1 1 3 2 4 2 7.90 1 1 3 3 4 2 13.29
Evening 1 1 3 2 4 2 1 1 3 3 4 2
Night 2 1 2 2 3 2 1 1 2 2 2 2
1.1λ(t) 1.2λ(t)
8350 LC Morning 2 2 2 3 3 2 40.47 – – – – – – –
Evening 1 1 3 3 3 3 – – – – – –
Night 1 1 2 1 2 2 – – – – – –
8650 LC Morning 2 2 2 4 4 2 35.03 2 2 3 3 3 2 50.02
Evening 1 1 2 3 3 2 1 1 3 1 4 3
Night 2 1 2 1 2 3 2 1 2 1 2 2
8950 LC Morning 1 2 5 1 4 2 27.24 2 2 3 2 4 3 37.36
Evening 1 1 3 2 3 3 1 1 3 2 4 2
Night 1 1 2 1 2 2 2 1 2 1 2 2
Note: a No feasible solution is found when the labour cost is 8350 LC and the arrival rate is 1.2λ(t).
is to invest more labour cost when patient volume surges and less when ED demand is moderate. Thus, we suggest that hospital managers adopt a combinatorial strategy to balance the labour cost and the service quality.
Acknowledgments
We thank Siyang Gao, assistant professor in the Department of Systems Engineering and Engineering Management at City University of Hong Kong, for supporting this work from the methodological perspective. We also thank the anonymous reviewers and the Department Editor for their constructive comments and suggestions, which have greatly improved the exposition of this paper.
Disclosure statement
No potential conflict of interest was reported by the authors.
Funding
This work was supported by National Natural Science Foundation of China (NSFC) [grant number 71701132] and Research Grants
Council (RGC) Theme-Based Research Scheme [grant number T32-102/14-N].
ORCID
Wenjie Chen http://orcid.org/0000-0003-4949-8949
References
Ahmed, Mohamed A., and Talal M. Alkhamis. 2009. “Simulation Optimization for An Emergency Department Healthcare Unit in
Kuwait.” European Journal of Operational Research 198 (3): 936–942.
Andradóttir, Sigrún, and Seong-Hee Kim. 2010. “Fully Sequential Procedures for Comparing Constrained Systems Via Simulation.” Naval
Research Logistics (NRL) 57 (5): 403–421.
Batur, Demet, and Seong-Hee Kim. 2005. “Procedures for Feasibility Detection in the Presence of Multiple Constraints.” In Proceedings
of the 37th Conference on Winter Simulation, 692–698. Winter Simulation Conference.
Di Somma, Salvatore, Lorenzo Paladino, Louella Vaughan, Irene Lalle, Laura Magrini, and Massimo Magnanti. 2015. “Overcrowding in
Emergency Department: An International Issue.” Internal and Emergency Medicine 10 (2): 171–175.
Feng, Yen-Yi, I-Chin Wu, and Tzu-Li Chen. 2017. “Stochastic Resource Allocation in Emergency Departments with a Multi-objective
Simulation Optimization Algorithm.” Health Care Management Science 20 (1): 55–75.
Gao, Siyang, and Weiwei Chen. 2016. “Efficient Feasibility Determination with Multiple Performance Measure Constraints.” IEEE
Transactions on Automatic Control 62 (1): 113–122.
Gao, Fei, Siyang Gao, Hui Xiao, and Zhongshun Shi. 2018. “Advancing Constrained Ranking and Selection with Regression in Partitioned
Domains.” IEEE Transactions on Automation Science and Engineering 16 (1): 382–391.
Green, Linda V., Joao Soares, James F. Giglio, and Robert A. Green. 2006. “Using Queueing Theory to Increase the Effectiveness of
Emergency Department Provider Staffing.” Academic Emergency Medicine 13 (1): 61–68.
Guo, Hainan, Siyang Gao, Kwok-Leung Tsui, and Tie Niu. 2017. “Simulation Optimization for Medical Staff Configuration At Emergency
Department in Hong Kong.” IEEE Transactions on Automation Science and Engineering 14 (4): 1655–1665.
Guo, Hainan, David Goldsman, Kwok-Leung Tsui, Yu Zhou, and Shui-Yee Wong. 2016. “Using Simulation and Optimisation to Charac-
terise Durations of Emergency Department Service Times with Incomplete Data.” International Journal of Production Research
54 (21): 6494–6511.
Ho, Shinn-Jang, Shinn-Ying Ho, and Li-Sun Shu. 2004. “OSA: Orthogonal Simulated Annealing Algorithm and Its Application to Designing Mixed H2/H∞ Optimal Controllers.” IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans 34 (5): 588–600.
Hong, L. Jeff, and Barry L. Nelson. 2006. “Discrete Optimization Via Simulation Using COMPASS.” Operations Research 54 (1):
115–129.
Hong Kong Hospital Authority. 2018. [Online]. Available: http://www.ha.org.hk/visitor/ha_visitor_index.asp?Content_ID=10051&Lang=ENG&Dimension=100&Ver=HTML.
Hoot, Nathan R., and Dominik Aronsky. 2008. “Systematic Review of Emergency Department Crowding: Causes, Effects, and Solutions.”
Annals of Emergency Medicine 52 (2): 126–136.
Horwitz, Leora I., Jeremy Green, and Elizabeth H. Bradley. 2010. “US Emergency Department Performance on Wait Time and Length of
Visit.” Annals of Emergency Medicine 55 (2): 133–141.
Hwang, Ula, Melissa L. McCarthy, Dominik Aronsky, Brent Asplin, Peter W. Crane, Catherine K. Craven, Stephen K. Epstein, et al. 2011.
“Measures of Crowding in the Emergency Department: A Systematic Review.” Academic Emergency Medicine 18 (5): 527–538.
Kirkpatrick, Scott, C. Daniel Gelatt, and Mario P. Vecchi. 1983. “Optimization by Simulated Annealing.” Science 220 (4598): 671–680.
Kleijnen, Jack P. C., Wim Van Beers, and Inneke Van Nieuwenhuyse. 2010. “Constrained Optimization in Expensive Simulation: Novel
Approach.” European Journal of Operational Research 202 (1): 164–174.
Lee, Loo Hay, Nugroho Artadi Pujowidianto, Ling-Wei Li, Chun-Hung Chen, and Chee Meng Yap. 2012. “Approximate Simulation
Budget Allocation for Selecting the Best Design in the Presence of Stochastic Constraints.” IEEE Transactions on Automatic
Control 57 (11): 2940–2945.
Li, Jie, Alexandre Sava, and Xiaolan Xie. 2009. “Simulation-based Discrete Optimization of Stochastic Discrete Event Systems Subject
to Non Closed-form Constraints.” IEEE Transactions on Automatic Control 54 (12): 2900–2904.
Luo, Yao, and Eunji Lim. 2013. “Simulation-based Optimization Over Discrete Sets with Noisy Constraints.” IIE Transactions 45 (7):
699–715.
Metropolis, Nicholas, Arianna W. Rosenbluth, Marshall N. Rosenbluth, Augusta H. Teller, and Edward Teller. 1953. “Equation of State
Calculations by Fast Computing Machines.” The Journal of Chemical Physics 21 (6): 1087–1092.
Mohammad Nezhad, Ali, and Hashem Mahlooji. 2014. “An Artificial Neural Network Meta-model for Constrained Simulation
Optimization.” Journal of the Operational Research Society 65 (8): 1232–1244.
Park, Chuljin, and Seong-Hee Kim. 2015. “Penalty Function with Memory for Discrete Optimization Via Simulation with Stochastic
Constraints.” Operations Research 63 (5): 1195–1212.
Park, Chuljin, Ilker T. Telci, Seong-Hee Kim, and Mustafa M. Aral. 2014. “Designing An Optimal Water Quality Monitoring Network
for River Systems Using Constrained Discrete Optimization Via Simulation.” Engineering Optimization 46 (1): 107–129.
Pujowidianto, Nugroho A., Loo Hay Lee, and Chun-Hung Chen. 2013. “Minimizing Opportunity Cost in Selecting the Best Feasible
Design.” Proceedings of the 2013 Winter Simulation Conference, Washington, DC, USA, 898–907.
Pujowidianto, Nugroho A., Loo Hay Lee, Giulia Pedrielli, Chun-Hung Chen, and Haobin Li. 2016. “Constrained Optimization for Hospi-
tal Bed Allocation via Discrete Event Simulation with Nested Partitions.” Proceedings of the 2016 Winter Simulation Conference,
Arlington, Virginia, USA, 1916–1925.
Qiu, Yunzhe, Jie Song, and Zekun Liu. 2016. “A Simulation Optimisation on the Hierarchical Health Care Delivery System Patient Flow
Based on Multi-fidelity Models.” International Journal of Production Research 54 (21): 6478–6493.
Shi, Pengyi, Mabel C. Chou, J. G. Dai, Ding Ding, and Joe Sim. 2016. “Models and Insights for Hospital Inpatient Operations: Time-
dependent ED Boarding Time.” Management Science 62 (1): 1–28.
Shi, Leyuan, and Sigurdur Ólafsson. 2000. “Nested Partitions Method for Global Optimization.” Operations Research 48 (3): 390–407.
Sinreich, David, Ola Jabali, and Nico P. Dellaert. 2012. “Reducing Emergency Department Waiting Times by Adjusting Work Shifts
Considering Patient Visits to Multiple Care Providers.” IIE Transactions 44 (3): 163–180.
Tanabe, Paula, Rick Gimbel, Paul R. Yarnold, and James G. Adams. 2004. “The Emergency Severity Index (version 3) 5-level Triage
System Scores Predict ED Resource Consumption.” Journal of Emergency Nursing 30 (1): 22–29.
Tsai, Shing Chih, and Yang Fu Sheng. 2014. “Genetic-algorithm-based Simulation Optimization Considering a Single Stochastic
Constraint.” European Journal of Operational Research 236 (1): 113–125.
Xie, Xiaolan, and Mark A. Lawley. 2015. “Operations Research in Healthcare.” International Journal of Production Research 53 (24):
7173–7176.
Xu, Jie, Barry L. Nelson, and Jeff Hong. 2010. “Industrial Strength COMPASS: A Comprehensive Algorithm and Software for
Optimization Via Simulation.” ACM Transactions on Modeling and Computer Simulation (TOMACS) 20 (1): 3.