NUMERICAL ALGEBRA, CONTROL AND OPTIMIZATION
Volume 12, Number 1, March 2022 pp. 15–29
Hsin-Min Sun∗
Department of Applied Mathematics
National University of Tainan, Tainan 70005, Taiwan
Yu-Juan Sun
Department of Physics
National Cheng Kung University, Tainan 70101, Taiwan
bounds [8], constrained matrix problems [12], integer quadratic knapsack prob-
lems [5, 6], integer and continuous quadratic optimization over submodular con-
straints [23], and Lagrangian relaxation via subgradient optimization [21].
Patriksson [34] gave a survey on the continuous nonlinear resource allocation
problem, and Patriksson and Strömberg [35] later provided an up-to-date extension
of that survey. Many kinds of methods have been proposed for solving (CQK).
Kiwiel [28] studied a variable fixing method for the problem. Dai and Fletcher [13]
gave an efficient algorithm based on a secant approximation. Cominetti, Mascarenhas,
and Silva [9] proposed a semismooth Newton method to solve the problem,
which is similar to Dai and Fletcher's secant method but uses a Newton step
instead of the secant step. Helgason et al. [22] gave an O(n log n) algorithm based on
sorting all the breakpoints. Several breakpoint searching methods based on binary
search achieve linear-time complexity: Brucker [7], Calamai and Moré [8], Pardalos
and Kovoor [32], Kiwiel [26, 27], and Maculan et al. [29]. Kiwiel's algorithm is
similar to Pardalos and Kovoor's algorithm and is faster than those of Brucker
and of Calamai and Moré. Davis, Hager, and Hungerford [14] used a
Newton-type method to develop a hybrid algorithm for this problem, where a heap
data structure is implemented. Frangioni and Gorgone [18] presented a small open-
source library for its solution. Jeong [24, Chap.2] gave a comprehensive review of
separable quadratic knapsack programming.
There are other related works. Nielsen and Zenios [31] developed iterative
algorithms for separable convex nonlinear programs with a single linear constraint
and bounded variables; three of their four algorithms are very efficient when
implemented on massively parallel systems. Pardalos, Ye, and Han [33] considered
the following related quadratic program formulation: minimize f(x) = xt Qx, subject
to Σ_{i=1}^n xi = 1 and xi ≥ 0, 1 ≤ i ≤ n, where Q is an n × n symmetric
matrix and xt = (x1, x2, . . . , xn). Based on the interior point method, they gave two
algorithms: the first solves convex quadratic problems, while the second computes
a stationary point in the indefinite case. We remark here that for general
quadratic programs with positive definite Q, the interior point method can solve
them in polynomial time. Dussault et al. [15] dealt with the special case of a
positive semidefinite Q by solving a sequence of separable subproblems. Megiddo
and Tamir [30] demonstrated how the technique of Lagrangian relaxation provides
linear-time algorithms for separable convex quadratic programs with a fixed number
of linear constraints, based on the multidimensional search procedure. Edirisinghe
and Jeong [17] gave a linear-time algorithm that finds lower and upper bounds on
indefinite separable knapsack programs with closed box constraints.
The (CQK) problem can be treated by transforming it to equivalent forms and
then solving the latter using a weighted average; here we explore this process
from the viewpoint of variable fixing.
Notice that Σ_{i=1}^n ( (1/2) di xi² − fi xi ) = Σ_{i=1}^n (1/2) di (xi − fi/di)² − Σ_{i=1}^n fi²/(2di).
Therefore, letting yi ← xi − fi/di, we have the equivalent form

min { Σ_{i=1}^n (1/2) di yi² | Σ_{i=1}^n βi yi = r − Σ_{i=1}^n βi fi/di ; ℓi − fi/di ≤ yi ≤ ui − fi/di , i ∈ I }.
Next, setting yi = (2βi/di) zi, we arrive at the format of a singly constrained
quadratic program (SCQP).
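For concreteness, the two substitutions above can be bundled into a single data transformation. The sketch below (Python/NumPy; all names are ours, the paper gives no code) maps the (CQK) data to the z-space data c, ci, ai, bi used by the algorithm in the next section, and recovers x from z.

```python
import numpy as np

def cqk_to_zspace(d, f, beta, r, lo, hi):
    """Combine y_i = x_i - f_i/d_i and y_i = (2*beta_i/d_i)*z_i:
    map (CQK) data (d, f, beta, r, lo, hi) to z-space data (c, ci, ai, bi)."""
    c = r - np.sum(beta * f / d)        # transformed right-hand side
    ci = 2.0 * beta**2 / d              # z-space weights
    ai = (d * lo - f) / (2.0 * beta)    # z-space lower bounds
    bi = (d * hi - f) / (2.0 * beta)    # z-space upper bounds
    return c, ci, ai, bi

def z_to_x(z, d, f, beta):
    """Undo both substitutions: x_i = (2*beta_i*z_i + f_i)/d_i."""
    return (2.0 * beta * z + f) / d
```

One can check that Σ βi xi = Σ ci zi + (r − c) for any z, so any z feasible for the transformed constraint Σ ci zi = c yields an x feasible for the original constraint Σ βi xi = r.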
2. The QWMVA Algorithm. The idea of variable fixing methods is that, at each iteration,
an estimate is computed by solving a subproblem in which the box constraints
are ignored. In this way the optimal value of at least one variable can be obtained;
such variable(s) are fixed and then removed for the next iteration. The method
has been considered in [2, 6, 28, 36, 39]. These algorithms have a theoretical worst-case
complexity of O(n²), but in practice they perform well and are competitive
with the other methods.
As noted in the last section, the (CQK) problem can be transformed into the
simpler (SCQP) problem, which is equivalent to the (WMVA) problem; the
complexity of the (WMVA) model was recently analyzed in [38].
Now we describe the implementation of the WMVA algorithm for the (CQK)
problem. Note that here ℓi = −∞ and/or ui = ∞ are allowed, so we explain
the algorithm from the viewpoint of variable fixing, and not just by the uniform
distribution property described in [38], which is presented for finite upper and lower
bounds. Steps 1–4 constitute the WMVA algorithm proper.
Algorithm QWMVA.
Input: r, di > 0, fi, βi > 0, ℓi, ui, for i ∈ I, defining the problem (CQK)
Output: Optimal solution (x∗1 , x∗2 , . . . , x∗n ) for (CQK)
Step 0. (Initialization.)
Set I ← {1, 2, . . . , n}.
Set c ← r − Σ_{i=1}^n βi fi/di , ci ← 2βi²/di , ai ← (di ℓi − fi)/(2βi) , bi ← (di ui − fi)/(2βi) , ∀i ∈ I.
Step 1. (Starting the WMVA Process.)
Set m ← Σ_{i∈I} ci .
Step 2. Set avg ← c/m , d ← c. Then
L ← {i ∈ I | bi < avg}, nL ← |L|, sumL ← Σ_{i∈L} ci bi ;
A ← {i ∈ I | ai ≤ avg ≤ bi}, nA ← |A|, sumwtsA ← Σ_{i∈A} ci ;
U ← {i ∈ I | ai > avg}, nU ← |U|, sumU ← Σ_{i∈U} ci ai .
Step 3. (Four situations.)
(a) If nA = |I|, then zi∗ ← avg for each i ∈ A; I ← ∅.
(b) Otherwise, set Testsum ← avg · sumwtsA + sumL + sumU .
(i) For c > Testsum, zi∗ ← bi for each i ∈ L, d ← d − sumL , m ← m − Σ_{i∈L} ci ; I ← U ∪ A.
(ii) For c < Testsum, zi∗ ← ai for each i ∈ U , d ← d − sumU , m ← m − Σ_{i∈U} ci ; I ← L ∪ A.
(iii) For c = Testsum, zi∗ ← bi for each i ∈ L; zi∗ ← ai for each i ∈ U ; zi∗ ← avg for each i ∈ A (when nA ≠ 0); I ← ∅.
Step 4. When I ≠ ∅, reset c ← d and m ← Σ_{i∈I} ci . Go to Step 2.
Step 5. Return x∗i ← (2βi zi∗ + fi )/di , i ∈ {1, 2, . . . , n}.
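For concreteness, Steps 0–5 can be transcribed into Python/NumPy as follows. This is a sketch under our own naming (the paper gives no code); infinite bounds are passed as ±np.inf.

```python
import numpy as np

def qwmva(r, d, f, beta, lo, hi):
    """Weighted-average variable-fixing solver for (CQK):
    min sum(d_i*x_i^2/2 - f_i*x_i)  s.t.  sum(beta_i*x_i) = r, lo <= x <= hi,
    with d_i > 0, beta_i > 0; lo_i = -inf and hi_i = +inf are allowed."""
    n = len(d)
    # Step 0: transform to z-space.
    c = r - np.sum(beta * f / d)
    ci = 2.0 * beta**2 / d
    a = (d * lo - f) / (2.0 * beta)
    b = (d * hi - f) / (2.0 * beta)
    z = np.empty(n)
    I = np.arange(n)                       # indices still free
    m = ci.sum()                           # Step 1
    while I.size:
        avg = c / m                        # Step 2
        L = I[b[I] < avg]                  # upper bounds below the average
        A = I[(a[I] <= avg) & (avg <= b[I])]
        U = I[a[I] > avg]                  # lower bounds above the average
        if A.size == I.size:               # Step 3(a): avg feasible for all
            z[A] = avg
            break
        test = avg * ci[A].sum() + ci[L] @ b[L] + ci[U] @ a[U]
        if c > test:                       # Step 3(b)(i): peg L at upper bounds
            z[L] = b[L]
            c -= ci[L] @ b[L]; m -= ci[L].sum()
            I = np.concatenate((U, A))
        elif c < test:                     # Step 3(b)(ii): peg U at lower bounds
            z[U] = a[U]
            c -= ci[U] @ a[U]; m -= ci[U].sum()
            I = np.concatenate((L, A))
        else:                              # Step 3(b)(iii): everything settles
            z[L] = b[L]; z[U] = a[U]; z[A] = avg
            break
    return (2.0 * beta * z + f) / d        # Step 5
```

For example, with di = βi = 1 this computes the projection of f onto the set {x | Σ xi = r, ℓ ≤ x ≤ u}.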
Illustration. Step 3(a) is obvious: when the weighted average c/m is feasible for all
demand intervals, it is the minimum variance allocation for everybody. In Step
3(b)(i), c > Testsum means that the testing solution is not feasible: some zj = c/m,
j ∈ A, or some zk = ak, k ∈ U, needs to be increased. However, since at
this stage all variables in L attain their upper bounds, which are less than the
values assigned to the variables in A and U, it is reasonable to peg the variables in L
and continue the trial solution in the next stage. By a similar argument, when it
comes to Step 3(b)(ii), it is reasonable to fix zk∗ = ak, ∀k ∈ U, for the trial solution.
Finally, it is easy to see that, in Step 3(b)(iii), the trial solution zi = bi, i ∈ L;
zj = c/m, j ∈ A; zk = ak, k ∈ U is feasible. We have seen that, at each iteration,
the algorithm determines at least one of the values zi∗, whereas the undetermined
ones constitute a smaller (WMVA) problem of exactly the same structure. The
algorithm therefore terminates correctly with the unique optimal solution after at most
n iterations.
We remark that for (CQK) with finite upper and lower bounds, the explanations
using the uniform distribution property and the generalized weighted median described
in [38] give convincing reasons for the correctness of this algorithm.
Eaves [16] improved Frank and Wolfe's result, which states that a quadratic function
bounded below on a polyhedral convex set attains its infimum. By the Eaves
theorem [16, Theorem 3 and Corollary 4] we infer that problem (CQK) attains its
infimum. Here we give a result related to Theorem 1 of [38].
Theorem 2.1. Problem (CQK) with some unbounded upper/lower bounds on the
variables is equivalent to a formulation with all finite upper and lower bounds.
Proof. As considered via its equivalent problem (SCQP), let
I1 = {i ∈ I | −∞ < ai < bi < ∞}, I2 = {i ∈ I | −∞ = ai < bi < ∞},
I3 = {i ∈ I | −∞ < ai < bi = ∞}, I4 = {i ∈ I | −∞ = ai < bi = ∞}.
Assign new finite bounds a′i (for i ∈ I2 ∪ I4) and b′i (for i ∈ I3 ∪ I4) so that
Σ_{i∈I} ci a′i < c < Σ_{i∈I} ci b′i, where a′i = ai and b′i = bi otherwise. Therefore, zi∗ will
never attain a′i for i ∈ I2 ∪ I4, nor will zi∗ attain b′i for i ∈ I3 ∪ I4, according to the
UDP property. Thus, any previously unbounded variable will not attain the newly
assigned bound(s) in the optimal solution on the new restricted bounded feasible
region. On the other hand, if in the original feasible region of (SCQP) there is zi∗
such that zi∗ ≤ a′i for some i ∈ I2 ∪ I4, then zi∗ = a′i = ai, ∀i ∈ I1 ∪ I3. So we have
zi∗ = η/Σ_{i∈I2∪I4} ci > a′i, ∀i ∈ I2 ∪ I4, which leads to a contradiction. Similarly,
zi∗ ≥ b′i for some i ∈ I3 ∪ I4 also leads to a contradiction. Hence the optimal solution
on the new restricted bounded feasible region is also the optimal solution for the
original (SCQP), and the statement follows.
Our method for obtaining the unique solution has the following advantages: (1) it has
a short programming code, which is very important when one needs to integrate
the WMVA method with specific applications; (2) variable fixing by
weighted average is deterministic by its own nature, and is therefore easier to
handle in numerical analysis. Besides, it is still efficient in the sense of low
complexity; that is, this algorithm is a good choice for real-time applications.
Remark 1. For the solution {z1∗ , . . . , zn∗ } of (WMVA), there is the generalized
weighted median y ∗ among them, defined in [38], such that zi∗ = bi if bi < y ∗ and
zi∗ = ai if ai > y ∗ , for i ∈ {1, 2, . . . , n}. The properties of y ∗ are characterized in
[38, Corollary 1 and Remark 3].
The average avg in the last iteration is related to y∗ as follows: either
y∗ = avg is the unique generalized weighted median, or there are exactly two
candidates for y∗, which lie on opposite sides of avg.
Therefore, the following improvement of Algorithm QWMVA can further save a
little computation time when the problem dimension is large. We omit the variable
zi∗ and the assignments to zi∗ in (a), (b)(i), (b)(ii), (b)(iii) of Step 3, and instead
make the assignments for xi∗ directly at Step 5 by

xi∗ = ui if bi < avg; xi∗ = ℓi if ai > avg; xi∗ = (2βi avg + fi)/di otherwise,
where avg is the average in the last iteration. This approach is implemented in the
actual programming to obtain the experimental results in Section 6.
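In code, this deferred assignment is a single vectorized step (a sketch with our names; avg is the final average, and the z-space bounds are recomputed as in Step 0):

```python
import numpy as np

def assign_from_avg(avg, d, f, beta, lo, hi):
    """Remark 1: decide all x_i at once from the last weighted average."""
    a = (d * lo - f) / (2.0 * beta)     # z-space lower bounds
    b = (d * hi - f) / (2.0 * beta)     # z-space upper bounds
    x = (2.0 * beta * avg + f) / d      # interior (free-variable) case
    return np.where(b < avg, hi, np.where(a > avg, lo, x))
```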
3. Relation with Kiwiel's Variable Fixing Method. Bitran and Hax [2] proposed
a variable fixing method for solving problems with general convex and separable
nonlinear objective functions. Kiwiel [28] improved Bitran and Hax's algorithm for
dealing with (CQK). We describe Kiwiel's algorithm as follows.
At each iteration, the algorithm partitions the variables into three parts with
index sets Ik, Uk, and Lk: Uk is the set of variables that can be pegged to the
upper bound ui, Lk is the set of variables that can be pegged to the lower bound ℓi,
and Ik is the set of the remaining free variables. By fixing the values in
Uk and Lk, we consider a restricted version of the problem for the remaining free
variables to see if there is a feasible solution. Otherwise, at least one variable can be
pegged to its upper or lower bound; the fixed variable(s) are removed from
the free variables and the process repeats until an optimal solution is determined.
After fixing the values in Uk and Lk , at the k th iteration the reduced subproblem
without the box constraints is considered.
x^k_{Ik} = arg min { Σ_{i∈Ik} ( (1/2) di xi² − fi xi ) | Σ_{i∈Ik} βi xi = rk }.
Then the variable(s) in Iku are pegged to ui, i ∈ Iku, if ∆k > ∇k, while the variable(s)
in Ikℓ are pegged to ℓi if ∆k < ∇k.
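The reduced subproblem has a closed form: setting the gradient of its Lagrangian to zero gives xi = (fi + λβi)/di, with λ chosen so that the equality constraint holds. A sketch (our names):

```python
import numpy as np

def reduced_subproblem(rk, d, f, beta):
    """Solve min sum(d_i*x_i^2/2 - f_i*x_i) s.t. sum(beta_i*x_i) = rk,
    ignoring the box constraints (the subproblem at each iteration)."""
    lam = (rk - np.sum(beta * f / d)) / np.sum(beta**2 / d)
    return (f + lam * beta) / d
```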
Theorem 3.1. Algorithm QWMVA and Kiwiel's variable fixing algorithm generate
the same iterates, so they have the same number of iterations. In particular, the
conditions c > Testsum, c < Testsum, and c = Testsum in Algorithm QWMVA
are equivalent to ∆k > ∇k, ∆k < ∇k, and ∆k = ∇k in Kiwiel's variable fixing,
respectively.
Proof. Notice that for Kiwiel's variable fixing, at the kth iteration the key factor is
tk. In Algorithm QWMVA, the weighted average at the kth iteration is

avgk = c(kth) / Σ_{i∈Ik} ci = ( rk − Σ_{i∈Ik} βi fi/di ) / ( 2 Σ_{i∈Ik} βi²/di ) ,   (2)
while to avgk there corresponds the trial solution xi = (2βi avgk + fi )/di . By (1)
and (2), the key factor tk in Kiwiel’s variable fixing only differs from the key factor
avgk in Algorithm QWMVA by a fixed ratio −2 in each stage, i.e., tk = −2avgk .
It turns out that at the kth stage the trial solution xki in variable fixing equals the
trial solution xi in Algorithm QWMVA. Therefore, Algorithm QWMVA generates
exactly the same iteration steps as Kiwiel's variable fixing method does.
Since the iteration steps coincide for these two algorithms, the analysis for Algorithm
QWMVA also applies to Kiwiel's variable fixing algorithm. Hence, although
variable fixing theoretically has a worst-case complexity of O(n²), under mild
conditions the worst case cannot happen on a 64-bit computer when the problem
dimension is greater than 129. We conclude that Kiwiel's variable fixing algorithm
also behaves very much like an O(n) algorithm with a small constant. Meanwhile,
Algorithm QWMVA is always quicker than Kiwiel's variable fixing algorithm due
to its simpler structure.
Notice that the semismooth Newton method [9] and the hybrid algorithm [14]
both incorporate variable fixing ideas. Also, the semismooth Newton method
and Kiwiel's variable fixing method coincide when only lower bounds or only upper
bounds are present. This gives a convincing reason why these algorithms
perform well even though they have a theoretical complexity of O(n²).
The inner infimum is then solved with optimal solution xi(λ) = mid(ℓi, (βi λ + fi)/di, ui)
for i ∈ I. Next we have ϕ(λ) := Σ_{i=1}^n βi xi(λ) = r from the KKT
conditions for (CQK). Therefore,

ϕ(λ) = Σ_{i=1}^n βi mid(ℓi, (βi λ + fi)/di, ui).
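Since mid(ℓ, t, u) is exactly a clip operation, ϕ is a piecewise linear nondecreasing function of λ and can be transcribed directly (a sketch, our names):

```python
import numpy as np

def phi(lam, d, f, beta, lo, hi):
    """phi(lambda) = sum_i beta_i * mid(lo_i, (beta_i*lam + f_i)/d_i, hi_i)."""
    x = np.clip((beta * lam + f) / d, lo, hi)   # mid(l, t, u) is a clip
    return beta @ x
```

For βi > 0 each xi(λ) is nondecreasing in λ, so ϕ is nondecreasing; for large λ it saturates at Σ βi ui.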
Notice that ϕ(λ) plays the role of Testsum; that is, ϕ(λ) − Σ_{i=1}^n βi fi/di = Testsum
in its (SCQP) formulation. There are 2n breakpoints in ∪_{i∈I} {zi−, zi+}, where
zi− = (di ℓi − fi)/βi = 2ai and zi+ = (di ui − fi)/βi = 2bi. Basically, the following
strategies are used to update the multiplier λ if it is in the direction of λ∗:
• when r > ϕ(λk), set λN ← λk + (r − ϕ(λk))/ϕ′+(λk);
• when r < ϕ(λk), set λN ← λk + (r − ϕ(λk))/ϕ′−(λk).
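A single update of this kind can be sketched as follows (our names). Note that a one-sided derivative may vanish when every trial value sits at a bound; the actual method in [9] then falls back to a secant/breakpoint step, which this sketch omits.

```python
import numpy as np

def newton_step(lam, r, d, f, beta, lo, hi):
    """One Newton-type update toward phi(lam) = r, using the one-sided
    derivatives of phi at lam (the two update rules in the text)."""
    t = (beta * lam + f) / d                    # unclipped trial values
    p = beta @ np.clip(t, lo, hi)               # phi(lam)
    if p == r:
        return lam                              # already solved
    if r > p:                                   # move right: use phi'_+(lam)
        dphi = np.sum((beta**2 / d)[(lo <= t) & (t < hi)])
    else:                                       # move left: use phi'_-(lam)
        dphi = np.sum((beta**2 / d)[(lo < t) & (t <= hi)])
    return lam + (r - p) / dphi                 # dphi = 0 needs a fallback step
```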
For example, when entering Step 3(b)(i) with c > Testsum, if nL > 0
and nA > 0, then the variable(s) indexed in L are set to their upper bounds.
We then use avg + (c − Testsum)/sumwtsA = (c − sumL − sumU)/sumwtsA as
the next approximation value, instead of d/m = (c − sumL)/(sumwtsA + sumwtsU).
Note that these two values coincide if nU = 0. Also, if no variable would be fixed
in such an iteration, i.e., nL = 0, then the value d/m has to be used, since it
fixes at least one variable in the next iteration. Similarly, when entering Step 3(a)
with an approximation value other than d/m, we need to restore the approximation
value to d/m and recheck the situation, so that the final execution step is assured.
Notice that the improvements mentioned in Remark 1 are also carried over. That is,
no variable is actually assigned until the last average avg is found, and then all
the variables can be decided at once. Although no variable is fixed before the last
stage, the method is still in the spirit of variable fixing, which yields the correct
answers.
The modified algorithm is as follows; in the experiments it is, on average, quicker
than Algorithm QWMVA.
Algorithm QWMVA2.
Input: r, di > 0, fi, βi > 0, ℓi, ui, for i ∈ I, defining the problem (CQK)
Output: Optimal solution (x∗1 , x∗2 , . . . , x∗n ) for (CQK)
Step 0. (Initialization.)
Set I ← {1, 2, . . . , n}.
Set c ← r − Σ_{i=1}^n βi fi/di , ci ← 2βi²/di , ai ← (di ℓi − fi)/(2βi) , bi ← (di ui − fi)/(2βi) , ∀i ∈ I.
Step 1. Set m ← Σ_{i∈I} ci , avgtmp ← c/m , avgnew ← avgtmp , d ← c.
Step 2. Set avg ← avgtmp. Then
L ← {i ∈ I | bi < avg}, nL ← |L|, sumL ← Σ_{i∈L} ci bi ;
A ← {i ∈ I | ai ≤ avg ≤ bi}, nA ← |A|, sumwtsA ← Σ_{i∈A} ci ;
U ← {i ∈ I | ai > avg}, nU ← |U|, sumU ← Σ_{i∈U} ci ai .
Step 3. (Four situations.)
(a) If nA = |I|, then,
if avg = avgnew, then I ← ∅;
otherwise, set avgtmp ← avgnew, go to Step 2.
(b) Otherwise, set Testsum ← avg · sumwtsA + sumL + sumU .
(i) For c > Testsum,
d ← d − sumL , m ← m − Σ_{i∈L} ci ; I ← U ∪ A;
avgnew ← d/m ; avgtmp ← avgnew;
if (nA > 0 and nL > 0), then avgtmp ← (d − sumU)/sumwtsA .
(ii) For c < Testsum,
d ← d − sumU , m ← m − Σ_{i∈U} ci ; I ← L ∪ A;
avgnew ← d/m ; avgtmp ← avgnew;
if (nA > 0 and nU > 0), then avgtmp ← (d − sumL)/sumwtsA .
(iii) For c = Testsum, I ← ∅.
Step 4. When I ≠ ∅, reset c ← d and m ← Σ_{i∈I} ci . Go to Step 2.
Step 5. Set xi∗ ← ui if bi < avg; xi∗ ← ℓi if ai > avg; xi∗ ← (2βi avg + fi)/di otherwise, i ∈ {1, 2, . . . , n}.
Step 6. Return x∗i , i ∈ {1, 2, . . . , n}.
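For concreteness, Algorithm QWMVA2 can be transcribed as follows (a Python/NumPy sketch under our own naming), with the improved trial average and the deferred Step 5 assignment of the steps above:

```python
import numpy as np

def qwmva2(r, d, f, beta, lo, hi):
    """QWMVA2: variable fixing by weighted average with deferred
    assignments and an improved trial average (a sketch)."""
    # Step 0: transform to z-space.
    c = r - np.sum(beta * f / d)
    ci = 2.0 * beta**2 / d
    a = (d * lo - f) / (2.0 * beta)
    b = (d * hi - f) / (2.0 * beta)
    I = np.arange(len(d))
    m = ci.sum()
    avgtmp = avgnew = c / m                      # Step 1
    while True:
        avg = avgtmp                             # Step 2
        L = I[b[I] < avg]
        A = I[(a[I] <= avg) & (avg <= b[I])]
        U = I[a[I] > avg]
        if A.size == I.size:                     # Step 3(a)
            if avg == avgnew:
                break
            avgtmp = avgnew                      # restore d/m and recheck
            continue
        test = avg * ci[A].sum() + ci[L] @ b[L] + ci[U] @ a[U]
        if c > test:                             # Step 3(b)(i)
            c -= ci[L] @ b[L]; m -= ci[L].sum()
            I = np.concatenate((U, A))
            avgtmp = avgnew = c / m
            if A.size and L.size:                # improved trial average
                avgtmp = (c - ci[U] @ a[U]) / ci[A].sum()
        elif c < test:                           # Step 3(b)(ii)
            c -= ci[U] @ a[U]; m -= ci[U].sum()
            I = np.concatenate((L, A))
            avgtmp = avgnew = c / m
            if A.size and U.size:                # improved trial average
                avgtmp = (c - ci[L] @ b[L]) / ci[A].sum()
        else:                                    # Step 3(b)(iii)
            break
    # Step 5: decide every x_i at once from the last avg.
    x = (2.0 * beta * avg + f) / d
    return np.where(b < avg, hi, np.where(a > avg, lo, x))
```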
6.1. Support vector machines. Vapnik and Chervonenkis invented the original
support vector machine (SVM), a linear classifier, in 1963. Boser, Guyon
and Vapnik [3] suggested using the kernel trick to create nonlinear classifiers in 1992.
Cortes and Vapnik [10] proposed the current standard of the SVM.
Given a training set of labelled samples
{(zi , wi ) | zi ∈ Rm , wi ∈ {−1, 1}, i = 1, . . . , n},
the SVM classifies new examples z ∈ Rm by F : Rm → {−1, 1} of the form

F (z) = sign( Σ_{i=1}^n xi∗ wi K(z, zi) + b∗ ),

where K : Rm × Rm → R is some kernel function. Here the Gaussian kernel
K(zi, zj) = exp( −‖zi − zj‖₂² / (2σ²) ) is used. The vector x∗ = (xi∗) ∈ Rn solves

min (1/2) xt Hx − et x
s.t. wt x = 0, and 0 ≤ x ≤ Ce,

where H has entries Hij = wi wj K(zi, zj), i, j ∈ {1, . . . , n}, e ∈ Rn is the all-ones
vector, and C is a positive parameter.
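For reference, the matrix H for the Gaussian kernel can be assembled as follows (a sketch; Z holds the samples zi as rows, and the names are ours):

```python
import numpy as np

def gaussian_gram(Z, w, sigma):
    """Build H with H_ij = w_i * w_j * K(z_i, z_j), Gaussian kernel."""
    sq = np.sum(Z**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T   # squared distances
    D2 = np.maximum(D2, 0.0)                         # guard against rounding
    K = np.exp(-D2 / (2.0 * sigma**2))
    return np.outer(w, w) * K
```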
Since H is positive semi-definite and not diagonal, we use the spectral projected
gradient (SPG) method [1] to deal with the above model; it requires solving a
sequence of (CQK) problems.
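Each SPG iteration projects a trial point y onto the feasible set {x | wt x = 0, 0 ≤ x ≤ Ce}, and this projection is itself a (CQK) instance with di = 1, fi = yi, βi = wi and r = 0. Since (CQK) as stated assumes βi > 0 while wi ∈ {−1, 1}, one first substitutes x′i = wi xi. A sketch (our names; `solver` stands for any (CQK) routine with the signature used here):

```python
import numpy as np

def project_feasible(y, w, C, solver):
    """Project y onto {x | w^T x = 0, 0 <= x <= C} via a (CQK) solver.
    The substitution x' = w * x (w_i in {-1, +1}) makes every constraint
    coefficient positive, as the stated (CQK) model requires."""
    n = len(y)
    lo = np.where(w > 0, 0.0, -C)   # bounds for x' = w * x
    hi = np.where(w > 0, C, 0.0)
    xp = solver(0.0, np.ones(n), w * y, np.ones(n), lo, hi)
    return w * xp                   # undo the substitution
```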
As natural applications of the (CQK) model, two problems arising in the training
of SVMs are used for the experiments. The first problem, based on the UCI Adult
database, is to predict whether an adult earns more than US$ 50,000 per year or not,
using the data from the 1994 US Census; the parameters C = 1 and σ = √10 are
used. The second problem, based on the MNIST database of handwritten digits, is
to decide whether a 20×20 pixel image represents the digit 8 or not; the parameters
C = 10 and σ = 5 are used. We refer the reader to [9] for a detailed description of
the experiments.
The results are presented in Table 5. For the UCI Adult tests, Algorithm QWMVA2
and Algorithm QWMVA outperform the other methods. In the MNIST application,
Algorithm QWMVA2 and Algorithm QWMVA are slower than the Newton
method, while Algorithm QWMVA2 is competitive with the BLGnapsack method.
Table 5. Results for the SVM training problems (iterations and times).

Set     n      QWMVA2        QWMVA         Newton        Secant        BLG     Var. Fixing
               Iter  Time    Iter  Time    Iter  Time    Iter  Time    Time    Iter  Time
UCI     1065   5.02  0.023   7.84  0.021   4.98  0.022   8.20  0.031   0.025   7.84  0.017
UCI     2265   5.28  0.061   8.55  0.061   5.28  0.081   9.03  0.109   0.093   8.55  0.066
UCI     3185   5.37  0.102   8.23  0.097   5.31  0.133   9.06  0.176   0.165   8.23  0.127
UCI     6370   5.51  0.249   8.67  0.248   5.44  0.316   9.46  0.445   0.402   8.67  0.363
UCI     12740  5.87  0.540   9.37  0.555   5.80  0.665   9.98  1.036   0.843   9.37  0.776
MNIST   800    3.32  0.018   5.72  0.022   3.14  0.015   6.77  0.016   0.012   5.72  0.017
MNIST   1600   3.56  0.038   5.94  0.045   3.31  0.029   7.01  0.033   0.029   5.94  0.035
MNIST   3200   3.65  0.092   6.37  0.111   3.45  0.068   7.11  0.074   0.103   6.37  0.098
MNIST   6400   3.56  0.189   6.73  0.231   3.47  0.161   7.25  0.170   0.222   6.73  0.260
MNIST   11702  3.74  0.357   7.39  0.438   3.59  0.322   7.32  0.344   0.398   7.39  0.525
7. Conclusions. In this article some algorithms for solving the separable convex
continuous quadratic knapsack problem (CQK) are investigated. We study the
connections of solving the (CQK) model by weighted average with Kiwiel's variable
fixing method and also with the semismooth Newton method. It is shown that
Algorithm QWMVA and Kiwiel's method generate the same iterates. Based on
these analyses, we develop a new algorithm, QWMVA2, which gives very good
performance.
[9] R. Cominetti, W. F. Mascarenhas and P. J. S. Silva, A Newton’s method for the continuous
quadratic knapsack problem, Math. Prog. Comp., 6 (2014), 151–169.
[10] C. Cortes and V. Vapnik, Support-vector networks, Machine Learning, 20 (1995), 273–297.
[11] S. Cosares and D. S. Hochbaum, Strongly polynomial algorithms for the quadratic trans-
portation problem with a fixed number of sources, Math. Oper. Res., 19 (1994), 94–111.
[12] R. W. Cottle, S. G. Duvall and K. Zikan, A Lagrangian relaxation algorithm for the con-
strained matrix problem, Nav. Res. Logist. Q., 33 (1986), 55–76.
[13] Y.-H. Dai and R. Fletcher, New algorithms for singly linearly constrained quadratic programs
subject to lower and upper bounds, Math. Program., 106 (2006), 403–421.
[14] T. A. Davis, W. W. Hager and J. T. Hungerford, An efficient hybrid algorithm for the
separable convex quadratic knapsack problem, ACM Trans. Math. Softw., 42 Article 22
(2016), 25 pages.
[15] J.-P. Dussault, J. A. Ferland and B. Lemaire, Convex quadratic programming with one con-
straint and bounded variables, Math. Program., 36 (1986), 90–104.
[16] B. C. Eaves, On quadratic programming, Manag. Sci., 17 (1971), 698–711.
[17] C. Edirisinghe and J. Jeong, Tight bounds on indefinite separable singly-constrained quadratic
programs in linear-time, Math. Program., 164 (2017), 193–227.
[18] A. Frangioni and E. Gorgone, A library for continuous convex separable quadratic knapsack
problems, Eur. J. Oper. Res., 229 (2013), 37–40.
[19] W. W. Hager and J. T. Hungerford, Continuous quadratic programming formulations of
optimization problems on graphs, Eur. J. Oper. Res., 240 (2015), 328–337.
[20] W. W. Hager and Y. Krylyuk, Graph partitioning and continuous quadratic programming,
SIAM J. Disc. Math., 12 (1999), 500–523.
[21] M. Held, P. Wolfe and H. P. Crowder, Validation of subgradient optimization, Math. Program.,
6 (1974), 62–88.
[22] R. Helgason, J. Kennington and H. Lall, A polynomially bounded algorithm for a singly
constrained quadratic program, Math. Program., 18 (1980), 338–343.
[23] D. S. Hochbaum and S. P. Hong, About strongly polynomial time algorithms for quadratic
optimization over submodular constraints, Math. Program., 69 (1995), 269–309.
[24] J. Jeong, Indefinite Knapsack Separable Quadratic Programming: Methods and Applications,
Ph.D. Dissertation, University of Tennessee, Knoxville, 2014. Available from: https://trace.
tennessee.edu/utk_graddiss/2704/
[25] N. Katoh, A. Shioura and T. Ibaraki, Resource allocation problems, in Handbook of Combinatorial
Optimization (eds. P. M. Pardalos, D.-Z. Du and R. L. Graham), Springer, (2013),
2897–2988.
[26] K. C. Kiwiel, On linear-time algorithms for the continuous quadratic knapsack problem,
J. Optim. Theory Appl., 134 (2007), 549–554.
[27] K. C. Kiwiel, Breakpoint searching algorithms for the continuous quadratic knapsack problem,
Math. Program., 112 (2008), 473–491.
[28] K. C. Kiwiel, Variable fixing algorithms for the continuous quadratic knapsack problem,
J. Optim. Theory Appl., 136 (2008), 445–458.
[29] N. Maculan, C. P. Santiago, E. M. Macambira and M. H. C. Jardim, An O(n) algorithm for
projecting a vector on the intersection of a hyperplane and a box in Rn , J. Optim. Theory
Appl., 117 (2003), 553–574.
[30] N. Megiddo and A. Tamir, Linear time algorithms for some separable quadratic programming
problems, Oper. Res. Lett., 13 (1993), 203–211.
[31] S. S. Nielsen and S. A. Zenios, Massively parallel algorithms for singly constrained convex
programs, ORSA J. Comput., 4 (1992), 166–181.
[32] P. M. Pardalos and N. Kovoor, An algorithm for a singly constrained class of quadratic
programs subject to upper and lower bounds, Math. Program., 46 (1990), 321–328.
[33] P. M. Pardalos, Y. Ye and C.-G. Han, Algorithms for the solution of quadratic knapsack
problems, Linear Algebra Appl., 152 (1991), 69–91.
[34] M. Patriksson, A survey on the continuous nonlinear resource allocation problem, Eur. J.
Oper. Res., 185 (2008), 1–46.
[35] M. Patriksson and C. Strömberg, Algorithms for the continuous nonlinear resource allocation
problem—New implementations and numerical studies, Eur. J. Oper. Res., 243 (2015), 703–
722.
[36] A. G. Robinson, N. Jiang and C. S. Lerme, On the continuous quadratic knapsack problem,
Math. Program., 55 (1992), 99–108.
[37] B. Shetty and R. Muthukrishnan, A parallel projection for the multicommodity network
model, J. Oper. Res. Soc., 41 (1990), 837–842.
[38] H.-M. Sun and R.-L. Sheu, Minimum variance allocation among constrained intervals, J.
Glob. Optim., 74 (2019), 21–44.
[39] J. A. Ventura, Computational development of a Lagrangian dual approach for quadratic
networks, Networks, 21 (1991), 469–485.
Received March 2020; 1st revision September 2020; Final revision October 2020;
Early access November 2021.
E-mail address: sunhm@mail.nutn.edu.tw
E-mail address: jenny0208sun@gmail.com