
NUMERICAL ALGEBRA, CONTROL AND OPTIMIZATION
doi:10.3934/naco.2021048
Volume 12, Number 1, March 2022, pp. 15–29

VARIABLE FIXING METHOD BY WEIGHTED AVERAGE FOR THE CONTINUOUS QUADRATIC KNAPSACK PROBLEM

Hsin-Min Sun∗
Department of Applied Mathematics
National University of Tainan, Tainan 70005, Taiwan

Yu-Juan Sun
Department of Physics
National Cheng Kung University, Tainan 70101, Taiwan

Abstract. We analyze the method of solving the separable convex continuous quadratic knapsack problem by weighted average from the viewpoint of variable fixing. It is shown that this method, considered as a variant of the variable fixing algorithms, and Kiwiel's variable fixing method generate the same iterates. We further improve the algorithm based on the analysis regarding the semismooth Newton method. Computational results are given and comparisons are made among the state-of-the-art algorithms. Experiments show that our algorithm has significantly good performance; it behaves very much like an O(n) algorithm with a very small constant.

1. Introduction. In this paper we consider the problem of minimizing a separable convex continuous quadratic function subject to a knapsack constraint and a box constraint. It is well known as the separable convex continuous quadratic knapsack problem (CQK), or the separable convex quadratic resource allocation problem subject to upper and lower bounds:
$$\text{(CQK)}\qquad \min \sum_{i=1}^{n} \Big(\frac{1}{2} d_i x_i^2 - f_i x_i\Big) \quad \text{s.t.}\quad \sum_{i=1}^{n} \beta_i x_i = r, \ \text{and}\ \ell_i \le x_i \le u_i,\ i \in I,$$

where d_i > 0, β_i > 0, and I = {1, 2, . . . , n}; while ℓ_i = −∞ and/or u_i = ∞ are allowed. Without loss of generality, we assume that ℓ_i < u_i, ∀i ∈ I, and Σ_{i=1}^{n} β_i ℓ_i ≤ r ≤ Σ_{i=1}^{n} β_i u_i.
Problem (CQK) has applications in resource allocation [2, 4, 11, 23, 25], hierarchical production planning [2], multicommodity network flow and logistics problems [13, 22, 31, 37], network flows [39], transportation problems [11], graph partitioning problems [19, 20], support vector machines [9, 13], quasi-Newton updates with bounds [8], constrained matrix problems [12], integer quadratic knapsack problems [5, 6], integer and continuous quadratic optimization over submodular constraints [23], and Lagrangian relaxation via subgradient optimization [21].

2020 Mathematics Subject Classification. Primary: 65K05; Secondary: 90C20.
Key words and phrases. Quadratic programming, separable convex programming, singly constrained quadratic program.
∗ Corresponding author: Hsin-Min Sun.
Patriksson [34] gave a survey on the continuous nonlinear resource allocation problem. Later, Patriksson and Strömberg [35] provided an up-to-date extension of that survey. Many kinds of methods have been proposed for solving (CQK).
Kiwiel [28] studied a variable fixing method for the problem. Dai and Fletcher [13]
gave an efficient algorithm based on a secant approximation. Cominetti, Mascaren-
has, and Silva [9] proposed a semismooth Newton method to solve the problem,
which is similar to Dai and Fletcher’s secant method but using a Newton’s step in-
stead of the secant step. Helgason et al. [22] gave an O(n log n) algorithm based on
sorting all the breakpoints. Several breakpoint searching methods based on binary
search achieve linear-time complexity: Brucker [7], Calamai and Moré [8], Pardalos
and Kovoor [32], Kiwiel [26, 27], and Maculan et al. [29]. Kiwiel's algorithm is similar to Pardalos and Kovoor's algorithm and is faster than the one by Brucker and the one by Calamai and Moré. Davis, Hager, and Hungerford [14] used a
Newton-type method to develop a hybrid algorithm for this problem, where a heap
data structure is implemented. Frangioni and Gorgone [18] presented a small open-
source library for its solution. Jeong [24, Chap.2] gave a comprehensive review of
separable quadratic knapsack programming.
There are other related works. Nielsen and Zenios [31] developed iterative al-
gorithms for separable convex nonlinear programs with a single linear constraint
and bounded variables. Three of the four algorithms are very efficient when imple-
mented on the massively parallel system. Pardalos, Ye, and Han [33] considered
t
the following
Pn related quadratic program formation: minimize f (x) = x Qx, sub-
ject to i=1 xi = 1, and xi ≥ 0, 1 ≤ i ≤ n, where Q is an n × n symmetric
matrix and xt = (x1 , x2 , . . . , xn ). Based on interior point method, they gave two
algorithms. The first solves convex quadratic problems; while the second computes
a stationary point for the indefinite case. We remark here that for the general
quadratic programs with positive definite Q, the interior point method can solve
them in polynomial time. Dussault et al. [15] dealt with the special case for a
positive semidefinite Q by solving a sequence of separable subproblems. Megiddo
and Tamir [30] demonstrated how the technique of Lagrangian relaxation provides
linear-time algorithms for separable convex quadratic program with a fixed number
of linear constraints, based on the multidimensional search procedure. Edirisinghe
and Jeong [17] gave linear-time algorithm that finds lower and upper bounds on
indefinite separable knapsack programs with closed box constraints.
The (CQK) problem can be treated by transforming it into equivalent forms and then solving the latter by weighted average; here we explore this process from the viewpoint of variable fixing.
Notice that

$$\sum_{i=1}^{n} \frac{1}{2} d_i x_i^2 - f_i x_i = \sum_{i=1}^{n} \frac{1}{2} d_i \Big(x_i - \frac{f_i}{d_i}\Big)^2 - \sum_{i=1}^{n} \frac{f_i^2}{2 d_i}.$$

Therefore, letting y_i ← x_i − f_i/d_i, we have the equivalent form

$$\min \Big\{ \sum_{i=1}^{n} \tfrac{1}{2} d_i y_i^2 \;\Big|\; \sum_{i=1}^{n} \beta_i y_i = r - \sum_{i=1}^{n} \frac{\beta_i f_i}{d_i};\ \ \ell_i - \frac{f_i}{d_i} \le y_i \le u_i - \frac{f_i}{d_i},\ i \in I \Big\}.$$

Next, setting y_i = (2β_i/d_i) z_i, we arrive at a formulation of the singly constrained quadratic program subject to upper and lower bounds (SCQP):

$$\text{(SCQP)}\qquad \min \sum_{i=1}^{n} c_i z_i^2 \quad \text{s.t.}\quad \sum_{i=1}^{n} c_i z_i = c, \ \text{and}\ a_i \le z_i \le b_i,\ i \in I,$$

where c = r − Σ_{i=1}^{n} β_i f_i/d_i, c_i = 2β_i²/d_i, a_i = (d_i/(2β_i))(ℓ_i − f_i/d_i), and b_i = (d_i/(2β_i))(u_i − f_i/d_i) for i ∈ I.
Problem (SCQP) is equivalent to the weighted version of the minimum variance allocation problem (WMVA):

$$\text{(WMVA)}\qquad \min \sum_{i=1}^{n} c_i \Big(z_i - \frac{c}{m}\Big)^2 \quad \text{s.t.}\quad \sum_{i=1}^{n} c_i z_i = c, \ \text{and}\ a_i \le z_i \le b_i,\ i \in I,$$

where m = Σ_{i=1}^{n} c_i. We simply call this model (MVA) if c_i = 1, ∀i ∈ I.
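The equivalence of (SCQP) and (WMVA) is immediate: on the feasible set the two objectives differ only by a constant. Expanding the square and using Σ c_i z_i = c and Σ c_i = m,

```latex
\sum_{i=1}^{n} c_i \Big(z_i - \frac{c}{m}\Big)^{2}
  = \sum_{i=1}^{n} c_i z_i^{2}
    - \frac{2c}{m}\sum_{i=1}^{n} c_i z_i
    + \frac{c^{2}}{m^{2}}\sum_{i=1}^{n} c_i
  = \sum_{i=1}^{n} c_i z_i^{2} - \frac{c^{2}}{m},
```

so the constant term −c²/m does not affect the minimizers.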
When there is an amount of a divisible resource to be distributed as fairly as possible, it is reasonable to divide the amount by the number of parties. However, if each party has its own demand interval to be satisfied, and/or each party is associated with a weight, so that equal distribution may not be feasible, a natural approach is to minimize the distribution variance. This is the problem studied in [38], called the (weighted) minimum variance allocation MVA (WMVA) model.
Let D = {(z_1, z_2, . . . , z_n) | Σ_{i=1}^{n} c_i z_i = c, a_i ≤ z_i ≤ b_i, i ∈ I}, where c_i, c, a_i, b_i ∈ ℝ, −∞ < a_i < b_i < ∞, and c_i > 0, i ∈ I. An allocation (p_1, p_2, . . . , p_n) in D is said to satisfy the uniform distribution property (UDP in short) if, for any i, j ∈ I, exactly one of the following three conditions holds: (1) p_i = p_j; (2) p_i < p_j and (p_i = b_i or p_j = a_j); (3) p_i > p_j and (p_i = a_i or p_j = b_j). Sun and Sheu [38, Theorem 1] prove that there is a unique vector p = (p_1, p_2, . . . , p_n) ∈ D with the uniform distribution property. Moreover, this vector p is the unique optimal solution to both (WMVA) and (SCQP).
Problem (SCQP) on D has a famous O(n) algorithm by Pardalos and Kovoor [32],
but the implementation relies on a practically time-consuming procedure to find
the median of a set of values. If the median is randomly approximated, an average
O(n log n) algorithm with good performance can be acquired. However, from the
viewpoint of the (weighted) minimum variance allocation, it would be more natural
to approximate the real median by the so-called “generalized weighted median”
defined in [38] rather than by random. Based on this idea, they present an algorithm
which solves (WMVA) and (SCQP) faster than the random version of Pardalos and
Kovoor's procedure, and also much faster than many other known algorithms. They also conduct a sophisticated complexity analysis to show that, although the algorithm has a theoretical worst-case O(n²) complexity, the worst case cannot happen on a 64-bit computer when the problem dimension is greater than 32 for MVA, or greater than 129 for WMVA, subject to slight conditions [38, Theorem 6, Remark 4]. The
algorithm behaved, in practical testing, very much like an O(n) algorithm with a
very small constant. Theoretically, they prove that the worst case of the algorithm

cannot happen on a d-bit computer (d ≥ 8) when the problem dimension is greater than 2d + 1, subject to slight conditions.
In the next section, we introduce Algorithm QWMVA for solving (CQK) from the
viewpoint of variable fixing. In Section 3, we focus on the comparison with Kiwiel’s
variable fixing algorithm and show that they generate the identical iterates. In
Section 4, the connection with the semismooth Newton method is discussed. In
Section 5, we improve the algorithm. In Section 6, numerical results are given and
comparisons are made among the state-of-the-art algorithms.

2. The QWMVA Algorithm. The idea of variable fixing methods is that at each iteration an estimate is computed by solving a subproblem in which the box constraints are ignored. In this way the optimal value of at least one variable can be obtained; such variable(s) are fixed and then removed for the next iteration. The method
is considered by [2, 6, 28, 36, 39]. These algorithms have a theoretical worst-case
complexity of O(n2 ). However, in practice they perform well and are competitive
with the other methods.
As noted in the last section, the (CQK) problem can be transformed into the simpler (SCQP) problem, which is equivalent to the (WMVA) problem; the complexity of the (WMVA) model has been analyzed recently in [38].
Now we introduce the implementation of the WMVA algorithm to the (CQK)
problem. Note that here `i = −∞ and/or ui = ∞ are allowed. So we will explain
the algorithm from the viewpoint of variable fixing, and not just by the uniform
distribution property described in [38], which is presented for finite upper and lower
bounds. Steps 1–4 are the so-called WMVA algorithm.

Algorithm QWMVA.
Input: r, d_i > 0, f_i, β_i > 0, ℓ_i, u_i, for i ∈ I defining the problem (CQK)
Output: Optimal solution (x*_1, x*_2, . . . , x*_n) for (CQK)
Step 0. (Initialization.)
    Set I ← {1, 2, . . . , n}.
    Set c ← r − Σ_{i=1}^{n} β_i f_i/d_i, c_i ← 2β_i²/d_i, a_i ← (d_i ℓ_i − f_i)/(2β_i), b_i ← (d_i u_i − f_i)/(2β_i), ∀i ∈ I.
Step 1. (Starting the WMVA Process.)
    Set m ← Σ_{i∈I} c_i.
Step 2. Set avg ← c/m, d ← c. Then,
    L ← {i ∈ I | b_i < avg}, n_L ← |L|, sumL ← Σ_{i∈L} c_i b_i;
    A ← {i ∈ I | a_i ≤ avg ≤ b_i}, n_A ← |A|, sumwtsA ← Σ_{i∈A} c_i;
    U ← {i ∈ I | a_i > avg}, n_U ← |U|, sumU ← Σ_{i∈U} c_i a_i.
Step 3. (Four situations.)
    (a) If n_A = |I|, then z*_i ← avg for each i ∈ A; I ← ∅.
    (b) Otherwise, set Testsum ← avg · sumwtsA + sumL + sumU.
        (i) For c > Testsum,
            z*_i ← b_i for each i ∈ L, d ← d − sumL, m ← m − Σ_{i∈L} c_i; I ← U ∪ A.
        (ii) For c < Testsum,
            z*_i ← a_i for each i ∈ U, d ← d − sumU, m ← m − Σ_{i∈U} c_i; I ← L ∪ A.
        (iii) For c = Testsum, z*_i ← b_i for each i ∈ L; z*_i ← a_i for each i ∈ U; z*_i ← avg for each i ∈ A (when n_A ≠ 0); I ← ∅.
Step 4. When I ≠ ∅, reset c ← d and m ← Σ_{i∈I} c_i. Go to Step 2.
Step 5. Return x*_i ← (2β_i z*_i + f_i)/d_i, i ∈ {1, 2, . . . , n}.
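For concreteness, Steps 0–5 can be transcribed almost line by line. The following Python sketch is ours (not the tuned implementation used for the experiments in Section 6); the stage value d of Steps 2–4 is folded into c, since the two are kept equal between iterations:

```python
def qwmva(r, d, f, beta, lo, hi):
    """Minimal sketch of Algorithm QWMVA (variable fixing by weighted average)."""
    n = len(d)
    # Step 0: transform the (CQK) data into the (SCQP)/(WMVA) form.
    c = r - sum(b * fv / dv for b, fv, dv in zip(beta, f, d))
    cw = [2 * b * b / dv for b, dv in zip(beta, d)]            # weights c_i
    a = [(dv * l - fv) / (2 * b) for dv, l, fv, b in zip(d, lo, f, beta)]
    ub = [(dv * u - fv) / (2 * b) for dv, u, fv, b in zip(d, hi, f, beta)]
    zstar = [0.0] * n
    I = set(range(n))
    m = sum(cw[i] for i in I)                                  # Step 1
    while I:
        avg = c / m                                            # Step 2
        L = {i for i in I if ub[i] < avg}
        A = {i for i in I if a[i] <= avg <= ub[i]}
        U = {i for i in I if a[i] > avg}
        if len(A) == len(I):                                   # Step 3(a)
            for i in A:
                zstar[i] = avg
            break
        sumL = sum(cw[i] * ub[i] for i in L)
        sumU = sum(cw[i] * a[i] for i in U)
        test = avg * sum(cw[i] for i in A) + sumL + sumU
        if c > test:                    # 3(b)(i): peg L at upper bounds
            for i in L:
                zstar[i] = ub[i]
            c -= sumL; m -= sum(cw[i] for i in L); I = U | A
        elif c < test:                  # 3(b)(ii): peg U at lower bounds
            for i in U:
                zstar[i] = a[i]
            c -= sumU; m -= sum(cw[i] for i in U); I = L | A
        else:                           # 3(b)(iii): trial solution is feasible
            for i in L:
                zstar[i] = ub[i]
            for i in U:
                zstar[i] = a[i]
            for i in A:
                zstar[i] = avg
            break
    # Step 5: map back to the original (CQK) variables.
    return [(2 * beta[i] * zstar[i] + f[i]) / d[i] for i in range(n)]
```

The returned solution can be verified against the KKT conditions of (CQK): there must exist a multiplier λ with x_i = mid(ℓ_i, (β_i λ + f_i)/d_i, u_i) for all i.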

Illustration. Step 3(a) is obvious: when the weighted average c/m is feasible for all demand intervals, it is the minimum variance allocation for everybody. In Step 3(b)(i), c > Testsum means that the testing solution is not feasible: some z_j = c/m, j ∈ A, or some z_k = a_k, k ∈ U, needs to be increased. However, since at this stage all variable(s) in L attain their upper bound(s), which are less than the values assigned to the variables in A and U, it is reasonable to peg these variables in L and continue the trial solution to the next stage. By a similar argument, when it comes to Step 3(b)(ii), it is reasonable to fix z*_k = a_k, ∀k ∈ U for the trial solution. Finally, it is easy to see that, in Step 3(b)(iii), the trial solution with z_i = b_i for i ∈ L, z_j = c/m for j ∈ A, and z_k = a_k for k ∈ U is feasible. We have seen that, at each iteration, the algorithm determines at least one value of the z*_i, whereas the undetermined ones constitute a smaller (WMVA) with exactly the same problem structure. The algorithm therefore terminates correctly with the unique optimal solution after at most n iterations.

We remark that for (CQK) with finite upper and lower bounds, the explanations using the uniform distribution property and the generalized weighted median described in [38] give convincing reasons for the correctness of this algorithm.
Eaves improved Frank and Wolfe's result, which indicates that a quadratic function bounded below on a polyhedral convex set attains its infimum. By the Eaves theorem [16, Theorem 3 and Corollary 4] we infer that problem (CQK) attains its infimum. Here we give a result related to Theorem 1 of [38].
Theorem 2.1. Problem (CQK) with some unbounded upper/lower bounds of the variables is equivalent to a formulation with all finite upper and lower bounds.
Proof. As considered via its equivalent problem (SCQP), let
I₁ = {i ∈ I | −∞ < a_i < b_i < ∞}, I₂ = {i ∈ I | −∞ = a_i < b_i < ∞},
I₃ = {i ∈ I | −∞ < a_i < b_i = ∞}, and I₄ = {i ∈ I | −∞ = a_i < b_i = ∞}.
Let η = c − Σ_{i∈I₁∪I₃} c_i a_i if |I₂| + |I₄| > 0. Let ζ = c − Σ_{i∈I₁∪I₂} c_i b_i if |I₃| + |I₄| > 0. Let a′_i = min({η/Σ_{i∈I₂∪I₄} c_i} ∪ {a_j | j ∈ I₁ ∪ I₃}) − 1 if i ∈ I₂ ∪ I₄, otherwise a′_i = a_i; let b′_i = max({ζ/Σ_{i∈I₃∪I₄} c_i} ∪ {b_j | j ∈ I₁ ∪ I₂}) + 1 if i ∈ I₃ ∪ I₄, otherwise b′_i = b_i. We then have Σ_{i∈I} c_i a′_i < c < Σ_{i∈I} c_i b′_i. Therefore, z*_i will never attain a′_i for i ∈ I₂ ∪ I₄, nor will z*_i attain b′_i for i ∈ I₃ ∪ I₄, according to the UDP property. Thus, any previously unbounded variable will not attain the newly assigned bound(s) in the optimal solution on the new restricted bounded feasible region. On the other hand, if in the original feasible region of (SCQP) there is z*_i such that z*_i ≤ a′_i for some i ∈ I₂ ∪ I₄, then z*_i = a′_i = a_i, ∀i ∈ I₁ ∪ I₃. So we have z*_i = η/Σ_{i∈I₂∪I₄} c_i > a′_i, ∀i ∈ I₂ ∪ I₄, which leads to a contradiction. Similarly, z*_i ≥ b′_i for some i ∈ I₃ ∪ I₄ also leads to a contradiction. Hence the optimal solution on the new restricted bounded feasible region is also the optimal solution for the original (SCQP), and the statement follows.
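The bound construction in the proof is mechanical enough to sketch in code. The function below (the name finitize and the variable names are ours) takes (SCQP)-form data with possibly infinite bounds and returns the finite bounds a′_i, b′_i of the proof; by construction the returned bounds satisfy Σ c_i a′_i < c < Σ c_i b′_i:

```python
import math

def finitize(c, cw, a, b):
    """Replace infinite bounds by finite ones as in Theorem 2.1 (sketch).

    c  : the right-hand side of the (SCQP) constraint
    cw : the weights c_i > 0
    a,b: lower/upper bounds, possibly -inf / +inf
    """
    n = len(cw)
    I1 = [i for i in range(n) if a[i] > -math.inf and b[i] < math.inf]
    I2 = [i for i in range(n) if a[i] == -math.inf and b[i] < math.inf]
    I3 = [i for i in range(n) if a[i] > -math.inf and b[i] == math.inf]
    I4 = [i for i in range(n) if a[i] == -math.inf and b[i] == math.inf]
    a2, b2 = list(a), list(b)
    if I2 or I4:
        eta = c - sum(cw[i] * a[i] for i in I1 + I3)
        # candidate values: eta spread over the unbounded-below weights,
        # together with all existing finite lower bounds
        lo_cand = [eta / sum(cw[i] for i in I2 + I4)] + [a[j] for j in I1 + I3]
        for i in I2 + I4:
            a2[i] = min(lo_cand) - 1
    if I3 or I4:
        zeta = c - sum(cw[i] * b[i] for i in I1 + I2)
        hi_cand = [zeta / sum(cw[i] for i in I3 + I4)] + [b[j] for j in I1 + I2]
        for i in I3 + I4:
            b2[i] = max(hi_cand) + 1
    return a2, b2
```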

Our method for obtaining the unique solution has the following advantages: (1) it has a short programming code, which is very important when one needs to integrate the WMVA method with specific applications; (2) making variable fixing by weighted average is deterministic by its own nature, and is therefore easier to handle in numerical analysis. Besides, it is still efficient in the sense of low complexity. That is, this algorithm is a good choice for real-time applications.

Remark 1. For the solution {z*_1, . . . , z*_n} of (WMVA), there is the generalized weighted median y* among them, defined in [38], such that z*_i = b_i if b_i < y* and z*_i = a_i if a_i > y*, for i ∈ {1, 2, . . . , n}. The properties of y* are characterized in [38, Corollary 1 and Remark 3].
The average avg in the last iteration is related to y* as follows: either y* = avg is the unique generalized weighted median, or there are exactly two candidates for y* which lie on opposite sides of avg.
Therefore, the following improvement of Algorithm QWMVA can further save a little computation time when the problem dimension is large. We omit the variable z*_i and the assignments to z*_i in (a), (b)(i), (b)(ii), (b)(iii) of Step 3, and instead make the assignments for x*_i directly at Step 5 by

$$x_i^* = \begin{cases} u_i, & \text{if } b_i < avg,\\ \ell_i, & \text{if } a_i > avg,\\ (2\beta_i\, avg + f_i)/d_i, & \text{otherwise,}\end{cases}$$

where avg is the average in the last iteration. This approach is implemented in the actual programming to obtain the experimental results in Section 6.

3. Relation with Kiwiel's Variable Fixing Method. Bitran and Hax [2] proposed a variable fixing method for general convex and separable nonlinear objective functions. Kiwiel [28] improved Bitran and Hax's algorithm for dealing with (CQK). We describe Kiwiel's algorithm as follows.
At each iteration, the algorithm partitions the variables into three parts with index sets I_k, U_k, and L_k,¹ which indicate that U_k is the set of variables that can be pegged to the upper bound u_i, L_k is the set of variables that can be pegged to the lower bound ℓ_i, and I_k is the set of the remaining free variables. By fixing the values in
Uk and Lk , we consider a restricted version of the problem for the remaining free
variables to see if there is a feasible solution. Otherwise, at least one variable can be
pegged to upper bound or lower bound; so the fixed variable(s) are removed from
the free variables and the process repeats until an optimal solution is determined.
After fixing the values in U_k and L_k, at the k-th iteration the reduced subproblem without the box constraints is considered:

$$x_{I_k}^k = \arg\min\Big\{ \sum_{i\in I_k} \Big(\frac{1}{2} d_i x_i^2 - f_i x_i\Big) \;\Big|\; \sum_{i\in I_k} \beta_i x_i = r_k \Big\}.$$

The remaining components are computed as

$$x_i^k = \frac{-t_k \beta_i + f_i}{d_i},\qquad i \in I_k,$$

where t_k is the Lagrange multiplier of the reduced subproblem and is determined as

$$t_k = \frac{-r_k + \sum_{i\in I_k} \beta_i f_i/d_i}{\sum_{i\in I_k} \beta_i^2/d_i} \quad\text{with}\quad r_k = r - \sum_{i\in L_k} \beta_i \ell_i - \sum_{i\in U_k} \beta_i u_i. \tag{1}$$
Let Δ_k = Σ_{i∈I_k^u} β_i (x_i^k − u_i), where I_k^u = {i ∈ I_k | x_i^k ≥ u_i}. Also let ∇_k = Σ_{i∈I_k^ℓ} β_i (ℓ_i − x_i^k), where I_k^ℓ = {i ∈ I_k | x_i^k ≤ ℓ_i}. The stopping criterion is then decided by the condition Δ_k = ∇_k. For the part of variable fixing, when Δ_k > ∇_k,
¹ Note that the notations L and U used for Algorithm QWMVA and the notations L_k and U_k have the opposite meanings, respectively. That is, we have the correspondence: L ↔ U_k and U ↔ L_k.

then the variable(s) in I_k^u are pegged to u_i, i ∈ I_k^u; while the variable(s) in I_k^ℓ are pegged to ℓ_i if Δ_k < ∇_k.
Theorem 3.1. Algorithm QWMVA and Kiwiel's variable fixing algorithm generate the same iterates, so they have the same number of iterations. In particular, the conditions c > Testsum, c < Testsum, and c = Testsum in Algorithm QWMVA are equivalent to Δ_k > ∇_k, Δ_k < ∇_k, and Δ_k = ∇_k in Kiwiel's variable fixing, respectively.
Proof. Notice that for Kiwiel's variable fixing, at the k-th iteration the key factor is t_k. In Algorithm QWMVA, the weighted average at the k-th iteration is

$$avg_k = \frac{c(k\text{-th})}{\sum_{i\in I_k} c_i} = \frac{r_k - \sum_{i\in I_k} \beta_i f_i/d_i}{2\sum_{i\in I_k} \beta_i^2/d_i}, \tag{2}$$

while to avg_k there corresponds the trial solution x_i = (2β_i avg_k + f_i)/d_i. By (1) and (2), the key factor t_k in Kiwiel's variable fixing differs from the key factor avg_k in Algorithm QWMVA only by a fixed ratio −2 in each stage, i.e., t_k = −2 avg_k. It follows that at the k-th stage the trial solution x_i^k in variable fixing equals the trial solution x_i in Algorithm QWMVA. Therefore, Algorithm QWMVA generates exactly the same iteration steps as Kiwiel's variable fixing method does.

Since the iteration steps coincide for these two algorithms, the analysis for Algo-
rithm QWMVA also applies to Kiwiel’s variable fixing algorithm. Hence, although
variable fixing has a worst-case O(n2 ) complexity theoretically, the worst case can-
not happen on a 64-bit computer when the problem dimension is greater than 129
subject to slight conditions. We conclude that Kiwiel’s variable fixing algorithm
also behaves very much like an O(n) algorithm with a small constant. Meanwhile,
Algorithm QWMVA is always quicker than Kiwiel’s variable fixing algorithm due
to its simpler structure.
Notice that the semismooth Newton method [9] and the hybrid algorithm [14]
both incorporate the variable fixing ideas. Also, the semismooth Newton method
and Kiwiel’s variable fixing method coincide when only lower bounds or upper
bounds are present. Therefore, it gives a convincing reason why these algorithms
perform well even though they have the theoretical complexity of O(n2 ).

4. Connection with the Semismooth Newton Method. Cominetti, Mascarenhas, and Silva [9] proposed the semismooth Newton method, which is similar to Dai and Fletcher's secant method [13] but uses a Newton step instead of the secant step. Let us first consider the semismooth Newton method with upper and lower bounds.
The following single-variable dual problem is considered:

$$\max_{\lambda}\ \inf_{\ell_i \le x_i \le u_i}\ \Big\{ \sum_{i\in I} \Big(\frac{1}{2} d_i x_i^2 - f_i x_i\Big) + \lambda\Big(r - \sum_{i\in I} \beta_i x_i\Big) \Big\}.$$

The inner infimum is solved with optimal solution x_i(λ) = mid(ℓ_i, (β_i λ + f_i)/d_i, u_i) for i ∈ I. Next we have ϕ(λ) := Σ_{i=1}^{n} β_i x_i(λ) = r from the KKT conditions for (CQK). Therefore,

$$\varphi(\lambda) = \sum_{i=1}^{n} \beta_i\, \mathrm{mid}\big(\ell_i,\ (\beta_i \lambda + f_i)/d_i,\ u_i\big).$$
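Because ϕ is nondecreasing and piecewise linear, the dual collapses to the scalar equation ϕ(λ) = r. As a baseline sketch (ours, assuming finite bounds; the semismooth Newton method replaces the bisection update with the Newton steps discussed in this section), one can solve it by bisection over the breakpoint range:

```python
def phi(lam, d, f, beta, lo, hi):
    # phi(lambda) = sum_i beta_i * mid(l_i, (beta_i*lam + f_i)/d_i, u_i)
    return sum(b * min(max((b * lam + fv) / dv, l), u)
               for dv, fv, b, l, u in zip(d, f, beta, lo, hi))

def solve_dual(r, d, f, beta, lo, hi, tol=1e-12):
    """Bisection sketch for phi(lambda) = r; assumes finite bounds and
    sum_i beta_i*l_i <= r <= sum_i beta_i*u_i, so the bracket is valid."""
    # smallest/largest breakpoints bracket the solution:
    lam_lo = min((dv * l - fv) / b for dv, l, fv, b in zip(d, lo, f, beta))
    lam_hi = max((dv * u - fv) / b for dv, u, fv, b in zip(d, hi, f, beta))
    while lam_hi - lam_lo > tol:
        mid = 0.5 * (lam_lo + lam_hi)
        if phi(mid, d, f, beta, lo, hi) < r:   # phi is nondecreasing
            lam_lo = mid
        else:
            lam_hi = mid
    lam = 0.5 * (lam_lo + lam_hi)
    return [min(max((b * lam + fv) / dv, l), u)
            for dv, fv, b, l, u in zip(d, f, beta, lo, hi)]
```

The breakpoint-based Newton and secant updates described next accelerate exactly this root-finding problem.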

Notice that ϕ(λ) plays the role of Testsum; that is, ϕ(λ) − Σ_{i=1}^{n} β_i f_i/d_i = Testsum in its (SCQP) formulation. There are 2n breakpoints z_i in ∪_{i∈I} {z_i⁻, z_i⁺}, where z_i⁻ = (d_i ℓ_i − f_i)/β_i = 2a_i and z_i⁺ = (d_i u_i − f_i)/β_i = 2b_i. Basically, the following strategies are used to update the multiplier λ if it is in the direction of λ*:
• When r > ϕ(λ_k), set λ_N ← λ_k + (r − ϕ(λ_k))/ϕ′₊(λ_k);
• when r < ϕ(λ_k), set λ_N ← λ_k + (r − ϕ(λ_k))/ϕ′₋(λ_k),
where ϕ′₊(λ) = Σ_{z_i ≤ λ} β_i²/d_i and ϕ′₋(λ) = Σ_{z_i < λ} β_i²/d_i, which correspond to ½ Σ_{z_i ≤ λ} c_i and ½ Σ_{z_i < λ} c_i in the (SCQP) model, respectively. An interval is used to bracket a solution. If λ_N is within the bracketing interval, then set λ_{k+1} = λ_N and the new bracketing interval can be formed; otherwise, a secant approximation might be used to get the bracketing interval. We find that λ/2 acts as the trial solution in the (SCQP) model if the initial multiplier λ₁ is twice the value of the weighted average avg₁; that is, λ_k is taken as 2 avg_k for k ≥ 1. Thus, interpreted from the viewpoint of the corresponding (SCQP) model, these roughly become the following:
• When c = c(k-th) > Testsum, set 2 avg_{k+1} ← 2 avg_k + 2(c − Testsum)/Σ_{z_i ≤ 2avg_k} c_i;
• when c = c(k-th) < Testsum, set 2 avg_{k+1} ← 2 avg_k + 2(c − Testsum)/Σ_{z_i < 2avg_k} c_i.
Meanwhile, the following forms can also be considered:
• When c = c(k-th) > Testsum, set avg_{k+1} ← avg_k + (c − Testsum)/(sumwtsL + sumwtsA);
• when c = c(k-th) < Testsum, set avg_{k+1} ← avg_k + (c − Testsum)/sumwtsL.
This reveals a certain connection of the semismooth Newton method with the weighted average, which is used in Algorithm QWMVA. We have done experiments based on the above idea; however, the results are not as good as those for Algorithm QWMVA2 introduced in the next section.
As to the Newton method with single bounds, the situation is simpler.
Theorem 4.1. [9, Proposition 1] Suppose either ℓ_i = −∞ for all i or u_i = ∞ for all i. Let x_i = (β_i λ₁ + f_i)/d_i be the initial point in the variable fixing method, where the multiplier λ₁ is given by (r − s₁)/q₁ with s₁ = Σ_{i=1}^{n} β_i f_i/d_i and q₁ = Σ_{i=1}^{n} β_i²/d_i. Then the Newton iteration started from λ₁ generates exactly the same iterates as the variable fixing method.²
Notice that λ₁/2 is the weighted average in the (SCQP) model. Therefore, Algorithm QWMVA also generates the same iterates as the Newton method under single bounds.

5. Improving the QWMVA Algorithm. One way to improve Algorithm QWMVA is to use another value instead of the weighted average of the remaining parties in Step 3(b)(i)&(ii). When A ≠ ∅, it is a good idea to use the weighted average of the parties in A; that is, the value (c − sumL − sumU)/sumwtsA is preferred. In this way we can speed up the process significantly. We call this modified version "QWMVA2".

² Notice the change of sign λ_k = −t_k for k ≥ 1, where t_k is used in [28].



For example, when entering Step 3(b)(i) with the situation c > Testsum, if n_L > 0 and n_A > 0, then the variable(s) indexed in L are determined as their upper bounds. We then use avg + (c − Testsum)/sumwtsA = (c − sumL − sumU)/sumwtsA as the next approximation value instead of d/m = (c − sumL)/(sumwtsA + sumwtsU). Note that these two values coincide if n_U = 0. Also, in case no variable is fixed in such an iteration, i.e., n_L = 0, the value d/m has to be used, since using it will fix at least one variable in the next iteration. Similarly, when entering Step 3(a) with the approximation value not being d/m, we need to restore the approximation value to d/m and recheck the situation so that the final execution step is assured.
Notice that the improvements mentioned in Remark 1 are also carried over. That is, no variable is actually assigned until the last average avg is found, and then all the variables can be decided at once. Although no variable is fixed before the last stage, the method is still in the spirit of variable fixing, which yields the correct answers.
The modified algorithm is as follows; on average it is quicker than Algorithm QWMVA in the experiments.

Algorithm QWMVA2.
Input: r, d_i > 0, f_i, β_i > 0, ℓ_i, u_i, for i ∈ I defining the problem (CQK)
Output: Optimal solution (x*_1, x*_2, . . . , x*_n) for (CQK)

Step 0. (Initialization.)
    Set I ← {1, 2, . . . , n}.
    Set c ← r − Σ_{i=1}^{n} β_i f_i/d_i, c_i ← 2β_i²/d_i, a_i ← (d_i ℓ_i − f_i)/(2β_i), b_i ← (d_i u_i − f_i)/(2β_i), ∀i ∈ I.
Step 1. Set m ← Σ_{i∈I} c_i, avgtmp ← c/m, avgnew ← avgtmp, d ← c.
Step 2. Set avg ← avgtmp. Then,
    L ← {i ∈ I | b_i < avg}, n_L ← |L|, sumL ← Σ_{i∈L} c_i b_i;
    A ← {i ∈ I | a_i ≤ avg ≤ b_i}, n_A ← |A|, sumwtsA ← Σ_{i∈A} c_i;
    U ← {i ∈ I | a_i > avg}, n_U ← |U|, sumU ← Σ_{i∈U} c_i a_i.
Step 3. (Four situations.)
    (a) If n_A = |I|, then,
        if avg = avgnew, then I ← ∅;
        otherwise, set avgtmp ← avgnew, go to Step 2.
    (b) Otherwise, set Testsum ← avg · sumwtsA + sumL + sumU.
        (i) For c > Testsum,
            d ← d − sumL, m ← m − Σ_{i∈L} c_i; I ← U ∪ A;
            avgnew ← d/m; avgtmp ← avgnew;
            if (n_A > 0 and n_L > 0), then avgtmp ← (d − sumU)/sumwtsA.
        (ii) For c < Testsum,
            d ← d − sumU, m ← m − Σ_{i∈U} c_i; I ← L ∪ A;
            avgnew ← d/m; avgtmp ← avgnew;
            if (n_A > 0 and n_U > 0), then avgtmp ← (d − sumL)/sumwtsA.
        (iii) For c = Testsum, I ← ∅.
Step 4. When I ≠ ∅, reset c ← d and m ← Σ_{i∈I} c_i. Go to Step 2.
Step 5. Set x*_i ← u_i if b_i < avg; x*_i ← ℓ_i if a_i > avg; x*_i ← (2β_i avg + f_i)/d_i, otherwise, i ∈ {1, 2, . . . , n}.
Step 6. Return x*_i, i ∈ {1, 2, . . . , n}.
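A direct transcription of Steps 1–6 on the transformed data (c, c_i, a_i, b_i) may look as follows (a minimal sketch of ours, without the micro-optimizations used in the experiments; avgtmp and avgnew follow the pseudocode above, and d is folded into c between stages):

```python
def qwmva2(c, cw, a, b):
    """Sketch of Algorithm QWMVA2 on (SCQP)/(WMVA)-form data; returns z*."""
    n = len(cw)
    I = set(range(n))
    m = sum(cw[i] for i in I)
    d = c
    avgtmp = avgnew = c / m                       # Step 1
    while True:
        avg = avgtmp                              # Step 2
        L = {i for i in I if b[i] < avg}
        A = {i for i in I if a[i] <= avg <= b[i]}
        U = {i for i in I if a[i] > avg}
        if len(A) == len(I):                      # Step 3(a)
            if avg == avgnew:
                break
            avgtmp = avgnew                       # restore d/m and recheck
            continue
        sumL = sum(cw[i] * b[i] for i in L)
        sumU = sum(cw[i] * a[i] for i in U)
        wtsA = sum(cw[i] for i in A)
        test = avg * wtsA + sumL + sumU
        if c > test:                              # 3(b)(i)
            d -= sumL; m -= sum(cw[i] for i in L); I = U | A
            avgtmp = avgnew = d / m
            if A and L:                           # weighted average over A
                avgtmp = (d - sumU) / wtsA
        elif c < test:                            # 3(b)(ii)
            d -= sumU; m -= sum(cw[i] for i in U); I = L | A
            avgtmp = avgnew = d / m
            if A and U:
                avgtmp = (d - sumL) / wtsA
        else:                                     # 3(b)(iii)
            break
        c = d                                     # Step 4
    # Step 5: assign every z_i from the last avg.
    return [b[i] if b[i] < avg else a[i] if a[i] > avg else avg
            for i in range(n)]
```

The mapping back to x*_i = (2β_i z*_i + f_i)/d_i is the same as in Algorithm QWMVA.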

Theoretically, it takes at most 2n iterations for Algorithm QWMVA2 to complete the execution. However, the worst case of 2n iterations for n intervals would not happen in practice, based on the analysis in [38], which shows that Algorithm QWMVA2 takes no more than 258 (4d + 2) iterations on a 64-bit (d-bit) computer. Also note that, from the experiments, although Algorithm QWMVA2 performs better than Algorithm QWMVA on average, the former generally has a larger maximum number of iterations and maximum time than the latter.

Dimension Iterations Time (msec) Iterations Time (msec)


n avg max min avg max min avg max min avg max min
QWMVA2 QWMVA
50000 4.9 9 3 1.5 1.8 1.2 7.8 11 6 1.6 1.9 1.4
100000 5.7 12 3 3.0 4.3 2.6 8.2 11 7 3.3 3.7 2.5
500000 5.7 13 4 16.0 23.0 13.5 8.7 12 7 17.5 20.0 14.0
1000000 5.7 14 3 38.5 71.0 30.0 8.8 12 7 46.6 55.0 33.0
1500000 5.7 13 4 62.1 116.7 46.7 8.9 12 7 76.7 91.7 56.7
2000000 6.0 14 4 86.7 164.0 64.0 8.9 12 7 104.3 122.0 68.0
Newton Secant
50000 4.7 8 3 2.3 2.7 1.8 8.2 12 6 3.0 4.0 2.2
100000 5.2 9 3 4.7 5.5 3.8 8.7 14 6 6.1 8.2 4.1
500000 5.3 9 4 25.8 30.5 20.5 8.6 14 6 31.2 43.0 21.5
1000000 5.2 10 3 52.0 63.0 42.0 8.4 13 6 60.9 82.0 44.0
1500000 5.1 9 3 77.0 93.3 58.3 8.4 14 6 93.2 130.0 63.3
2000000 5.2 11 4 102.1 124.0 84.0 8.5 15 7 126.5 170.0 94.0
Variable fixing Median search
50000 7.8 11 6 2.5 2.8 2.2 16.7 17 16 4.3 4.5 3.6
100000 8.2 11 7 5.0 5.4 4.4 17.7 18 17 8.4 9.0 7.3
500000 8.7 12 7 30.6 34.5 26.5 19.9 20 19 45.2 50.5 39.0
1000000 8.8 12 7 65.1 73.0 55.0 20.9 21 20 92.8 99.0 80.0
1500000 8.9 12 7 98.1 108.3 85.0 21.6 22 21 144.4 151.7 125.0
2000000 8.9 12 7 129.4 144.0 108.0 22.0 22 21 192.4 202.0 166.0

Table 1. Uncorrelated test

Dimension Iterations Time (msec) Iterations Time (msec)


n avg max min avg max min avg max min avg max min
QWMVA2 QWMVA
50000 5.0 11 3 1.4 2.0 1.2 7.7 10 6 1.6 1.7 1.3
100000 5.5 11 3 2.9 3.9 2.4 7.9 11 7 3.1 3.5 2.4
500000 5.7 13 3 15.5 22.0 13.5 8.2 11 7 16.8 18.0 14.5
1000000 5.5 13 3 37.4 62.0 27.0 8.4 11 7 44.6 50.0 31.0
1500000 5.4 13 4 60.3 108.3 48.3 8.6 10 7 74.2 83.3 56.7
2000000 5.9 13 4 84.4 146.0 66.0 8.6 12 7 100.4 110.0 64.0
Newton Secant
50000 4.6 8 3 2.2 2.4 1.8 8.0 12 6 2.9 4.2 1.9
100000 5.0 9 3 4.4 4.8 3.5 8.4 14 6 5.8 8.8 3.7
500000 5.1 8 3 24.3 28.0 19.5 8.5 13 6 29.6 45.0 19.5
1000000 5.0 9 3 49.4 59.0 37.0 8.2 13 6 57.3 91.0 39.0
1500000 5.0 8 3 74.2 90.0 60.0 8.3 13 6 88.3 130.0 56.7
2000000 5.1 10 3 98.9 118.0 78.0 8.3 14 6 120.8 182.0 76.0
Variable fixing Median search
50000 7.7 10 6 2.4 2.6 2.2 16.7 17 16 4.2 4.4 3.6
100000 7.9 11 7 4.8 5.3 4.4 17.7 18 17 8.3 8.7 7.2
500000 8.2 11 7 29.1 31.5 26.0 19.9 20 19 44.8 47.5 39.0
1000000 8.4 11 7 61.9 67.0 55.0 21.0 21 20 91.7 97.0 80.0
1500000 8.6 10 7 93.4 100.0 83.3 21.6 22 21 141.9 150.0 123.3
2000000 8.6 12 7 125.3 138.0 102.0 21.9 22 21 189.7 198.0 162.0

Table 2. Weakly correlated test



6. Numerical Results. We compared Algorithms QWMVA/QWMVA2 with state-of-the-art algorithms such as the Newton method [9], the secant algorithm [13], Kiwiel's variable fixing [28], and Kiwiel's implementation of median search [27]. The comparison was made in the way Cominetti, Mascarenhas, and Silva [9] did for the semismooth Newton method.
Three groups of problems were generated with data entries chosen by independent draws from uniform distributions U[a, b]. These classes are
1. Uncorrelated: β_i, d_i, f_i ∼ U[10, 25];
2. Weakly correlated: β_i ∼ U[10, 25] and d_i, f_i ∼ U[β_i − 5, β_i + 5];
3. Correlated: β_i ∼ U[10, 25] and d_i = f_i = β_i + 5.
We generated ℓ_i, u_i independently and uniformly from [1, 15], while r was chosen uniformly in [Σ_{i=1}^{n} β_i ℓ_i, Σ_{i=1}^{n} β_i u_i]. The problem dimension n varied from 50,000 to 2,000,000.
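A generator for these instance classes can be sketched as follows (our illustration; the paper does not publish its generator, and sorting two draws to guarantee ℓ_i < u_i is our assumption):

```python
import random

def make_instance(n, kind="uncorrelated", seed=0):
    """Generate one (CQK) test instance following the classes of Section 6."""
    rng = random.Random(seed)
    beta = [rng.uniform(10, 25) for _ in range(n)]
    if kind == "uncorrelated":
        d = [rng.uniform(10, 25) for _ in range(n)]
        f = [rng.uniform(10, 25) for _ in range(n)]
    elif kind == "weakly":
        d = [rng.uniform(b - 5, b + 5) for b in beta]
        f = [rng.uniform(b - 5, b + 5) for b in beta]
    else:  # correlated
        d = [b + 5 for b in beta]
        f = [b + 5 for b in beta]
    # draw two values in [1, 15] and order them so that l_i <= u_i
    bounds = [sorted((rng.uniform(1, 15), rng.uniform(1, 15))) for _ in range(n)]
    lo = [p[0] for p in bounds]
    hi = [p[1] for p in bounds]
    # r chosen uniformly in the feasible range of the knapsack constraint
    r = rng.uniform(sum(b * l for b, l in zip(beta, lo)),
                    sum(b * u for b, u in zip(beta, hi)))
    return r, d, f, beta, lo, hi
```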
The results are presented in Tables 1–3. The average, maximum, and minimum number of iterations and runtimes over 100 randomly generated data sets for each dimension were obtained, where each random test was repeated 10⁷/n times in a loop and the mean time was obtained accordingly. The programs were run on a server with an Intel Xeon E5-2620 v3 CPU (2.40 GHz). The computer has 64 GB RAM and runs Ubuntu 12.04.5 64-bit. The compiler used was gcc 4.6.3 with optimization flags -march=corei7-avx -O3 -ffast-math. The results indicate that Algorithm QWMVA2 has significantly good performance.
We also performed experiments similar to problems arising in multicommodity network flows and logistics. Here we used d₁ = 1, d_n = 10⁴, d_i ∼ U[d₁, d_n] for i = 2, . . . , n−1, f_i ∼ U[−1000, 1000], β_i = 1, ℓ_i = 0, and u_i ∼ U[0, 1000] for i = 1, . . . , n. The results are presented in Table 4. Algorithms QWMVA/QWMVA2 performed far better than the others on this set. It surprised us that Algorithm QWMVA is even quicker than Algorithm QWMVA2 in this instance. Note that Algorithm QWMVA, Algorithm QWMVA2, the semismooth Newton method, and the variable fixing algorithm generate almost the same iterates in this case.
Dimension Iterations Time (msec) Iterations Time (msec)
n avg max min avg max min avg max min avg max min
QWMVA2 QWMVA
50000 5.1 12 3 1.4 1.8 1.2 7.6 9 7 1.6 1.7 1.4
100000 5.1 11 3 2.9 3.7 2.5 7.8 10 6 3.1 3.4 2.5
500000 5.5 12 3 15.5 21.0 13.0 8.1 10 7 16.7 18.0 14.5
1000000 5.6 12 3 37.2 58.0 27.0 8.3 10 7 44.0 49.0 36.0
1500000 5.5 14 3 60.4 96.7 46.7 8.4 11 7 72.0 80.0 53.3
2000000 5.5 13 3 82.1 128.0 62.0 8.3 10 7 97.9 106.0 88.0
Newton Secant
50000 4.8 7 3 2.1 2.4 1.7 8.1 12 6 2.8 4.3 1.6
100000 4.7 9 3 4.3 4.8 3.4 8.0 11 6 5.8 8.7 3.8
500000 5.0 8 3 23.8 28.5 18.0 8.3 12 6 29.0 44.5 19.0
1000000 4.9 8 3 48.9 60.0 37.0 8.2 12 6 57.2 88.0 38.0
1500000 5.0 9 3 73.1 93.3 56.7 8.3 12 6 91.1 131.7 58.3
2000000 4.9 7 3 96.1 116.0 72.0 8.1 11 6 114.3 176.0 80.0
Variable fixing Median search
50000 7.6 9 7 2.4 2.6 2.2 16.7 17 16 4.2 4.4 3.8
100000 7.8 10 6 4.8 5.5 4.4 17.7 18 17 8.3 8.7 7.3
500000 8.1 10 7 28.8 31.0 26.0 19.9 20 19 44.8 47.5 39.5
1000000 8.3 10 7 61.1 68.0 55.0 20.9 21 20 91.2 97.0 81.0
1500000 8.4 11 7 92.2 103.3 83.3 21.6 22 21 142.2 150.0 123.3
2000000 8.3 10 7 122.0 136.0 110.0 21.9 22 21 188.6 198.0 172.0

Table 3. Correlated test



Dimension Iterations Time (msec) Iterations Time (msec)


n avg max min avg max min avg max min avg max min
QWMVA2 QWMVA
50000 5.9 8 4 1.1 1.3 0.9 5.9 8 4 1.1 1.2 0.8
100000 5.8 9 4 2.2 2.6 1.8 5.8 9 4 2.1 2.5 1.7
500000 6.1 9 4 12.2 14.0 10.0 6.1 9 4 11.5 13.5 9.0
1000000 6.2 9 4 27.1 32.0 21.0 6.2 9 4 25.5 30.0 19.0
1500000 6.1 11 4 44.1 51.7 35.0 6.1 11 4 42.0 50.0 33.3
2000000 6.2 8 4 59.5 68.0 48.0 6.2 8 4 56.8 66.0 44.0
Newton Secant
50000 5.9 8 4 1.9 2.4 1.2 14.2 18 9 3.0 3.9 1.9
100000 5.8 9 4 3.8 4.9 2.6 14.0 19 9 6.0 8.2 4.2
500000 6.1 9 4 22.7 30.5 14.5 14.2 19 9 32.4 43.5 23.5
1000000 6.2 9 4 46.9 64.0 29.0 14.2 19 9 64.5 87.0 40.0
1500000 6.1 11 4 69.1 95.0 43.3 14.0 21 10 95.6 145.0 66.7
2000000 6.1 8 4 92.7 124.0 58.0 14.2 16 11 128.1 148.0 98.0
Variable fixing Median search
50000 5.9 8 4 1.9 2.4 1.2 16.6 17 16 3.3 3.6 3.1
100000 5.8 9 4 3.7 4.7 2.5 17.7 18 17 6.5 6.9 6.1
500000 6.1 9 4 22.0 28.5 14.0 19.9 20 19 35.8 38.0 33.0
1000000 6.2 9 4 45.6 61.0 27.0 20.9 21 20 73.3 77.0 69.0
1500000 6.1 11 4 66.5 91.7 41.7 21.6 22 21 112.9 118.3 108.3
2000000 6.2 8 4 90.9 118.0 56.0 21.9 22 21 151.0 162.0 144.0

Table 4. Multicommodity network flow test

6.1. Support vector machines. Vapnik and Chervonenkis introduced the original support vector machine (SVM), a linear classifier, in 1963. Boser, Guyon and Vapnik [3] suggested using the kernel trick to create nonlinear classifiers in 1992, and Cortes and Vapnik [10] proposed the current standard form of the SVM.
Given a training set of labelled samples
{(zi , wi ) | zi ∈ Rm , wi ∈ {−1, 1}, i = 1, . . . , n},
the SVM classifies new examples z ∈ R^m by a function F : R^m → {−1, 1} of the form

    F(z) = sign( Σ_{i=1}^{n} x_i^* w_i K(z, z_i) + b^* ),

where K : R^m × R^m → R is some kernel function. Here the Gaussian kernel
K(z_i, z_j) = exp( −‖z_i − z_j‖_2^2 / (2σ^2) ) is used. The vector x^* = (x_i^*) ∈ R^n solves

    min   (1/2) x^T H x − e^T x
    s.t.  w^T x = 0, and 0 ≤ x ≤ Ce,
where H has entries Hij = wi wj K(zi , zj ), i, j ∈ {1, . . . , n}, e ∈ Rn is the all-ones
vector, and C is a positive parameter.
Since H is positive semi-definite and not diagonal, we use the spectral projected gradient (SPG) method [1] to deal with the above model; it requires solving a sequence of (CQK) projection subproblems.
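To make the (CQK) subproblem inside SPG concrete: each SPG iteration must project a point y onto the feasible set {x : wᵀx = 0, 0 ≤ x ≤ Ce}. The sketch below solves this by plain bisection on the multiplier λ, using the facts that the projection satisfies x_i(λ) = clamp(y_i − λw_i, 0, C) and that g(λ) = wᵀx(λ) is nonincreasing. This is our own illustrative solver, not the QWMVA method or any algorithm compared in this paper.

```c
#include <math.h>

/* Project y onto {x : w'x = 0, 0 <= x_i <= C} with w_i in {-1, +1}.
   The KKT conditions give x_i(lambda) = clamp(y_i - lambda*w_i, 0, C),
   and g(lambda) = w'x(lambda) is nonincreasing, so we bisect on g = 0. */
static void project_svm(const double *y, const int *w, int n,
                        double C, double *x) {
    /* Bracket the multiplier: every component is clamped once
       |lambda| exceeds max_i |y_i| + C. */
    double hi = 1.0, lo = -1.0;
    for (int i = 0; i < n; i++) {
        double a = fabs(y[i]) + C + 1.0;
        if (a > hi) { hi = a; lo = -a; }
    }
    for (int it = 0; it < 200; it++) {   /* 200 halvings: ample precision */
        double lam = 0.5 * (lo + hi), g = 0.0;
        for (int i = 0; i < n; i++) {
            double xi = y[i] - lam * w[i];
            if (xi < 0.0) xi = 0.0; else if (xi > C) xi = C;
            g += w[i] * xi;
        }
        if (g > 0.0) lo = lam; else hi = lam;
    }
    double lam = 0.5 * (lo + hi);
    for (int i = 0; i < n; i++) {
        double xi = y[i] - lam * w[i];
        if (xi < 0.0) xi = 0.0; else if (xi > C) xi = C;
        x[i] = xi;
    }
}
```

Specialized methods such as variable fixing, the semismooth Newton method, and QWMVA solve the same subproblem in far fewer passes over the data; bisection is shown only because it is the shortest correct solver.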
As natural applications of the (CQK) model, two problems arising in the training of SVMs are used for the experiments. The first problem, based on the UCI Adult database, is to predict whether an adult earns more than US$50,000 per year or not, using the data from the 1994 US Census; the parameters C = 1 and σ = √10 are used. The second problem, based on the MNIST database of handwritten digits, is to decide whether a 20×20 pixel image represents the digit 8 or not; the parameters C = 10 and σ = 5 are used. We refer the reader to [9] for a detailed description of the experiments.
The results are presented in Table 5. For the UCI Adult test, Algorithm QWMVA2 and Algorithm QWMVA outperform the other methods. In the MNIST application, Algorithm QWMVA2 and Algorithm QWMVA are slower than the Newton method, while Algorithm QWMVA2 remains competitive with the BLGnapsack method.
Set n QWMVA2 QWMVA Newton Secant BLG Var. Fixing
Iter. Time Iter. Time Iter. Time Iter. Time Time Iter. Time
UCI 1065 5.02 0.023 7.84 0.021 4.98 0.022 8.20 0.031 0.025 7.84 0.017
UCI 2265 5.28 0.061 8.55 0.061 5.28 0.081 9.03 0.109 0.093 8.55 0.066
UCI 3185 5.37 0.102 8.23 0.097 5.31 0.133 9.06 0.176 0.165 8.23 0.127
UCI 6370 5.51 0.249 8.67 0.248 5.44 0.316 9.46 0.445 0.402 8.67 0.363
UCI 12740 5.87 0.540 9.37 0.555 5.80 0.665 9.98 1.036 0.843 9.37 0.776
MNIST 800 3.32 0.018 5.72 0.022 3.14 0.015 6.77 0.016 0.012 5.72 0.017
MNIST 1600 3.56 0.038 5.94 0.045 3.31 0.029 7.01 0.033 0.029 5.94 0.035
MNIST 3200 3.65 0.092 6.37 0.111 3.45 0.068 7.11 0.074 0.103 6.37 0.098
MNIST 6400 3.56 0.189 6.73 0.231 3.47 0.161 7.25 0.170 0.222 6.73 0.260
MNIST 11702 3.74 0.357 7.39 0.438 3.59 0.322 7.32 0.344 0.398 7.39 0.525

Table 5. Results for the projection in SVM training. The values reported are the average number of iterations and the average computing time in milliseconds. For BLGnapsack only the time is reported.

7. Conclusions. In this article some algorithms for solving the separable convex continuous quadratic knapsack problem (CQK) are investigated. We study the connections between solving the (CQK) model by weighted average, Kiwiel's variable fixing method, and the semismooth Newton method, and show that Algorithm QWMVA and Kiwiel's method generate the same iterates. Based on these analyses, we develop a new algorithm, QWMVA2, which performs very well in practice.

Acknowledgments. The authors thank Cominetti, Mascarenhas, and Silva (who owns the copyright) for releasing their source code; our experiments are built on top of it. Thanks also to the editors and reviewers for reading this article and for making suggestions.
REFERENCES
[1] E. G. Birgin, J. M. Martínez and M. Raydan, Algorithm 813: SPG—software for convex-constrained optimization, ACM Trans. Math. Softw., 27 (2001), 340–349.
[2] G. R. Bitran and A. C. Hax, Disaggregation and resource allocation using convex knapsack
problems with bounded variables, Manag. Sci., 27 (1981), 431–441.
[3] B. E. Boser, I. M. Guyon and V. N. Vapnik, A training algorithm for optimal margin classifiers,
in Proc. 5th Annu. Wkshp. Comput. Learning Theory (COLT’92) (ed. D. Haussler), ACM
Press, (1992), 144–152.
[4] K. M. Bretthauer and B. Shetty, Quadratic resource allocation with generalized upper bounds,
Oper. Res. Lett., 20 (1997), 51–57.
[5] K. M. Bretthauer, B. Shetty and S. Syam, A branch-and-bound algorithm for integer quadratic
knapsack problems, ORSA J. Comput., 7 (1995), 109–116.
[6] K. M. Bretthauer, B. Shetty and S. Syam, A projection method for the integer quadratic
knapsack problem, J. Oper. Res. Soc., 47 (1996), 457–462.
[7] P. Brucker, An O(n) algorithm for quadratic knapsack problems, Oper. Res. Lett., 3 (1984),
163–166.
[8] P. H. Calamai and J. J. Moré, Quasi-Newton updates with bounds, SIAM J. Numer. Anal.,
24 (1987), 1434–1441.
[9] R. Cominetti, W. F. Mascarenhas and P. J. S. Silva, A Newton’s method for the continuous
quadratic knapsack problem, Math. Prog. Comp., 6 (2014), 151–169.
[10] C. Cortes and V. Vapnik, Support-vector networks, Machine Learning, 20 (1995), 273–297.
[11] S. Cosares and D. S. Hochbaum, Strongly polynomial algorithms for the quadratic trans-
portation problem with a fixed number of sources, Math. Oper. Res., 19 (1994), 94–111.
[12] R. W. Cottle, S. G. Duvall and K. Zikan, A Lagrangian relaxation algorithm for the con-
strained matrix problem, Nav. Res. Logist. Q., 33 (1986), 55–76.
[13] Y.-H. Dai and R. Fletcher, New algorithms for singly linearly constrained quadratic programs
subject to lower and upper bounds, Math. Program., 106 (2006), 403–421.
[14] T. A. Davis, W. W. Hager and J. T. Hungerford, An efficient hybrid algorithm for the
separable convex quadratic knapsack problem, ACM Trans. Math. Softw., 42 Article 22
(2016), 25 pages.
[15] J.-P. Dussault, J. A. Ferland and B. Lemaire, Convex quadratic programming with one con-
straint and bounded variables, Math. Program., 36 (1986), 90–104.
[16] B. C. Eaves, On quadratic programming, Manag. Sci., 17 (1971), 698–711.
[17] C. Edirisinghe and J. Jeong, Tight bounds on indefinite separable singly-constrained quadratic
programs in linear-time, Math. Program., 164 (2017), 193–227.
[18] A. Frangioni and E. Gorgone, A library for continuous convex separable quadratic knapsack
problems, Eur. J. Oper. Res., 229 (2013), 37–40.
[19] W. W. Hager and J. T. Hungerford, Continuous quadratic programming formulations of
optimization problems on graphs, Eur. J. Oper. Res., 240 (2015), 328–337.
[20] W. W. Hager and Y. Krylyuk, Graph partitioning and continuous quadratic programming,
SIAM J. Disc. Math., 12 (1999), 500–523.
[21] M. Held, P. Wolfe and H. P. Crowder, Validation of subgradient optimization, Math. Program.,
6 (1974), 62–88.
[22] R. Helgason, J. Kennington and H. Lall, A polynomially bounded algorithm for a singly
constrained quadratic program, Math. Program., 18 (1980), 338–343.
[23] D. S. Hochbaum and S. P. Hong, About strongly polynomial time algorithms for quadratic
optimization over submodular constraints, Math. Program., 69 (1995), 269–309.
[24] J. Jeong, Indefinite Knapsack Separable Quadratic Programming: Methods and Applications,
Ph.D. Dissertation, University of Tennessee, Knoxville, 2014. Available from: https://trace.
tennessee.edu/utk_graddiss/2704/
[25] N. Katoh, A. Shioura and T. Ibaraki, Resource allocation problems, in Handbook of Com-
binatorial Optimization (eds. P.M. Pardalos, DZ Du and R.L. Graham), Springer, (2013),
2897–2988.
[26] K. C. Kiwiel, On linear-time algorithms for the continuous quadratic knapsack problem,
J. Optim. Theory Appl., 134 (2007), 549–554.
[27] K. C. Kiwiel, Breakpoint searching algorithms for the continuous quadratic knapsack problem,
Math. Program., 112 (2008), 473–491.
[28] K. C. Kiwiel, Variable fixing algorithms for the continuous quadratic knapsack problem,
J. Optim. Theory Appl., 136 (2008), 445–458.
[29] N. Maculan, C. P. Santiago, E. M. Macambira and M. H. C. Jardim, An O(n) algorithm for
projecting a vector on the intersection of a hyperplane and a box in Rn , J. Optim. Theory
Appl., 117 (2003), 553–574.
[30] N. Megiddo and A. Tamir, Linear time algorithms for some separable quadratic programming
problems, Oper. Res. Lett., 13 (1993), 203–211.
[31] S. S. Nielsen and S. A. Zenios, Massively parallel algorithms for singly constrained convex
programs, ORSA J. Comput., 4 (1992), 166–181.
[32] P. M. Pardalos and N. Kovoor, An algorithm for a singly constrained class of quadratic
programs subject to upper and lower bounds, Math. Program., 46 (1990), 321–328.
[33] P. M. Pardalos, Y. Ye and C.-G. Han, Algorithms for the solution of quadratic knapsack
problems, Linear Algebra Appl., 152 (1991), 69–91.
[34] M. Patriksson, A survey on the continuous nonlinear resource allocation problem, Eur. J.
Oper. Res., 185 (2008), 1–46.
[35] M. Patriksson and C. Strömberg, Algorithms for the continuous nonlinear resource allocation
problem—New implementations and numerical studies, Eur. J. Oper. Res., 243 (2015), 703–
722.
[36] A. G. Robinson, N. Jiang and C. S. Lerme, On the continuous quadratic knapsack problem,
Math. Program., 55 (1992), 99–108.
[37] B. Shetty and R. Muthukrishnan, A parallel projection for the multicommodity network
model, J. Oper. Res. Soc., 41 (1990), 837–842.
[38] H.-M. Sun and R.-L. Sheu, Minimum variance allocation among constrained intervals, J.
Glob. Optim., 74 (2019), 21–44.
[39] J. A. Ventura, Computational development of a Lagrangian dual approach for quadratic
networks, Networks, 21 (1991), 469–485.

Received March 2020; 1st revision September 2020; Final revision October 2020;
Early access November 2021.
E-mail address: sunhm@mail.nutn.edu.tw
E-mail address: jenny0208sun@gmail.com
