
Quadratic programming problems - a review on

algorithms and applications
(Active-set and interior point methods)
Dr. Abebe Geletu
Ilmenau University of Technology
Department of Simulation and Optimal Processes (SOP)



Topics
Introduction
Quadratic programming problems with equality constraints
Quadratic programming problems with inequality constraints
Primal interior point methods for quadratic programming problems
Primal-dual interior point methods for quadratic programming problems
Linear model-predictive control (LMPC) and current issues
References and Resources



Introduction - Quadratic optimization (is not programming)
• A general quadratic optimization (programming) problem  

(QP)    min_x   (1/2) x^T Q x + q^T x
        s.t.    Ax = a,
                Bx ≤ b,
                x ≥ 0,

where Q ∈ R^{n×n} is symmetric (not necessarily positive definite),
A ∈ R^{m1×n}, B ∈ R^{m2×n} and m1 ≤ n.
• Sometimes, instead of Ax = a we write

    a_i^T x = a_i,   i = 1, ..., m1,

and instead of Bx ≤ b we write

    b_j^T x ≤ b_j,   j = 1, ..., m2,

where a_i^T is the i-th row of A and b_j^T is the j-th row of B.
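As a quick numerical illustration, the general (QP) above can be posed directly in a modeling tool; a minimal sketch assuming the open-source cvxpy package and synthetic data (all names and values below are my own, not from the slides):

```python
# Minimal sketch: posing the general (QP) with cvxpy (assumed installed, e.g. `pip install cvxpy`).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m1, m2 = 5, 2, 3
M = rng.standard_normal((n, n))
Q = M.T @ M + np.eye(n)                 # symmetric positive definite for this toy example
q = rng.standard_normal(n)
A = rng.standard_normal((m1, n))
B = rng.standard_normal((m2, n))
x0 = np.abs(rng.standard_normal(n))     # a known feasible point, used to build consistent data
a = A @ x0
b = B @ x0 + 1.0

x = cp.Variable(n)
objective = cp.Minimize(0.5 * cp.quad_form(x, Q) + q @ x)
constraints = [A @ x == a, B @ x <= b, x >= 0]
prob = cp.Problem(objective, constraints)
prob.solve()                            # cvxpy selects a suitable QP/conic solver
print(prob.status, x.value)
```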


Introduction...
Some applications of QP’s:
• Least squares approximation and estimation
• Portfolio optimization
• Signal and image processing, computer vision, etc.
• Optimal control, linear model predictive control, etc.
• PDE-constrained optimization problems in CFD, CT, topology/shape optimization, etc.
• Sequential quadratic programming (SQP) methods for NLP
• etc.
Much of what has been achieved to date in the solution of nonlinear optimization problems has been
attained through methods of quadratic optimization and techniques of numerical linear algebra. It is
therefore not surprising to see a profound interest in QP's and in their real-time solution.
QP's are now a driving force behind modern control technology.



SQP in brief

Given a constrained optimization problem

(NLP)   min_x  f(x)
        s.t.   h_i(x) = 0,  i = 1, ..., m,
               g_j(x) ≤ 0,  j = 1, ..., p,

with Lagrange function L(x, λ, µ) = f(x) + λ^T h(x) + µ^T g(x).

A rough scheme for the SQP algorithm:
Step 0: start from x^0.
Step k: x^{k+1} = x^k + α_k d^k, where
- the search direction d^k is computed using a quadratic programming problem

    (QP)_k   min_d  (1/2) d^T H_k d + ∇f(x^k)^T d
             s.t.   ∇h_i(x^k)^T d + h_i(x^k) = 0,  i = 1, ..., m,
                    ∇g_j(x^k)^T d + g_j(x^k) ≤ 0,  j ∈ A(x^k) ⊂ {1, ..., p},

  where H_k is the Hessian of the Lagrangian ∇²_x L(x^k, λ^k, µ^k) or a quasi-Newton approximation of it;
- the step length α_k is determined by a 1D minimization of a merit function M(x^k + α d^k), i.e. a function that guarantees a sufficient decrease in the objective f(x) and satisfaction of the constraints along d^k with an appropriate step length α_k.
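For orientation only: an SQP-type method is readily available in SciPy as the SLSQP routine; the toy problem below is my own illustration, not the slides' algorithm. Note that SciPy uses the convention g(x) ≥ 0 for inequality constraints, the opposite sign of the convention above.

```python
# Hedged sketch: solving a small NLP with an SQP-type method (SciPy's SLSQP).
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2   # objective f(x)
h = lambda x: x[0] + x[1] - 2.0                        # equality constraint h(x) = 0
g = lambda x: x[0]                                     # SciPy convention: g(x) >= 0

res = minimize(f, x0=np.array([2.0, 0.0]), method="SLSQP",
               constraints=[{"type": "eq", "fun": h},
                            {"type": "ineq", "fun": g}])
print(res.x, res.fun)   # expected solution approximately (0.25, 1.75)
```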

QP introduction - trajectory tracking for autonomous vehicles

Figure: Trajectory tracking in a safe corridor

(OptCtrl)  min_u  (1/2) ∫_{t0}^{tf} { [x(t) − s(t)]^T M [x(t) − s(t)] + u(t)^T R u(t) } dt
           s.t.   ẋ(t) = A x(t) + B u(t),  t0 ≤ t ≤ tf,
                  x(t0) = x0,  x(tf) = xf,
                  xmin(t) ≤ x(t) ≤ xmax(t),
                  umin ≤ u(t) ≤ umax.

Introduction to QP - Special classes

• Practical engineering applications frequently lead to large-scale QP's.
• Seldom are these problems solved analytically.
• Tracking problems for fast systems are better treated using model-predictive control (MPC).
• Numerical methods strongly depend on
  - the properties of the matrix Q (positive definite or not),
  - whether there are only equality constraints Ax = b,
  - whether there are only bound constraints xmin ≤ x ≤ xmax,
  - whether the matrices exhibit sparsity properties and/or block structures,
  - etc.

Introduction to QP - Some classification of QP's

Unconstrained QP:
  (QP)    min_x  (1/2) x^T Q x + q^T x

Box-constrained QP:
  (QP)_B  min_x  (1/2) x^T Q x + q^T x   s.t.  a ≤ x ≤ b

Equality constrained QP:
  (QP)_E  min_x  (1/2) x^T Q x + q^T x   s.t.  Ax = a

Inequality constrained QP:
  (QP)_I  min_x  (1/2) x^T Q x + q^T x   s.t.  Ax = a,  Bx ≤ b.

Introduction to QP - Special classes (I)

Unconstrained QP:   min_x  (1/2) x^T Q x + q^T x

- Known method: conjugate gradient (CG) methods. Basically, CG is used when the matrix Q is symmetric and positive definite.
- Variants of CG:
  • Hestenes-Stiefel 1952,
  • Fletcher-Reeves 1964,
  • Polak-Ribière 1969.
As in all iterative methods, CG may require preconditioning techniques to achieve acceptable convergence, leading to the preconditioned CG (PCG) method. This is necessary for large-scale QP's.
Read: Matrix Computations by Golub & Van Loan; An Introduction to the Conjugate Gradient Method Without the Agonizing Pain by J. R. Shewchuk.
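A minimal CG sketch for the unconstrained QP, i.e. for the linear system Qx = −q with Q symmetric positive definite; this is my own illustration (no preconditioning), not a production implementation.

```python
# Minimal conjugate gradient sketch for min 0.5 x^T Q x + q^T x with Q symmetric positive definite.
import numpy as np

def cg_qp(Q, q, x0=None, tol=1e-10, max_iter=None):
    n = q.size
    x = np.zeros(n) if x0 is None else x0.copy()
    r = -(Q @ x + q)                  # residual = negative gradient
    d = r.copy()
    rs = r @ r
    for _ in range(max_iter or n):
        Qd = Q @ d
        alpha = rs / (d @ Qd)         # exact line search along the search direction d
        x += alpha * d
        r -= alpha * Qd
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        d = r + (rs_new / rs) * d     # Fletcher-Reeves type direction update
        rs = rs_new
    return x
```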

Introduction to QP - Special classes (II)

Box-constrained QP's:   min_x  (1/2) x^T Q x + q^T x   s.t.  xmin ≤ x ≤ xmax.

- This form of QP commonly arises, e.g.:
  • in image recovery and deblurring, with lower and upper bounds on the gray-levels of the image;
  • in linear quadratic control or linear model predictive control, with bound constraints on the control; etc.
For instance, if

    min   (1/2) [ x^T Q x + u^T R u ]
    s.t.  Ax + Bu = b,
          umin ≤ u ≤ umax,

then, theoretically, we can solve for x in terms of u (quasi-sequential), x = A^{-1}(b − Bu), so that we have

    min_u  (1/2) { [A^{-1}(b − Bu)]^T Q [A^{-1}(b − Bu)] + u^T R u }
    s.t.   umin ≤ u ≤ umax.
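A gradient projection sketch for the box-constrained QP, assuming Q is symmetric positive definite so that a fixed step 1/L with L = λ_max(Q) is valid; this illustrates the iterative projection idea in general, not one specific method from the slides.

```python
# Projected gradient sketch for min 0.5 x^T Q x + q^T x  s.t.  xmin <= x <= xmax.
import numpy as np

def projected_gradient_qp(Q, q, xmin, xmax, iters=500):
    L = np.linalg.eigvalsh(Q)[-1]              # Lipschitz constant of the gradient (largest eigenvalue)
    x = np.clip(np.zeros_like(q), xmin, xmax)  # start from the projection of the origin onto the box
    for _ in range(iters):
        grad = Q @ x + q
        x = np.clip(x - grad / L, xmin, xmax)  # gradient step followed by projection onto the box
    return x
```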

Box-constrained QP's

- Known methods: iterative projection algorithms.
- Variants:
  • the Barzilai-Borwein method (Barzilai & Borwein 1988),
  • the Nesterov gradient method (Nesterov 1983),
  • trust region methods (Celis, Dennis and Tapia 1985).
- Important extensions and application-specific modifications of the above: gradient projection methods, etc.

Recommended reading:
• Raydan, M.: On the Barzilai and Borwein choice of steplength for the gradient method. IMA J. Numer. Anal., 13(1993), 321-326.
• Friedlander, A., Martinez, J.M., Raydan, M.: A new method for large-scale box constrained convex quadratic minimization problems. Optim. Methods and Software, 7(1995), 149-154.
• Conn, A.R., Gould, N.I.M., Toint, Ph.L.: Global convergence of a class of trust region algorithms for optimization with simple bounds. SIAM J. Numer. Anal., 25(1988).
• Celis, M.R., Dennis, J.E., Tapia, R.A.: A trust region strategy for nonlinear equality constrained optimization. In: Numerical Optimization 1984 (P. Boggs, R. Byrd and R. Schnabel, eds.), SIAM, Philadelphia, 1985, 71-82.
• Hu and Dai, Y.-H.: Inexact Barzilai-Borwein method for saddle point problems. Numerical Linear Algebra with Applications, 14(2007), Issue 4, 299-317.
• Trust Region Methods by A. Conn, N. Gould and Ph. Toint, SIAM.

Some recent engineering applications:
• Beck, A., Teboulle, M.: Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. on Image Processing, 18(2009), 2419-2434.
• Zometa, I., Kögel, M., Faulwasser, T., Findeisen, R.: Implementation aspects of model predictive control for embedded systems. Proceedings of the 2012 American Control Conference, Montreal, Canada, 1205-1210.
• Houska, B., Ferreau, H.J., Diehl, M.: An auto-generated real-time iteration algorithm for nonlinear MPC in the microseconds range. Automatica, 47(2011), 2279-2285.

Equality constrained QP's (III)

(QP)    min_x  (1/2) x^T Q x + q^T x
        s.t.   Ax = a.                                  (1)

There are two cases:
(a) Q is symmetric and positive semi-definite ⇒ QP is convex;
(b) Q is symmetric but not positive semi-definite ⇒ QP is non-convex.

Note: Whether A has full rank or is rank-deficient (i.e. rank(A) = m1 or rank(A) < m1) is not a serious issue.

Equality constrained QP's

• Lagrange function: L(x, λ) = (1/2) x^T Q x + q^T x + λ^T (Ax − b).
• KKT conditions (KKT equation):

    Qx + A^T λ = −q,
    Ax = b,

  i.e.

    [ Q  A^T ] [ x ]   [ −q ]
    [ A   O  ] [ λ ] = [  b ]
    \________/
       =: K

⇒ the optimization problem is reduced to the solution of a (possibly large-scale) system of linear equations.
• K is a symmetric matrix, but it may or may not be positive definite.
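The reduction to a linear system can be checked numerically; a small sketch that assembles K and solves the KKT equation with a dense direct solver (illustrative helper, assuming K is nonsingular):

```python
# Sketch: assemble the KKT matrix K and solve K [x; lambda] = [-q; b] directly.
import numpy as np

def solve_eq_qp_kkt(Q, q, A, b):
    n, m1 = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T],
                  [A, np.zeros((m1, m1))]])
    rhs = np.concatenate([-q, b])
    sol = np.linalg.solve(K, rhs)      # assumes K is nonsingular
    return sol[:n], sol[n:]            # x*, lambda*
```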

N. eg. On practical conditions for the existence and uniqueness of solutions to the general equality quadratic programming problem. 90 – 99. I If Kis not positive definite.E. Comput. • Gould... 32(1985). Quadratic programming problems .Equality constrained QP’s Some suggestions: (A) A numerical analyst approach: I If K is positive definite. Find more. SIAM J.. then use a pre-conditioned conjugate gradient (PCG) method..M. Hiribar. 23(2001). Math. then transform Ky = p to  K > K y = K > p and then apply PCG.a review on algorithms and applications (Active-set and interior point methods) TU Ilmenau . J.I. N.. Nocedal.M. (B) An optimization specialist approach: • Use (gradient) projection along with the conjugate gradient method. Sci. 1376 – 1395. Recommended reading: • Gould. 2nd ed. Notes that the matrix K > K is positive definite • Almost all state-of-the-art linear algebra packages include a PCG solver. by Golub and Van Loan. Programming. M. On the solution of equality constrained quadratic programming problems arising in optimization.I. from: Matrix computations.

Equality constrained QP's

NB:
• If Q is positive definite, the solution of the KKT equation is a solution of the QP (convex optimization problem); otherwise, the solution of the PCG solver needs to be verified.
• Small- to medium-scale KKT equations can be efficiently solved by a direct linear algebra algorithm, e.g. using Cholesky or QR factorizations.

Special case: when Q is positive definite and A has full rank (rank(A) = m1), the matrix

    [ Q  A^T ]
    [ A   O  ]

is invertible, and

    λ* = −(A Q^{-1} A^T)^{-1} (b + A Q^{-1} q),
    x* = −Q^{-1} (q + A^T λ*).

However, in general, avoid explicit inversion of a matrix in computations.

Further recommended reading: Practical Optimization by Gill, Murray and Wright.
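To avoid forming Q^{-1} explicitly, the special-case formulas can be evaluated with a Cholesky factorization of Q; a sketch using SciPy's cho_factor/cho_solve under the stated assumptions (Q positive definite, rank(A) = m1):

```python
# Sketch: Schur-complement solve of the KKT system without explicitly inverting Q.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def solve_eq_qp_schur(Q, q, A, b):
    c = cho_factor(Q)                          # Q = L L^T (Q assumed positive definite)
    Qinv_q = cho_solve(c, q)                   # Q^{-1} q
    Qinv_At = cho_solve(c, A.T)                # Q^{-1} A^T
    S = A @ Qinv_At                            # Schur complement A Q^{-1} A^T
    lam = -np.linalg.solve(S, b + A @ Qinv_q)  # lambda* = -(A Q^{-1} A^T)^{-1} (b + A Q^{-1} q)
    x = -cho_solve(c, q + A.T @ lam)           # x* = -Q^{-1} (q + A^T lambda*)
    return x, lam
```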

QP with inequality constraints

(QP)_I   min_x  (1/2) x^T Q x + q^T x
         s.t.   Ax = a,
                Bx ≤ b.

There are two classes of algorithms:
• the active-set method (ASM),
• the interior point method (IPM).

Quadratic optimization - Active Set Method

Active Set Method Strategy:
• Start from an arbitrary point x^0.
• Find the next iterate by setting x^{k+1} = x^k + α_k d^k, where α_k is a step length and d^k is a search direction.

Questions:
• How to determine the search direction d^k?
• How to determine the step length α_k?

(A) Determination of the search direction:
• At the current iterate x^k, determine the index set of the active inequality constraints

    A_k = { j | b_j^T x^k − b_j = 0,  j = 1, ..., m2 }.

Quadratic optimization - Active Set Method

• Solve the direction-finding problem

    min_d  (1/2) (x^k + d)^T Q (x^k + d) + q^T (x^k + d)
    s.t.   a_i^T (x^k + d) = a_i,  i = 1, ..., m1,
           b_j^T (x^k + d) = b_j,  j ∈ A_k.

• Expand

    (1/2)(x^k + d)^T Q (x^k + d) + q^T (x^k + d)
      = (1/2) d^T Q d + (1/2) d^T Q x^k + (1/2)(x^k)^T Q d + (1/2)(x^k)^T Q x^k + q^T d + q^T x^k,

  and note that a_i^T (x^k + d) = a_i ⇒ a_i^T d = a_i − a_i^T x^k = 0, i = 1, ..., m1; similarly, b_j^T d = b_j − b_j^T x^k = 0, j ∈ A_k.
• Simplify these expressions and drop constants to obtain

    min_d  (1/2) d^T Q d + (Qx^k + q)^T d
    s.t.   Ad = 0,
           B̃d = 0,

  where B̃ is the matrix whose rows are b_j^T, j ∈ A_k. Set g^k = Qx^k + q.

Quadratic optimization - Active Set Method

• To obtain the search direction d^k, solve the equality constrained QP

    min_d  (1/2) d^T Q d + (g^k)^T d
    s.t.   Ad = 0,
           B̃d = 0.

• The KKT optimality conditions lead to the system

    Qd + g^k + A^T λ + B̃^T µ̃ = 0,
    Ad = 0,
    B̃d = 0,

  i.e.

    [ Q   A^T  B̃^T ] [ d ]   [ −g^k ]
    [ A    O    O  ] [ λ ] = [   0  ]        (∗)
    [ B̃    O    O  ] [ µ̃ ]   [   0  ],

  where λ and µ̃ are the Lagrange multipliers corresponding to the equality and active inequality constraints, respectively.
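A sketch of how the system (∗) can be assembled and solved at an iterate x^k, where B_act collects the rows b_j^T with j in the current active set (illustrative helper names, not from the slides):

```python
# Sketch: solve the direction-finding KKT system (*) for d, lambda and mu_tilde.
import numpy as np

def direction_kkt(Q, q, A, B_act, x_k):
    n = Q.shape[0]
    m1, ma = A.shape[0], B_act.shape[0]
    g = Q @ x_k + q                                   # g^k = Q x^k + q
    K = np.block([
        [Q,      A.T,                 B_act.T],
        [A,      np.zeros((m1, m1)),  np.zeros((m1, ma))],
        [B_act,  np.zeros((ma, m1)),  np.zeros((ma, ma))],
    ])
    rhs = np.concatenate([-g, np.zeros(m1), np.zeros(ma)])
    sol = np.linalg.solve(K, rhs)                     # assumes the KKT matrix is nonsingular
    return sol[:n], sol[n:n + m1], sol[n + m1:]       # d, lambda, mu_tilde
```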

Quadratic optimization - Active Set Method

- Apply algorithms for equality constrained QP's to obtain d^k.
- There are two cases: either d^k = 0 or d^k ≠ 0.

Case 1: d^k = 0. If d^k is a solution of the subproblem, then there are λ^k and µ̃^k such that (KKT conditions for the equality constrained QP)

    Qd^k + g^k + A^T λ^k + B̃^T µ̃^k = 0,
    Ad^k = 0,   B̃d^k = 0,   µ̃_j^k ≥ 0, j ∈ A_k.

Since d^k = 0, the system above reduces to

    g^k + A^T λ^k + B̃^T µ̃^k = 0.         (∗∗)

Case 1-a: If µ̃^k ≥ 0, then corresponding to the non-active constraints we can set µ_j^k = 0 for j ∈ {1, 2, ..., m2} \ A_k. Hence, for the original problem we have

    g^k + A^T λ^k + B^T µ^k = 0,   Ax^k − a = 0,   Bx^k − b ≤ 0,
    (Bx^k − b)^T µ^k = 0,   µ^k ≥ 0,

i.e. the KKT conditions of (QP)_I hold at x^k.

Quadratic optimization - Active Set Method

• If d^k = 0 and µ̃^k ≥ 0, then x^{k+1} = x^k is a KKT point. Stop!
• Case 1-b: If some components of µ̃^k are negative, then x^k is not an optimal solution. Let µ̃_{j0} = min{ µ̃_j | µ̃_j < 0 }. Remove the index j0 from A_k and solve the quadratic programming problem

    min_d  (1/2) d^T Q d + (g^k)^T d
    s.t.   a_i^T d = 0,  i = 1, ..., m1,
           b_j^T d = 0,  j ∈ A_k \ {j0}.

  Then the direction d^k obtained is a descent direction for (QP)_I.

Quadratic optimization - Active Set Method

Case 2: d^k ≠ 0.
• Determine a step length α_k that guarantees that x^{k+1} = x^k + α_k d^k is feasible for (QP)_I; i.e. we need Ax^{k+1} = A(x^k + α_k d^k) = a and Bx^{k+1} = B(x^k + α_k d^k) ≤ b.
• For the equality constraints Ax = a we have

    Ax^{k+1} = A(x^k + α_k d^k) = Ax^k + α_k A d^k = a + 0 = a,

  since x^k is feasible and A d^k = 0 (from (QP)_E). So the equality constraints are satisfied for any α_k.
• For the active inequality constraints (j ∈ A_k) we similarly have b_j^T x^{k+1} = b_j^T x^k + α_k b_j^T d^k = b_j for any α_k.
• For the inactive inequality constraints at x^k (j ∉ A_k) we have b_j^T x^k < b_j. We need to determine α_k so that b_j^T (x^k + α_k d^k) ≤ b_j holds for j ∉ A_k, i.e. b_j^T x^k + α_k b_j^T d^k ≤ b_j ⇒ α_k b_j^T d^k ≤ b_j − b_j^T x^k.
  (i) If b_j^T d^k ≤ 0, then α_k b_j^T d^k ≤ b_j − b_j^T x^k is satisfied for any α_k > 0, since 0 ≤ b_j − b_j^T x^k (recall that x^k is feasible for (QP)_I).
  (ii) If b_j^T d^k > 0, then choose

    α_k = (b_j − b_j^T x^k) / (b_j^T d^k).

Quadratic optimization - Active Set Method

• A common α_k that guarantees the satisfaction of all constraints is

    α_k = min{ 1,  min{ (b_j − b_j^T x^k) / (b_j^T d^k)  |  j ∉ A_k and b_j^T d^k > 0 } }.

• Updating the active index set: observe that if α_k < 1, then α_k = (b_{j0} − b_{j0}^T x^k) / (b_{j0}^T d^k) for some j0 ∉ A_k. This implies that b_{j0}^T (x^k + α_k d^k) = b_{j0}; that is, the inequality constraint corresponding to the index j0 becomes active.
• If α_k < 1, then update the active index set as A_{k+1} = A_k ∪ {j0}.
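The step-length rule can be coded directly; a small helper (hypothetical names) returning both α_k and the blocking index j0, if any:

```python
# Sketch: alpha_k = min{1, (b_j - b_j^T x_k)/(b_j^T d_k)} over inactive j with b_j^T d_k > 0.
import numpy as np

def step_length(B, b, x_k, d_k, active_set):
    alpha, j0 = 1.0, None
    for j in range(B.shape[0]):
        if j in active_set:
            continue                         # only inactive constraints can block the step
        bjd = B[j] @ d_k
        if bjd > 0:
            ratio = (b[j] - B[j] @ x_k) / bjd
            if ratio < alpha:
                alpha, j0 = ratio, j         # constraint j0 becomes the blocking constraint
    return alpha, j0
```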

Algorithm 1: An active-set algorithm for QP
 1: Give a start vector x^0. Identify the active index set A_0. Set k = 0.
 2: while (no convergence) do
 3:   Compute g^k = Qx^k + q.
 4:   Obtain d^k, λ^k and µ̃^k by solving the KKT equations for
          min_d  (1/2) d^T Q d + (g^k)^T d
          s.t.   a_i^T d = 0, i = 1, ..., m1;   b_j^T d = 0, j ∈ A_k.
 5:   if (d^k = 0) then
 6:     if (µ̃^k ≥ 0) then
 7:       STOP! x^k is a KKT point.
 8:     else
 9:       µ̃_{j0} = min{ µ̃_j | µ̃_j < 0, j ∈ A_k }.
10:       Update the index set A_k ← A_k \ {j0} and GOTO step 4.
11:     end if
12:   end if
13:   if (d^k ≠ 0) then
14:     Compute the step length
          α_k = min{ 1,  (b_j − b_j^T x^k)/(b_j^T d^k)  |  j ∉ A_k and b_j^T d^k > 0 }.
15:     Update x^{k+1} = x^k + α_k d^k.
16:     Update the active index set: if α_k = 1, then A_{k+1} = A_k; else A_{k+1} = A_k ∪ {j0}, where α_k = (b_{j0} − b_{j0}^T x^k)/(b_{j0}^T d^k) with b_{j0}^T d^k > 0.
17:     Update k ← k + 1.
18:   end if
19: end while

Important issues:
• How to determine a starting iterate x^0 for the active-set algorithm.
• ASM requires an efficient strategy for the determination of the active sets A_k at each x^k.

Quadratic optimization - Active Set Method

Advantages of ASM:
• Since only the active constraints are considered at each iteration x^k, (QP)_E usually has only a few constraints and can be solved fast. ⇒ Large-scale (QP)_I's are easier to solve.
• In many cases the active set varies only slightly from step to step. ⇒ Data obtained from the current QP_E can be used to solve the next QP_E, known as warm starting. This is an important property, e.g. in SQP, making the active set method efficient.
• All iterates x^k are feasible for (QP)_I.

Disadvantages of ASM:
• Since the active set A_k may vary from step to step, the structure and properties (e.g. sparsity) of the constraint matrices may change.
• ASM may become slower near the optimal solution.
• For some problems ASM can be computationally expensive.

Quadratic optimization - PD interior point method

Consider

(QP)   min_x  (1/2) x^T Q x + q^T x
       s.t.   Ax = a,
              x ≥ 0,

where A ∈ R^{m1×n}. Any other constraints can be transformed to the above form:
• If there is Bx ≤ b, then add slack variables s ≥ 0 to obtain Bx + s = b, so that the constraints become

    [ A  O ] [ x ]   [ a ]
    [ B  I ] [ s ] = [ b ],      [ x ; s ] ≥ 0.

• If there is a box constraint xmin ≤ x ≤ xmax, then split it into the two constraints xmax − x ≥ 0 and x − xmin ≥ 0.

Quadratic optimization - PD interior point method

⇒ Use the logarithmic barrier function for the non-negative variables x.
⇒ The barrier problem associated with (QP) is

(BQP)_µ   min_x  (1/2) x^T Q x + q^T x − µ Σ_{i=1}^n log x_i
          s.t.   Ax = a,    x > 0.

• The Lagrange function of (BQP)_µ:

    L_µ(x, λ) = (1/2) x^T Q x + q^T x − λ^T (Ax − a) − µ Σ_{i=1}^n log x_i,    x > 0.

Quadratic optimization - PD interior point method

• The KKT conditions for (BQP)_µ are

    ∂L_µ/∂x = 0  ⇒  Qx + q − A^T λ − µ X^{-1} e = 0,
    ∂L_µ/∂λ = 0  ⇒  Ax = b,

  where e = (1, ..., 1)^T and X = diag(x).
• Define s = µ X^{-1} e. Then s ∈ R^n, s > 0 and x_i s_i = µ, i = 1, ..., n.
• Hence the KKT conditions for (BQP)_µ take the form

    Qx − A^T λ − s = −q,
    Ax = b,
    x_i s_i = µ,  i = 1, ..., n,    x > 0.

Quadratic optimization - PD interior point method

• For each fixed parameter µ, the KKT condition for (BQP)_µ is a system of nonlinear equations (nonlinearity due to the product x_i s_i = µ). Set

    F_µ(x, λ, s) = [ Qx − A^T λ − s + q ;  Ax − b ;  XSe − µe ],

  where X = diag(x) ∈ R^{n×n} and S = diag(s) ∈ R^{n×n}.
• There are 2n + m1 variables (x, λ, s).
• For each fixed µ > 0, the equation F_µ(x, λ, s) = 0 should be solved.
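The residual F_µ can be written down directly; a minimal sketch (own helper name) matching the block definition above:

```python
# Sketch: residual F_mu(x, lambda, s) for the barrier KKT system.
import numpy as np

def F_mu(Q, q, A, b, x, lam, s, mu):
    r_dual = Q @ x - A.T @ lam - s + q    # stationarity residual
    r_prim = A @ x - b                    # primal feasibility residual
    r_cent = x * s - mu                   # complementarity residual, componentwise x_i s_i - mu
    return np.concatenate([r_dual, r_prim, r_cent])
```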

Quadratic optimization - PD interior point method

• The system can be solved using a Newton-like algorithm.

A general Newton algorithm:
Step 0: Give an initial iterate (x^0, λ^0, s^0) with (x^0, s^0) > 0.
Step k: Given (x^k, λ^k, s^k):
  ◦ solve the system of equations

        JF_µ(x^k, λ^k, s^k) d = −F_µ(x^k, λ^k, s^k)

    to determine the search direction d^k = (Δx_k, Δλ_k, Δs_k);
  ◦ determine a step length α_k;
  ◦ set
        x^{k+1} = x^k + α_k Δx_k,
        λ^{k+1} = λ^k + α_k Δλ_k,
        s^{k+1} = s^k + α_k Δs_k.

Quadratic optimization - PD interior point method

• At each iteration the system of linear equations

    JF_µ(x^k, λ^k, s^k) d = −F_µ(x^k, λ^k, s^k)

  should be solved. In interior-point methods, most of the computational effort is spent on solving this system of linear equations to determine d^k. This should be accomplished by an efficient linear algebra solver.
• The Jacobian matrix JF_µ(x^k, λ^k, s^k) has the special form

    JF_µ(x^k, λ^k, s^k) = [ Q    −A^T   −I  ;
                            A      O     O  ;
                            S^k    O    X^k ],

  where X^k = diag(x^k) and S^k = diag(s^k).
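A sketch assembling the Jacobian in the block form above and computing the Newton direction with a dense solver; in practice a structure-exploiting (sparse or block) solver would be used instead.

```python
# Sketch: assemble J F_mu and solve for the Newton direction d = (dx, dlam, ds).
import numpy as np

def newton_direction(Q, A, x, s, rhs):
    n, m1 = Q.shape[0], A.shape[0]
    J = np.block([
        [Q,           -A.T,                -np.eye(n)],
        [A,            np.zeros((m1, m1)),  np.zeros((m1, n))],
        [np.diag(s),   np.zeros((n, m1)),   np.diag(x)],
    ])
    d = np.linalg.solve(J, -rhs)             # dense solve; exploit the block structure in real codes
    return d[:n], d[n:n + m1], d[n + m1:]    # dx, dlam, ds
```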

Quadratic optimization - PD interior point method

• There are various primal-dual interior-point algorithms with modifications of the above general algorithm:
  ∗ path-following algorithm,
  ∗ affine-scaling algorithm,
  ∗ Mehrotra predictor-corrector algorithm, etc.

Algorithm 2: Path-following algorithm for QP
1: Choose σmin, σmax. Choose initial iterates (x^0, λ^0, s^0) with (x^0, s^0) > 0. Set k = 0.
2: while ( (x^k)^T s^k / n > ε ) do
3:   Choose σ_k ∈ [σmin, σmax] and set µ_k = (x^k)^T s^k / n.
4:   Solve the system

        JF_µ(x^k, λ^k, s^k) d = − [ Qx^k − A^T λ^k − s^k + q ;  Ax^k − b ;  X^k S^k e − σ_k µ_k e ]

     to determine d^k = (Δx_k, Δλ_k, Δs_k).
5:   Choose α_max as the largest value such that (x^k, s^k) + α_max (Δx_k, Δs_k) > 0.
6:   Set α_k = min{1, η_k α_max} for some η_k ∈ (0, 1).
7:   Set (x^{k+1}, λ^{k+1}, s^{k+1}) = (x^k, λ^k, s^k) + α_k (Δx_k, Δλ_k, Δs_k).
8:   Update k ← k + 1.
9: end while
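Putting the pieces together, a compact and deliberately simplified path-following sketch that reuses the F_mu and newton_direction helpers sketched above; it assumes a starting point with (x, s) > 0 and fixes σ_k = 0.1 and η_k = 0.95 for simplicity.

```python
# Simplified path-following sketch for (QP): min 0.5 x^T Q x + q^T x  s.t.  Ax = b, x >= 0.
import numpy as np

def path_following_qp(Q, q, A, b, x, lam, s, eps=1e-8, sigma=0.1, eta=0.95, max_iter=100):
    n = x.size
    for _ in range(max_iter):
        mu = x @ s / n                                   # duality measure
        if mu <= eps:
            break
        rhs = F_mu(Q, q, A, b, x, lam, s, sigma * mu)    # centered residual with target sigma*mu
        dx, dlam, ds = newton_direction(Q, A, x, s, rhs)
        # largest alpha keeping (x, s) + alpha (dx, ds) strictly positive
        step = np.concatenate([dx, ds])
        vals = np.concatenate([x, s])
        neg = step < 0
        alpha_max = (-vals[neg] / step[neg]).min() if neg.any() else 1.0
        alpha = min(1.0, eta * alpha_max)
        x, lam, s = x + alpha * dx, lam + alpha * dlam, s + alpha * ds
    return x, lam, s
```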

Quadratic optimization - PD interior point method

• A commonly used termination criterion is

    (x^k)^T s^k / n ≤ ε

  for some termination tolerance ε > 0; commonly, e.g., ε = 0.001.

Some strategies for the choice of the centering parameter:
(a) σ_k = 0, k = 1, 2, ... (affine-scaling approach);
(b) σ_k = 1, k = 1, 2, ...;
(c) σ_k ∈ [σmin, σmax], k = 1, 2, ..., e.g. σmin = 0.01 and σmax = 0.75 (path-following method);
(d) σ_k = 1 − 1/√n, k = 1, 2, ... (with α_k = 1: short-step path-following method); etc.

Quadratic optimization - PD interior point method

Advantages of PDIPM:
• Can be used to solve medium- to large-scale QPs ⇒ so important in optimal control.
• The Jacobian matrix JF_µk has a special structure, which can be exploited by a linear algebra solver.
• The structure of the Jacobian matrix does not change much from iteration to iteration.
• Fast convergence properties.

Disadvantages of PDIPM:
• Usually requires a strictly feasible initial iterate (x^0, s^0), which is not trivial to obtain.
• All constraints are used in the computation of the search direction d^k, even if some of them are not active. ⇒ The computation of d^k may only be attainable through iterative linear algebra solvers.

Some current issues

- Optimization methods for real-time control.
  Applications for fast real-time systems control:
  • control of autonomous (ground, aerial or underwater) vehicles and robots ⇒ computation of QP's in microseconds;
  ⇒ require hardware (embedded systems) implementation of optimization algorithms.
- Parallel implementation on shared and distributed memory (multiprocessor) computers: large-scale stochastic simulation and optimization, for instance in the numerical solution of optimization problems with partial differential equation constraints. Applications: CFD optimization, computer tomography, thermodynamics, etc.
- Parallel implementation on graphic processors.
  • Computer graphic cards provide an opportunity for parallel linear algebra computation. ⇒ Large QP's can be solved on a shared-memory PC with graphic processors.
  Example: graphic cards with processors: latest NVIDIA graphic cards, AMD Athlon graphic cards, etc. Programming languages: CUDA (NVIDIA), OpenCL (AMD).

Resources

Some useful links:
• A quadratic programming page, N. Gould and P. Toint. URL: http://www.numerical.rl.ac.uk/qp/qp.html
• Computational Optimization Laboratory, Y. Ye, Stanford University. URL: http://www.stanford.edu/ yyye/Col.html
• Convex Optimization, Stephen P. Boyd, Stanford University (book, software, etc.). URL: http://www.stanford.edu/ boyd/software.html
• The HSL Mathematical Software Library (HSL = Harwell Subroutine Library), Numerical Analysis Group at the STFC Rutherford Appleton Laboratory. URL: http://www.hsl.rl.ac.uk/

Software:
• MINQ: General Definite and Bound Constrained Indefinite Quadratic Programming, Arnold Neumaier, University of Vienna. URL: http://www.mat.univie.ac.at/ neum/software/minq/
• qpOASES - Online Active Set Strategy, H. Ferreau, University of Leuven (Belgium). URL: http://www.kuleuven.be/optec/software/qpOASES/
• QPC - Quadratic Programming in C, Adrian Wills, University of Newcastle. URL: http://sigpromu.org/quadprog/
• OOQP, an object-oriented C++ package, E. M. Gertz and S. Wright. URL: http://pages.cs.wisc.edu/ swright/ooqp/
• Interior Point Optimizer (IpOpt), A. Wächter and C. Laird. URL: https://projects.coin-or.org/Ipopt/
• NLPy, a Python package for numerical optimization, D. Orban, Ecole Polytechnique de Montreal. URL: http://nlpy.sourceforge.net/
• Python-based optimization solvers. URL: http://wiki.python.org/moin/NumericAndScientific

References

Some references on active set methods:
• D. Goldfarb, A. Idnani: A numerically stable dual method for solving strictly convex quadratic programs. Math. Prog., 27(1983), 1-33.
• M.J.D. Powell: On the quadratic programming algorithm of Goldfarb and Idnani. Math. Programming Stud., 25(1985), 46-61.
• R.A. Bartlett, L.T. Biegler: QPSchur: a dual, active-set, Schur-complement method for large-scale and structured convex quadratic programming. Optim. Eng., 7(2006), 373-395.
• E. Wong: Active-Set Methods for Quadratic Programming. PhD thesis, University of California, San Diego, CA, 2011.
• C.M. Maes: A Regularized Active-Set Method for Sparse Convex Quadratic Programming. PhD thesis, Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA, August 2010.
• A. Potschka, C. Kirches, H.G. Bock, J.P. Schlöder: Reliable solution of convex quadratic programs with parametric active set methods. Optimization Online, 2010. URL: http://www.optimization-online.org/DB HTML/2010/11/2828.html

Some references on interior point methods:
• N. Karmarkar: A new polynomial-time algorithm for linear programming. Combinatorica, 4(1984).
• R.D.C. Monteiro, I. Adler: Interior path-following primal-dual algorithms. Part II: Convex quadratic programming. Math. Prog., 44(1989), 45-66.
• Y. Nesterov, A. Nemirovsky: Interior-Point Polynomial Algorithms in Convex Programming. SIAM, Philadelphia, PA, 1994.
• Y. Ye: Interior Point Algorithms: Theory and Analysis. Wiley, New York, NY, 1997.
• S. Wright: Primal-Dual Interior-Point Methods. SIAM, Philadelphia, 1997.
• A. Forsgren, P.E. Gill, M.H. Wright: Interior methods for nonlinear optimization. SIAM Review, 44(2002), 525-597.
• M.H. Wright: The interior-point revolution in optimization: history, recent developments and lasting consequences. Bull. of the American Math. Society, 42(2005), 39-56.
• M. D'Apuzzo, V. De Simone, D. di Serafino: On mutual impact of numerical linear algebra and large-scale optimization with focus on interior point methods. Comput. Optim. Appl., 45(2010), 283-310.
• J. Gondzio: Interior point methods 25 years later. European Journal of Operational Research, 218(2012), 587-601. DOI 10.1016/j.ejor.2011.09.017.
• J. Gondzio: Matrix-Free Interior Point Method. Optimization Online, published online: October 8, 2011.

References

Some references on real-time control:
• R.A. Bartlett, A. Wächter, L.T. Biegler: Active set vs. interior point strategies for model predictive control. Proceedings of the American Control Conference, Chicago, Illinois, June 2000.
• C.V. Rao, S.J. Wright, J.B. Rawlings: Application of interior-point methods to model predictive control. J. of Optim. Theory and Applic., 99(1998), 723-757.
• H.J. Ferreau, H.G. Bock, M. Diehl: An online active set strategy to overcome the limitations of explicit MPC. Inter. J. of Robust and Nonlinear Ctrl., 18(2008), 816-830.
• Y. Wang, S. Boyd: Fast model predictive control using online optimization. IEEE Trans. on Ctrl. Sys. Tech., 18(2010), 267-278.
• K.V. Ling, B.F. Wu, J.M. Maciejowski: Embedded Model Predictive Control (MPC) using a FPGA. Proceedings of the 17th IFAC World Congress, Seoul, Korea, July 6-11, 2008.
• M.S. Lau, S.P. Yue, K.V. Ling, J.M. Maciejowski: A comparison of interior point and active set methods for FPGA implementation of model predictive control. Proc. European Control Conference, Budapest, August 23-26, 2009.
• A. Wills, A. Mills, B. Ninness: FPGA implementation of an interior-point solution for linear model predictive control. The 18th IFAC World Congress, Milano (Italy), August 28 - September 2, 2011.
• A. Abbes, F. Bouani, M. Ksouri: A Microcontroller Implementation of Constrained Model Predictive Control. World Academy of Science, Engineering and Technology, 80, 2011.

References

Some classical references on conjugate gradient methods:
• M.R. Hestenes, E. Stiefel: Methods of conjugate gradients for solving linear systems. J. Research Nat. Bur. Standards, 49(1952), 409-436.
• R. Fletcher, C.M. Reeves: Function minimization by conjugate gradients. Comput. J., 7(1964), 149-154.
• E. Polak, G. Ribiere: Note sur la convergence de méthodes de directions conjuguées. Rev. Francaise Informat. Recherche Opérationnelle, 3e Année 16(1969), 35-43.
• B.T. Polyak: The conjugate gradient method in extremal problems. USSR Comput. Math. Math. Phys., 9(1969), 94-112.
• M.J.D. Powell: Some convergence properties of the conjugate gradient method. Math. Prog., 11(1976), 42-49.
• M.J.D. Powell: Restart procedures for the conjugate gradient method. Math. Prog., 12(1977), 241-254.
• E. Polak: Computational Methods in Optimization: A Unified Approach. Academic Press Inc., New York, 1971.
• R. Fletcher: Practical Methods of Optimization. John Wiley & Sons, London, 1987.
• G.H. Golub, C.F. Van Loan: Matrix Computations, 3rd ed. Johns Hopkins University Press, Baltimore, 1996.
• J.R. Shewchuk: An Introduction to the Conjugate Gradient Method Without the Agonizing Pain. URL: http://www.cs.cmu.edu/ quake-papers/painless-conjugate-gradient.pdf

Some references on box-constrained QP:
• J. Barzilai, J.M. Borwein: Two-point step size gradient methods. IMA J. Numer. Anal., 8(1988), 141-148.
• Y. Nesterov: A method of solving a convex programming problem with convergence rate O(1/k²). Soviet Mathematics Doklady, 27(1983), 372-376.