OPTIMIZATION IN SCILAB

This document has been written by Michaël Baudin and Vincent Couvert from the Scilab
Consortium and by Serge Steer from INRIA Paris - Rocquencourt.
© July 2010 The Scilab Consortium – Digiteo / INRIA. All rights reserved.
Abstract
In this document, we give an overview of optimization features in Scilab. The goal of this
document is to present both the existing features and the missing ones, so that a user who wants to solve
a particular optimization problem can know what to look for. In the introduction, we analyse a
classification of optimization problems. In the first chapter, we analyse the flagship of Scilab in
terms of nonlinear optimization: the optim function. We analyse its features, the management of
the cost function, the linear algebra and the management of the memory. Then we consider the
algorithms which are used behind optim, depending on the type of algorithm and the constraints.
In the remaining chapters, we present the algorithms available to solve quadratic problems, non-
linear least squares problems, semidefinite programming, genetic algorithms, simulated annealing
and linear matrix inequalities. A chapter focuses on optimization data files managed by Scilab,
especially MPS and SIF files. Some optimization features are available in the form of toolboxes,
the most important of which are the Quapro and CUTEr toolboxes. The final chapter is devoted
to missing optimization features in Scilab.
Contents
Introduction
1 Non-linear optimization
1.1 Mathematical point of view
1.2 Optimization features
1.3 Optimization routines
1.4 The cost function
1.5 Linear algebra
1.6 Management of memory
1.7 Quasi-Newton ”qn” without constraints : n1qn1
1.7.1 Management of the approximated Hessian matrix
1.7.2 Line search
1.7.3 Initial Hessian matrix
1.7.4 Termination criteria
1.7.5 An example
1.8 Quasi-Newton ”qn” with bounds constraints : qnbd
1.9 L-BFGS ”gc” without constraints : n1qn3
1.10 L-BFGS ”gc” with bounds constraints : gcbd
1.11 Non smooth method without constraints : n1fc1
2 Quadratic optimization
2.1 Mathematical point of view
2.2 qpsolve
2.3 qp solve
2.4 Memory requirements
2.5 Internal design
3 Non-linear least square
3.1 Mathematical point of view
3.2 Scilab function
3.3 Optimization routines
4 Semidefinite programming
4.1 Mathematical point of view
4.2 Scilab function
4.3 Optimization routines
5 Genetic algorithms
5.1 Introduction
5.2 Example
5.3 Support functions
5.3.1 Coding
5.3.2 Cross-over
5.3.3 Selection
5.3.4 Initialization
5.4 Solvers
5.4.1 optim ga
5.4.2 optim moga, pareto filter
5.4.3 optim nsga
5.4.4 optim nsga2
6 Simulated Annealing
6.1 Introduction
6.2 Overview
6.3 Example
6.4 Neighbor functions
6.5 Acceptance functions
6.6 Temperature laws
6.7 optim sa
7 LMITOOL: a Package for LMI Optimization in Scilab
7.1 Purpose
7.2 Function lmisolver
7.2.1 Syntax
7.2.2 Examples
7.3 Function LMITOOL
7.3.1 Non-interactive mode
7.3.2 Interactive mode
7.4 How lmisolver works
7.5 Other versions
8 Optimization data files
8.1 MPS files and the Quapro toolbox
8.2 SIF files and the CUTEr toolbox
9 Scilab Optimization Toolboxes
9.1 Quapro
9.1.1 Linear optimization
9.1.2 Linear quadratic optimization
9.2 CUTEr
9.3 The Unconstrained Optimization Problem Toolbox
9.4 Other toolboxes
10 Missing optimization features in Scilab
Conclusion
Bibliography
Copyright © 2008-2010 - Consortium Scilab - Digiteo - Michael Baudin
Copyright © 2008-2009 - Consortium Scilab - Digiteo - Vincent Couvert
Copyright © 2008-2009 - INRIA - Serge Steer
This file must be used under the terms of the Creative Commons Attribution-ShareAlike 3.0
Unported License:
http://creativecommons.org/licenses/by-sa/3.0
Introduction
This document aims at giving Scilab users a complete overview of the optimization features in Scilab.
It is written for the needs of the Scilab partners in the OMD project (http://omd.lri.fr/tiki-index.php). The
core of this document is an analysis of the current Scilab optimization features. In the final part,
we give a short list of new features which it would be interesting to find in Scilab. Beyond the
functionalities embedded in Scilab itself, some contributions (toolboxes) have been written to
improve the Scilab capabilities. Many of these toolboxes are interfaces to optimization libraries, such
as FSQP for example.
In this document, we consider optimization problems in which we try to minimize a cost
function f(x) with or without constraints. These problems are partly illustrated in figure 1.
Several properties of the problem to solve may be taken into account by the numerical algorithms :
• The unknown may be a vector of real values or integer values.
• The number of unknowns may be small (from 1 to 10 - 100), medium (from 10 to 100 - 1
000) or large (from 1 000 - 10 000 and above), leading to dense or sparse linear systems.
• There may be one or several cost functions (multi-objective optimization).
• The cost function may be smooth or non-smooth.
• There may be constraints or no constraints.
• The constraints may be bounds constraints, linear or non-linear constraints.
• The cost function can be linear, quadratic or a general non linear function.
An overview of Scilab optimization tools is shown in figure 2.
In this document, we present the following optimization features of Scilab.
• nonlinear optimization with the optim function,
• quadratic optimization with the qpsolve function,
• nonlinear least-square optimization with the lsqrsolve function,
• semidefinite programming with the semidef function,
• genetic algorithms with the optim_ga function,
• simulated annealing with the optim_sa function,
• linear matrix inequalities with the lmisolver function,
Figure 1: Classes of optimization problems. The classes shown in the figure distinguish the number of unknowns (1-10, 10-100, more than 100), the type of objective (linear, quadratic, non-linear), the constraints (without constraints, bounds, linear or non-linear constraints), the smoothness of the cost function (smooth, non-smooth), the number of objectives (one, several) and the type of parameters (continuous, discrete).
Figure 2: Scilab Optimization Tools
• reading of MPS and SIF files with the quapro and CUTEr toolboxes.
Scilab v5.2 provides the fminsearch function, which is a derivative-free algorithm for small
problems. The fminsearch function is based on the simplex algorithm of Nelder and Mead (not
to be confused with Dantzig’s simplex for linear optimization). This unconstrained algorithm
does not require the gradient of the cost function. It is efficient for small problems, i.e. up to
10 parameters, and its memory requirement is only O(n). This algorithm is known to be able to
manage ”noisy” functions, i.e. situations where the cost function is the sum of a general nonlinear
function and a low magnitude function. The neldermead component provides three simplex-based
algorithms which allow to solve unconstrained and nonlinearly constrained optimization
problems. It provides an object-oriented access to the options. The fminsearch function is, in
fact, a specialized use of the neldermead component. This component is presented in depth in
[2].
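As an illustration, the following is a minimal sketch, assuming the optimization module is available, which minimizes the Rosenbrock function with fminsearch ; the test function and the starting point are chosen for the example only.

function y = rosenbrock(x)
  // classical banana-shaped test function
  y = 100*(x(2)-x(1)^2)^2 + (1-x(1))^2;
endfunction
[xopt, fopt] = fminsearch(rosenbrock, [-1.2 1.0]);
// xopt is expected to be close to [1 1]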
An analysis of optimization in Scilab, including performance tests, is presented in ”Optimiza-
tion with Scilab, present and future” [3]. The following is the abstract of the paper :
”We present in this paper an overview of optimization algorithms available in the Scilab soft-
ware. We focus on the user’s point of view, that is, we have to minimize or maximize an objective
function and must find a solver suitable for the problem. The aim of this paper is to give a simple
but accurate view of what problems can be solved by Scilab and what behavior can be expected
for those solvers. For each solver, we analyze the type of problems that it can solve as well as its
advantages and limitations. In order to compare the respective performances of the algorithms,
we use the CUTEr library, which is available in Scilab. Numerical experiments are presented,
which indicates that there is no cure-for-all solvers.”
Several existing optimization features are not presented in this document. We especially
mention the following tools.
• The fsqp toolbox provides an interface for a Sequential Quadratic Programming algorithm.
This algorithm is very efficient but is not free (but is provided by the authors, free of charge,
for an academic use).
• Multi-objective optimization is available in Scilab with the genetic algorithm component.
The organization of this document is as follows.
In the first chapter, we analyse the flagship of Scilab in terms of nonlinear optimization: the
optim function. This function allows to solve nonlinear optimization problems without constraints
or with bound constraints. It provides a Quasi-Newton method, a Limited Memory BFGS algo-
rithm and a bundle method for non-smooth functions. We analyse its features, the management
of the cost function, the linear algebra and the management of the memory. Then we consider the
algorithms which are used behind optim, depending on the type of algorithm and the constraints.
In the second chapter we present the qpsolve and qp_solve functions which allow to solve
quadratic problems. We describe the solvers which are used, the memory requirements and the
internal design of the tool.
The chapter 3 and 4 briefly present non-linear least squares problems and semidefinite pro-
gramming.
The chapter 5 focuses on genetic algorithms. We give a tutorial example of the optim_ga
function in the case of the Rastrigin function. We also analyse the support functions which allow
to configure the behavior of the algorithm and describe the algorithm which is used.
The simulated annealing is presented in chapter 6, which gives an overview of the algorithm
used in optim_sa. We present an example of use of this method and show the convergence of
the algorithm. Then we analyse the support functions and present the neighbor functions, the
acceptance functions and the temperature laws. In the final section, we analyse the structure of
the algorithm used in optim_sa.
The LMITOOL module is presented in chapter 7. This tool allows to solve linear matrix
inequalities. This chapter was written by Nikoukhah, Delebecque and Ghaoui. The syntax of the
lmisolver function is analysed and several examples are analysed in depth.
The chapter 8 focuses on optimization data files managed by Scilab, especially MPS and SIF
files.
Some optimization features are available in the form of toolboxes, the most important of which
are the Quapro, CUTEr and the Unconstrained Optimization Problems toolboxes. These modules
are presented in the chapter 9, along with other modules including the interface to CONMIN, to
FSQP, to LIPSOL, to LPSOLVE, to NEWUOA.
The chapter 10 is devoted to missing optimization features in Scilab.
Chapter 1
Non-linear optimization
The goal of this chapter is to present the current features of the optim primitive of Scilab. The
optim primitive allows to optimize a problem with a nonlinear objective without constraints or
with bound constraints.
In this chapter, we describe the internal design of the optim primitive. We analyse in
detail the management of the cost function. The cost function and its gradient can be computed
using a Scilab function, a C function or a Fortran 77 function. The linear algebra components
are analysed, since they are used at many places in the algorithms. Since the management of
memory is a crucial feature of optimization solvers, the current behaviour of Scilab with respect
to memory is detailed here.
Three non-linear solvers are connected to the optim primitive, namely, a BFGS Quasi-Newton
solver, a L-BFGS solver, and a Non-Differentiable solver. In this chapter we analyse each solver
and present the following features :
• the reference articles or reports,
• the author,
• the management of memory,
• the linear algebra system, especially the algorithm name and if dense/sparse cases are taken
into account,
• the line search method.
The Scilab online help is a good entry point for this function.
1.1 Mathematical point of view
The problem of the non linear optimization is to find the solution of
$$ \min_x f(x) $$
with bounds constraints or without constraints and with f : R^n → R the cost function.
1.2 Optimization features
Scilab offers three non-linear optimization methods:
• Quasi-Newton method with BFGS formula without constraints or with bound constraints,
• Quasi-Newton with limited memory BFGS (L-BFGS) without constraints or with bound
constraints,
• Bundle method for non smooth functions (half derivable functions, non-differentiable prob-
lem) without constraints.
Problems involving non linear constraints cannot be solved with the current optimization
methods implemented in Scilab. Non smooth problems with bounds constraints cannot be solved
with the methods currently implemented in Scilab.
1.3 Optimization routines
Non-linear optimization in Scilab is based on a subset of the Modulopt library, developed at
INRIA. The library which is used by optim was created by the Modulopt project at INRIA and
developed by Bonnans, Gilbert and Lemaréchal [5].
This section lists the routines used according to the optimization method used.
The following is the list of solvers currently available in Scilab, and the corresponding fortran
routine :
• ”qn” without constraints : a Quasi-Newton method without constraints, n1qn1,
• ”qn” with bounds constraints : Quasi-Newton method with bounds constraints, qnbd,
• ”gc” without constraints : a limited memory BFGS method without constraints, n1qn3,
• ”gc” with bounds constraints : a limited memory BFGS method with bounds constraints, gcbd,
• ”nd” without constraints : a Non smooth method without constraints, n1fc1.
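As an illustration, the following sketch, which assumes that a cost function costf (with the header described in the next section) and an initial guess x0 are already defined, shows how the solver is selected through the corresponding string in the optim calling sequence, here the L-BFGS method ”gc” with bound constraints.

binf = [-5; -5];   // lower bounds (assumed for the example)
bsup = [ 5;  5];   // upper bounds (assumed for the example)
[fopt, xopt] = optim(costf, "b", binf, bsup, x0, "gc");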
1.4 The cost function
The communication protocol used by optim is direct, that is, the cost function must be passed
as a callback argument to the ”optim” primitive. The cost function must compute the objective
and/or the gradient of the objective, depending on the input integer flag ”ind”.
In the most simple use-case, the cost function is a Scilab function, with the following header :
[f, g, ind] = costf(x, ind)
where ”x” is the current value of the unknown and ”ind” is the integer flag which states if ”f”, ”g”
or both are to be computed.
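For instance, the following sketch defines such a cost function for the Rosenbrock function, chosen here only for illustration, and passes it to optim. For simplicity, the sketch computes both the objective and the gradient whatever the value of ind.

function [f, g, ind] = costf(x, ind)
  // objective
  f = 100*(x(2)-x(1)^2)^2 + (1-x(1))^2;
  // gradient
  g(1) = -400*(x(2)-x(1)^2)*x(1) - 2*(1-x(1));
  g(2) = 200*(x(2)-x(1)^2);
endfunction
x0 = [-1.2 1.0];
[fopt, xopt] = optim(costf, x0);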
The cost function is passed to the optimization solver as a callback, which is managed with
the fortran 77 callback system. In that case, the name of the routine to call back is declared as
”external” in the source code. The cost function may be provided in the following ways :
• the cost function is provided as a Scilab script,
• the cost function is provided as a C or fortran 77 compiled routine.
If the cost function is a C or fortran 77 source code, the cost function can be statically or
dynamically linked against Scilab. Indeed, Scilab dynamic link features, such as ilib for link for
example, can be used to provide a compiled cost function.
In the following paragraph, we analyse the very internal aspects of the management of the
cost function.
This switch is managed at the gateway level, in the sci_f_optim routine, with an ”if” statement :
• if the cost function is compiled, the ”foptim” symbol is passed,
• if not, the ”boptim” symbol is passed.
In the case where the cost function is a Scilab script, the ”boptim” routine performs the copy of
the input local arguments into Scilab internal memory, calls the script, and finally copy back the
output argument from Scilab internal memory into local output variables. In the case where the
cost function is compiled, the computation is based on function pointers, which are managed at
the C level in optimtable.c. The optimization function is configured by the ”setfoptim” C ser-
vice, which takes as argument the name of the routine to callback. The services implemented in
AddFunctionInTable.c are used, especially the function ”AddFunctionInTable”, which takes the
name of the function as input argument and searches the corresponding function address, be it
in statically compiled libraries or in dynamically compiled libraries. This allows the optimization
solvers to callback dynamically linked libraries. These names and addresses are stored in the
hashmap FTab_foptim, which maps function names to function pointers. The static field foptimfonc
with type foptimf is then set to the address of the function to be called back. When the
optimization solver needs to compute the cost function, it calls the ”foptim” C void function which
in turn calls the compiled cost function associated to the configured address (*foptimfonc).
1.5 Linear algebra
The linear algebra which is used in the ”optim” primitive is dense. Generally, the linear algebra
is inlined and there is no use of the BLAS API. This applies to all optimization methods, except
”gcbd”. This limits the performance of the optimization, because optimized libraries like ATLAS
cannot be used. There is only one exception : the L-BFGS with bounds constraints routine gcbd
uses the ”dcopy” routine of the BLAS API.
1.6 Management of memory
The optimization solvers require memory, especially to store the value of the cost function,
the gradient, the descent direction, but most importantly, the approximated Hessian of the cost
function.
Most of the memory is required by the approximation of the Hessian matrix. If the full
approximated Hessian is stored, as in the BFGS quasi-Newton method, the amount of memory
is the square of the dimension of the problem, i.e. O(n^2), where n is the size of the unknown. When
a quasi-Newton method with limited memory is used, only a given number m of vectors of size n
are stored.
This memory is allocated by Scilab, inside the stack and the storage area is passed to the
solvers as an input argument. This large storage memory is then split into pieces like a piece
of cake by each solver to store the data. The memory system used by the fortran solvers is the
fortran 77 ”assumed-size-dummy-arrays” mechanism, based on ”real arrayname(*)” statements.
The management of memory is very important for large-scale problems, where n is from 100
to 1000. One main feature of one of the L-BFGS algorithms is to limit the memory required.
More precisely, the following is a map from the algorithm to the memory required, as the number
of required double precision values.
• Quasi-Newton BFGS ”qn” without constraints (n1qn1) : n(n + 13)/2,
• Quasi-Newton BFGS ”qn” with bounds constraints (qnbd) : n(n + 1)/2 + 4n + 1,
• Limited Memory BFGS ”gc” without constraints (n1qn3) : 4n + m(2n + 1),
• Limited Memory BFGS ”gc” with bounds constraints (gcbd) : n(5 + 3nt) + 2nt + 1 with nt = max(1, m/3),
• Non smooth method without constraints (n1fc1) : (n + 4)m/2 + (m + 9)m + 8 + 5n/2.
Note that n1fc1 requires an additional array of 2(m + 1) integers. Simplifying these array
sizes leads to the following map, which clearly shows why Limited Memory BFGS algorithms in
Scilab are more suitable for large problems. This explains why the name ”gc” was chosen: it refers
to the Conjugate Gradient method, which stores only one vector in memory. But the name ”gc”
is wrongly chosen and this is why we consistently use L-BFGS to identify this algorithm.
• Quasi-Newton ”qn” without constraints (n1qn1) : O(n^2),
• Quasi-Newton ”qn” with bounds constraints (qnbd) : O(n^2),
• Limited Memory BFGS ”gc” without constraints (n1qn3) : O(n),
• Limited Memory BFGS ”gc” with bounds constraints (gcbd) : O(n),
• Non smooth method without constraints (n1fc1) : O(n).
That explains why L-BFGS methods associated with the ”gc” option of the optim primitive
are recommended for large-scale optimization. It is known that L-BFGS convergence may be slow
for large-scale problems (see [21], chap. 9).
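As a rough illustration, the following sketch evaluates the storage formulas given above for an assumed problem size, showing the gap between the full and the limited memory variants.

n = 1000;                       // number of unknowns (assumed)
m = 10;                         // number of stored vector pairs (assumed)
mem_n1qn1 = n*(n+13)/2;         // quasi-Newton "qn", grows as O(n^2)
mem_n1qn3 = 4*n + m*(2*n+1);    // limited memory "gc", grows as O(n)
mprintf("n1qn1 : %d doubles, n1qn3 : %d doubles\n", mem_n1qn1, mem_n1qn3);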
1.7 Quasi-Newton ”qn” without constraints : n1qn1
The author is C. Lemarechal, 1987. There is no reference report for this solver.
The following is the header for the n1qn1 routine :
subroutine n1qn1 (simul,n,x,f,g,var,eps,
1 mode,niter,nsim,imp,lp,zm,izs,rzs,dzs)
c!but
c minimisation d une fonction reguliere sans contraintes
c!origine
c c. lemarechal, inria, 1987
c Copyright INRIA
c!methode
c direction de descente calculee par une methode de quasi-newton
c recherche lineaire de type wolfe
The following is a description of the arguments of this routine.
• simul : entry point of the simulation module (cf. the Modulopt standards). n1qn1 always calls
simul with indic = 4 ; the simulation module must have the form subroutine simul(n, x, f, g,
izs, rzs, dzs) and be declared as external in the program calling n1qn1.
• n (input) : number of variables on which f depends.
• x (input-output) : vector of dimension n ; on entry, the initial point ; on exit, the final point
computed by n1qn1.
• f (input-output) : scalar ; on entry, the value of f at x (initial) ; on exit, the value of f at x (final).
• g (input-output) : vector of dimension n ; on entry, the value of the gradient at x (initial) ; on
exit, the value of the gradient at x (final).
• var (input) : strictly positive vector of dimension n ; amplitude of the change desired at the
first iteration on x(i). A good value is 10% of the difference (in absolute value) with the
optimal coordinate x(i).
• eps (input-output) : on entry, a scalar which defines the precision of the stopping test. The
program considers that convergence is reached when it is unable to decrease f while giving to at
least one coordinate x(i) a variation greater than eps*var(i). On exit, eps contains the
square of the norm of the gradient at x (final).
• mode (input) : defines the initial approximation of the Hessian
– = 1 : n1qn1 initializes it itself,
– = 2 : the Hessian is provided in zm in compressed form (zm contains the columns of
the lower part of the Hessian).
• niter (input-output) : on entry, the maximum number of iterations ; on exit, the number of
iterations actually performed.
• nsim (input-output) : on entry, the maximum number of calls to simul (that is, with indic = 4) ;
on exit, the number of such calls actually made.
• imp (input) : controls the printed messages :
– = 0 : nothing is printed,
– = 1 : initial and final printouts,
– = 2 : one printout per iteration (number of iterations, number of calls to simul, current
value of f),
– >= 3 : additional information on the line searches ; very useful to detect errors
in the gradient.
• lp (input) : the unit number of the output channel, i.e. the printouts controlled by imp are done
by write(lp, format).
• zm : working memory for n1qn1, of dimension n*(n+13)/2.
• izs, rzs, dzs : memory reserved for the simulator (cf. the documentation).
The n1qn1 solver is an interface to the n1qn1a routine, which really implements the optimiza-
tion method. The n1qn1a file counts approximately 300 lines. The n1qn1a routine is based on
the following routines :
• simul : computes the cost function,
• majour : probably (there is no comment) an update of the BFGS matrix.
Many algorithms are in-lined, especially the line search and the linear algebra.
1.7.1 Management of the approximated Hessian matrix
The current BFGS implementation is based on an approximation of the Hessian [21], which uses a
Cholesky decomposition, i.e. the approximated Hessian matrix is decomposed as G = LDL^T,
where D is a diagonal n × n matrix and L is a lower triangular n × n matrix with unit diagonal.
To compute the descent direction, the linear system Gd = LDL^T d = −g is solved.
The memory requirements for this method are O(n^2) because the approximated Hessian matrix
computed from the BFGS formula is stored in compressed form, so that only the lower part of
the approximated Hessian matrix is stored. This is why this method is not recommended for
large-scale problems (see [21], chap. 9, introduction).
The approximated Hessian H ∈ R^(n×n) is stored as the vector h ∈ R^(n_h) which has size
n_h = n(n + 1)/2. The matrix is stored in factored form as follows:
$$ h = \left( D_{11}\, L_{21}\, \ldots\, L_{n1} \,|\, D_{22}\, L_{32}\, \ldots\, L_{n2} \,|\, \ldots \,|\, D_{n-1,n-1}\, L_{n,n-1} \,|\, D_{nn} \right). \quad (1.1) $$
Instead of a direct access to the factors of D and L, integer arithmetic is necessary to access the
data stored in the vector h.
The algorithm presented in figure 1.1 is used to set the diagonal terms of D, the diagonal
matrix of the Cholesky decomposition of the approximated Hessian. The right-hand side
0.01 c_max / v_i^2 of this initialization is analysed in the next section of this document.
k ← 1
for i = 1 to n do
  h(k) = 0.01 c_max / v_i^2
  k ← k + n + 1 − i
end for

Figure 1.1: Loop over the diagonal terms of the Cholesky decomposition of the approximated
Hessian
Solving the linear system of equations
The linear system of equations Gd = LDL^T d = −g must be solved to compute the descent
direction d ∈ R^n. This direction is computed by the following algorithm:
• compute w so that Lw = −g,
• compute d so that DL^T d = w.
This algorithm requires O(n^2) operations.
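The following Scilab sketch illustrates this two-step solve. It is an illustration only, not the actual n1qn1 code, which works on the compressed vector h.

// G = L*D*L' with L unit lower triangular and D diagonal ; g is the gradient
function d = descent_direction(L, D, g)
  w = L \ (-g);               // forward substitution : L*w = -g
  d = L' \ (w ./ diag(D));    // scale by D, then back substitution : D*L'*d = w
endfunction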
1.7.2 Line search
The line search is based on the algorithms developed by Lemaréchal [26]. It uses a cubic
interpolation.
The Armijo condition for sufficient decrease is used in the following form
$$ f(x_k + \alpha p_k) - f(x_k) \leq c_1 \alpha \nabla f_k^T p_k \quad (1.2) $$
with c_1 = 0.1. The following fortran source code illustrates this condition
if (fb-fa.le.0.10d+0*c*dga) go to 280
where fb = f(x_k + α p_k), fa = f(x_k), c = α and dga = ∇f_k^T p_k is the local directional derivative.
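To make the condition concrete, the following sketch implements a basic backtracking search using the same Armijo test with c1 = 0.1 and the same reduction factor of 10 ; the cubic interpolation of the actual solver is not reproduced here, and costf is assumed to return the objective value only.

function alpha = backtrack(costf, x, f0, g0, p)
  // f0 = f(x), g0 = gradient at x, p = descent direction (assumed given)
  c1 = 0.1;
  alpha = 1;
  while (costf(x + alpha*p) - f0 > c1*alpha*(g0'*p)) & (alpha > 1e-10)
    alpha = alpha / 10;   // reduce the step, as n1qn1 does
  end
endfunction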
1.7.3 Initial Hessian matrix
Several modes are available to compute the initial Hessian matrix, depending on the value of the
mode variable:
• if mode = 1, n1qn1 initializes the matrix by itself,
• if mode = 2, the Hessian is provided in compressed form, where only the lower part of the
symmetric Hessian matrix is stored.
An additional mode = 3 is provided but the feature is not clearly documented. In Scilab, the
n1qn1 routine is called with mode = 1 by default. In the case where a hot-restart is performed,
the mode = 3 is enabled.
If mode = 1 is chosen, the initial Hessian matrix H_0 is computed by scaling the identity matrix
$$ H_0 = I\delta \quad (1.3) $$
where δ ∈ R^n is an n-vector and I is the n × n identity matrix. The scaling vector δ ∈ R^n is based
on the gradient at the initial guess g_0 = g(x_0) = ∇f(x_0) ∈ R^n and on a scaling vector v ∈ R^n
given by the user:
$$ \delta_i = \frac{0.01\, c_{max}}{v_i^2} \quad (1.4) $$
where c_max > 0 is computed by
$$ c_{max} = \max\left(1.0,\; \max_{i=1,n} |g_i^0|\right). \quad (1.5) $$
In the Scilab interface for optim, the scaling vector is set to 0.1 :
$$ v_i = 0.1, \quad i = 1, \ldots, n. \quad (1.6) $$
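The following sketch reproduces this default initialization for an assumed initial gradient g0.

g0 = [4; -2];                  // gradient at the initial guess (assumed)
n  = size(g0, "*");
v  = 0.1 * ones(n, 1);         // scaling vector used by the optim interface
cmax  = max(1.0, max(abs(g0)));
delta = 0.01 * cmax ./ v.^2;   // diagonal terms delta_i of H0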
1.7.4 Termination criteria
The following list of parameters is taken into account by the solver :
• niter, the maximum number of iterations (default value is 100),
• nap, the maximum number of function evaluations (default value is 100),
• epsg, the minimum length of the search direction (default value is %eps ≈ 2.22e-16).
The other parameters epsf and epsx are not used. The termination condition is not based
on the gradient, as the name epsg would indicate.
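These settings are passed to optim through the ”ar” option, as in the following sketch, where costf and x0 are assumed to be defined.

nap  = 200;    // maximum number of function evaluations
iter = 150;    // maximum number of iterations
epsg = 1d-8;   // minimum length of the search direction
[fopt, xopt] = optim(costf, x0, "ar", nap, iter, epsg);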
The following is a list of termination conditions which are taken into account in the source
code.
• The iteration is greater than the maximum.
if (itr.gt.niter) go to 250
• The number of function evaluations is greater than the maximum.
if (nfun.ge.nsim) go to 250
• The directional derivative is positive, so that the direction d is not a descent direction for f.
if (dga.ge.0.0d+0) go to 240
• The cost function sets the indic flag (the ind parameter) to 0, indicating that the optimization
must terminate.
call simul (indic,n,xb,fb,gb,izs,rzs,dzs)
[...]
go to 250
• The cost function sets the indic flag to a negative value, indicating that the function cannot
be evaluated for the given x. The step is reduced by a factor 10, but gets below a limit so
that the algorithm terminates.
call simul (indic,n,xb,fb,gb,izs,rzs,dzs)
[...]
step=step/10.0d+0
[...]
if (stepbd.gt.steplb) goto 170
[...]
go to 250
• The Armijo condition is not satisfied and the step size is below a limit during the line search.
if (fb-fa.le.0.10d+0*c*dga) go to 280
[...]
if (step.gt.steplb) go to 270
• During the line search, a cubic interpolation is computed and the computed minimum is
associated with a zero step length.
if(c.eq.0.0d+0) goto 250
• During the line search, the step length is less than a computed limit.
if (stmin+step.le.steplb) go to 240
• The rank of the approximated Hessian matrix is less than n after the update of the
Cholesky factors.
if (ir.lt.n) go to 250
1.7.5 An example
The following script illustrates that the gradient may become very small while the algorithm continues.
This shows that the termination criterion is not based on the gradient, but on the length of the
step. The problem has two parameters so that n = 2. The cost function is the following
$$ f(x) = x_1^p + x_2^p \quad (1.7) $$
where p ≥ 0 is an even integer. Here we choose p = 10. The gradient of the function is
$$ g(x) = \nabla f(x) = (p x_1^{p-1},\; p x_2^{p-1})^T \quad (1.8) $$
and the Hessian matrix is
$$ H(x) = \begin{pmatrix} p(p-1)x_1^{p-2} & 0 \\ 0 & p(p-1)x_2^{p-2} \end{pmatrix}. \quad (1.9) $$
The optimum of this optimization problem is at
$$ x^\star = (0, 0)^T. \quad (1.10) $$
The following Scilab script defines the cost function, checks that the derivatives are correctly
computed and performs an optimization. At each iteration, the norm of the gradient of the cost
function is displayed so that one can see if the algorithm terminates when the gradient is small.
function [f, g, ind] = myquadratic(x, ind)
  p = 10
  if ind == 1 | ind == 2 | ind == 4 then
    f = x(1)^p + x(2)^p;
  end
  if ind == 1 | ind == 2 | ind == 4 then
    g(1) = p * x(1)^(p-1)
    g(2) = p * x(2)^(p-1)
  end
  if ind == 1 then
    mprintf("|x|=%e, f=%e, |g|=%e\n", norm(x), f, norm(g))
  end
endfunction
function f = quadfornumdiff(x)
  f = myquadratic(x, 2)
endfunction
x0 = [-1.2 1.0];
[f, g] = myquadratic(x0, 4);
mprintf("Computed f(x0) = %f\n", f);
mprintf("Computed g(x0) = \n"); disp(g');
mprintf("Expected g(x0) = \n"); disp(derivative(quadfornumdiff, x0'))
nap = 100
iter = 100
epsg = %eps
[fopt, xopt, gradopt] = optim(myquadratic, x0, ...
    "ar", nap, iter, epsg, imp = -1)
The script produces the following output.
-->[fopt, xopt, gradopt] = optim(myquadratic, x0, ...
    "ar", nap, iter, epsg, imp = -1)
|x|=1.562050e+000, f=7.191736e+000, |g|=5.255790e+001
|x|=1.473640e+000, f=3.415994e+000, |g|=2.502599e+001
|x|=1.098367e+000, f=2.458198e+000, |g|=2.246752e+001
|x|=1.013227e+000, f=1.092124e+000, |g|=1.082542e+001
|x|=9.340864e-001, f=4.817592e-001, |g|=5.182592e+000
[...]
|x|=1.280564e-002, f=7.432396e-021, |g|=5.817126e-018
|x|=1.179966e-002, f=3.279551e-021, |g|=2.785663e-018
|x|=1.087526e-002, f=1.450507e-021, |g|=1.336802e-018
|x|=1.002237e-002, f=6.409611e-022, |g|=6.409898e-019
|x|=9.236694e-003, f=2.833319e-022, |g|=3.074485e-019
Norm of projected gradient lower than 0.3074485D-18.
gradopt =
  1.0D-18 *
  0.2332982   0.2002412
xopt =
  0.0065865   0.0064757
fopt =
  2.833D-22
One can see that the algorithm terminates when the gradient is extremely small, g(x) ≈ 10^(-18).
The cost function is very near zero, f(x) ≈ 10^(-22), but the solution is accurate only up to the
third digit.
This is a very difficult test case for optimization solvers. The difficulty is because the function
is extremely flat near the optimum. If the termination criterion were based on the gradient, the
algorithm would stop in the early iterations. Because this is not the case, the algorithm performs
significant iterations which are associated with relatively large steps.
1.8 Quasi-Newton ”qn” with bounds constraints : qnbd
The comments state that the reference report is an INRIA report by F. Bonnans [4]. The solver
qnbd is an interface to the zqnbd routine. The zqnbd routine is based on the following routines :
• calmaj : calls majour, which updates the BFGS matrix,
• proj : projects the current iterate into the bounds constraints,
• ajour : probably (there is no comment) an update of the BFGS matrix,
• rlbd : line search method with bound constraints,
• simul : computes the cost function
The rlbd routine is documented as using an extrapolation method to compute a range for the
optimal t parameter. The range is then reduced depending on the situation by :
• a dichotomy method,
• a linear interpolation,
• a cubic interpolation.
The stopping criterion is commented as ”an extension of the Wolfe criteria”. The linear algebra does
not use the BLAS API. It is in-lined, so that connecting the BLAS may be difficult. The memory
requirements for this method are O(n^2), which shows why this method is not recommended for
large-scale problems (see [21], chap. 9, introduction).
1.9 L-BFGS ”gc” without constraints : n1qn3
The comments in this solver are clearly written. The authors are Jean Charles Gilbert, Claude
Lemarechal, 1988. The BFGS update is based on the article [34]. The solver n1qn3 is an interface
to the n1qn3a routine. The architecture is clear and the source code is well commented. The
n1qn3a routine is based on the following routines :
• prosca : performs a vector x vector scalar product,
• simul : computes the cost function,
• nlis0 : line search based on Wolfe criteria, extrapolation and cubic interpolation,
• ctonb : copies array u into v,
• ddd2 : computes the descent direction by performing the product h×g.
The linear algebra is dense, which limits the feature to small size optimization problems. The
linear algebra does not use the BLAS API but is based on the prosca and ctonb routines. The
prosca routine is a call back input argument of the n1qn3 routine, connected to the fuclid routine.
This implements the scalar product, but without optimization. Connecting the BLAS may be easy
for n1qn3. The algorithm is a limited memory BFGS method with m levels, so that the memory
cost is O(n). It is well suited for medium-scale problems, although convergence may be slow (see
[21], chap. 9, p.227).
1.10 L-BFGS ”gc” with bounds constraints : gcbd
The author is F. Bonnans, 1985. There is no reference report for gcbd. The gcbd solver is an
interface to the zgcbd routine, which really implements the optimization method. The zgcbd
routine is based on the following routines :
• simul : computes the cost function
• proj : projects the current iterate into the bounds constraints,
• majysa : updates the vectors y = g(k + 1) −g(k), s = x(k + 1) −x(k), ys,
• bfgsd : updates the diagonal by Powell diagonal BFGS,
• shanph : scales the diagonal by the Shanno-Phua method,
• majz : updates z,zs,
• relvar : computes the variables to relax by Bertsekas method,
• gcp : conjugate gradient method for Ax = b,
• dcopy : performs a vector copy (BLAS API),
• rlbd : line search method with bound constraints.
The linear algebra is dense. But zgcbd uses the ”dcopy” BLAS routine, which allows for some
optimizations. The algorithm is a limited memory BFGS method with m levels, so that the
memory cost is O(n). It is well suited for medium-scale problems, although the convergence may
be slow (see [21], chap. 9, p.227).
1.11 Non smooth method without constraints : n1fc1
This routine is probably due to Lemaréchal, who is an expert of this topic. References for this
algorithm include the ”Part II, Nonsmooth optimization” in [5], and the in-depth presentation in
[18, 19].
The n1fc1 solver is an interface to the n1fc1a routine, which really implements the optimization
method. The n1fc1a routine is based on the following routines :
• simul : computes the cost function,
• fprf2 : computes the direction,
• frdf1 : reduction of the bundle,
• nlis2 : line search,
• prosca : performs a vector x vector scalar product.
It is designed for functions which have a non-continuous derivative (e.g. the objective function is
the maximum of several continuously differentiable functions).
Chapter 2
Quadratic optimization
Quadratic problems can be solved with the qpsolve Scilab macro and the qp_solve Scilab prim-
itive. In this chapter, we present these two primitives, which are meant to be a replacement for
the former Quapro solver (which has been transformed into a Scilab toolbox). We especially analyse
the management of the linear algebra as well as the memory requirements of these solvers.
2.1 Mathematical point of view
This kind of optimization is the minimization of the function f(x) with
$$ f(x) = \frac{1}{2} x^T Q x + p^T x $$
under the constraints :
$$ C_1^T x = b_1 \quad (2.1) $$
$$ C_2^T x \geq b_2 \quad (2.2) $$
2.2 qpsolve
The Scilab function qpsolve is a solver for quadratic problems when Q is symmetric positive
definite.
The qpsolve function is a Scilab macro which aims at providing the same interface (that is,
the same input/output arguments) as the quapro solver.
For more details about this solver, please refer to the Scilab online help for qpsolve.
The qpsolve Scilab macro is based on the work by Berwin A Turlach from The University of
Western Australia, Crawley [38]. The solver is implemented in Fortran 77. This routine uses the
Goldfarb/Idnani algorithm [11, 12].
The constraints matrix can be dense or sparse.
The qpsolve macro calls the qp_solve compiled primitive. The internal source code for
qpsolve manages the equality and inequality constraints so that it can be processed by the
qp_solve primitive.
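The following is a minimal sketch of a qpsolve call ; the data are arbitrary, and the exact sign conventions of the constraints should be checked against the qpsolve help page.

// minimize 0.5*x'*Q*x + p'*x with one equality constraint (me = 1) and bounds
Q  = [2 0; 0 2];    // symmetric positive definite matrix
p  = [-2; -5];
C  = [1 1];         // constraint matrix, the first me rows being equalities
b  = [1];
ci = [0; 0];        // lower bounds on x
cs = [10; 10];      // upper bounds on x
me = 1;
[x, iact, iter, f] = qpsolve(Q, p, C, b, ci, cs, me);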
2.3 qp solve
The qp_solve compiled primitive is an interface for the fortran 77 solver. The interface is
implemented in the C source code sci_qp_solve.c. Depending on the constraints matrix, a dense or
sparse solver is used :
• If the constraints matrix C is dense, the qpgen2 fortran 77 routine is used. The qpgen2
routine is the original, unmodified algorithm which was written by Turlach (the original
name was solve.QP.f)
• If the constraints matrix C is a Scilab sparse matrix, the qpgen1sci routine is called. This
routine is a modification of the original routine qpgen1, in order to adapt to the specific
Scilab sparse matrices storage.
2.4 Memory requirements
Suppose that n is the dimension of the quadratic matrix Q and m is the sum of the number of
equality constraints me and inequality constraints md. Then, the temporary work array which is
allocated in the primitive has the size
r = min(n, m), (2.4)
lw = 2n + r(r + 5)/2 + 2m+ 1. (2.5)
This temporary array is de-allocated when the qpsolve primitive returns.
This formula may be simplified in the following cases :
• if n ≫ m, that is, the number of constraints m is negligible with respect to the number of
unknowns n, then the memory required is O(n),
• if m ≫ n, that is, the number of unknowns n is negligible with respect to the number of
constraints m, then the memory required is O(m),
• if m = n, then the memory required is O(n^2).
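For instance, the following sketch evaluates the work array size (2.4)-(2.5) for an assumed problem.

n = 500;   // size of the quadratic matrix Q (assumed)
m = 20;    // total number of constraints (assumed)
r  = min(n, m);
lw = 2*n + r*(r+5)/2 + 2*m + 1;
mprintf("temporary work array : %d doubles\n", lw);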
2.5 Internal design
The original Goldfarb/Idnani algorithm [11, 12] was designed to solve the following minimization
problem:
$$ \min_x \; -d^T x + \frac{1}{2} x^T D x $$
where
$$ A_1^T x = b_1 \quad (2.6) $$
$$ A_2^T x \geq b_2 \quad (2.7) $$
where the matrix D is assumed to be symmetric positive definite. It was considered as a building
block for a Sequential Quadratic Programming solver. The original package provides two routines :
• solve.QP.f containing routine qpgen2 which implements the algorithm for dense matrices,
• solve.QP.compact.f containing routine qpgen1 which implements the algorithm for sparse
matrices.
Chapter 3
Non-linear least square
3.1 Mathematical point of view
The problem of non linear least-squares optimization is to find the solution of
$$ \min_x \sum_{i=1}^{m} f_i(x)^2 $$
with bounds constraints or without constraints and with f : R^n → R^m the cost function.
3.2 Scilab function
The Scilab function called lsqrsolve is designed for the minimization of the sum of the squares of
nonlinear functions using a Levenberg-Marquardt algorithm. For more details about this function,
please refer to the Scilab online help.
3.3 Optimization routines
The Scilab lsqrsolve function is based on the routines lmdif and lmder of the Minpack library
(Argonne National Laboratory).
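As an illustration, the following sketch fits a two-parameter exponential model with lsqrsolve ; the residual function, which must return the m residuals, and the synthetic data are assumptions made for the example.

function e = myresiduals(x, m)
  t = (1:m)';
  // residuals between the model x(1)*exp(-x(2)*t) and synthetic data
  e = x(1)*exp(-x(2)*t) - 2*exp(-0.5*t);
endfunction
m  = 10;
x0 = [1; 1];
[xsol, res] = lsqrsolve(x0, myresiduals, m);
// xsol is expected to be close to [2; 0.5]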
Chapter 4
Semidefinite programming
4.1 Mathematical point of view
This kind of optimization is the minimization of f(x) = c^T x under the constraint:
$$ F_0 + x_1 F_1 + \ldots + x_m F_m \geq 0 \quad (4.1) $$
or its dual problem, the maximization of −Trace(F_0 Z) under the constraints:
$$ \mathrm{Trace}(F_i Z) = c_i, \quad i = 1, \ldots, m \quad (4.2) $$
$$ Z \geq 0 \quad (4.3) $$
4.2 Scilab function
The Scilab function called semidef is designed for this kind of optimization problem. For more
details about this function, please refer to the Scilab online help.
4.3 Optimization routines
The Scilab semidef function is based on a routine by L. Vandenberghe and Stephen Boyd.
Chapter 5
Genetic algorithms
5.1 Introduction
Genetic algorithms are search algorithms based on the mechanics of natural selection and natural
genetics [17, 28]. Genetic algorithms have been introduced in Scilab v5 thanks to a work by Yann
Collette [9]. The solver is made of Scilab macros, which enables a high-level programming model
for this optimization solver.
The problems solved by the current genetic algorithms in Scilab are the following :
• minimization of a cost function with bound constraints,
• multi-objective non linear minimization with bound constraints.
Genetic algorithms are different from more normal optimization and search procedures in four
ways :
• GAs work with a coding of the parameter set, not the parameters themselves,
• GAs search from a population of points, not a single point,
• GAs use payoff (objective function) information, not derivatives or other auxiliary knowledge,
• GAs use probabilistic transition rules, not deterministic rules.
A simple genetic algorithm that yields good results in many practical problems is composed
of three operators [17] :
• reproduction,
• cross-over,
• mutation.
Many articles on this subject have been collected by Carlos A. Coello Coello on his website
[7]. A brief introduction to GAs is given in [43].
The GA macros are based on the ”parameters” Scilab module for the management of the
(many) optional parameters.
5.2 Example
In the current section, we give an example of the use of the GA algorithms.
The following is the definition of the Rastrigin function.
function Res = min_bd_rastrigin()
Res = [-1 -1]’;
endfunction
function Res = max_bd_rastrigin()
Res = [1 1]’;
endfunction
function Res = opti_rastrigin()
Res = [0 0]’;
endfunction
function y = rastrigin(x)
y = x(1)^2+x(2)^2-cos(12*x(1))-cos(18*x(2));
endfunction
This cost function is then defined with the generic name ”f”. Other algorithmic parameters,
such as the size of the population, are defined in the following sample Scilab script.
func = ’rastrigin’;
deff(’y=f(x)’,’y = ’+func+’(x)’);
PopSize = 100;
Proba_cross = 0.7;
Proba_mut = 0.1;
NbGen = 10;
NbCouples = 110;
Log = %T;
nb_disp = 10;
pressure = 0.05;
Genetic Algorithms require many settings, which are cleanly handled by the ”parameters”
module. This module provides the init_param function, which returns a new, empty set of
parameters. The add_param function allows to set individual named parameters, which are
configured with key-value pairs.
ga_params = init_param();
// Parameters to adapt to the shape of the optimization problem
ga_params = add_param(ga_params, 'minbound', eval('min_bd_'+func+'()'));
ga_params = add_param(ga_params, 'maxbound', eval('max_bd_'+func+'()'));
ga_params = add_param(ga_params, 'dimension', 2);
ga_params = add_param(ga_params, 'beta', 0);
ga_params = add_param(ga_params, 'delta', 0.1);
// Parameters to fine tune the Genetic algorithm.
ga_params = add_param(ga_params, 'init_func', init_ga_default);
ga_params = add_param(ga_params, 'crossover_func', crossover_ga_default);
ga_params = add_param(ga_params, 'mutation_func', mutation_ga_default);
ga_params = add_param(ga_params, 'codage_func', coding_ga_identity);
ga_params = add_param(ga_params, 'selection_func', selection_ga_elitist);
ga_params = add_param(ga_params, 'nb_couples', NbCouples);
ga_params = add_param(ga_params, 'pressure', pressure);
The optim_ga function searches for a population solution of a single-objective problem with bound
constraints.
[pop_opt, fobj_pop_opt, pop_init, fobj_pop_init] = ...
    optim_ga(f, PopSize, NbGen, Proba_mut, Proba_cross, Log, ga_params);
The following are the messages which are displayed in the Scilab console :
optim_ga: Initialization of the population
optim_ga: iteration 1 / 10 - min / max value found = -1.682413 / 0.081632
optim_ga: iteration 2 / 10 - min / max value found = -1.984184 / -0.853613
optim_ga: iteration 3 / 10 - min / max value found = -1.984184 / -1.314217
optim_ga: iteration 4 / 10 - min / max value found = -1.984543 / -1.513463
optim_ga: iteration 5 / 10 - min / max value found = -1.998183 / -1.691332
optim_ga: iteration 6 / 10 - min / max value found = -1.999551 / -1.871632
optim_ga: iteration 7 / 10 - min / max value found = -1.999977 / -1.980356
optim_ga: iteration 8 / 10 - min / max value found = -1.999979 / -1.994628
optim_ga: iteration 9 / 10 - min / max value found = -1.999989 / -1.998123
optim_ga: iteration 10 / 10 - min / max value found = -1.999989 / -1.999534
The initial and final populations for this simulation are shown in figure 5.1.
The following script is a loop over the optimum individuals of the population.
printf('Genetic Algorithm: %d points from pop_opt\n', nb_disp);
for i=1:nb_disp
  printf('Individual %d: x(1) = %f x(2) = %f -> f = %f\n', ...
      i, pop_opt(i)(1), pop_opt(i)(2), fobj_pop_opt(i));
end
The previous script makes the following lines appear in the Scilab console.
Individual 1: x(1) = -0.000101 x(2) = 0.000252 -> f = -1.999989
Individual 2: x(1) = -0.000118 x(2) = 0.000268 -> f = -1.999987
Individual 3: x(1) = 0.000034 x(2) = -0.000335 -> f = -1.999982
Individual 4: x(1) = -0.000497 x(2) = -0.000136 -> f = -1.999979
Individual 5: x(1) = 0.000215 x(2) = -0.000351 -> f = -1.999977
Individual 6: x(1) = -0.000519 x(2) = -0.000197 -> f = -1.999974
Individual 7: x(1) = 0.000188 x(2) = -0.000409 -> f = -1.999970
Individual 8: x(1) = -0.000193 x(2) = -0.000427 -> f = -1.999968
Individual 9: x(1) = 0.000558 x(2) = 0.000260 -> f = -1.999966
Individual 10: x(1) = 0.000235 x(2) = -0.000442 -> f = -1.999964
5.3 Support functions
In this section, we analyze the GA services to configure a GA computation.
Figure 5.1: Optimum of the Rastrigin function – Initial population is in red, Optimum population
is accumulated on the blue dot
5.3.1 Coding
The following is the list of coding functions available in Scilab’s GA :
• coding_ga_binary : A function which performs conversion between binary and continuous
representation
• coding_ga_identity : A ”no-operation” conversion function
The user may configure the GA parameters so that the algorithm uses a customized coding
function.
5.3.2 Cross-over
The crossover function is used when mates have been computed, based on the Wheel algorithm :
the crossover algorithm is a loop over the couples, which modifies both elements of each couple.
The following is the list of crossover functions available in Scilab :
• crossover_ga_default : A crossover function for continuous variable functions.
• crossover_ga_binary : A crossover function for binary code
5.3.3 Selection
The selection function is used in the loop over the generations, when the new population is
computed by processing a selection over the individuals.
The following is the list of selection functions available in Scilab :
• selection_ga_random : A function which performs a random selection of individuals. We
select pop_size individuals in the set of parents and children individuals at random.
• selection_ga_elitist : An ’elitist’ selection function. We select the best individuals in
the set of parents and children individuals.
5.3.4 Initialization
The initialization function returns a population as a list made of ”pop_size” individuals. The Scilab
macro init_ga_default computes this population by performing a randomized discretization of
the domain defined by the bounds as minimum and maximum arrays. This randomization is
based on the Scilab primitive rand.
5.4 Solvers
In this section, we analyze the 4 GA solvers which are available in Scilab :
• optim_ga : flexible genetic algorithm
• optim_moga : multi-objective genetic algorithm
• optim_nsga : multi-objective Niched Sharing Genetic Algorithm
• optim_nsga2 : multi-objective Niched Sharing Genetic Algorithm version 2
While optim_ga is designed for one objective, the 3 other solvers are designed for multi-
objective optimization.
5.4.1 optim ga
The Scilab macro optim_ga implements a Genetic Algorithm to find the solution of an optimiza-
tion problem with one objective function and bound constraints.
The following is an overview of the steps in the GA algorithm.
• processing of input arguments
In the case where the input cost function is a list, one defines the ”hidden” function _ga_f
which computes the cost function. If the input cost function is a regular Scilab function,
the ”hidden” function _ga_f simply encapsulates the input function.
• initialization
One computes the initial population with the init_func callback function (the default value
for init_func is init_ga_default)
• coding
One encodes the initial population with the codage_func callback function (default : coding_ga_identity)
• evolutionary algorithm as a loop over the generations
• decoding
One decodes the optimum population back to the original variable system
The loop over the generation is made of the following steps.
• reproduction : two lists of children populations are computed, based on a randomized Wheel,
• crossover : the two populations are processed through the crossover_func callback function
(default : crossover_ga_default)
• mutation : the two populations are processed through the mutation_func callback function
(default : mutation_ga_default)
• computation of cost functions : the _ga_f function is called to compute the fitness for all
individuals of the two populations
• selection : the new generation is computed by processing the two populations through the
selection_func callback function (default : selection_ga_elitist)
5.4.2 optim moga, pareto filter
The optim_moga function is a multi-objective genetic algorithm. The method is based on [15].
The function pareto_filter extracts non-dominated solutions from a set.
5.4.3 optim nsga
The optim_nsga function is a multi-objective Niched Sharing Genetic Algorithm. The method is
based on [37].
5.4.4 optim nsga2
The function optim_nsga2 is a multi-objective Niched Sharing Genetic Algorithm. The method
is based on [14].
Chapter 6
Simulated Annealing
In this document, we describe the Simulated Annealing optimization method, a new feature
available in Scilab v5.
6.1 Introduction
Simulated annealing (SA) is a generic probabilistic meta-algorithm for the global optimization
problem, namely locating a good approximation to the global optimum of a given function in a
large search space. It is often used when the search space is discrete (e.g., all tours that visit a
given set of cities) [42].
Simulated annealing has been introduced in Scilab v5 thanks to the work by Yann Collette
[9].
The current Simulated Annealing solver aims at finding the solution of
$$ \min_x f(x) $$
with bounds constraints and with f : R^n → R the cost function.
Reference books on the subject are [24, 25, 13].
6.2 Overview
The solver is made of Scilab macros, which enables a high-level programming model for this opti-
mization solver. The SA macros are based on the ”parameters” Scilab module for the management
of the (many) optional parameters.
To use the SA algorithm, one must perform the following steps :
• configure the parameters with calls to ”init_param” and ”add_param”, especially :
– the neighbor function,
– the acceptance function,
– the temperature law,
• compute an initial temperature with a call to ”compute_initial_temp”,
• find an optimum by using the ”optim_sa” solver.
6.3 Example
The following example is extracted from the SA examples. The Rastrigin function is used as an
example of a dimension 2 problem because it has many local optima but only one global optimum.
//
// Rastrigin function
//
function Res = min_bd_rastrigin()
  Res = [-1 -1]';
endfunction
function Res = max_bd_rastrigin()
  Res = [1 1]';
endfunction
function Res = opti_rastrigin()
  Res = [0 0]';
endfunction
function y = rastrigin(x)
  y = x(1)^2+x(2)^2-cos(12*x(1))-cos(18*x(2));
endfunction
//
// Set parameters
//
func = 'rastrigin';
Proba_start = 0.8;
It_intern = 1000;
It_extern = 30;
It_Pre = 100;
Min = eval('min_bd_'+func+'()');
Max = eval('max_bd_'+func+'()');
x0 = (Max - Min).*rand(size(Min,1),size(Min,2)) + Min;
deff('y=f(x)','y='+func+'(x)');
//
// Simulated Annealing with default parameters
//
printf('SA: geometrical decrease temperature law\n');

sa_params = init_param();
sa_params = add_param(sa_params,'min_delta',-0.1*(Max-Min));
sa_params = add_param(sa_params,'max_delta',0.1*(Max-Min));
sa_params = add_param(sa_params,'neigh_func',neigh_func_default);
sa_params = add_param(sa_params,'accept_func',accept_func_default);
sa_params = add_param(sa_params,'temp_law',temp_law_default);
sa_params = add_param(sa_params,'min_bound',Min);
sa_params = add_param(sa_params,'max_bound',Max);

T0 = compute_initial_temp(x0, f, Proba_start, It_Pre, sa_params);
printf('Initial temperature T0 = %f\n', T0);

[x_opt, f_opt, sa_mean_list, sa_var_list, temp_list] = ...
    optim_sa(x0, f, It_extern, It_intern, T0, Log = %T, sa_params);

printf('optimal solution :\n'); disp(x_opt);
printf('value of the objective function = %f\n', f_opt);

scf();
drawlater;
subplot(2,1,1);
xtitle('Geometrical annealing','Iteration','Mean / Variance');
t = 1:length(sa_mean_list);
plot(t, sa_mean_list,'r', t, sa_var_list,'g');
legend(['Mean','Variance']);
subplot(2,1,2);
xtitle('Temperature evolution','Iteration','Temperature');
for i=1:length(t)-1
  plot([t(i) t(i+1)], [temp_list(i) temp_list(i)],'k-');
end
drawnow;
After some time, the following messages appear in the Scilab console.
optimal solution:
- 0.0006975
- 0.0000935
value of the objective function = -1.999963
Figure 6.1 presents the evolution of the mean, the variance and the temperature depending on the
iteration.
6.4 Neighbor functions
In the simulated annealing algorithm, a neighbour function is used in order to explore the domain
[43].
The prototype of a neighborhood function is the following :
function x_neigh = neigh_func_default(x_current, T, param)
where:
• x_current represents the current point,
• T represents the current temperature,
• param is a list of parameters.
The following is a list of the neighbour functions available in the SA context (a sketch of a user-defined neighbour function is given after the list) :
• neigh_func_default : SA function which computes a neighbor of a given point. For
example, for a continuous vector, a neighbor will be produced by adding some noise to each
component of the vector. For a binary string, a neighbor will be produced by changing one
bit from 0 to 1 or from 1 to 0.
Figure 6.1: Convergence of the simulated annealing algorithm
• neigh_func_csa : The classical neighborhood relationship for simulated annealing.
The neighbors distribution is a Gaussian distribution which is more and more peaked as the
temperature decreases.
• neigh_func_fsa : The Fast Simulated Annealing neighborhood relationship. The corre-
sponding distribution is a Cauchy distribution which is more and more peaked as the tem-
perature decreases.
• neigh_func_vfsa : The Very Fast Simulated Annealing neighborhood relationship. This
distribution is more and more peaked as the temperature decreases.
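As an illustration, the following is a minimal sketch, not part of the SA module itself, of a user-defined neighbour function which follows the prototype above: it adds to each component of the current point a uniform perturbation whose amplitude shrinks with the temperature. Such a function can then be registered with add_param(sa_params,'neigh_func',my_neigh_func), as in the example of section 6.3.

function x_neigh = my_neigh_func(x_current, T, param)
  // uniform perturbation in [-delta, +delta], scaled by the current temperature
  delta   = 0.1 * T;
  noise   = (2 * rand(x_current) - 1) * delta;
  x_neigh = x_current + noise;
endfunction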
6.5 Acceptance functions
There exist several kinds of simulated annealing optimization methods:
• the Fast Simulated Annealing,
• the simulated annealing based on the Metropolis-Hastings acceptance function,
• etc.
To implement these various simulated annealing optimization methods, you only need to
change the acceptance function. For common optimization, you need not change the default
acceptance function.
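For instance, a user-defined acceptance function might look like the following minimal sketch; the calling sequence, taking the current objective value, the neighbour objective value, the temperature and the parameter list, is an assumption mirroring accept_func_default.

function accept = my_accept_func(f_current, f_neigh, T, param)
  // Metropolis-like rule: always accept an improvement, otherwise accept a
  // degradation with a probability which decreases with the temperature
  level  = exp(-(f_neigh - f_current)/T);
  accept = (rand(1,1) <= level);
endfunction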
The following is a list of the acceptance functions available in the Scilab SA module :
• accept_func_default : is the default acceptance function, based on the exponential func-
tion

level = exp( −(F_neigh − F_current) / T )
• accept_func_vfsa : is the Very Fast Simulated Annealing function, defined by :

Level = 1 / ( 1 + exp( −(F_current − F_neigh) / T ) )
6.6 Temperature laws
In the simulated annealing algorithm, a temperature law is used in a statistical criterion for the
update of the optimum [43]. If the new (neighbor) point improves the current optimum, the
update is done with the new point replacing the old optimum. If not, the update may still be
processed, provided that a statistical criterion is satisfied. The statistical law decreases while the
iterations are processed.
There are 5 temperature laws available in the SA context :
• temp_law_default : A SA function which computes the temperature of the next tempera-
ture stage
• temp_law_csa : The classical temperature decrease law, the one for which the convergence
of the simulated annealing has been proven
• temp_law_fsa : The Szu and Hartley Fast simulated annealing
• temp_law_huang : The Huang temperature decrease law for the simulated annealing
• temp_law_vfsa : This function implements the Very Fast Simulated Annealing from L.
Ingber
6.7 optim sa
The optim_sa macro implements the simulated annealing solver. It allows one to find the solution of
a minimization problem with bound constraints.
It is based on an iterative update of two points :
• the current point is updated by taking into account the neighbour function and the accep-
tance criterion,
• the best point is the point which achieved the minimum of the objective function over the
iterations.
While the current point is used internally to explore the domain, only the best point is returned
as the algorithm output.
The algorithm is based on the following steps, which include a main, external loop over the
temperature decreases, and an internal loop.
• processing of input arguments,
• initialization,
• loop over the number of temperature decreases.
For each iteration over the temperature decreases, the following steps are processed.
• loop over internal iterations, with constant temperature,
• if history is required by the user, store the temperature, the x iterates and the values of f,
• update the temperature with the temperature law.
The internal loop allows the algorithm to explore the domain and is based on the neighbour function. It is
made of the following steps (a schematic sketch of the two nested loops is given after the list).
• compute a neighbour of the current point,
• compute the objective function for that neighbour,
• if the objective decreases or if the acceptance criterion is true, then overwrite the current
point with the neighbour,
• if the cost of the best point is greater than the cost of the current point, overwrite the best
point with the current point.
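The following is a schematic sketch of these two nested loops; it is not the actual optim_sa source code, and it is specialized to a simple quadratic cost, a Gaussian neighbour, the exponential acceptance rule and a geometrical temperature law.

function y = quad_cost(x)
  y = sum(x.^2);
endfunction

x_current = [2; -3];  f_current = quad_cost(x_current);
x_best = x_current;   f_best = f_current;
T = 10;  It_extern = 30;  It_intern = 100;

for i = 1:It_extern                                  // loop over the temperature decreases
  for j = 1:It_intern                                // exploration at constant temperature
    x_neigh = x_current + 0.1 * rand(2, 1, "normal");          // neighbour of the current point
    f_neigh = quad_cost(x_neigh);
    accept  = (rand(1,1) <= exp(-(f_neigh - f_current)/T));    // acceptance criterion
    if (f_neigh <= f_current) | accept then
      x_current = x_neigh;  f_current = f_neigh;
    end
    if f_current < f_best then
      x_best = x_current;  f_best = f_current;       // keep the best point seen so far
    end
  end
  T = 0.9 * T;                                       // geometrical temperature decrease
end
disp(x_best);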
Chapter 7
LMITOOL: a Package for LMI
Optimization in Scilab
R. Nikoukhah Ramine.Nikoukhah@inria.fr
F. Delebecque Francois.Delebecque@inria.fr
L. El Ghaoui ENSTA, 32, Bvd. Victor, 75739 Paris, France. Internet: elghaoui@ensta.fr.
Research supported in part by DRET under contract 92017-BC14
This chapter describes a user-friendly Scilab package, and in particular its two main functions
lmisolver and lmitool, for solving Linear Matrix Inequality problems. This package uses the Scilab
function semidef, an interface to the program Semidefinite Programming SP (Copyright © 1994
by Lieven Vandenberghe and Stephen Boyd) distributed with Scilab.
7.1 Purpose
Many problems in systems and control can be formulated as follows (see [6]):
Σ :   minimize    f(X1, . . . , XM)
      subject to  Gi(X1, . . . , XM) = 0,   i = 1, 2, . . . , p,
                  Hj(X1, . . . , XM) ≥ 0,   j = 1, 2, . . . , q,
where
• X1, . . . , XM are unknown real matrices, referred to as the unknown matrices,
• f is a real linear scalar function of the entries of the unknown matrices X1, . . . , XM; it is
referred to as the objective function,
• Gi's are real matrices with entries which are affine functions of the entries of the unknown
matrices X1, . . . , XM; they are referred to as “Linear Matrix Equality” (LME) functions,
• Hj's are real symmetric matrices with entries which are affine functions of the entries of the
unknown matrices X1, . . . , XM; they are referred to as “Linear Matrix Inequality” (LMI)
functions. (In this report, V ≥ 0 stands for V positive semi-definite unless stated
otherwise.)
The purpose of LMITOOL is to solve problem Σ in a user-friendly manner in Scilab, using the code
SP [23]. This code is intended for small and medium-sized problems (say, up to a few hundred
variables).
7.2 Function lmisolver
LMITOOL is built around the Scilab function lmisolver. This function computes the solution
X1, . . . , XM of problem Σ, given the functions f, Gi and Hj. To solve Σ, the user must provide an
evaluation function which “evaluates” f, Gi and Hj as a function of the unknown matrices, as well
as an initial guess on the values of the unknown matrices. The user can either invoke lmisolver
directly, by providing the necessary information in a special format, or can use the interactive
function lmitool described in Section 7.3.
7.2.1 Syntax
[XLISTF[,OPT]] = lmisolver(XLIST0,EVALFUNC[,options])
where
• XLIST0: a list structure including matrices and/or lists of matrices. It contains the initial guess
on the values of the unknown matrices. In general, the ith element of XLIST0 is the initial
guess on the value of the unknown matrix Xi. In some cases however it is more convenient
to define one or more elements of XLIST0 to be lists (of unknown matrices) themselves. This
is a useful feature when the number of unknown matrices is not fixed a priori (see the Example
of Section 7.2.2).
The values of the matrices in XLIST0, if compatible with the LME functions, are used as an
initial condition for the optimization algorithm; they are ignored otherwise. The size and
structure of XLIST0 are used to set up the problem and determine the size and structure of
the output XLISTF.
• EVALFUNC: a Scilab function called evaluation function (supplied by the user) which evalu-
ates the LME, LMI and objective functions, given the values of the unknown matrices. The
syntax is:
[LME,LMI,OBJ]=EVALFUNC(XLIST)
where
– XLIST: a list, identical in size and structure to XLIST0.
– LME: a list of matrices containing the values of the LME functions Gi's for the X values in
XLIST. LME can be a matrix in case there is only one LME function to be evaluated
(instead of a list containing this matrix as unique element). It can also be a list of a
mixture of matrices and lists which in turn contain values of LME's, and so on.
– LMI: a list of matrices containing the values of the LMI functions Hj's for the X values
in XLIST. LMI can also be a matrix (in case there is only one LMI function to be
evaluated). It can also be a list of a mixture of matrices and lists which in turn contain
values of LMI's, and so on.
– OBJ: a scalar equal to the value of the objective function f for X values in XLIST.
If the Σ problem has no equality constraints then LME should be []. Similarly for LMI and
OBJ.
• options: a 5×1 vector containing the optimization parameters Mbound, abstol, nu, maxiters
and reltol; see the manual page of semidef for details (Mbound is a multiplicative coefficient
for M). This argument is optional; if omitted, default parameters are used.
• XLISTF: a list, identical in size and structure to XLIST0 containing the solution of the
problem (optimal values of the unknown matrices).
• OPT: a scalar corresponding to the optimal value of the minimization problem Σ.
7.2.2 Examples
State-feedback with control saturation constraint
Consider the linear system
ẋ = Ax + Bu

where A is an n × n matrix and B an n × n_u matrix. There exists a stabilizing state feedback K such
that for every initial condition x(0) with ‖x(0)‖ ≤ 1, the resulting control satisfies ‖u(t)‖ ≤ u_max for all
t ≥ 0, if and only if there exist an n × n matrix Q and an n_u × n matrix Y satisfying the equality
constraint

Q − Q^T = 0

and the inequality constraints

Q ≥ 0
−AQ − QA^T − BY − Y^T B^T > 0

[ u_max^2 I    Y ]
[ Y^T          Q ]  ≥ 0

in which case one such K can be constructed as K = Y Q^{-1}.
To solve this problem using lmisolver, we first need to construct the evaluation function.
function [LME,LMI,OBJ]=sf_sat_eval(XLIST)
[Q,Y]=XLIST(:)
LME=Q-Q'
LMI=list(-A*Q-Q*A'-B*Y-Y'*B',[umax^2*eye(Y*Y'),Y;Y',Q],Q-eye())
OBJ=[]
Note that OBJ=[] indicates that the problem considered is a feasibility problem, i.e., we are only
interested in finding a set of X’s that satisfy LME and LMI functions.
Assuming A, B and umax already exist in the environment, we can call lmisolver, and recon-
struct the solution in Scilab, as follows:
--> Q_init=zeros(A);
--> Y_init=zeros(B');
--> XLIST0=list(Q_init,Y_init);
--> XLIST=lmisolver(XLIST0,sf_sat_eval);
--> [Q,Y]=XLIST(:)
These Scilab commands can of course be encapsulated in a Scilab function, say sf_sat. Then,
to solve this problem, all we need to do is type:
--> [Q,Y]=sf_sat(A,B,umax)
We call sf_sat the solver function for this problem.
Control of jump linear systems
We are given a linear system
ẋ = A(r(t))x + B(r(t))u,

where A is n × n and B is n × n_u. The scalar parameter r(t) is a continuous-time Markov process
taking values in a finite set {1, . . . , N}.
The transition probabilities of the process r are defined by a “transition matrix” Π = (π_ij),
where the π_ij's are the transition probability rates from the i-th mode to the j-th. Such systems,
referred to as “jump linear systems”, can be used to model linear systems subject to failures.
We seek a state-feedback control law such that the resulting closed-loop system is mean-square
stable. That is, for every initial condition x(0), the resulting trajectory of the closed-loop system
satisfies lim_{t→∞} E‖x(t)‖² = 0.
The control law we look for is a mode-dependent linear state-feedback, i.e. it has the form
u(t) = K(r(t))x(t); the K(i)'s are n_u × n matrices (the unknowns of our control problem).
It can be shown that this problem has a solution if and only if there exist n × n matrices
Q(1), . . . , Q(N), and n_u × n matrices Y(1), . . . , Y(N), such that

Q(i) − Q(i)^T = 0,
Tr Q(1) + . . . + Tr Q(N) − 1 = 0,
and

[ Q(i)    Y(i)^T ]
[ Y(i)    I      ]  > 0,

A(i)Q(i) + Q(i)A(i)^T + B(i)Y(i) + Y(i)^T B(i)^T + Σ_{j=1,...,N} π_ji Q(j) > 0,   i = 1, . . . , N.

If such matrices exist, a stabilizing state-feedback is given by K(i) = Y(i)Q(i)^{-1}, i = 1, . . . , N.
In the above problem, the data matrices are A(1), . . . , A(N), B(1), . . . , B(N) and the tran-
sition matrix Π. The unknown matrices are the Q(i)'s (which are symmetric n × n matrices) and
the Y(i)'s (which are n_u × n matrices). In this case, both the number of the data matrices and that
of the unknown matrices are a priori unknown.
The above problem is obviously a Σ problem. In this case, we can let XLIST be a list of two
lists: one representing the Q’s and the other, the Y ’s.
The evaluation function required for invoking lmisolver can be constructed as follows:
function [LME,LMI,OBJ]=jump_sf_eval(XLIST)
[Q,Y]=XLIST(:)
N=size(A); [n,nu]=size(B(1))
LME=list(); LMI1=list(); LMI2=list()
tr=0
for i=1:N
tr=tr+trace(Q(i))
LME(i)=Q(i)-Q(i)'
LMI1(i)=[Q(i),Y(i)';Y(i),eye(nu,nu)]
SUM=zeros(n,n)
for j=1:N
SUM=SUM+PI(j,i)*Q(j)
end
LMI2(i)= A(i)*Q(i)+Q(i)*A(i)'+B(i)*Y(i)+Y(i)'*B(i)'+SUM
end
LMI=list(LMI1,LMI2)
LME(N+1)=tr-1
OBJ=[]
Note that LMI is also a list of lists containing the values of the LMI matrices. This is just a matter
of convenience.
Now, we can solve the problem in Scilab as follows (assuming lists A and B, and matrix PI
have already been defined).
First we should initialize Q and Y.
--> N=size(A); [n,nu]=size(B(1)); Q_init=list(); Y_init=list();
--> for i=1:N, Q_init(i)=zeros(n,n);Y_init(i)=zeros(nu,n);end
Then, we can use lmisolver as follows:
--> XLIST0=list(Q_init,Y_init)
--> XLISTF=lmisolver(XLIST0,jump_sf_eval)
--> [Q,Y]=XLISTF(:);
The above commands can be encapsulated in a solver function, say jump_sf, in which case
we simply need to type:
--> [Q,Y]=jump_sf(A,B,PI)
to obtain the solution.
Descriptor Lyapunov inequalities
In the study of descriptor systems, it is sometimes necessary to find an n × n matrix X (or to
find out that no such matrix exists) satisfying

E^T X = X^T E ≥ 0
A^T X + X^T A + I ≤ 0

where E and A are n × n matrices such that (E, A) is a regular pencil. In this problem, which
clearly is a Σ problem, the LME functions play an important role. The evaluation function can be
written as follows
function [LME,LMI,OBJ]=dscr_lyap_eval(XLIST)
X=XLIST(:)
LME=E'*X-X'*E
LMI=list(-A'*X-X'*A-eye(),E'*X)
OBJ=[]
and the problem can be solved by (assuming E and A are already defined)
--> XLIST0=list(zeros(A))
--> XLISTF=lmisolver(XLIST0,dscr_lyap_eval)
--> X=XLISTF(:)
Mixed H2/H∞ Control
Consider the linear system

ẋ = Ax + B1 w + B2 u
z1 = C1 x + D11 w + D12 u
z2 = C2 x + D22 u

The mixed H2/H∞ control problem consists in finding a stabilizing feedback which yields
‖Tz1w‖∞ < γ and minimizes ‖Tz2w‖2, where Tz1w and Tz2w denote respectively the closed-loop transfer
functions from w to z1 and z2. In [22], it is shown that the solution to this problem can be
expressed as K = L X^{-1} where X and L are obtained from the problem of minimizing Trace(Y)
subject to:

X − X^T = 0,   Y − Y^T = 0,
and

[ AX + B2 L + (AX + B2 L)^T + B1 B1^T       X C1^T + L^T D12^T + B1 D11^T ]
[ (X C1^T + L^T D12^T + B1 D11^T)^T         −γ^2 I + D11 D11^T            ]  < 0

[ Y                      C2 X + D22 L ]
[ (C2 X + D22 L)^T       X            ]  > 0
To solve this problem with lmisolver, we define the evaluation function:
function [LME,LMI,OBJ]=h2hinf_eval(XLIST)
[X,Y,L]=XLIST(:)
LME=list(X-X’,Y-Y’);
LMI=list(-[A*X+B2*L+(A*X+B2*L)'+B1*B1',X*C1'+L'*D12'+B1*D11';...
(X*C1'+L'*D12'+B1*D11')',-gamma^2*eye()+D11*D11'],...
[Y,C2*X+D22*L;(C2*X+D22*L)',X])
OBJ=trace(Y);
and use it as follows:
--> X_init=zeros(A); Y_init=zeros(C2*C2'); L_init=zeros(B2')
--> XLIST0=list(X_init,Y_init,L_init);
--> XLISTF=lmisolver(XLIST0,h2hinf_eval);
--> [X,Y,L]=XLISTF(:)
Descriptor Riccati equations
In Kalman filtering for the descriptor system

E x(k+1) = A x(k) + u(k)
y(k+1) = C x(k+1) + r(k)

where u and r are zero-mean, white Gaussian noise sequences with covariance Q and R respec-
tively, one needs to obtain the positive solution to the descriptor Riccati equation (see [33])

P = − [0 0 I] [ A P A^T + Q, 0, E ; 0, R, C ; E^T, C^T, 0 ]^{-1} [ 0 ; 0 ; I ].

It can be shown that this problem can be formulated as a Σ problem as follows: maximize
Trace(P) under the constraints

P − P^T = 0

and

[ A P A^T + Q    0           E P ]
[ 0              R           C P ]
[ P^T E^T        P^T C^T     P   ]  ≥ 0.
The evaluation function is:
function [LME,LMI,OBJ]=ric_dscr_eval(XLIST)
P=XLIST(:)
LME=P-P'
LMI=[A*P*A'+Q,zeros(A*C'),E*P;zeros(C*A'),R,C*P;P*E',P*C',P]
OBJ=-trace(P)
which can be used as follows (assuming E, A, C, Q and R are defined and have compatible
sizes; note that E and A need not be square).
--> P_init=zeros(A'*A)
--> P=lmisolver(P_init,ric_dscr_eval)
Linear programming with equality constraints
Consider the following classical optimization problem
minimize e^T x
subject to Ax + b ≥ 0,
Cx + d = 0,
where A and C are matrices and e, b and d are vectors with appropriate dimensions. Here the
sign ≥ is to be understood elementwise.
This problem can be formulated in LMITOOL as follows:
function [LME,LMI,OBJ]=linprog_eval(XLIST)
[x]=XLIST(:)
[m,n]=size(A)
LME=C*x+d
LMI=list()
tmp=A*x+b
for i=1:m
LMI(i)=tmp(i)
end
OBJ=e'*x
and solved in Scilab by (assuming A, C, e, b and d and an initial guess x0 exist in the environment):
--> x=lmisolver(x0,linprog_eval)
Sylvester Equation
The problem of finding matrix X satisfying
AX +XB = C
or
AXB = C
where A and B are square matrices (of possibly different sizes) is a well-known problem. We refer
to the first equation as the continuous Sylvester equation and the second, the discrete Sylvester
equation.
These two problems can easily be formulated as Σ problems as follows:
function [LME,LMI,OBJ]=sylvester_eval(XLIST)
[X]=XLIST(:)
if flag=='c' then
LME=A*X+X*B-C
else
LME=A*X*B-C
end
LMI=[]
OBJ=[]
with a solver function such as:
function [X]=sylvester(A,B,C,flag)
[na,ma]=size(A);[nb,mb]=size(B);[nc,mc]=size(C);
if ma<>na|mb<>nb|nc<>na|mc<>nb then error("invalid dimensions");end
XLISTF=lmisolver(zeros(nc,mc),sylvester_eval)
X=XLISTF(:)
Then, to solve the problem, all we need to do is type (assuming A, B and C are defined)
--> X=sylvester(A,B,C,'c')
for the continuous problem and
--> X=sylvester(A,B,C,'d')
for the discrete problem.
7.3 Function LMITOOL
The purpose of LMITOOL is to automate most of the steps required before invoking lmisolver.
In particular, it generates a *.sci file including the solver function and the evaluation function
or at least their skeleton. The solver function is used to define the initial guess and to modify
optimization parameters (if needed).
lmitool can be invoked with zero, one or three arguments.
7.3.1 Non-interactive mode
lmitool can be invoked with three input arguments as follows:
Syntax
txt=lmitool(probname,varlist,datalist)
where
• probname: a string containing the name of the problem,
• varlist: a string containing the names of the unknown matrices (separated by commas if
there are more than one),
• datalist: a string containing the names of the data matrices (separated by commas if there are
more than one),
• txt: a string providing information on what the user should do next.
In this mode, lmitool generates a file in the current directory. The name of this file is obtained
by adding “.sci” to the end of probname. This file is the skeleton of a solver function and the
corresponding evaluation function.
Example
Suppose we want to use lmitool to solve the problem presented in Section 7.2.2. Invoking
-->txt=lmitool('sf_sat','Q,Y','A,B,umax')
yields the output
--> txt =
! To solve your problem, you need to !
! !
!1- edit file /usr/home/DrScilab/sf_sat.sci !
! !
!2- load (and compile) your functions: !
! !
! getf('/usr/home/DrScilab/sf_sat.sci','c') !
! !
!3- Define A,B,umax and call sf_sat function: !
! !
! [Q,Y]=sf_sat(A,B,umax) !
! !
!To check the result, use [LME,LMI,OBJ]=sf_sat_eval(list(Q,Y)) !
and results in the creation of the file '/usr/home/curdir/sf_sat.sci' with the following content:
function [Q,Y]=sf_sat(A,B,umax)
// Generated by lmitool on Tue Feb 07 10:30:35 MET 1995
Mbound = 1e3;
abstol = 1e-10;
nu = 10;
maxiters = 100;
reltol = 1e-10;
options=[Mbound,abstol,nu,maxiters,reltol];
///////////DEFINE INITIAL GUESS BELOW
Q_init=...
Y_init=...
///////////
XLIST0=list(Q_init,Y_init)
XLIST=lmisolver(XLIST0,sf_sat_eval,options)
[Q,Y]=XLIST(:)
/////////////////EVALUATION FUNCTION////////////////////////////
function [LME,LMI,OBJ]=sf_sat_eval(XLIST)
[Q,Y]=XLIST(:)
/////////////////DEFINE LME, LMI and OBJ BELOW
LME=...
LMI=...
OBJ=...
It is easy to see how a small amount of editing can do the rest!
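For the sf_sat problem of Section 7.2.2, for instance, this editing amounts to filling in the initial guess and the three defining expressions with the values already given in that section (only the modified lines are shown in the sketch below):

// in the "DEFINE INITIAL GUESS BELOW" part of sf_sat:
Q_init=zeros(A);
Y_init=zeros(B');
// in the "DEFINE LME, LMI and OBJ BELOW" part of sf_sat_eval:
LME=Q-Q'
LMI=list(-A*Q-Q*A'-B*Y-Y'*B',[umax^2*eye(Y*Y'),Y;Y',Q],Q-eye())
OBJ=[]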
7.3.2 Interactive mode
lmitool can be invoked with zero or one input argument as follows:
Syntax
txt=lmitool()
txt=lmitool(file)
where
• file: is a string giving the name of an existing “.sci” file generated by lmitool.
In this mode, lmitool is fully interactive. Using a succession of dialogue boxes, the user can com-
pletely define the problem. This mode is very easy to use and its operation is completely self-
explanatory. Invoking lmitool with one argument allows the user to start off with an existing
file. This mode is useful for modifying existing files or when the new problem is not too
different from a problem already treated by lmitool.
Example
Consider the following estimation problem
y = Hx + V w

where x is the unknown to be estimated, y is known, w is a unit-variance zero-mean Gaussian vector,
and

H ∈ Co{H(1), . . . , H(N)},   V ∈ Co{V(1), . . . , V(N)}

where Co denotes the convex hull and H(i) and V(i), i = 1, . . . , N, are given matrices.
The objective is to find L such that the estimate

x̂ = Ly

is unbiased and the worst case estimation error variance E(‖x − x̂‖²) is minimized.
It can be shown that this problem can be formulated as a Σ problem as follows: minimize γ
subject to

I − LH(i) = 0,   i = 1, . . . , N,
X(i) − X(i)^T = 0,   i = 1, . . . , N,

and

[ I             (L(i)V(i))^T ]
[ L(i)V(i)      X(i)         ]  ≥ 0,   i = 1, . . . , N,

γ − Trace(X(i)) ≥ 0,   i = 1, . . . , N.
To use lmitool for this problem, we invoke it as follows:
--> lmitool()
This results in an interactive session which is partly illustrated in the following figures.
7.4 How lmisolver works
The function lmisolver works essentially in four steps:
1. Initial set-up. The sizes and structure of the initial guess are used to set up the problem,
and in particular the size of the unknown vector.
2. Elimination of equality constraints. Making repeated calls to the evaluation function,
lmisolver generates a canonical representation of the form

minimize c̃^T z
subject to F̃0 + z1 F̃1 + · · · + zm̃ F̃m̃ ≥ 0,   Az + b = 0,

where z contains the coefficients of all matrix variables. This step makes extensive use of sparse
matrices to speed up the computation and reduce the memory requirements.
3. Elimination of variables. Then, lmisolver eliminates the redundant variables. The equality
constraints are eliminated by computing the null space N of A and a solution z0 (if any) of
Az + b = 0. At this stage, all solutions of the equality constraints are parametrized by

z = N x + z0,

where x is a vector containing the independent variables. The computation of N and z0 is done
using the sparse LU functions of Scilab.
Once the equality constraints are eliminated, the problem is reformulated as

minimize c^T x
subject to F0 + x1 F1 + · · · + xm Fm ≥ 0,

where c is a vector, F0, . . . , Fm are symmetric matrices, and x contains the indepen-
dent elements in the matrix variables X1, . . . , XM. (If the Fi's are dependent, a column
compression is performed.)
Figure 7.1: This window must be edited to define problem name and the name of variables used.
Figure 7.2: For the example at hand the result of the editing should look something like this.
4. Optimization. Finally, lmisolver makes a call to the function semidef (an interface to SP
[23]). This phase is itself divided into a feasibility phase and a minimization phase (only
if the linear objective function is not empty). The feasibility phase is avoided if the initial
guess is found to be feasible.
The function semidef is called with the optimization parameters abstol, nu, maxiters,
reltol. The parameter M is set above the value
Mbnd*max(sum(abs([F0 ... Fm])))
For details about the optimization phase and the meaning of the above optimization pa-
rameters, see the manual page of semidef.
7.5 Other versions
LMITOOL is also available for Matlab. The Matlab version can be obtained by anonymous ftp from
ftp.ensta.fr under /pub/elghaoui/lmitool.
Figure 7.3: This is the skeleton of the solver function and the evaluation function generated by
LMITOOL using the names defined previously.
Figure 7.4: After editing, we obtain.
Figure 7.5: A file is proposed in which the solver and evaluation functions are to be saved. You
can modify it if you want.
Chapter 8
Optimization data files
This section presents the optimization data files which can be used to configure a specific opti-
mization problem in Scilab. The following is a (non-exhaustive) list of ASCII file formats often
used in optimization software :
• SIF : Standard Input Format [1, 30],
• GAMS : General Algebraic Modeling System [40, 16]
• AMPL : A Mathematical Programming Language [10, 39]
• MPS : Mathematical Programming System [27, 41]
but other file formats appeared in recent years, such as the XML-based file format OSiL [35, 8, 36].
The following sections describe Scilab tools to manage optimization data files.
8.1 MPS files and the Quapro toolbox
The Quapro toolbox implements the readmps function, which reads a file containing the description
of an LP problem given in MPS format and returns a tlist describing the optimization problem.
It is an interface to the program rdmps1.f of hopdm (J. Gondzio). For a description of the
variables, see the file rdmps1.f. The MPS format is a standard ASCII medium for LP codes. The MPS
format is described in more detail in Murtagh's book [30].
8.2 SIF files and the CUTEr toolbox
The SIF file format can be processed with the CUTEr Scilab toolbox. Given a SIF [1] file, the func-
tion sifdecode generates the associated Fortran routines RANGE.f, EXTER.f, ELFUN.f, GROUP.f
and, if automatic differentiation is required, ELFUND.f, GROUPD.f and EXTERA.f. An associated
data file named OUTSDIF.d and an output messages file OUTMESS are also generated. All
these files are created in the directory whose path is given in Pathout. The sifdecode function
is based on the Sifdec code [20]. More precisely, it results from an interface to the SDLANC Fortran
procedure.
Chapter 9
Scilab Optimization Toolboxes
Some Scilab toolboxes are designed to solve optimization problems. In this chapter, we begin by
presenting the Quapro toolbox, which allows one to solve linear and quadratic problems. Then we
outline other main optimization toolboxes.
9.1 Quapro
The Quapro toolbox was formerly a Scilab built-in optimization tool. It has been transformed into
a toolbox for license reasons.
9.1.1 Linear optimization
Mathematical point of view
This kind of optimization is the minimization of function f(x) with
f(x) = p^T x
under:
• no constraints
• inequality constraints (9.1)
• or inequality constraints and bound constraints ((9.1) & (9.2))
• or inequality constraints, bound constraints and equality constraints ((9.1) & (9.2) & (9.3)).
C ∗ x ≤ b (9.1)
ci ≤ x ≤ cs (9.2)
C_e ∗ x = b_e (9.3)
Scilab function
The Scilab function called linpro is designed for linear optimization programming. For more details
about this function, please refer to the Scilab online help. This function and its associated routines
have been written by Cecilia Pola Mendez and Eduardo Casas Renteria from the University of
Cantabria. Please note that this function cannot solve problems based on sparse matrices. For
this kind of problem, you can use a Scilab toolbox called LIPSOL, which gives an equivalent of
linpro for sparse matrices. LIPSOL is available on the Scilab web site.
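As an illustration, here is a minimal sketch of a call to linpro on a small dense problem. The calling sequence [x, lagr, f] = linpro(p, C, b, ci, cs, me), where the first me rows of C are equality constraints, is an assumption to be checked against the Quapro online help.

// Minimize -x1 - 2*x2 subject to x1 + x2 <= 4, x1 + 3*x2 <= 6 and 0 <= x <= 10.
p  = [-1; -2];
C  = [1 1; 1 3];
b  = [4; 6];
ci = [0; 0];
cs = [10; 10];
me = 0;                          // number of equality constraints
[x, lagr, f] = linpro(p, C, b, ci, cs, me);
disp(x);                         // the minimizer is x = [3; 1], with f = -5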
Optimization routines
Scilab linpro function is based on:
• some Fortran routines written by the authors of linpro
• some Fortran Blas routines
• some Fortran Scilab routines
• some Fortran Lapack routines
9.1.2 Linear quadratic optimization
Mathematical point of view
This kind of optimization is the minimization of function f(x) with
f(x) = (1/2) x^T Q x + p^T x
under:
• no constraints
• inequality constraints (9.1)
• or inequality constraints and bound constraints ((9.1) & (9.2))
• or inequality constraints, bound constraints and equality constraints ((9.1) & (9.2) & (9.3)).
Scilab function
Scilab functions called quapro (whatever Q is) and qld (when Q is positive definite) are designed
for linear optimization programming. For more details about these functions, please refer to
Scilab online help for quapro and Scilab online help for qld qld function and associated routine
have been written by K. Schittkowski from the University of Bayreuth, A.L. Tits and J.L. Zhou
from the University of Maryland. quapro function and associated routines have been written
by Cecilia Pola Mendez and Eduardo Casas Renteria from the University of Cantabria. Both
functions can not solve problems based on sparse matrices.
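As an illustration, here is a minimal sketch of a call to qld on a small dense problem. The calling sequence [x, lagr] = qld(Q, p, C, b, ci, cs, me), where the first me rows of C are equality constraints, is an assumption to be checked against the online help.

// Minimize 1/2*x'*Q*x + p'*x with Q = I and p = [-1;-1],
// subject to x1 + x2 <= 1 and 0 <= x <= 1.
Q  = eye(2, 2);
p  = [-1; -1];
C  = [1 1];
b  = 1;
ci = [0; 0];
cs = [1; 1];
me = 0;                          // number of equality constraints
[x, lagr] = qld(Q, p, C, b, ci, cs, me);
disp(x);                         // the minimizer is x = [0.5; 0.5]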
Optimization routines
Scilab quapro function is based on:
• some Fortran routines written by the authors of linpro
• some Fortran Blas routines
• some Fortran Scilab routines
• some Fortran Lapack routines
9.2 CUTEr
CUTEr is a versatile testing environment for optimization and linear algebra solvers [31]. This
toolbox is a Scilab port, by Serge Steer and Bruno Durand, of the original Matlab toolbox.
A typical use starts from problem selection using the Scilab function sifselect. This gives
a vector of problem names corresponding to the selection criteria [32]. The available problems are
located in the sif directory.
The sifbuild function can then be used to generate the Fortran codes associated to a given
problem, to compile them and to dynamically link them to Scilab. This creates a set of problem
relative functions, for example ufn or ugr. These functions can be called to compute the objective
function or its gradient at a given point.
The sifoptim function automatically applies the optim function to a selected problem.
A Fortran compiler is mandatory to build problems.
This toolbox contains the following parts.
• Problem database
A set of testing problems coded in ”Standard Input Format” (SIF) is included in the sif/
sub-directory. This set comes from www.numerical.rl.ac.uk/cute/mastsif.html. The Scilab
function sifselect can be used to select some of these problems according to objective
function properties, constraints properties and regularity properties.
• SIF format decoder
The Scilab function sifdecode can be used to generate the Fortran codes associated to a
given problem, while the Scilab function buildprob compiles and dynamically links this
Fortran code with Scilab.
• problem relative functions
The execution of the function buildprob adds a set of functions to Scilab. The first one
is usetup for unconstrained or bounded problems, or csetup for problems with general
constraints. These functions are to be called before any of the following ones, to initialize the
problem relative data (only one problem can be run at a time). The other functions allow
one to compute the objective, the gradient, the Hessian values, ... of the problem at a given
point (see ufn, ugr, udh, ... for unconstrained or bounded problems, or cfn, cgr, cdh, ...
for problems with general constraints).
• CUTEr and optim The Scilab function optim can be used together with CUTEr using either
the external function ucost or the driver function sifoptim.
The following is a list of references for the CUTEr toolbox :
• CUTEr toolbox on Scilab Toolbox center
• CUTEr website
9.3 The Unconstrained Optimization Problem Toolbox
The Unconstrained Optimization Problem Toolbox provides 35 unconstrained optimization prob-
lems.
The goal of this toolbox is to provide unconstrained optimization problems in order to test
optimization algorithms.
The Moré, Garbow and Hillstrom collection of test functions [29] is widely used in testing
unconstrained optimization software. The code for these problems is available in Fortran from
the netlib software archives.
The toolbox provides the function value, the gradient, the function vector and the Jacobian, and provides
the Hessian matrix for 18 problems. It provides the starting point for each problem, the optimum
function value and the optimum point x for many problems. Additionally, it provides finite
difference routines for the gradient, the Jacobian and the Hessian matrix. The functions are
implemented as Scilab macros: no compiler is required, which is an advantage over the CUTEr
toolbox. Finally, all function values, gradients, Jacobians and Hessians are tested.
This toolbox is available in ATOMS :
http://atoms.scilab.org/toolboxes/uncprb
and is managed on Scilab's Forge :
http://forge.scilab.org/index.php/p/uncprb
To install it, type the following statement in Scilab v5.2 (or better).
atomsInstall('uncprb')
9.4 Other toolboxes
• Interface to CONMIN: an interface to the NASTRAN / NASA CONMIN optimization
program, by Yann Collette. CONMIN can solve a nonlinear objective problem with non-
linear constraints. CONMIN uses a two-step limited memory quasi-Newton-like Conjugate
Gradient method. The CONMIN optimization method is currently used in the optimization part of
NASTRAN (a professional finite element tool).
The CONMIN Fortran program has been written by G. Vanderplaats (1973).
– CONMIN on Scilab Toolbox center
• Differential Evolution: random search of a global minimum, by Helmut Jarausch. This
toolbox is based on the differential evolution algorithm of Rainer Storn.
– Differential Evolution on Scilab Toolbox center
• FSQP Interface: interface for the Feasible Sequential Quadratic Programming library.
This toolbox is designed for non-linear optimization with equality and inequality constraints.
FSQP is a commercial product.
– FSQP on Scilab Toolbox center
– FSQP website
• IPOPT interface: an interface for Ipopt, which is based on an interior point method and
can handle equality and inequality nonlinear constraints. This solver can handle large scale
optimization problems. As open source software, the source code for Ipopt is provided
without charge; you are free to use it, also for commercial purposes. This Scilab-Ipopt
interface was based on the Matlab Mex Interface developed by Claas Michalik and Steinar
Hauan. This version only works on Linux and requires scons and Scilab >= 4.0. It was tested with
gcc 4.0.3. Modifications to the Scilab interface were made by Edson Cordeiro do Valle.
– Ipopt on Scilab Toolbox center
– Ipopt website
• Interface to LIPSOL: sparse linear problems with an interior point method, by H. Rubio Scola.
LIPSOL can minimize a linear objective with linear constraints and bound constraints. It
is based on a primal-dual interior point method, which uses a sparse-matrix data structure to
solve large, sparse, symmetric positive definite linear systems. LIPSOL is written by Yin
Zhang. The original Matlab-based code has been adapted to Scilab by H. Rubio Scola
(University of Rosario, Argentina). It is distributed freely under the terms of the GPL.
LIPSOL also uses the ORNL sparse Cholesky solver version 0.3, written by Esmond Ng and
Barry Peyton and adapted to Scilab by H. Rubio Scola.
– LIPSOL on Scilab Toolbox center
– LIPSOL website
– LIPSOL User’s Guide
• LPSOLVE: an interface to lp_solve. lp_solve is a free mixed integer/binary linear program-
ming solver with full source, examples and manuals. lp_solve is released under the LGPL, the GNU
Lesser General Public License. lp_solve uses the 'Simplex' algorithm and sparse matrix methods for
pure LP problems.
– LPSOLVE toolbox on Scilab Toolbox center
– lp_solve solver on Sourceforge
– lp_solve on Geocities
– lp_solve Yahoo Group
• NEWUOA: NEWUOA is a software developed by M.J.D. Powell for unconstrained op-
timization without derivatives. NEWUOA seeks the least value of a function F(x)
(x is a vector of dimension n) when F(x) can be calculated for any vector of variables x.
The algorithm is iterative: a quadratic model is required at the beginning of each
iteration, which is used in a trust region procedure for adjusting the variables. When the
quadratic model is revised, the new model interpolates F at m points, the value m = 2n+1
being recommended.
– NEWUOA toolbox on Scilab Toolbox center
– NEWUOA at INRIA Alpes
Chapter 10
Missing optimization features in Scilab
Several optimization features are missing in Scilab. Two classes of missing features are to be analysed :
• features which are not available in Scilab, but which are available as toolboxes (see the previous
section),
• features which are available neither in Scilab nor in toolboxes.
Here is a list of features which are not available in Scilab, but are available in toolboxes. These
features could be included in Scilab.
• integer parameter with linear objective solver and sparse matrices : currently available in
the LPSOLVE toolbox, based on the simplex method,
• linear objective with sparse matrices : currently available in LIPSOL, based on an interior
point method,
• nonlinear objective and nonlinear constraints : currently available in the interface to the IPOPT
toolbox, based on interior point methods,
• nonlinear objective and nonlinear constraints : currently available in the interface to the CONMIN
toolbox, based on the method of feasible directions.
Notice that IPOPT is a commercial product and CONMIN is a public-domain library. Therefore,
the only open-source, free, nonlinear solver with nonlinear constraints available with Scilab
is the interface to CONMIN.
Here is a list of features which are available neither in Scilab nor in toolboxes.
• quadratic objective solver with sparse objective matrix,
• simplex programming method (*),
• non-linear objective with nonlinear constraints (*),
• non-linear objective with nonlinear constraints problems based on sparse linear algebra,
• enabling/disabling of unknowns or constraints,
• customization of errors for constraints.
Functionalities marked with a (*) would be available in Scilab if the MODULOPT library
embedded in Scilab was updated.
Conclusion
Even if Scilab itself lacks some optimization functionalities, all the embedded functions are very
useful to begin with. After that, by downloading and installing some toolboxes, you can easily
improve your Scilab capabilities.
One of the questions we can ask is: “Why are these toolboxes not integrated in the Scilab dis-
tribution?”. The answer is often a problem of license: GPL libraries cannot be included in
Scilab, since Scilab is not designed to become GPL.
Bibliography
[1] Andrew R. Conn, Nicholas I. M. Gould, and Philippe L. Toint. The SIF reference document.
http://www.numerical.rl.ac.uk/lancelot/sif/sifhtml.html.
[2] Michael Baudin. Nelder mead user’s manual. http://wiki.scilab.org/The_Nelder-Mead_
Component, 2009.
[3] Michael Baudin and Serge Steer. Optimization with scilab, present and future. To appear
in Proceedings Of 2009 International Workshop On Open-Source Software For Scientific
Computation (Ossc-2009).
[4] Frédéric Bonnans. A variant of a projected variable metric method for bound constrained
optimization problems. Technical Report RR-0242, INRIA - Rocquencourt, October 1983.
[5] Joseph Frédéric Bonnans, Jean-Charles Gilbert, Claude Lemaréchal, and Claudia A. Sagas-
tizábal. Numerical Optimization. Theoretical and Practical Aspects. Universitext. Springer
Verlag, November 2006. New edition, revised and augmented. 490 pages.
[6] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System
and Control Theory, volume 15 of Studies in Applied Mathematics. SIAM, Philadelphia, PA,
June 1994.
[7] Carlos A. Coello Coello. List of references on evolutionary multiobjective optimization.
http://www.lania.mx/~ccoello/EMOObib.html.
[8] coin or.org. Optimization services instance language (osil). https://www.coin-or.org/OS/
OSiL.html.
[9] Yann Collette. Personal website. http://ycollette.free.fr.
[10] AMPL company. A modeling language for mathematical programming. http://en.
wikipedia.org/wiki/AMPL_programming_language.
[11] Goldfarb D. and Idnani A. Dual and primal-dual methods for solving strictly convex
quadratic programs. Lecture Notes in Mathematics, Springer-Verlag, 909:226–239, 1982.
[12] Goldfarb D. and Idnani A. A numerically stable dual method for solving strictly convex
quadratic programs. Mathematical Programming, 27:1–33, 1982.
[13] Lawrence Davis. Genetic Algorithms and Simulated Annealing. Morgan Kaufmann Publishers
Inc., San Francisco, CA, USA, 1987.
[14] Kalyanmoy Deb, Samir Agrawal, Amrit Pratap, and T Meyarivan. A fast elitist non-
dominated sorting genetic algorithm for multi-objective optimization: NSGA-II. pages 849–858.
Springer, 2000. http://www.lania.mx/%7Eccoello/deb00.ps.gz.
[15] Carlos M. Fonseca and Peter J. Fleming. Genetic algorithms for multiobjective optimization:
Formulation, discussion and generalization. In Proceedings of the 5th International Conference
on Genetic Algorithms, pages 416–423, San Francisco, CA, USA, 1993. Morgan Kaufmann
Publishers Inc. http://www.lania.mx/%7Eccoello/fonseca93.ps.gz.
[16] gams.com. The general algebraic modeling system. http://www.gams.com/.
[17] David E. Goldberg. Genetic Algorithms in Search, Optimization & Machine Learning.
Addison-Wesley, 1989.
[18] Jean-Baptiste Hiriart-Urruty and Claude Lemaréchal. Convex Analysis and Minimization
Algorithms I: Fundamentals. Springer, October 1993.
[19] Jean-Baptiste Hiriart-Urruty and Claude Lemaréchal. Convex Analysis and Minimization
Algorithms II: Advanced Theory and Bundle Methods. Springer, October 1993.
[20] hsl.rl.ac.uk. A lonesome sif decoder. http://hsl.rl.ac.uk/cuter-www/sifdec/doc.html.
[21] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer, 1999.
[22] P.P. Khargonekar and M.A. Rotea. Mixed H2/H∞ control: a convex optimization approach.
Automatic Control, IEEE Transactions on, 36(7):824–837, Jul 1991.
[23] Vandenberghe L. and S. Boyd. Semidefinite programming. Internal Report, Stanford Uni-
versity, 1994 (submitted to SIAM Review).
[24] P. J. M. Laarhoven and E. H. L. Aarts, editors. Simulated annealing: theory and applications.
Kluwer Academic Publishers, Norwell, MA, USA, 1987.
[25] P. J. M. van Laarhoven. Theoretical and Computational Aspects of Simulated Annealing.
Amsterdam, Netherlands : Centrum voor Wiskunde en Informatica, 1988.
[26] Claude Lemaréchal. A view of line-searches. Lecture Notes in Control and Information
Sciences, 30:59–78, 1981.
[27] lpsolve. Mps file format. http://lpsolve.sourceforge.net/5.5/mps-format.htm.
[28] Zbigniew Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs.
Springer, 1996.
[29] J. J. Moré, Burton S. Garbow, and Kenneth E. Hillstrom. Algorithm 566: Fortran subrou-
tines for testing unconstrained optimization software [c5], [e4]. ACM Trans. Math. Softw.,
7(1):136–140, 1981.
[30] B. Murtagh. Advanced Linear Programming: Computation and Practice. McGraw-Hill, 1981.
[31] Nicholas I.M. Gould, Dominique Orban, and Philippe L. Toint. A constrained and unconstrained
testing environment, revisited. http://hsl.rl.ac.uk/cuter-www/.
[32] Nicholas I.M. Gould, Dominique Orban, and Philippe L. Toint. The CUTEr test problem set.
http://www.numerical.rl.ac.uk/cute/mastsif.html.
[33] R. Nikoukhah, A.S. Willsky, and B.C. Levy. Kalman filtering and riccati equations for
descriptor systems. Automatic Control, IEEE Transactions on, 37(9):1325–1342, Sep 1992.
[34] Jorge Nocedal. Updating quasi-newton matrices with limited storage. Mathematics of Com-
putation, 35(151):773–782, 1980.
[35] optimizationservices.org. Optimization services. http://www.optimizationservices.org/.
[36] Jun Ma Robert Fourer and Kipp Martin. Osil: An instance language for optimization.
Computational Optimization and Applications, 2008.
[37] N. Srinivas and Kalyanmoy Deb. Multiobjective optimization using nondominated sorting
in genetic algorithms. Evolutionary Computation, 2:221–248, 1994. http://www.lania.mx/
%7Eccoello/deb95.ps.gz.
[38] Berwin A Turlach. Quadprog, (quadratic programming routines). http://www.maths.uwa.
edu.au/~berwin/software/quadprog.html.
[39] Wikipedia. Ampl (programming language). http://en.wikipedia.org/wiki/AMPL_
programming_language.
[40] Wikipedia. General algebraic modeling system. http://en.wikipedia.org/wiki/General_
Algebraic_Modeling_System.
[41] Wikipedia. Mps (format). http://en.wikipedia.org/wiki/MPS_(format).
[42] Wikipedia. Simulated annealing. http://en.wikipedia.org/wiki/Simulated_annealing.
[43] Yann Collette and Patrick Siarry. Optimisation Multiobjectif. Eyrolles, Collection Algorithmes,
1999.

This document has been written by Michaël Baudin and Vincent Couvert from the Scilab Consortium and by Serge Steer from INRIA Paris - Rocquencourt. © July 2010 The Scilab Consortium – Digiteo / INRIA. All rights reserved.

Abstract In this document, we make an overview of optimization features in Scilab. The goal of this document is to present all existing and non-existing features, such that a user who wants to solve a particular optimization problem can know what to look for. In the introduction, we analyse a classification of optimization problems. In the first chapter, we analyse the flagship of Scilab in terms of nonlinear optimization: the optim function. We analyse its features, the management of the cost function, the linear algebra and the management of the memory. Then we consider the algorithms which are used behind optim, depending on the type of algorithm and the constraints. In the remaining chapters, we present the algorithms available to solve quadratic problems, nonlinear least squares problems, semidefinite programming, genetic algorithms, simulated annealing and linear matrix inequalities. A chapter focus on optimization data files managed by Scilab, especially MPS and SIF files. Some optimization features are available in the form of toolboxes, the most important of which are the Quapro and CUTEr toolboxes. The final chapter is devoted to missing optimization features in Scilab.

. . . . . . . . . . . .5 An example . . . . . . . . . . . . . . . .Contents Introduction 1 Non-linear optimization 1. . . . . . . . 4. . . . . . . . . . . . . . . . . . . . . . . . . . .3 Optimization routines . 3 Non-linear least square 3. . . . . . . . . . 2. . . . . . . . . . . . . . 2. . . . 1. . . . . . . . . . . . . 1. . . . . . . . . . . . . . . . . .1 Mathematical point of view . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 Scilab function . 4. . . . 1. . . . . . . .7. . . . . .1 Management of the approximated Hessian matrix 1. .6 Management of memory . . . . . . . . 2 Quadratic optimization 2. . . . . . . .7. . . . . . . . . . . . . . . . . . . . . . .4 Termination criteria . . . . . . . . . . 1. 1. . . . . . . . . . . . . . 1. . . . . . . . . . . . . . . . . . . . . . . .7. . . . . . . . . . . . . . . . . . 1. . 1.2 Scilab function . . . . . . . . . . . . 4 Semidefinite programming 4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Mathematical point of view . . . . . . . . 1. . . . . . . . . . . . .9 L-BFGS ”gc” without constraints : n1qn3 . . . . . . . . . . . . . . .3 Optimization routines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .11 Non smooth method without constraints : n1fc1 .1 Mathematical point of view . .7. . . . . . . . . . . . . . . . . . . . . . . . . . . .10 L-BFGS ”gc” with bounds constraints : gcbd . . . . . . . . . . . . . . . . . . . . . . . . 1. . . . . . .2 qpsolve . . . . . . . . . .4 The cost function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1. . . . . . 5 10 10 11 11 11 12 12 13 15 16 16 17 18 20 20 21 22 23 23 23 24 24 24 26 26 26 26 27 27 27 27 . 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . .7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Mathematical point of view 2. . . . . . . . 1 . . . . . . . . . . . .5 Internal design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7 Quasi-Newton ”qn” without constraints : n1qn1 . . . . . .5 Linear algebra . . . . . . . . . . . . . . . . . . . 3. . . . 3. . . . . . . . . .3 Optimization routines . 2. . . . . . . . . . . . . . . . . . . . . . . .4 Memory requirements . . . . .3 Initial Hessian matrix . . . .3 qp solve . . . . . . . . . . . . . . . . . . 1. . .2 Optimization features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .8 Quasi-Newton ”qn” with bounds constraints : qnbd . . . . .2 Line search .

. . . . . . . . .4. . . . . . . 5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5 Genetic algorithms 5. . . . 6 Simulated Annealing 6. .2 Function lmisolver . . . . . 5. . .3 optim nsga . . . . . . .1 Purpose . . . . . . . . . . . . . . . . .4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6. . . . 7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7. . . . . . . . . . . . . . 6. . . . . . . . . . . . . . . . . . 7. . . . . . . . .5 Other versions . . . . 7. . . .2. . . . . . . . . . . . . . . . . . . . . 9. . . . . . . . . . . . . . . . . . 9.3 The Unconstrained Optimization Problem Toolbox 9. . . . . . . . .4 optim nsga2 . . . . . . . . . . . . . . . . . . . . . . . . . 28 28 29 30 32 32 32 32 32 33 33 34 34 35 35 35 36 37 38 39 39 41 41 42 42 43 49 49 50 52 53 56 56 56 57 57 57 58 59 60 60 63 . . . . . . . . . . . . . . 8 Optimization data files 8. .1 Coding . . . . . . . . . . . . . . . . . . . . . . . . . . .3. . . . . . . . . . . . . . . . . . . . . .3. . . . . . . . . . . . . . .4 Neighbor functions . . . . . . . . . . . . . . . . . . . . . . . pareto 5. . . . . . .4. . . . . . . . . . . . . . 6. . . . . . .1 Quapro .2 SIF files and the CUTEr toolbox . . . . . . . . . . .2 optim moga. . . . . . . . . . . . . . . . . . . . . . . 9. . . . . . . . . . . .1 Linear optimization . . . . . . . . . . . . . . . . . . . . . 7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 Examples . . . . . . . . . . . . . . . . . . .4 How lmisolver works . . . . . . . . . . . . . . . . 5. . . 5. . . . . . . . .2 Interactive mode . . . . 5. . . . . . . . . . . . . . . 10 Missing optimization features in Scilab 2 . . . . . . .6 Temperature laws . 7. . . . . . . . . . . . . . . . filter . . . . . . . . . . . . . . .1 Non-interactive mode . . . . . . . . . . . . . . 7. . . . . . . . . . . . . . .3. . . . . . . . . . . . . . . .1 MPS files and the Quapro toolbox . . . . . . . .2 Overview . . . . . . . . . . . . . . . . . . . .1 optim ga . . . . . . . . . . . . . .7 optim sa .4 Initialization . . .1 Introduction . . . . . . . . . . . . . . . . . . . . .4 Solvers . . . . . . . . . . . . . . .3 Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5 Acceptance functions 6. . .2 Cross-over . . . 5. . . . . . . . . . . . . . . .2 CUTEr . . . 6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2. . . . . . .4 Other toolboxes . . . . . . . . . . . . . . . . .1 Syntax . . . . . . . . . . . . . . . . . . . . . 5. . . . . . . . . . . . . . . . . . . .3. 9 Scilab Optimization Toolboxes 9. 8. . . . . . . . . . . . . .3 Support functions . . .1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 Linear quadratic optimization . . . . . . . . . . . . . . . . . . . . .3 Function LMITOOL . . . . . . . . . . . . . . . . .2 Example . . . . . . . . . . . . . . . . . . . . . . .1 Introduction . . . . . . . . . . . . . . . . . . . 5. . . . . . . . . . . . . . . . . . . . . . . . . . . . 7. . . . . . . . . . . . . . . . . . . Scilab . . . . . . . . . . . . .1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3. . 7 LMITOOL: a Package for LMI Optimization in 7. . . 5. . . . . . 6. . 5. . . . . . . . . . . 9. . . . . . . . . . . . . . . . 
.3 Selection . . . .4. . . . . . . . . . . . . . . . . . . .

Conclusion Bibliography 64 65 3 .

INRIA .Copyright c 2008-2010 .Vincent Couvert Copyright c 2008-2009 .0 Unported License: http://creativecommons.Consortium Scilab .Consortium Scilab .org/licenses/by-sa/3.0 4 .Digiteo .Michael Baudin Copyright c 2008-2009 .Serge Steer This file must be used under the terms of the Creative Commons Attribution-ShareAlike 3.Digiteo .

• semidefinite programming with the semidef function. • There may be one or several cost functions (multi-objective optimization). These problems are partly illustrated in figure 1. we present the following optimization features of Scilab.lri. • genetic algorithms with the optim_ga function. such as FSQP for example.10 000 and above). • quadratic optimization with the qpsolve function. The core of this document is an analysis of current Scilab optimization features.1 000) or large (from 1 000 . • simulated annealing with the optim_sa function. some contributions (toolboxes) have been written to improve Scilab capabilities. In the final part. Several properties of the problem to solve may be taken into account by the numerical algorithms : • The unknown may be a vector of real values or integer values. • The cost function can be linear. It is written for Scilab partners needs in OMD project (http://omd. • nonlinear least-square optimization with the lsqrsolve function. linear or non-linear constraints. • linear matrix inequalities with the lmisolver function. quadratic or a general non linear function.Introduction This document aims at giving Scilab users a complete overview of optimization features in Scilab. In this document.100). we give a short list of new features which would be interesting to find in Scilab. • The constraints may be bounds constraints. An overview of Scilab optimization tools is showed in figure 2. • nonlinear optimization with the optim function. In this document. medium (from 10 to 100 .fr/tiki-index. leading to dense or sparse linear systems. • The number of unknowns may be small (from 1 to 10 .php). Above all the embedded functionalities of Scilab itself. • There may be constraints or no constraints. 5 . • The cost function may be smooth or non-smooth. we consider optimization problems in which we try to minimize a cost function f (x) with or without constraints. Many of these toolboxes are interfaces to optimization libraries.

Figure 1: Classes of optimization problems (discrete or continuous parameters; one or several objectives; smooth or non smooth; with or without constraints; bounds, linear or non-linear constraints; linear, quadratic or non-linear objective; 1-10, 10-100 or more than 100 unknowns).

Figure 2: Scilab Optimization Tools

An analysis of optimization in Scilab, including performance tests, is presented in "Optimization with Scilab, present and future" [3]. The following is the abstract of the paper: "We present in this paper an overview of optimization algorithms available in the Scilab software. We focus on the user's point of view, that is, we have to minimize or maximize an objective function and must find a solver suitable for the problem. The aim of this paper is to give a simple but accurate view of what problems can be solved by Scilab and what behavior can be expected for those solvers. For each solver, we analyze the type of problems that it can solve as well as its advantages and limitations. In order to compare the respective performances of the algorithms, we use the CUTEr library, which is available in Scilab. Numerical experiments are presented, which indicates that there is no cure-for-all solver."

Several existing optimization features are not presented in this document. We especially mention the following tools.
• Scilab v5.2 provides the fminsearch function, which is a derivative-free algorithm for small problems. The fminsearch function is based on the simplex algorithm of Nelder and Mead (not to be confused with Dantzig's simplex for linear optimization). This unconstrained algorithm does not require the gradient of the cost function. It is efficient for small problems, up to 10 parameters, and its memory requirement is only O(n). This algorithm is known to be able to manage "noisy" functions, i.e. situations where the cost function is the sum of a general nonlinear function and a low magnitude function. The fminsearch function is, in fact, a specialized use of the neldermead component. The neldermead component provides three simplex-based algorithms which allow to solve unconstrained and nonlinearly constrained optimization problems. It provides an object oriented access to the options.
• Multi-objective optimization is available in Scilab with the genetic algorithm component. This component is presented in depth in [2].
• The fsqp toolbox provides an interface for a Sequential Quadratic Programming algorithm. This algorithm is very efficient but is not free (it is, however, provided by the authors, free of charge, for an academic use).

The organization of this document is as follows. In the first chapter, we analyse the flagship of Scilab in terms of nonlinear optimization: the optim function. This function allows to solve nonlinear optimization problems without constraints or with bound constraints. It provides a Quasi-Newton method, a Limited Memory BFGS algorithm and a bundle method for non-smooth functions. We analyse its features, the management of the cost function, the linear algebra and the management of the memory. Then we consider the algorithms which are used behind optim, depending on the type of algorithm and the constraints. In the second chapter we present the qpsolve and qp_solve functions, which allow to solve quadratic problems. We describe the solvers which are used, the memory requirements and the internal design of the tool. The chapters 3 and 4 briefly present non-linear least squares problems and semidefinite programming. The chapter 5 focuses on genetic algorithms. We give a tutorial example of the optim_ga function in the case of the Rastrigin function. We also analyse the support functions which allow to configure the behavior of the algorithm and describe the algorithm which is used.

The simulated annealing is presented in chapter 6, which gives an overview of the algorithm used in optim_sa. We present an example of use of this method and show the convergence of the algorithm. Then we analyse the structure of the algorithm used in optim_sa. Then we analyse the support functions and present the neighbor functions, the acceptance functions and the temperature laws. The LMITOOL module is presented in chapter 7. This tool allows to solve linear matrix inequalities. This chapter was written by Nikoukhah, Delebecque and Ghaoui. The syntax of the lmisolver function is analysed and several examples are analysed in depth. The chapter 8 focuses on optimization data files managed by Scilab, especially MPS and SIF files. Some optimization features are available in the form of toolboxes, the most important of which are the Quapro, CUTEr and the Unconstrained Optimization Problems toolboxes. These modules are presented in the chapter 9, along with other modules, including the interfaces to CONMIN, to FSQP, to LIPSOL, to LPSOLVE and to NEWUOA. The chapter 10 is devoted to missing optimization features in Scilab.

Chapter 1

Non-linear optimization

The goal of this chapter is to present the current features of the optim primitive of Scilab. The optim primitive allows to optimize a problem with a nonlinear objective, without constraints or with bound constraints. Three non-linear solvers are connected to the optim primitive, namely, a BFGS Quasi-Newton solver, a L-BFGS solver, and a Non-Differentiable solver. The cost function and its gradient can be computed using a Scilab function, a C function or a Fortran 77 function. The Scilab online help is a good entry point for this function.

In this chapter, we describe both the internal design of the optim primitive and the solvers which are connected to it. We analyse in detail the management of the cost function. The linear algebra components are analysed, since they are used at many places in the algorithms. Since the management of memory is a crucial feature of optimization solvers, the current behaviour of Scilab with respect to memory is detailed here. In this chapter we analyse each solver and present the following features:
• the reference articles or reports,
• the author,
• the management of memory,
• the linear algebra system, especially the algorithm name and whether dense/sparse cases are taken into account,
• the line search method.

1.1 Mathematical point of view

The problem of the non linear optimization is to find the solution of

    min_x f(x)

with bounds constraints or without constraints and with f : R^n → R the cost function.

1.2 Optimization features

Scilab offers three non-linear optimization methods:
• a Quasi-Newton method with the BFGS formula, without constraints or with bound constraints,
• a Quasi-Newton method with limited memory BFGS (L-BFGS), without constraints or with bound constraints,
• a bundle method for non smooth functions (half derivable functions, non-differentiable problems), without constraints.
Problems involving non linear constraints cannot be solved with the current optimization methods implemented in Scilab. Non smooth problems with bounds constraints cannot be solved with the methods currently implemented in Scilab.

1.3 Optimization routines

Non-linear optimization in Scilab is based on a subset of the Modulopt library, developed at Inria. The library which is used by optim was created by the Modulopt project at INRIA and developed by Bonnans, Gilbert and Lemaréchal [5]. This section lists the routines used according to the optimization method. The following is the list of solvers currently available in Scilab, with the corresponding fortran routine:
• "qn" without constraints: a Quasi-Newton method without constraints, n1qn1,
• "qn" with bounds constraints: a Quasi-Newton method with bounds constraints, qnbd,
• "gc" without constraints: a limited memory BFGS method without constraints, n1qn3,
• "gc" with bounds constraints: a limited memory BFGS method with bounds constraints, gcbd,
• "nd" without constraints: a non smooth method without constraints, n1fc1.

1.4 The cost function

The cost function must be passed as a callback argument to the optim primitive. The cost function must compute the objective and/or the gradient of the objective, depending on the input integer flag "ind". In the most simple use-case, the cost function is a Scilab function, with the following header:

    [f, g, ind] = costf(x, ind)

where "x" is the current value of the unknown and "ind" is the integer flag which states if "f", "g" or both are to be computed. The communication protocol used by optim is direct, that is, the cost function is passed to the optimization solver as a callback, which is managed with the Fortran 77 callback system. In that case, the name of the routine to call back is declared as "external" in the source code.
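As an illustration of this calling convention, the following is a minimal Scilab sketch of a cost function with the header above, followed by a call to optim. The quadratic objective and the variable names are chosen only for the example; this is not one of the examples of the original report.

    // Minimal sketch of a cost function following the header described above.
    // The quadratic objective f(x) = x(1)^2 + x(2)^2 is only an illustration.
    function [f, g, ind] = costf(x, ind)
        // optim indicates through "ind" what is required; it is always valid
        // to compute both the objective and its gradient.
        f = x(1)^2 + x(2)^2
        g = [2*x(1); 2*x(2)]
    endfunction

    x0 = [1; 1];
    [fopt, xopt] = optim(costf, x0)   // unconstrained Quasi-Newton ("qn") by default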

The cost function may be provided in the following ways:
• the cost function is provided as a Scilab script,
• the cost function is provided as a C or Fortran 77 compiled routine.
If the cost function is a C or Fortran 77 source code, it can be statically or dynamically linked against Scilab. Indeed, Scilab dynamic link features, such as ilib_for_link for example, can be used to provide a compiled cost function, be it in statically compiled libraries or in dynamically compiled libraries.

In the following paragraphs, we analyse the very internal aspects of the management of the cost function. The choice between the compiled case and the interpreted case is managed at the gateway level, in the sci_f_optim routine, with a "if" statement:
• if the cost function is compiled, the "foptim" symbol is passed,
• if not, the "boptim" symbol is passed.
In the case where the cost function is a Scilab script, the "boptim" routine performs the copy of the input local arguments into Scilab internal memory, calls the script, and finally copies back the output argument from Scilab internal memory into local output variables.

In the case where the cost function is compiled, the computation is based on function pointers, which are managed at the C level in optimtable.c. The optimization function is configured by the "setfoptim" C service, which takes as argument the name of the routine to call back. The services implemented in AddFunctionInTable.c are used, especially the function "AddFunctionInTable", which takes the name of the function as input argument and searches the corresponding function address. These names and addresses are stored in the hashmap FTab_foptim, which maps function names to function pointers. The static field foptimfonc, with type foptimf, is then set to the address of the function to be called back. When the optimization solver needs to compute the cost function, it calls the "foptim" C void function, which in turn calls the compiled cost function associated with the configured address (*foptimfonc). This allows the optimization solvers to call back dynamically linked libraries.

1.5 Linear algebra

The linear algebra which is used in the "optim" primitive is dense. Generally, the linear algebra is inlined and there is no use of the BLAS API. This applies to all optimization methods, except "gcbd": the L-BFGS with bounds constraints routine gcbd is the only exception, since it uses the "dcopy" routine of the BLAS API. This limits the performance of the optimization, because optimized libraries like ATLAS cannot be used.

1.6 Management of memory

The optimization solvers require memory, especially to store the value of the cost function, the gradient, the descent direction, but most importantly, the approximated Hessian of the cost function. Most of the memory is required by the approximation of the Hessian matrix. If the full approximated Hessian is stored, as in the BFGS quasi-Newton method, the amount of memory is the square of the dimension of the problem, O(n^2), where n is the size of the unknown.

When a quasi-Newton method with limited memory is used, only a given number m of vectors of size n are stored. This memory is allocated by Scilab, inside the stack, and the storage area is passed to the solvers as an input argument. This large storage memory is then split into pieces like a piece of cake by each solver to store the data. The memory system used by the fortran solvers is the fortran 77 "assumed-size dummy arrays" mechanism, based on "real arrayname(*)" statements.

The management of memory is very important for large-scale problems, where n is from 100 to 1000. More precisely, the following is a map from the algorithm to the memory required, as the number of required double precision values.
• Quasi-Newton BFGS "qn" without constraints (n1qn1): n(n + 13)/2.
• Quasi-Newton BFGS "qn" with bounds constraints (qnbd): n(n + 1)/2 + 4n + 1.
• Limited Memory BFGS "gc" without constraints (n1qn3): 4n + m(2n + 1).
• Limited Memory BFGS "gc" with bounds constraints (gcbd): n(5 + 3nt) + 2nt + 1 with nt = max(1, m/3).
• Non smooth method without constraints (n1fc1): (n + 4)m/2 + (m + 9)m + 8 + 5n/2.
Note that n1fc1 requires an additional 2(m + 1) array of integers. Simplifying these array sizes leads to the following map, which clearly shows why Limited Memory BFGS algorithms in Scilab are more suitable for large problems.
• Quasi-Newton "qn" without constraints (n1qn1): O(n^2).
• Quasi-Newton "qn" with bounds constraints (qnbd): O(n^2).
• Limited Memory BFGS "gc" without constraints (n1qn3): O(n).
• Limited Memory BFGS "gc" with bounds constraints (gcbd): O(n).
• Non smooth method without constraints (n1fc1): O(n).
One main feature of the L-BFGS algorithms is to limit the memory required. This explains why the name "cg" was chosen: it refers to the Conjugate Gradient method, which stores only one vector in memory. But the name "cg" is wrongly chosen and this is why we consistently use L-BFGS to identify this algorithm. It is known that L-BFGS convergence may be slow for large-scale problems (see [21], chap. 9). That explains why the L-BFGS methods, associated with the "gc" option of the optim primitive, are recommended for large-scale optimization.

1.7 Quasi-Newton "qn" without constraints: n1qn1

The author is C. Lemarechal, 1987. There is no reference report for this solver. The following is the header for the n1qn1 routine:

subroutine n1qn1 (simul. • n (e) : nombre de variables dont depend f. le module de simulation doit se presenter sous la forme subroutine simul(n. inria. • f (e-s) : scalaire . dzs) et ˆtre declare en external dans le programme e appelant n1qn1. n1qn1 appelle toujours simul avec indic = 4 . rzs. 1 mode.izs.n. en entree le point initial .g.f. en entree valeur de f en x (initial). • g (e-s) : vecteur de dimension n : en entree valeur du gradient en x (initial). • var (e) : vecteur strictement positif de dimension n. f. • nsim (e-s) : en entree nombre maximal d’appels a simul (c’est a dire avec indic = 4).x. lemarechal. amplitude de la modif souhaitee a la premiere iteration sur x(i). • imp (e) : contrˆle les messages d’impression : o 14 . • simul : point d’entree au module de simulation (cf normes modulopt i).rzs. en sortie valeur du gradient en x (final).dzs) c!but c minimisation d une fonction reguliere sans contraintes c!origine c c. en sortie le nombre de tels appels reellement faits. Le programme considere que la convergence est obtenue lorque il lui est impossible de diminuer f en attribuant a au ` moins une coordonn´e x(i) une variation superieure a eps*var(i). e • mode (e) : definit l’approximation initiale du hessien – =1 n1qn1 l’initialise lui-meme – =2 le hessien est fourni dans zm sous forme compressee (zm contient les colonnes de la partie inferieure du hessien) • niter (e-s) : en entree nombre maximal d’iterations : en sortie nombre d’iterations reellement effectuees. izs. En sortie. en sortie : le point final calcule par n1qn1. en sortie valeur de f en x (final).nsim.var. 1987 c Copyright INRIA c!methode c direction de descente calculee par une methode de quasi-newton c recherche lineaire de type wolfe The following is a description of the arguments of this routine. • x (e-s) : vecteur de dimension n .niter.lp. eps contient le e carr´ de la norme du gradient en x (final).zm.x.une bonne valeur est 10% de la difference (en valeur absolue) avec la coordonee x(i) optimale • eps (e-s) : en entree scalaire definit la precision du test d’arret.imp.eps. g.

– = 0 rien n’est imprime – = 1 impressions initiales et finales – = 2 une impression par iteration (nombre d’iterations, nombre d’appels a simul, valeur courante de f). – >=3 informations supplementaires sur les recherches lineaires ; tres utile pour detecter les erreurs dans le gradient. • lp (e) : le numero du canal de sortie, i.e. les impressions commandees par imp sont faites par write (lp, format). • zm : memoire de travail pour n1qn1 de dimension n*(n+13)/2. • izs,rzs,dzs memoires reservees au simulateur (cf doc) The n1qn1 solver is an interface to the n1qn1a routine, which really implements the optimization method. The n1qn1a file counts approximately 300 lines. The n1qn1a routine is based on the following routines : • simul : computes the cost function, • majour : probably (there is no comment) an update of the BFGS matrix. Many algorithms are in-lined, especially the line search and the linear algebra.

1.7.1 Management of the approximated Hessian matrix

The current BFGS implementation is based on an approximation of the Hessian [21] which relies on a Cholesky decomposition, i.e. the approximated Hessian matrix is decomposed as G = LDL^T, where D is a diagonal n × n matrix and L is a lower triangular n × n matrix with unit diagonal. To compute the descent direction, the linear system Gd = LDL^T d = −g is solved. The memory requirements for this method are O(n^2), because the approximated Hessian matrix computed from the BFGS formula is stored in compressed form, so that only the lower part of the approximated Hessian matrix is stored. This is why this method is not recommended for large-scale problems (see [21], chap. 9, introduction). The approximated Hessian H ∈ R^{n×n} is stored as the vector h ∈ R^{nh}, which has size nh = n(n + 1)/2. The matrix is stored in factored form as follows:

    h = ( D_{11} L_{21} ... L_{n1} | D_{22} L_{32} ... L_{n2} | ... | D_{n-1,n-1} L_{n,n-1} | D_{nn} ).    (1.1)

Instead of a direct access to the factors of D and L, integer algebra is necessary to access the data stored in the vector h. The algorithm presented in figure 1.1 is used to set the diagonal terms of D, the diagonal matrix of the Cholesky decomposition of the approximated Hessian. The right-hand side 0.01 c_max^2 / v_i of this initialization is analysed in the next section of this document.

    k ← 1
    for i = 1 to n do
        h(k) ← 0.01 c_max^2 / v_i
        k ← k + n + 1 − i
    end for

Figure 1.1: Loop over the diagonal terms of the Cholesky decomposition of the approximated Hessian

Solving the linear system of equations

The linear system of equations Gd = LDL^T d = −g must be solved to compute the descent direction d ∈ R^n. This direction is computed by the following algorithm:
• compute w so that Lw = −g,
• compute d so that DL^T d = w.
This algorithm requires O(n^2) operations.
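The two triangular solves above can be made concrete with a small Scilab sketch. This is only an illustration of the factored solve, not the actual Fortran code of n1qn1, and the function name descent_direction is chosen for the example.

    // Solve L*D*L'*d = -g, given the unit lower triangular matrix L
    // and the diagonal matrix D (a sketch of the two-step solve above).
    function d = descent_direction(L, D, g)
        n = size(g, "*")
        // forward substitution: L*w = -g (L has a unit diagonal)
        w = zeros(n, 1)
        for i = 1:n
            s = -g(i)
            for j = 1:i-1
                s = s - L(i, j) * w(j)
            end
            w(i) = s
        end
        // diagonal solve and backward substitution: D*L'*d = w
        d = zeros(n, 1)
        for i = n:-1:1
            s = w(i) / D(i, i)
            for j = i+1:n
                s = s - L(j, i) * d(j)
            end
            d(i) = s
        end
    endfunction

    // Small check with an arbitrary factorization G = L*D*L':
    // L = [1 0; 0.5 1]; D = diag([2 3]); g = [1; -1];
    // d = descent_direction(L, D, g);   // then (L*D*L')*d equals -g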

1.7.2 Line search

The line search is based on the algorithms developed by Lemaréchal [26]. It uses a cubic interpolation. The Armijo condition for sufficient decrease is used in the following form:

    f(x_k + α p_k) − f(x_k) ≤ c_1 α ∇f_k^T p_k,    (1.2)

with c_1 = 0.1. The following fortran source code illustrates this condition:

    if (fb-fa.le.0.10d+0*c*dga) go to 280

where fb = f(x_k + α p_k), fa = f(x_k), c = α and dga = ∇f_k^T p_k is the local directional derivative.
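The same test can be written in a few lines of Scilab. This is only a sketch of the sufficient-decrease check, with illustrative names (costf here is any Scilab function returning the objective value only); it is not the line search implemented in the Fortran solver.

    // Sufficient decrease (Armijo) test, with c1 = 0.1 as in n1qn1.
    function ok = armijo_ok(costf, x, d, alpha, f0, g0)
        c1  = 0.1
        dga = g0' * d                  // local directional derivative
        fb  = costf(x + alpha * d)     // value at the trial point
        ok  = (fb - f0 <= c1 * alpha * dga)
    endfunction

    // Example on f(x) = x(1)^2 + x(2)^2 with the steepest descent direction:
    deff("y = fq(x)", "y = x(1)^2 + x(2)^2");
    x = [1; 1]; g = [2; 2]; d = -g;
    ok = armijo_ok(fq, x, d, 0.5, fq(x), g)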

1.7.3 Initial Hessian matrix

Several modes are available to compute the initial Hessian matrix, depending on the value of the mode variable:
• if mode = 1, n1qn1 initializes the matrix by itself,
• if mode = 2, the Hessian is provided in compressed form, where only the lower part of the symmetric Hessian matrix is stored.
An additional mode = 3 is provided, but the feature is not clearly documented. In Scilab, the n1qn1 routine is called with mode = 1 by default. In the case where a hot-restart is performed, mode = 3 is enabled.

If mode = 1 is chosen, the initial Hessian matrix H^0 is computed by scaling the identity matrix

    H^0 = Iδ,    (1.3)

where δ ∈ R^n is a n-vector and I is the n × n identity matrix. The scaling vector δ ∈ R^n is based on the gradient at the initial guess g^0 = g(x^0) = ∇f(x^0) ∈ R^n and on a scaling vector v ∈ R^n given by the user:

    δ_i = 0.01 c_max^2 / v_i,    (1.4)

where c_max > 0 is computed by

    c_max = max( 1.0, max_{i=1,n} |g_i^0| ).    (1.5)

In the Scilab interface for optim, the scaling vector is set to 0.1:

    v_i = 0.1,  i = 1, ..., n.    (1.6)
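A few lines of Scilab make this mode = 1 initialization explicit. The scaling formula follows equations (1.4)-(1.6) as reconstructed above, and the numerical value of the gradient used here is an arbitrary example.

    // Sketch of the mode = 1 scaling described by equations (1.4)-(1.6).
    g0    = [0.5; -3; 10];              // example gradient at the initial guess x0
    v     = 0.1 * ones(g0);             // scaling vector used by the Scilab interface
    cmax  = max(1.0, max(abs(g0)));     // equation (1.5)
    delta = 0.01 * cmax^2 ./ v;         // equation (1.4)
    H0    = diag(delta);                // initial approximated Hessian H^0 = I*delta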

1.7.4 Termination criteria

The following list of parameters is taken into account by the solver:
• niter, the maximum number of iterations (default value is 100),
• nap, the maximum number of function evaluations (default value is 100),
• epsg, the minimum length of the search direction (default value is %eps ≈ 2.22e−16).
The other parameters epsf and epsx are not used. The termination condition is not based on the gradient, as the name epsg would indicate. The following is a list of termination conditions which are taken into account in the source code.
• The iteration number is greater than the maximum: if (itr.gt.niter) go to 250
• The number of function evaluations is greater than the maximum: if (nfun.ge.nsim) go to 250
• The directional derivative is positive, so that the direction d is not a descent direction for f: if (dga.ge.0.0d+0) go to 240
• The cost function sets the indic flag (the ind parameter) to 0, indicating that the optimization must terminate: call simul (indic,n,xb,fb,gb,izs,rzs,dzs) [...] go to 250

• The cost function sets the indic flag to a negative value, indicating that the function cannot be evaluated for the given x. The step is reduced by a factor 10, but gets below a limit, so that the algorithm terminates: call simul (indic,n,xb,fb,gb,izs,rzs,dzs) [...] step=step/10.0d+0 [...] if (stepbd.gt.steplb) goto 170 [...] go to 250
• The Armijo condition is not satisfied and the step size is below a limit during the line search: if (fb-fa.le.0.10d+0*c*dga) go to 280 [...] if (step.le.steplb) go to 240
• During the line search, the step length is lesser than a computed limit: if (stmin+step.le.steplb) go to 270
• During the line search, a cubic interpolation is computed and the computed minimum is associated with a zero step length: if(c.eq.0.0d+0) goto 250
• The rank of the approximated Hessian matrix is lesser than n after the update of the Cholesky factors: if (ir.lt.n) go to 250
This shows that the termination criteria is not based on the gradient, but on the length of the step.

1.7.5 An example

The following script illustrates that the gradient may become very small while the algorithm continues. The cost function is the following:

    f(x) = x_1^p + x_2^p,    (1.7)

where p ≥ 0 is an even integer. The problem has two parameters, so that n = 2. Here we choose p = 10. The gradient of the function is

    g(x) = ∇f(x) = ( p x_1^{p−1}, p x_2^{p−1} )^T    (1.8)

and the Hessian matrix is

    H(x) = [ p(p−1) x_1^{p−2}  0 ;  0  p(p−1) x_2^{p−2} ].    (1.9)

.817592 e −001 . f =2. −−>[ f o p t . nap .] | x | = 1 . 4 7 3 6 4 0 e +000 .norm( x ) . xopt . At each iteration. | g | = 5 . e p s g . f =3. 1 7 9 9 6 6 e −002 . 0 ] .10) The following Scilab script defines the cost function. | g | = 6 . x0 . 1 8 2 5 9 2 e +000 [. mprintf ( ”Computed g(x0) = \n” ) . f . 4 ) .2 1 . f= %e. | g | = 5 .432396 e −021 . 2 ) endfunction x0 = [ −1. f =4. 7 8 5 6 6 3 e −018 | x | = 1 . 2 5 5 7 9 0 e +001 | x | = 1 .279551 e −021 . .415994 e +000 . g . f =3.833319 e −022 .092124 e +000 . | g | = 1 . 0 9 8 3 6 7 e +000 . mprintf ( ”Computed f (x0) = %f\n” . f =1. | g|=%e\n” . 19 . 0 0 2 2 3 7 e −002 . | g | = 1 . f =7. imp = −1) The script produces the following output. | g | = 2 . disp ( g ’ ) .458198 e +000 .The optimum of this optimization problem is at x = (0. (1. g r a d o p t ] = optim ( myquadratic . ”ar ” . . imp = −1) | x | = 1 . 8 1 7 1 2 6 e −018 | x | = 1 . | g | = 5 . i t e r . 3 0 7 4 4 8 5D−18. f =7. . [ f . 0 7 4 4 8 5 e −019 Norm o f p r o j e c t e d g r a d i e n t l o w e r than 0 . 0 1 3 2 2 7 e +000 .191736 e +000 . g ] = myquadratic ( x0 . ”ar ” . | g | = 2 . nap . 5 6 2 0 5 0 e +000 . the norm of the gradient of the cost function is displayed so that one can see if the algorithm terminates when the gradient is small. 3 4 0 8 6 4 e −001 . 0 8 2 5 4 2 e +001 | x | = 9 . .. 4 0 9 8 9 8 e −019 | x | = 9 . 3 3 6 8 0 2 e −018 | x | = 1 . g r a d o p t ] = optim ( myquadratic . xopt . 2 8 0 5 6 4 e −002 . f =2. mprintf ( ”Expected g(x0) = \n” ) .450507 e −021 . end i f i n d == 1 | i n d == 2 | i n d == 4 then g ( 1 ) = p ∗ x ( 1 ) ˆ ( p−1) g ( 2 ) = p ∗ x ( 2 ) ˆ ( p−1) end i f i n d == 1 then mprintf ( ”| x|=%e. 2 4 6 7 5 2 e +001 | x | = 1 . 0)T .409611 e −022 . f ) . function [ f . | g | = 2 . x0 . disp ( derivative ( q u a d f o r n u m d i f f . i n d ) p = 10 i f i n d == 1 | i n d == 2 | i n d == 4 then f = x(1)ˆp + x(2)ˆp . . 2 3 6 6 9 4 e −003 .. | g | = 3 . norm( g ) ) end endfunction function f = q u a d f o r n u m d i f f ( x ) f = myquadratic ( x . checks that the derivatives are correctly computed and performs an optimization. 0 8 7 5 2 6 e −002 . x0 ’ ) ) nap = 100 i t e r = 100 e p s g = %eps [ f o p t . i t e r . 5 0 2 5 9 9 e +001 | x | = 1 . f =6. f =1. i n d ] = myquadratic ( x . e p s g .

1. so that connecting the BLAS may be difficult. which updates the BFGS matrix.2332982 xopt = 0. Claude Lemarechal. • ajour : probably (there is no comment) an update of the BFGS matrix. • simul : computes the cost function The rlbd routine is documented as using an extrapolation method to computed a range for the optimal t parameter. The stopping criteria is commented as ”an extension of the Wolfe criteria”.0065865 fopt = 2 . The linear algebra does not use the BLAS API. The BFGS update is based on the article [34]. 0D−18 ∗ 0. The cost function is very near zero f (x) ≈ 10−22 . The difficulty is because the function is extremely flat near the optimum.9 L-BFGS ”gc” without constraints : n1qn3 The comments in this solver are clearly written. Bonnans [4]. The architecture is clear and the source code is well commented. the algorithm performs significant iterations which are associated with relatively large steps. the algorithm would stop in the early iterations.0064757 One can see that the algorithm terminates when the gradient is extremely small g(x) ≈ 10−18 . • a cubic interpolation. 8 3 3D−22 0. The n1qn3a routine is based on the following routines : 20 .g r a d o pt = 1 .9. Because this is not the case. which shows why this method is not recommended for large-scale problems (see [21]. 1. The solver n1qn3 is an interface to the n1qn3a routine. The zqnbd routine is based on the following routines : • calmaj : calls majour. 1988. It is in-lined. • rlbd : line search method with bound constraints.8 Quasi-Newton ”qn” with bounds constraints : qnbd The comments state that the reference report is an INRIA report by F. If the termination criteria was based on the gradient. The solver qnbd is an interface to the zqnbd routine. • proj : projects the current iterate into the bounds constraints.2002412 0. • a linear interpolation. The memory requirements for this method are O(n2 ). This is a very difficult test case for optimization solvers. The authors are Jean Charles Gilbert. chap. but the solution is not accurate only up to the 3d digit. introduction). The range is then reduced depending on the situation by : • a dichotomy method.

It is well suited for medium-scale problems. The linear algebra does not use the BLAS API but is based on the prosca and ctonb routines. • majysa : updates the vectors y = g(k + 1) − g(k). • gcp : gradient conjugate method for Ax = b. The gcbd solver is an interface to the zgcbd routine. This implements the scalar product. which limits the feature to small size optimization problems. • nlis0 : line search based on Wolfe criteria. Bonnans. The zgcbd routine is based on the following routines : • simul : computes the cost function • proj : projects the current iterate into the bounds constraints. • rlbd : line search method with bound constraints. 1. The linear algebra is dense. • relvar : computes the variables to relax by Bertsekas method. chap. The prosca routine is a call back input argument of the n1qn3 routine. • ctonb : copies array u into v. 21 . 9.227). which allows for some optimizations. • simul : computes the cost function. 9. s = x(k + 1) − x(k). extrapolation and cubic interpolation. connected to the fuclid routine. chap.227). • ddd2 : computed the descent direction by performing the product hxg. • shanph : scalse the diagonal by Shanno-Phua method. But zgcbd uses the ”dcopy” BLAS routine. although convergence may be slow (see [21]. Connecting BLAS may be easy for n1qn3. 1985. which really implements the optimization method. The linear algebra is dense. The algorithm is a limited memory BFGS method with m levels. although the convergence may be slow (see [21]. p. ys. so that the memory cost is O(n). but without optimizaion. The algorithm is a limited memory BFGS method with m levels. • bfgsd : updates the diagonal by Powell diagonal BFGS. so that the memory cost is O(n).10 L-BFGS ”gc” with bounds constraints : gcbd The author is F.zs. • dcopy : performs a vector copy (BLAS API). It is well suited for medium-scale problems. • majz : updates z.• prosca : performs a vector x vector scalar product. There is no reference report for gcbd. p.

19].1. The n1fc1a routine is based on the following routines : • simul : computes the cost function.g. which really implements the optimization method. References for this e algorithm include the ”Part II. 22 . • prosca : performs a vector x vector scalar product. the objective function is the maximum of several continously differentiable functions). It is designed for functions which have a non-continuous derivative (e. • frdf1 : reduction du faisceau • nlis2 : line search.11 Non smooth method without constraints : n1fc1 This routine is probably due to Lemar´chal. • fprf2 : computes the direction. The n1fc1 solver is an interface to the n1fc1a routine. and the in-depth presentation in [18. Nonsmooth optimization” in [5]. who is an expert of this topic.

3) 2. We especially analyse the management of the linear algebra as well as the memory requirements of these solvers. This routine uses the Goldfarb/Idnani algorithm [11. The constraints matrix can be dense or sparse.1 Mathematical point of view 1 f (x) = xT Qx + pT x 2 This kind of optimization is the minimization of function f (x) with under the constraints : T C 1 x = b1 T C 2 x ≥ b2 (2. The qpsolve macro calls the qp_solve compiled primitive. the same input/output arguments) as the quapro solver. The internal source code for qpsolve manages the equality and inequality constraints so that it can be processed by the qp_solve primitive. please refer to Scilab online help for qpsolve The qpsolve Scilab macro is based on the work by Berwin A Turlach from The University of Western Australia. For more details about this solver. 2. 12]. The solver is implemented in Fortran 77. 23 . we presents these two primitives.2 qpsolve The Scilab function qpsolve is a solver for quadratic problems when Q is symmetric positive definite. Crawley [38]. which are meant to be a replacement for the former Quapro solver (which has be transformed into a Scilab toolbox). The qpsolve function is a Scilab macro which aims at providing the same interface (that is.Chapter 2 Quadratic optimization Quadratic problems can be solved with the qpsolve Scilab macro and the qp_solve Scilab primitive.2) (2.1) (2. In this chapter.

lw = 2n + r(r + 5)/2 + 2m + 1. The interface is implemented in the C source code sci qp solve.f) • If the constraints matrix C is a Scilab sparse matrix. m). Depending on the constraints matrix. This formulae may be simplified in the following cases : • if n m. that is the number of unknowns n is negligible with respect to the number of constraints m.2. This temporary array is de-allocated when the qpsolve primitive returns.3 qp solve The qp_solve compiled primitive is an interface for the fortran 77 solver. that is the number of constraints m is negligible with respect to the number of unknowns n. the temporary work array which is allocated in the primitive has the size r = min(n. in order to adapt to the specific Scilab sparse matrices storage.5 Internal design The original Goldfarb/Idnani algorithm [11. 2. then the memory required is O(n). The original package provides two routines : 24 . unmodified algorithm which was written by Turlach (the original name was solve. • if m = n. • if m n.QP.4) (2.c. It was considered as a building block for a Sequential Quadratic Programming solver.7) where the matrix D is assumed to be symmetric positive definite. then the memory required is O(n2 ).4 Memory requirements Suppose that n is the dimension of the quadratic matrix Q and m is the sum of the number of equality constraints me and inequality constraints md. (2. the qpgen2 fortran 77 routine is used.6) (2. then the memory required is O(m). a dense or sparse solver is used : • If the constraints matrix C is dense. This routine is a modification of the original routine qpgen1. 12] was designed to solve the following minimization problem: 1 min −dT x + xT Dx x 2 where A T x = b1 1 AT x >= b2 2 (2.5) 2. the qpgen1sci routine is called. The qpgen2 routine is the original. Then.

• solve.f containing routine qpgen1 which implements the algorithm for sparse matrices.QP.f containing routine qpgen2 which implements the algorithm for dense matrices.compact.QP.• solve. 25 .
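As a closing illustration for this chapter, the workspace formula quoted in the memory requirements section can be evaluated for a few problem sizes. The Scilab snippet below is only a numerical illustration; the sizes n and m are arbitrary examples.

    // Evaluate the workspace size lw = 2n + r(r+5)/2 + 2m + 1, with r = min(n, m),
    // for a few arbitrary problem sizes, to see how it grows with n and m.
    for n = [10 100 1000]
        m = 2 * n;                        // example number of constraints
        r = min(n, m);
        lw = 2*n + r*(r + 5)/2 + 2*m + 1;
        mprintf("n = %5d, m = %5d : lw = %10d doubles\n", n, m, lw);
    end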

Chapter 3 Non-linear least square 3. For more details about this function.3 Optimization routines Scilab lsqrsolve function is based on the routines lmdif and lmder of the library Minpack (Argonne National Laboratory).2 Scilab function Scilab function called lsqrsolve is designed for the minimization of the sum of the squares of nonlinear functions using a Levenberg-Marquardt algorithm. 26 . 3.1 Mathematical point of view min x x The problem of the non linear least-quare optimization is to find the solution of f (x)2 with bounds constraints or without constraints and with f : Rn → Rm the cost function. please refer to Scilab online help 3.

+ xm Fm ≥ 0 or its dual problem.2) (4..1) This kind of optimization is the minimization of f (x) = cT x under the constraint: 4. Z) = ci . 27 .. please refer to Scilab online help 4. i = 1. m Z≥0 (4.. the maximization or −T race(F0 .. Z) under the constraints: T race(Fi . .1 Mathematical point of view F0 + x1 F1 + . Vandenberghe and Stephen Boyd.2 Scilab function The Scilab function called semidef is designed for this kind of optimization problems. For more details about this function.3 Optimization routines Scilab semidef function is based on a routine from L.Chapter 4 Semidefinite programming 4.3) (4..

• GAs search from a population of points. • GAs use probabilistic transition rules. The GA macros are based on the ”parameters” Scilab module for the management of the (many) optional parameters. which enables a high-level programming model for this optimization solver. 28]. • mutation.1 Introduction Genetic algorithms are search algorithms based on the mechanics on natural selection and natural genetics [17. • GAs use payoff (objective function) information. A simple genetic algorithm that yields good results in many practical problems is composed of three operators [17] : • reproduction. not the parameters themselves. A brief introduction to GAs is done in [43]. Genetic algorithms are different from more normal optimization and search procedures in four ways : • GAs work with a coding of the parameter set. The problems solved by the current genetic algorithms in Scilab are the following : • minimization of a cost function with bound constraints. The solver is made of Scilab macros. Genetic algorithms have been introduced in Scilab v5 thanks to a work by Yann Collette [9]. • cross-over.Chapter 5 Genetic algorithms 5. not derivativess or other auxiliary knowledge. 28 . not deterministic rules. • multi-objective non linear minimization with bound constraints. not a single point. Coello Coello on his website [7]. Many articles on this subject have been collected by Carlos A.

’y = ’+func+’(x)’). NbCouples = 110. params = add param ( ga params . 0 .1.2 Example In the current section. we give an example of the user of the GA algorithms. 2 ) . which are cleanly handled by the ”parameters” module. eval ( ’ min bd ’+f u n c+’ ( ) ’ ) ) . such as the size of the population. The add_param function allows to set individual named parameters. 1 2 3 4 5 6 7 8 9 10 11 12 ga // ga ga ga ga ga // ga ga ga ga params = i n i t p a r a m ( ) . endfunction function y = rastrigin(x) y = x(1)^2+x(2)^2-cos(12*x(1))-cos(18*x(2)). Proba_cross = 0. set of parameters. params = add param ( ga params . eval ( ’max bd ’+f u n c+’ ( ) ’ ) ) . params = add param ( ga params . endfunction function Res = opti_rastrigin() Res = [0 0]’. deff(’y=f(x)’. empty. c r o s s o v e r g a d e f a u l t ) . NbGen = 10. ’ mutation func ’ . function Res = min_bd_rastrigin() Res = [-1 -1]’. func = ’rastrigin’. c o d i n g g a i d e n t i t y ) .05. params = add param ( ga params . Log = %T. ’minbound ’ . 0 ) . Parameters to f i n e tune the Genetic algorithm . pressure = 0. ’ dimension ’ . PopSize = 100. ’ codage func ’ . params = add param ( ga params . i n i t g a d e f a u l t ) . params = add param ( ga params .5. 1 ) . Genetic Algorithms require many settings. params = add param ( ga params . are defined in the following sample Scilab script. The following is the definition of the Rastrigin function. nb_disp = 10. which are configure with key-value pairs. Parameters to adapt to the shape of the optimization problem params = add param ( ga params . endfunction This cost function is then defined with the generic name ”f”. m u t a t i o n g a d e f a u l t ) . ’ beta ’ . Other algorithmic parameters. 29 . params = add param ( ga params . endfunction function Res = max_bd_rastrigin() Res = [1 1]’. Proba_mut = 0. This module provides the nit_param function. ’ crossover func ’ . ’ delta ’ . ’ i n i t f u n c ’ . ’maxbound ’ . which returns a new.7.

999977 / -1. Log .999974 7: x(1) = 0. ’ s e l e c t i o n f u n c ’ .000427 -> f = -1.984184 / -1.000252 -> f = -1.000268 -> f = -1. The optim_ga function search a population solution of a single-objective problem with bound constraints. ’ pressure ’ .000034 x(2) = -0.13 ga params = add param ( ga params . ’ nb couples ’ .000442 -> f = -1.000118 x(2) = 0.871632 iteration 7 / 10 . pop opt ( i ) ( 1 ) .853613 iteration 3 / 10 .984184 / -0.314217 iteration 4 / 10 .000335 -> f = -1. p o p i n i t .999989 2: x(1) = -0.000351 -> f = -1. Proba mut . we analyze the GA services to configure a GA computation.000519 x(2) = -0.000136 -> f = -1.000497 x(2) = -0. 14 ga params = add param ( ga params . n b d i s p ) . NbCouples ) .min / max value found = -1. f o b j p o p o p t ( i ) ) . NbGen . .3 Support functions In this section.000235 x(2) = -0. 2 for i =1: n b d i s p 3 p r i n t f ( ’ Individual %d: x(1) = %f x(2) = %f −> f = %f\n ’ .000197 -> f = -1.min / max value found = -1.999979 5: x(1) = 0.998123 iteration 10 / 10 .min / max value found = -1. 15 ga params = add param ( ga params . 4 i .999534 The initial and final populations for this simulation are shown in 5.691332 iteration 6 / 10 . P r o b a c r o s s .000193 x(2) = -0. optim ga ( f .999970 8: x(1) = -0. .min / max value found = -1.000188 x(2) = -0.000215 x(2) = -0. 1 2 [ pop opt . The following are the messages which are displayed in the Scilab console : optim_ga: optim_ga: optim_ga: optim_ga: optim_ga: optim_ga: optim_ga: optim_ga: optim_ga: optim_ga: optim_ga: Initialization of the population iteration 1 / 10 . 1 p r i n t f ( ’ Genetic Algorithm : %d points from pop opt\n ’ .999989 / -1.999977 6: x(1) = -0. .min / max value found = -1. f o b j p o p i n i t ] = .min / max value found = -1.998183 / -1.999989 / -1. s e l e c t i o n g a e l i t i s t ) .999551 / -1. . ga params ) .000260 -> f = -1. The following script is a loop over the optimum individuals of the population. f o b j p o p o p t . pop opt ( i ) ( 2 ) . PopSize .000101 x(2) = 0.984543 / -1.999966 10: x(1) = 0.513463 iteration 5 / 10 .682413 / 0.min / max value found = -1.081632 iteration 2 / 10 . 30 . .min / max value found = -1.999982 4: x(1) = -0.980356 iteration 8 / 10 .999968 9: x(1) = 0.999979 / -1.994628 iteration 9 / 10 .min / max value found = -1.000409 -> f = -1.999987 3: x(1) = 0. Individual Individual Individual Individual Individual Individual Individual Individual Individual Individual 1: x(1) = -0.000558 x(2) = 0.min / max value found = -1. p r e s s u r e ) . 5 end The previous script make the following lines appear in the Scilab console.999964 5.1.

Figure 5.1: Optimum of the Rastrigin function – Initial population is in red. Optimum population is accumulated on the blue dot.

we analyze the 4 GA solvers which are available in Scilab : • optim_ga : flexible genetic algorithm • optim_moga : multi-objective genetic algorithm • optim_nsga : multi-objective Niched Sharing Genetic Algorithm 32 . We select the best individuals in the set of parents and childs individuals. 5. The following is the list of crossover functions available in Scilab : • crossover_ga_default : A crossover function for continuous variable functions. The following is the list of selection functions available in Scilab : • selection_ga_random : A function which performs a random selection of individuals. based on the Wheel algorithm : the crossover algorithm is a loop over the couples.3. We select pop size individuals in the set of parents and childs individuals at random. when the new population is computed by processing a selection over the individuals.1 Coding The following is the list of coding functions available in Scilab’s GA : • coding_ga_binary : A function which performs conversion between binary and continuous representation • coding_ga_identity : A ”no-operation” conversion function The user may configure the GA parameters so that the algorithm uses a customized coding function. This randomization is based on the Scilab primitive rand. which modifies both elements of each couple. The Scilab macro init_ga_default computes this population by performing a randomized discretization of the domain defined by the bounds as minimum and maximum arrays. • crossover_ga_binary : A crossover function for binary code 5.4 Initialization The initialization function returns a population as a list made of ”pop size” individuals.3. • selection_ga_elitist : An ’elitist’ selection function.2 Cross-over The crossover function is used when mates have been computed.3 Selection The selection function is used in the loop over the generations.4 Solvers In this section. 5.3. 5.5.3.

• reproduction : two list of children populations are computed. the ”hidden” function _ga_f simply encapsulate the input function.1 optim ga The Scilab macro optim_ga implements a Genetic Algorithm to find the solution of an optimization problem with one objective function and bound constraints.• optim_nsga2 : multi-objective Niched Sharing Genetic Algorithm version 2 While optim_ga is designed for one objective. The method is based on [15]. pareto filter The optim_moga function is a multi-objective genetic algorithm. based on a randomized Wheel. The function pareto_filter extracts non dominated solution from a set. If the input cost function is a regular Scilab function.2 optim moga. one defines the ”hidden” function _ga_f which computes the cost function. • initialization One computes the initial population with the init_func callback function (the default value for init_func is init_ga_default) • coding One encodes the initial population with the codage_func callback function (default : coding_ga_ide • evolutionary algorithm as a loop over the generations • decoding One decodes the optimum population back to the original variable system The loop over the generation is made of the following steps. 33 . • crossover : the two populations are processed through the crossover_func callback function (default : crossover_ga_default) • mutation : the two populations are processed throught the mutation_func callback function (default : mutation_ga_default) • computation of cost functions : the _ga_f function is called to compute the fitness for all individuals of the two populations • selection : the new generation is computed by processing the two populations through the selection_func callback function (default : selection_ga_elitist) 5.4.4. 5. • processing of input arguments In the case where the input cost function is a list. the 3 other solvers are designed for multiobjective optimization. The following is an overview of the steps in the GA algorithm.

5.3 optim nsga The optim_nsga function is a multi-objective Niched Sharing Genetic Algorithm.4 optim nsga2 The function optim_nsga2 is a multi-objective Niched Sharing Genetic Algorithm. 5. The method is based on [14].4. 34 .4. The method is based on [37].

The current Simulated Annealing solver aims at finding the solution of min f (x) x with bounds constraints and with f : Rn → R the cost function. To use the SA algorithm. – the temperature law. a new feature available in Scilab v5 . 13]. which enables a high-level programming model for this optimization solver. one must perform the following steps : • configure the parameters with calls to ”init param” and ”add param” especially.g. • compute an initial temperature with a call to ”compute initial temp” • find an optimum by using the ”optim sa” solver 35 . we describe the Simulated Annealing optimization methods.1 Introduction Simulated annealing (SA) is a generic probabilistic meta-algorithm for the global optimization problem. Reference books on the subject are [24. Genetic algorithms have been introduced in Scilab v5 thanks to the work by Yann Collette [9]. The GA macros are based on the ”parameters” Scilab module for the management of the (many) optional parameters. namely locating a good approximation to the global optimum of a given function in a large search space. 6. 25. 6..2 Overview The solver is made of Scilab macros. It is often used when the search space is discrete (e. – the neighbor function. all tours that visit a given set of cities) [42].Chapter 6 Simulated Annealing In this document. – the acceptance function.

sa sa sa sa sa sa sa sa params params params params params params params params = = = = = = = = init add add add add add add add param ( ) . 1 ∗ ( Max−Min ) ) . p r i n t f ( ’ I n i t i a l temperature T0 = %f\n ’ .6. P r o b a s t a r t . f o p t . T0 = c o m p u t e i n i t i a l t e m p ( x0 . sa params ) . 0 . param ( sa params param ( sa params param ( sa params param ( sa params param ( sa params param ( sa params param ( sa params . endfunction function Res = o p t i r a s t r i g i n ( ) Res = [ 0 0 ] ’ . endfunction // // Set parameters // func = ’ rastrigin ’ . . . t e m p l i s t ] = . 1 ) .3 Example The following example is extracted from the SA examples. 8 . . d e f f ( ’y=f (x) ’ . ’ temp law ’ .Max ) . Proba start = 0 . a c c e p t f u n c d e f a u l t ) . ∗ rand ( s i z e ( Min . Min = eval ( ’ min bd ’+f u n c+’ ( ) ’ ) . It extern = 30. ’ min delta ’ . endfunction function y = r a s t r i g i n ( x ) y = x (1)ˆ2+ x (2)ˆ2 − cos ( 1 2 ∗ x (1)) − cos ( 1 8 ∗ x ( 2 ) ) . f . [ x opt .1∗(Max−Min ) ) . . It Pre = 100. . . // // Simulated Annealing with default parameters // p r i n t f ( ’SA: geometrical decrease temperature law\n ’ ) . I t i n t e r n = 1000. T0 ) . ’ accept func ’ . s i z e ( Min . I t P r e . endfunction function Res = m a x b d r a s t r i g i n ( ) Res = [ 1 1 ] ’ . s a m e a n l i s t . 2 ) ) + Min . t e m p l a w d e f a u l t ) . ’max bound ’ . Max = eval ( ’max bd ’+f u n c+’ ( ) ’ ) . ’ neigh func ’ . −0. 36 . s a v a r l i s t . ’y=’+f u n c+’ (x) ’ ) . ’ max delta ’ . The Rastrigin functin is used as an example of a dimension 2 problem because it has many local optima but only one global optimum. . . ’min bound ’ . n e i g h f u n c d e f a u l t ) . x0 = (Max − Min ) . Min ) . 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 // // Rastrigin function // function Res = m i n b d r a s t r i g i n ( ) Res = [−1 − 1 ] ’ .

t = 1 : length ( s a m e a n l i s t ) . p r i n t f ( ’ optimal solution : \n ’ ) . a neighbor will be produced by adding some noise to each component of the vector. ’Temperature ’ ) . After some time. plot ( t . a neighbour function is used in order to explore the domain [43]. For a binary string. s a v a r l i s t . subplot ( 2 .4 Neighbor functions In the simulated annealing algorithm. ’ Iteration ’ . ’Mean / Variance ’ ) . I t i n t e r n . T0 . t . 37 .46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 o p t i m s a ( x0 . ’ Variance ’ ] ) . ’ g ’ ) . ’ r ’ . param ) where: • x current represents the current point. the following messages appear in the Scilab console. disp ( x o p t ) . • param is a list of parameters.0. a neighbor will be produced by changing one bit from 0 to 1 or from 1 to 0. p r i n t f ( ’ value of the objective function = %f\n ’ . drawlater . The prototype of a neighborhood function is the following : 1 function x n e i g h = n e i g h f u n c d e f a u l t ( x c u r r e n t . x t i t l e ( ’ Geometrical annealing ’ .0000935 value of the objective function = -1. subplot ( 2 . scf (). ’k−’ ) . f . 1 . ’ Iteration ’ . Log = %T.0.0006975 . For example. [ t e m p l i s t ( i ) t e m p l i s t ( i ) ] . T. 1 . 1 ) . f o p t ) . • T represents the current temperature. for a continuous vector. The following is a list of the neighbour functions available in the SA context : • neigh_func_default : SA function which computes a neighbor of a given point. x t i t l e ( ’Temperature evolution ’ . 2 ) . Variance and Temperature depending on the iteration. s a m e a n l i s t . I t e x t e r n . optimal solution: . 6.1 presents the evolution of Mean. end drawnow . for i =1: length ( t )−1 plot ( [ t ( i ) t ( i + 1 ) ] .999963 The figure 6. l e g e n d ( [ ’Mean ’ . sa params ) .

you need not to change the default acceptance function.. To implement these various simulated annealing optimization methods. The corresponding distribution is a Cauchy distribution which is more and more peaked as the temperature decreases. The neighbors distribution is a gaussian distribution which is more and more peaked as the temperature decrease.Figure 6.. This distribution is more and more peaked as the temperature decreases. • etc. 6.5 Acceptance functions There exist several kind of simulated annealing optimization methods: • the Fast Simulated Annealing. • neigh_func_fsa : The Fast Simulated Annealing neghborhood relationship. you only need to change the acceptance function. For common optimization.1: Convergence of the simulated annealing algorithm • neigh_func_csa : The classical neighborhood relationship for the simulated annealing. The following is a list of acceptance functions available in Scilab SAs : 38 . • neigh_func_vfsa : The Very Fast Simulated Annealing neighborhood relationship. • the simulated annealing based on metropolis-hasting acceptance function.

The algorithm is based on the following steps. If not. the one for which the convergence of the simulated annealing has been proven • temp_law_fsa : The Szu and Hartley Fast simulated annealing • temp_law_huang : The Huang temperature decrease law for the simulated annealing • temp_law_vfsa : This function implements the Very Fast Simulated Annealing from L. provided that a statistical criteria is satisfied.6 Temperature laws In the simulated annealing algorithm.7 optim sa The optim_sa macro implements the simulated annealing solver. If the new (neighbor) point improves the current optimum. Ingber 6. and an internal loop. 39 . the update may still be processed. the update is done with the new point replacing the old optimum. It allows to find the solution of an minimization problem with bound constraints. There are 5 temperature laws available in the SA context : • temp_law_default : A SA function which computes the temperature of the next temperature stage • temp_law_csa : The classical temperature decrease law. The statistical law decreases while the iterations are processed. It is based on an iterative update of two points : • the current point is updated by taking into account the neighbour function and the acceptance criterium. only the best point is returned as the algorithm output. external loop over the temperature decreases. which include a main.• accept_func_default : is the default acceptance function. a temperature law is used in a statistical criteria for the update of the optimum [43]. • the best point is the point which achieved the minimum of the objective function over the iterations. based on the exponential function Fneigh − Fcurrent level = exp − T • accept_func_vfsa : is the Very Fast Simulated Annealing function. While the current point is used internally to explore the domain. defined by : Level = 1 + exp − 1 Fcurrent −Fneigh T 6.

It is based on the following steps. The internal loop allows to explore the domain and is based on the neighbour function. the values of f. • initialization. the x iterates. For each iteration over the temperature decreases. with constant temperature. • update the temperature with the temperature law. • if history is required by user. • loop over the number of temperature decreases. • compute a neighbour of the current point. the following steps are processed. • loop over internal iterations. 40 . store the temperature.• processing of input arguments. • compute the objective function for that neighbour • if the objective decreases or if the acceptance criterium is true. overwrite the best point by the current point. then overwrite the current point with the neighbour • if the cost of the best point is greater than the cost of the current point.
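The neighbour-function prototype given in section 6.4 can be illustrated with a small user-defined function. The noise model below (a gaussian perturbation scaled by the temperature) and the name my_neigh_func are only illustrative choices, not part of the Scilab toolbox.

    // Example of a user-defined neighbour function following the prototype
    // x_neigh = neigh_func_default(x_current, T, param) given in section 6.4.
    function x_neigh = my_neigh_func(x_current, T, param)
        n = size(x_current, 1);
        m = size(x_current, 2);
        x_neigh = x_current + 0.1 * T * rand(n, m, "normal");
    endfunction

    // It would then be registered like the other options of the example above:
    // sa_params = add_param(sa_params, "neigh_func", my_neigh_func);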

. • f is a real linear scalar function of the entries of the unknown matrices X1 . . France. . Σ:  subject to Hj (X1 . .fr L. .fr.. . . where • X1 ... XM . Nikoukhah Ramine. . . XM ) ≥ 0. Internet: elghaoui@ensta. an interface to the program Semidefinite Programming SP (Copyright c 1994 by Lieven Vandenberghe and Stephen Boyd) distributed with Scilab. 2. . . i = 1.Delebecque@inria. . referred to as the unknown matrices. 7. it is referred to as the objective function.Nikoukhah@inria. j = 1. Delebecque Francois. the V ≥ 0 stands for V positive semi-definite unless stated otherwise). .fr F. XM ) Gi (X1 . El Ghaoui ENSTA. . X1 . . and in particular its two main functions lmisolver and lmitool for solving Linear Matrix Inequalities problems. . . Bvd. . . . . p. . q.1 Purpose Many problems in systems and control can be formulated as follows (see [6]):   minimize f (X1 . . This package uses Scilab function semidef. XM . 2. • Hj ’s are real symmetric matrices with entries which are affine functions of the entries of the unknown matrices X1 . 41 . . . . they are referred to as “Linear Matrix Equality” (LME) functions.Chapter 7 LMITOOL: a Package for LMI Optimization in Scilab R.. XM ) = 0.. (In this report. . 75739 Paris. they are referred to as “Linear Matrix Inequality” (LMI) functions. XM are unknown real matrices. • Gi ’s are real matrices with entries which are affine functions of the entries of the unknown matrices. 32. Victor. . . Research supported in part by DRET under contract 92017-BC14 This chapter describes a user-friendly Scilab package. . XM .

using the code SP [23]. This function computes the solution X1 . User can either invoke lmisolver directly. – LME: a list of matrices containing values of the LME functions Gi ’s for X values in XLIST. The size and structure of XLIST0 are used to set up the problem and determine the size and structure of the output XLISTF. if compatible with the LME functions.The purpose of LMITOOL is to solve problem Σ in a user-friendly manner in Scilab. up to a few hundred variables).LMI.2.OPT]] = lmisolver(XLIST0. This code is intended for small and medium-sized problems (say.2). In general. given the values of the unknown matrices. . 7.EVALFUNC[.OBJ]=EVALFUNC(XLIST) where – XLIST: a list.2 Function lmisolver LMITOOL is built around the Scilab function lmisolver. LMI can also be a matrix (in case there is only one LMI function to be evaluated). Gi and Hj as a function the unknown matrices.2. the ith element of XLIST0 is the initial guess on the value of the unknown matrix Xi . It contains initial guess on the values of the unknown matrices. 7. • EVALFUNC: a Scilab function called evaluation function (supplied by the user) which evaluates the LME. To solve Σ.options]) • XLIST0: a list structure including matrices and/or list of matrices. 42 . identical in size and structure to XLIST0. XM of problem Σ. they are ignored otherwise. LME can be a matrix in case there is only one LME function to be evaluated (instead of a list containing this matrix as unique element).1 where Syntax [XLISTF[. – LMI: a list of matrices containing the values of the LMI functions Hj ’s for X values in XLIST. The values of the matrices in XLIST0. by providing the necessary information in a special format or he can use the interactive function lmitool described in Section 7. . This is a useful feature when the number of unknown matrices is not fixed a priori (see Example of Section 7. LMI and objective functions. Gi and Hj . It can also be a list of a mixture of matrices and lists which in turn contain values of of LMI’s. user must provide an evaluation function which “evaluates” f . In some cases however it is more convenient to define one or more elements of XLIST0 to be lists (of unknown matrices) themselves. as well as an initial guess on the values of the unknown matrices. given functions f . are used as intial condition for the optimization algorithm. and so on.3. . . The syntax is: [LME. It can also be a list of a mixture of matrices and lists which in turn contain values of LME’s. and so on.

  – OBJ: a scalar equal to the value of the objective function f for X values in XLIST.

If the Σ problem has no equality constraints, then LME should be []. Similarly for LMI and OBJ.

• options: a 5x1 vector containing the optimization parameters Mbound, abstol, nu, maxiters and reltol; see the manual page for semidef for details (Mbound is a multiplicative coefficient for M). This argument is optional; if omitted, default parameters are used.

• XLISTF: a list, identical in size and structure to XLIST0, containing the solution of the problem (optimal values of the unknown matrices).

• OPT: a scalar corresponding to the optimal value of the minimization problem Σ.

7.2.2 Examples

State-feedback with control saturation constraint

Consider the linear system

    ẋ = Ax + Bu

where A is an n × n and B, an n × nu matrix. There exists a stabilizing state feedback K such that for every initial condition x(0) with ||x(0)|| ≤ 1, the resulting control satisfies ||u(t)|| ≤ umax for all t ≥ 0, if and only if there exist an n × n matrix Q and an nu × n matrix Y satisfying the equality constraint

    Q − Q^T = 0

and the inequality constraints

    Q ≥ 0
    −AQ − QA^T − BY − Y^T B^T > 0
    [ umax² I   Y ]
    [ Y^T       Q ]  ≥ 0

in which case one such K can be constructed as K = Y Q^(−1). To solve this problem using lmisolver, we first need to construct the evaluation function:

function [LME,LMI,OBJ]=sf_sat_eval(XLIST)
  [Q,Y]=XLIST(:)
  LME=Q-Q'
  LMI=list(-A*Q-Q*A'-B*Y-Y'*B',[umax^2*eye(Y*Y'),Y;Y',Q],Q-eye())
  OBJ=[]

Note that OBJ=[] indicates that the problem considered is a feasibility problem, i.e., we are only interested in finding a set of X's that satisfy the LME and LMI functions. Assuming A, B and umax already exist in the environment, we can call lmisolver, and reconstruct the solution in Scilab, as follows:

--> Q_init=zeros(A);
--> Y_init=zeros(B');
--> XLIST0=list(Q_init,Y_init);
--> XLIST=lmisolver(XLIST0,sf_sat_eval);
--> [Q,Y]=XLIST(:)

These Scilab commands can of course be encapsulated in a Scilab function, say sf_sat; we call sf_sat the solver function for this problem. Then, to solve this problem, all we need to do is type:

--> [Q,Y]=sf_sat(A,B,umax)

Control of jump linear systems

We are given a linear system

    ẋ = A(r(t))x + B(r(t))u,

where A is n × n and B is n × nu. The scalar parameter r(t) is a continuous-time Markov process taking values in a finite set {1, ..., N}. The transition probabilities of the process r are defined by a "transition matrix" Π = (πij), where the πij's are the transition probability rates from the i-th mode to the j-th. Such systems, referred to as "jump linear systems", can be used to model linear systems subject to failures.

We seek a state-feedback control law such that the resulting closed-loop system is mean-square stable. That is, for every initial condition x(0), the resulting trajectory of the closed-loop system satisfies lim_{t→∞} E||x(t)||² = 0. The control law we look for is a mode-dependent linear state-feedback, i.e., it has the form u(t) = K(r(t))x(t); the K(i)'s are nu × n matrices (the unknowns of our control problem).

It can be shown that this problem has a solution if and only if there exist n × n matrices Q(1), ..., Q(N), and nu × n matrices Y(1), ..., Y(N), such that

    Q(i) − Q(i)^T = 0,
    TrQ(1) + ... + TrQ(N) − 1 = 0,

and

    [ Q(i)   Y(i)^T ]
    [ Y(i)   I      ]  > 0,

    −[ A(i)Q(i) + Q(i)A(i)^T + B(i)Y(i) + Y(i)^T B(i)^T + Σ_{j=1}^{N} πji Q(j) ] > 0,    i = 1, ..., N.

If such matrices exist, a stabilizing state-feedback is given by K(i) = Y(i)Q(i)^(−1), i = 1, ..., N.

In the above problem, the data matrices are A(1), ..., A(N), B(1), ..., B(N) and the transition matrix Π. The unknown matrices are the Q(i)'s (which are symmetric n × n matrices) and the Y(i)'s (which are nu × n matrices). In this case, both the number of the data matrices and that of the unknown matrices are a priori unknown. The above problem is obviously a Σ problem. In this case, we can let XLIST be a list of two lists: one representing the Q's and the other, the Y's. The evaluation function required for invoking lmisolver can be constructed as follows:

function [LME,LMI,OBJ]=jump_sf_eval(XLIST)
  [Q,Y]=XLIST(:)
  N=size(A); [n,nu]=size(B(1))
  LME=list(); LMI1=list(); LMI2=list()
  tr=0
  for i=1:N
    tr=tr+trace(Q(i))
    LME(i)=Q(i)-Q(i)'
    LMI1(i)=[Q(i),Y(i)';Y(i),eye(nu,nu)]
    SUM=zeros(n,n)
    for j=1:N
      SUM=SUM+PI(j,i)*Q(j)
    end
    LMI2(i)=-(A(i)*Q(i)+Q(i)*A(i)'+B(i)*Y(i)+Y(i)'*B(i)'+SUM)
  end
  LMI=list(LMI1,LMI2)
  LME(N+1)=tr-1
  OBJ=[]

Note that LMI is also a list of lists containing the values of the LMI matrices. This is just a matter of convenience. Now, we can use lmisolver as follows (assuming the lists A and B, and the matrix PI, have already been defined). First we should initialize Q and Y:

--> N=size(A); [n,nu]=size(B(1));
--> Q_init=list(); Y_init=list();
--> for i=1:N, Q_init(i)=zeros(n,n);Y_init(i)=zeros(nu,n);end

Then, we can solve the problem in Scilab as follows:

--> XLIST0=list(Q_init,Y_init)
--> XLISTF=lmisolver(XLIST0,jump_sf_eval)
--> [Q,Y]=XLISTF(:)

The above commands can be encapsulated in a solver function, say jump_sf, in which case we simply need to type:

--> [Q,Y]=jump_sf(A,B,PI)

to obtain the solution.

Descriptor Lyapunov inequalities

In the study of descriptor systems, it is sometimes necessary to find (or find out that it does not exist) an n × n matrix X satisfying

    E^T X = X^T E ≥ 0
    A^T X + X^T A + I ≤ 0

where E and A are n × n matrices such that (E, A) is a regular pencil. In this problem, which clearly is a Σ problem, the LME functions play an important role. The evaluation function can be written as follows:

function [LME,LMI,OBJ]=dscr_lyap_eval(XLIST)
  X=XLIST(:)
  LME=E'*X-X'*E
  LMI=list(-A'*X-X'*A-eye(),E'*X)
  OBJ=[]

and the problem can be solved by (assuming E and A are already defined):

--> XLIST0=list(zeros(A))
--> XLISTF=lmisolver(XLIST0,dscr_lyap_eval)
--> X=XLISTF(:)

Mixed H2/H∞ Control

Consider the linear system

    ẋ  = Ax + B1w + B2u
    z1 = C1x + D11w + D12u
    z2 = C2x + D22u

The mixed H2/H∞ control problem consists in finding a stabilizing feedback which yields ||Tz1w||∞ < γ and minimizes ||Tz2w||2, where ||Tz1w||∞ and ||Tz2w||2 denote respectively the closed-loop transfer functions from w to z1 and z2. In [22], it is shown that the solution to this problem can be expressed as K = LX^(−1), where X and L are obtained from the problem of minimizing Trace(Y) subject to:

    X − X^T = 0,    Y − Y^T = 0,

and

    −[ AX + B2L + (AX + B2L)^T + B1B1^T    XC1^T + L^T D12^T + B1D11^T ]
     [ C1X + D12L + D11B1^T                −γ²I + D11D11^T             ]  > 0

    [ Y                 C2X + D22L ]
    [ (C2X + D22L)^T    X          ]  > 0

To solve this problem with lmisolver, we define the evaluation function:

function [LME,LMI,OBJ]=h2hinf_eval(XLIST)
  [X,Y,L]=XLIST(:)
  LME=list(X-X',Y-Y')
  LMI=list(-[A*X+B2*L+(A*X+B2*L)'+B1*B1',X*C1'+L'*D12'+B1*D11';(X*C1'+L'*D12'+B1*D11')',-gamma^2*eye()+D11*D11'],[Y,C2*X+D22*L;(C2*X+D22*L)',X])
  OBJ=trace(Y)

and use it as follows:

--> X_init=zeros(A); Y_init=zeros(C2*C2'); L_init=zeros(B2')
--> XLIST0=list(X_init,Y_init,L_init);
--> XLISTF=lmisolver(XLIST0,h2hinf_eval);
--> [X,Y,L]=XLISTF(:)
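The feedback gain itself is not returned by lmisolver; as noted above, it is K = LX^(−1). The following one-line sketch (not part of the original session, just an illustration) recovers it from the computed X and L:

--> K=L/X    // right division: equivalent to L*inv(X), but numerically preferable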

Descriptor Riccati equations

In Kalman filtering for the descriptor system

    E x(k+1) = Ax(k) + u(k)
    y(k+1)   = Cx(k+1) + r(k)

where u and r are zero-mean, white Gaussian noise sequences with covariance Q and R respectively, one needs to obtain the positive solution to the descriptor Riccati equation (see [33]):

                      [ APA^T + Q   0     E ]^(−1) [ 0 ]
    P = −[ 0  0  I ]  [ 0           R     C ]      [ 0 ]
                      [ E^T         C^T   0 ]      [ I ]

It can be shown that this problem can be formulated as a Σ problem as follows: maximize Trace(P) under the constraints P − P^T = 0 and

    [ APA^T + Q   0      EP ]
    [ 0           R      CP ]  ≥ 0.
    [ PE^T        PC^T   P  ]

The evaluation function is:

function [LME,LMI,OBJ]=ric_dscr_eval(XLIST)
  P=XLIST(:)
  LME=P-P'
  LMI=[A*P*A'+Q,zeros(A*C'),E*P;zeros(C*A'),R,C*P;P*E',P*C',P]
  OBJ=-trace(P)

which can be used as follows (assuming E, A, C, Q and R are defined and have compatible sizes; note that E and A need not be square):

--> P_init=zeros(A'*A)
--> P=lmisolver(P_init,ric_dscr_eval)

Linear programming with equality constraints

Consider the following classical optimization problem

    minimize    e^T x
    subject to  Ax + b ≥ 0,
                Cx + d = 0,

where A and C are matrices and e, b and d are vectors with appropriate dimensions. Here the sign ≥ is to be understood elementwise. This problem can be formulated in LMITOOL as follows:

function [LME,LMI,OBJ]=linprog_eval(XLIST)
  [x]=XLIST(:)
  [m,n]=size(A)
  [p,mc]=size(C)
  LME=C*x+d
  LMI=list()
  tmp=A*x+b
  for i=1:m
    LMI(i)=tmp(i)
  end
  OBJ=e'*x

and solved in Scilab by (assuming A, C, e, b and d and an initial guess x0 exist in the environment):

--> x=lmisolver(x0,linprog_eval)

Sylvester Equation

The problem of finding a matrix X satisfying

    AX + XB = C

or

    AXB = C

where A and B are square matrices (of possibly different sizes) is a well-known problem. We refer to the first equation as the continuous Sylvester equation and to the second, the discrete Sylvester equation. These two problems can easily be formulated as Σ problems as follows:

function [LME,LMI,OBJ]=sylvester_eval(XLIST)
  [X]=XLIST(:)
  if flag=='c' then
    LME=A*X+X*B-C
  else
    LME=A*X*B-C
  end
  LMI=[]
  OBJ=[]

with a solver function such as:

function [X]=sylvester(A,B,C,flag)
  [na,ma]=size(A);[nb,mb]=size(B);[nc,mc]=size(C);
  if ma<>na|mb<>nb|nc<>na|mc<>nb then error("invalid dimensions");end
  XLISTF=lmisolver(zeros(nc,mc),sylvester_eval)
  X=XLISTF(:)

Then, to solve the problem, all we need to do is type (assuming A, B and C are defined):

--> X=sylvester(A,B,C,'c')

for the continuous problem and

--> X=sylvester(A,B,C,'d')

for the discrete problem.
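As a quick usage sketch (the data below is purely illustrative and not from the original text), the solver function above can be exercised on a small continuous Sylvester equation and the result checked through the residual:

--> A=[1 2;3 4]; B=[5 1;0 6]; C=[1 0;0 1];
--> X=sylvester(A,B,C,'c');
--> norm(A*X+X*B-C)    // should be close to zero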

7.3 Function LMITOOL

The purpose of LMITOOL is to automate most of the steps required before invoking lmisolver. In particular, it generates a *.sci file including the solver function and the evaluation function, or at least their skeleton. The solver function is used to define the initial guess and to modify the optimization parameters (if needed).

lmitool can be invoked with zero, one or three arguments.

7.3.1 Non-interactive mode

lmitool can be invoked with three input arguments as follows:

Syntax

    txt=lmitool(probname,varlist,datalist)

where

• probname: a string containing the name of the problem,
• varlist: a string containing the names of the unknown matrices (separated by commas if there are more than one),
• datalist: a string containing the names of the data matrices (separated by commas if there are more than one),
• txt: a string providing information on what the user should do next.

In this mode, lmitool generates a file in the current directory. The name of this file is obtained by adding ".sci" to the end of probname. This file is the skeleton of a solver function and the corresponding evaluation function.

Example

Suppose we want to use lmitool to solve the problem presented in Section 7.2.2. Invoking

-->txt=lmitool('sf_sat','Q,Y','A,B,umax')

yields the output

--> txt =
!To solve your problem, you need to
!
!1- edit file /usr/home/DrScilab/sf_sat.sci
!
!2- load (and compile) your functions:
!
!   getf('/usr/home/DrScilab/sf_sat.sci','c')

!
!3- Define A, B, umax and call sf_sat function:
!
!   [Q,Y]=sf_sat(A,B,umax)
!
!To check the result, use
!
!   [LME,LMI,OBJ]=sf_sat_eval(list(Q,Y))

and results in the creation of the file '/usr/home/curdir/sf_sat.sci' with the following content:

function [Q,Y]=sf_sat(A,B,umax)
// Generated by lmitool on Tue Feb 07 10:30:35 MET 1995

  Mbound = 1e3;
  abstol = 1e-10;
  nu = 10;
  maxiters = 100;
  reltol = 1e-10;
  options=[Mbound,abstol,nu,maxiters,reltol];

///////////DEFINE INITIAL GUESS BELOW
  Q_init=...
  Y_init=...
///////////

  XLIST0=list(Q_init,Y_init)
  XLIST=lmisolver(XLIST0,sf_sat_eval,options)
  [Q,Y]=XLIST(:)

/////////////////EVALUATION FUNCTION////////////////////////////
function [LME,LMI,OBJ]=sf_sat_eval(XLIST)
  [Q,Y]=XLIST(:)

/////////////////DEFINE LME, LMI and OBJ BELOW
  LME=...
  LMI=...
  OBJ=...

It is easy to see how a small amount of editing can do the rest!
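For the example at hand, the editing simply amounts to copying the initial guess and the evaluation function of Section 7.2.2 into the generated skeleton. The following completed file is only a sketch assembled from those earlier listings:

function [Q,Y]=sf_sat(A,B,umax)
// completed version of the generated skeleton (sketch)
  Mbound = 1e3; abstol = 1e-10; nu = 10; maxiters = 100; reltol = 1e-10;
  options=[Mbound,abstol,nu,maxiters,reltol];
  // initial guess, as in Section 7.2.2
  Q_init=zeros(A)
  Y_init=zeros(B')
  XLIST0=list(Q_init,Y_init)
  XLIST=lmisolver(XLIST0,sf_sat_eval,options)
  [Q,Y]=XLIST(:)

function [LME,LMI,OBJ]=sf_sat_eval(XLIST)
  [Q,Y]=XLIST(:)
  // LME, LMI and OBJ, as in Section 7.2.2
  LME=Q-Q'
  LMI=list(-A*Q-Q*A'-B*Y-Y'*B',[umax^2*eye(Y*Y'),Y;Y',Q],Q-eye())
  OBJ=[]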

7.3.2 Interactive mode

lmitool can be invoked with zero or one input argument as follows:

Syntax

    txt=lmitool()
    txt=lmitool(file)

where

• file: a string giving the name of an existing ".sci" file generated by lmitool.

In this mode, lmitool is fully interactive. Using a succession of dialogue boxes, the user can completely define his problem. This mode is very easy to use and its operation is completely self-explanatory. Invoking lmitool with one argument allows the user to start off with an existing file. This mode is useful for modifying existing files or when the new problem is not too much different from a problem already treated by lmitool.

Example

Consider the following estimation problem

    y = Hx + V w

where x is unknown and is to be estimated, y is known, w is a unit-variance zero-mean Gaussian vector, and

    H ∈ Co{H(1), ..., H(N)},    V ∈ Co{V(1), ..., V(N)}

where Co denotes the convex hull and H(i) and V(i), i = 1, ..., N, are given matrices. The objective is to find L such that the estimate

    x̂ = Ly

is unbiased and the worst case estimation error variance E(||x − x̂||²) is minimized. It can be shown that this problem can be formulated as a Σ problem as follows: minimize γ subject to

    I − LH(i) = 0,        i = 1, ..., N,
    X(i) − X(i)^T = 0,    i = 1, ..., N,

and

    [ I        (LV(i))^T ]
    [ LV(i)    X(i)      ]  ≥ 0,      γ − Trace(X(i)) ≥ 0,      i = 1, ..., N.

To use lmitool for this problem, we invoke it as follows:

--> lmitool()

This results in an interactive session which is partly illustrated in the following figures.
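Although lmitool builds the corresponding solver and evaluation functions through dialogue boxes, an evaluation function for this problem can also be written by hand and passed to lmisolver directly. The following is only a sketch under stated assumptions: H and V are taken to be Scilab lists of the given matrices, and the unknowns are grouped as list(L, X, gam), where X is a list of the X(i) and gam is the scalar γ.

function [LME,LMI,OBJ]=estim_eval(XLIST)
  [L,X,gam]=XLIST(:)
  N=size(H)                          // H and V are lists defined in the environment
  LME=list(); LMI=list()
  for i=1:N
    W=L*V(i)
    LME(i)=eye(L*H(i))-L*H(i)        // unbiasedness: I - L*H(i) = 0
    LME(N+i)=X(i)-X(i)'              // X(i) symmetric
    LMI(i)=[eye(W'*W),W';W,X(i)]     // worst-case variance LMI
    LMI(N+i)=gam-trace(X(i))         // gam - Trace(X(i)) >= 0
  end
  OBJ=gam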

7.4 How lmisolver works

The function lmisolver works essentially in four steps:

1. Initial set-up. The sizes and structure of the initial guess are used to set up the problem, and in particular the size of the unknown vector.

2. Elimination of equality constraints. Making repeated calls to the evaluation function, lmisolver generates a canonical representation of the form

    minimize    c̃^T z
    subject to  F̃0 + z1 F̃1 + · · · + zm̃ F̃m̃ ≥ 0,    Az + b = 0,

where z contains the coefficients of all matrix variables. This step uses extensively sparse matrices to speed up the computation and reduce the memory requirement.

3. Elimination of variables. Then, lmisolver eliminates the redundant variables. The equality constraints are eliminated by computing the null space N of A and a solution z0 (if any) of Az + b = 0. At this stage, all solutions of the equality constraints are parametrized by z = Nx + z0, where x is a vector containing the independent variables (a small illustrative sketch of this parametrization is given after Section 7.5). The computation of N and z0 is done using the sparse LU functions of Scilab. Once the equality constraints are eliminated, the problem is reformulated as

    minimize    c^T x
    subject to  F0 + x1 F1 + · · · + xm Fm ≥ 0,

where c is a vector, F0, ..., Fm are symmetric matrices, and x contains the independent elements in the matrix variables X1, ..., XM. (If the Fi's are dependent, a column compression is performed.)

Figure 7.1: This window must be edited to define the problem name and the name of the variables used.

Figure 7.2: For the example at hand, the result of the editing should look something like this.

4. Optimization. Finally, lmisolver makes a call to the function semidef (an interface to SP [23]). This phase is itself divided into a feasibility phase and a minimization phase (only if the linear objective function is not empty). The feasibility phase is avoided if the initial guess is found to be feasible. The function semidef is called with the optimization parameters abstol, nu, maxiters, reltol. The parameter M is set above the value

    Mbnd*max(sum(abs([F0 ... Fm])))

For details about the optimization phase, and the meaning of the above optimization parameters, see the manual page for semidef.

7.5 Other versions

LMITOOL is also available on Matlab. The Matlab version can be obtained by anonymous ftp from ftp.ensta.fr under /pub/elghaoui/lmitool.
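As announced in step 3 of Section 7.4, here is a minimal dense-matrix sketch of the null-space parametrization z = Nx + z0 used to eliminate the equality constraints. The matrices A and b below are purely illustrative, and lmisolver itself performs this computation with sparse LU factorizations rather than the dense calls used here.

// Illustrative equality constraint A*z + b = 0 (hypothetical data).
A  = [1 1 0; 0 1 1];
b  = [1; 2];
N  = kernel(A);            // basis of the null space of A
z0 = A\(-b);               // one particular solution of A*z = -b
x  = rand(size(N,2),1);    // arbitrary independent variables
z  = N*x + z0;             // any such z satisfies the equality constraints
norm(A*z+b)                // close to zero, up to rounding errors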

Figure 7.3: This is the skeleton of the solver function and the evaluation function generated by LMITOOL using the names defined previously.

Figure 7.4: After editing, we obtain this.

Figure 7.5: A file is proposed in which the solver and evaluation functions are to be saved. You can modify it if you want.

Chapter 8

Optimization data files

This section presents the optimization data files which can be used to configure a specific optimization problem in Scilab. The following is a (non-exhaustive) list of ASCII file formats often used in optimization softwares:

• SIF : Standard Input Format [1],
• GAMS : General Algebraic Modeling System [16, 40],
• AMPL : A Mathematical Programming Language [10, 39],
• MPS : Mathematical Programming System [27, 41],

but other file formats appeared in recent years, such as the XML-based file format OSiL [35, 36]. The following sections describe the Scilab tools to manage optimization data files.

8.1 MPS files and the Quapro toolbox

The Quapro toolbox implements the readmps function, which reads a file containing the description of an LP problem given in MPS format and returns a tlist describing the optimization problem. It is an interface with the program rdmps1.f of hopdm (J. Gondzio). For a description of the variables, see the file rdmps1.f. MPS format is a standard ASCII medium for LP codes. MPS format is described in more detail in Murtagh's book [30].

8.2 SIF files and the CUTEr toolbox

The SIF file format can be processed with the CUTEr Scilab toolbox. More precisely, it results from an interface of the SDLANC Fortran procedure. Given a SIF [1] file, the function sifdecode generates the associated Fortran routines RANGE.f, ELFUN.f, EXTER.f, GROUP.f and, if automatic differentiation is required, ELFUND.f, GROUPD.f, EXTERA.f. An associated data file named OUTSDIF.d and an output messages file OUTMESS are also generated. All these files are created in the directory whose path is given in Pathout. The sifdecode function is based on the Sifdec code [20].

Chapter 9

Scilab Optimization Toolboxes

Some Scilab toolboxes are designed to solve optimization problems. In this chapter, we begin by presenting the Quapro toolbox, which allows to solve linear and quadratic problems. Then we outline the other main optimization toolboxes.

9.1 Quapro

The Quapro toolbox was formerly a Scilab built-in optimization tool. It has been transformed into a toolbox for license reasons.

9.1.1 Linear optimization

Mathematical point of view

This kind of optimization is the minimization of the function f(x) with

    f(x) = p^T x

under:

• no constraints,
• inequality constraints (9.1),
• or inequality constraints and bound constraints ((9.1) & (9.2)),
• or inequality constraints, bound constraints and equality constraints ((9.1) & (9.2) & (9.3)),

where

    C x ≤ b          (9.1)
    ci ≤ x ≤ cs      (9.2)
    Ce x = be        (9.3)

Scilab function

The Scilab function called linpro is designed for linear optimization programming. For more details about this function, please refer to the Scilab online help. This function and the associated routines have been written by Cecilia Pola Mendez and Eduardo Casas Renteria from the University of Cantabria. Please note that this function can not solve problems based on sparse matrices. For this kind of problem, you can use a Scilab toolbox called LIPSOL, which gives an equivalent of linpro for sparse matrices. LIPSOL is available on the Scilab web site.

Optimization routines

The Scilab linpro function is based on:

• some Fortran routines written by the authors of linpro,
• some Fortran Blas routines,
• some Fortran Scilab routines,
• some Fortran Lapack routines.

9.1.2 Linear quadratic optimization

Mathematical point of view

This kind of optimization is the minimization of the function f(x) with

    f(x) = (1/2) x^T Q x + p^T x

under:

• no constraints,
• inequality constraints (9.1),
• or inequality constraints and bound constraints ((9.1) & (9.2)),
• or inequality constraints, bound constraints and equality constraints ((9.1) & (9.2) & (9.3)).

Scilab function

The Scilab functions called quapro (whatever Q is) and qld (when Q is positive definite) are designed for linear quadratic optimization programming. For more details about these functions, please refer to the Scilab online help for quapro and the Scilab online help for qld. Both functions can not solve problems based on sparse matrices.

The quapro function and the associated routines have been written by Cecilia Pola Mendez and Eduardo Casas Renteria from the University of Cantabria. The qld function and the associated routine have been written by K. Schittkowski from the University of Bayreuth, and by A.L. Tits and J.L. Zhou from the University of Maryland.
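As an illustration of Section 9.1.1, the following is a hedged sketch of a small linear program solved with linpro. The calling sequence assumed here, linpro(p,C,b,ci,cs,me), as well as the data, are illustrative only and should be checked against the Quapro online help.

// Sketch only: minimize -x1 - 2*x2  subject to  x1 + x2 <= 4,  0 <= x <= 3.
p  = [-1;-2];
C  = [1 1];          // inequality constraints C*x <= b
b  = 4;
ci = [0;0];          // lower bounds
cs = [3;3];          // upper bounds
me = 0;              // number of equality constraints (none here)
[x, lagr, f] = linpro(p, C, b, ci, cs, me);
// for this data the expected minimizer is x = [1;3] with f = -7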

Optimization routines

The Scilab quapro function is based on:

• some Fortran routines written by the authors of linpro,
• some Fortran Blas routines,
• some Fortran Scilab routines,
• some Fortran Lapack routines.

9.2 CUTEr

CUTEr is a versatile testing environment for optimization and linear algebra solvers [31]. This toolbox is a Scilab port, by Serge Steer and Bruno Durand, of the original Matlab toolbox. A Fortran compiler is mandatory to build problems. This toolbox contains the following parts.

• Problem database. A set of testing problems coded in "Standard Input Format" (SIF) is included in the sif/ sub-directory. This set comes from www.numerical.rl.ac.uk/cute/mastsif.html. The available problems are located in the sif directory. The Scilab function sifselect can be used to select some of these problems according to objective function properties, constraints properties and regularity properties.

• SIF format decoder. The Scilab function sifdecode can be used to generate the Fortran codes associated to a given problem, while the Scilab function buildprob compiles and dynamically links these Fortran codes with Scilab.

• Problem relative functions. The execution of the function buildprob adds a set of functions to Scilab. The first one is usetup, for unconstrained or bounded problems, or csetup, for problems with general constraints. These functions are to be called before any of the following, to initialize the problem relative data (only one problem can be run at a time). The other functions allow to compute the objective function, the gradient, the Hessian values, ... of the problem at a given point (see ufn, ugr, udh, ... for unconstrained or bounded problems, or cfn, cgr, cdh, ... for problems with general constraints).

A typical use starts with problem selection using the Scilab function sifselect. This gives a vector of problem names corresponding to the selection criteria [32]. The sifbuild function can then be used to generate the Fortran codes associated to a given problem, to compile them and to dynamically link them to Scilab. This will create a set of problem relative functions, for example ufn or ugr, which can be called to compute the objective function or its gradient at a given point. The sifoptim function automatically applies the optim function to a selected problem.

• CUTEr and optim. The Scilab function optim can be used together with CUTEr, using either the external function ucost or the driver function sifoptim.

The following is a list of references for the CUTEr toolbox:

• CUTEr toolbox on the Scilab Toolbox center
• CUTEr website

9.3 The Unconstrained Optimization Problem Toolbox

The Unconstrained Optimization Problem Toolbox provides 35 unconstrained optimization problems. The Moré, Garbow and Hillstrom collection of test functions [29] is widely used in testing unconstrained optimization software. The code for these problems is available in Fortran from the netlib software archives. The goal of this toolbox is to provide unconstrained optimization problems in order to test optimization algorithms.

It provides the function value, the gradient, the function vector, the Jacobian, and provides the Hessian matrix for 18 problems. Additionally, it provides finite difference routines for the gradient, the Jacobian and the Hessian matrix. It provides the starting point for each problem, the optimum function value and the optimum point x for many problems. Finally, all function values, gradients, Jacobians and Hessians are tested. The functions are based on macros: no compiler is required, which is an advantage over the CUTEr toolbox.

This toolbox is available in ATOMS : http://atoms.scilab.org/toolboxes/uncprb and is managed under Scilab's Forge : http://forge.scilab.org/index.php/p/uncprb. To install it, type the following statement in Scilab v5.2 (or better):

    atomsInstall('uncprb')

9.4 Other toolboxes

• Interface to CONMIN: an interface to the NASTRAN / NASA CONMIN optimization program, by Yann Collette. CONMIN can solve a nonlinear objective problem with nonlinear constraints. CONMIN uses a two-step limited memory quasi-Newton-like Conjugate Gradient. The CONMIN optimization method is currently used in NASTRAN (a professional finite element tool) and in the optimization part of NASTRAN (the CONMIN tool). The CONMIN Fortran program has been written by G. Vanderplaats (1973).
  – CONMIN on the Scilab Toolbox center

• Differential Evolution: random search of a global minimum, by Helmut Jarausch. This toolbox is based on a Rainer Storn algorithm.
  – Differential Evolution on the Scilab Toolbox center

• FSQP Interface: an interface for the Feasible Sequential Quadratic Programming library. This toolbox is designed for non-linear optimization with equality and inequality constraints. FSQP is a commercial product.
  – FSQP on the Scilab Toolbox center
  – FSQP website

• IPOPT interface: an interface for Ipopt, which is based on an interior point method which can handle equality and inequality nonlinear constraints. This solver can handle large scale optimization problems. As open source software, the source code for Ipopt is provided without charge; you are free to use it, also for commercial purposes. This Scilab-Ipopt interface was based on the Matlab Mex Interface developed by Claas Michalik and Steinar Hauan. Modifications to the Scilab interface were made by Edson Cordeiro do Valle. This version only works on Linux. Tested with gcc 4.0.3, scons and Scilab >= 4.0.
  – Ipopt on the Scilab Toolbox center
  – Ipopt website

• Interface to LIPSOL: sparse linear problems with interior points method, by H. Rubio Scola. LIPSOL can minimize a linear objective with linear constraints and bound constraints. It is based on a primal-dual interior point method, which uses a sparse-matrix data-structure to solve large, sparse, symmetric positive definite linear systems. LIPSOL is written by Yin Zhang; the original Matlab-based code has been adapted to Scilab by H. Rubio Scola (University of Rosario, Argentina). LIPSOL also uses the ORNL sparse Cholesky solver version 0.3, written by Esmond Ng and Barry Peyton. It is distributed freely under the terms of the GPL.
  – LIPSOL on the Scilab Toolbox center
  – LIPSOL website
  – LIPSOL User's Guide

• LPSOLVE: an interface to lp_solve. lp_solve is a free mixed integer/binary linear programming solver with full source, examples and manuals. lp_solve is under LGPL, the GNU lesser general public license. lp_solve uses the 'Simplex' algorithm and sparse matrix methods for pure LP problems.
  – LPSOLVE toolbox on the Scilab Toolbox center
  – lp_solve solver on Sourceforge
  – lp_solve on Geocities
  – lp_solve Yahoo Group

• NEWUOA: NEWUOA is a software developed by M.J.D. Powell for unconstrained optimization without derivatives. NEWUOA seeks the least value of a function F(x) (x is a vector of dimension n) when F(x) can be calculated for any vector of variables x. The algorithm is iterative, a quadratic model being required at the beginning of each iteration, which is used in a trust region procedure for adjusting the variables. When the quadratic model is revised, the new model interpolates F at m points, the value m=2n+1 being recommended.
  – NEWUOA toolbox on the Scilab Toolbox center
  – NEWUOA at INRIA Alpes

Chapter 10

Missing optimization features in Scilab

Several optimization features are missing in Scilab. Two classes of missing features are to be analysed:

• features which are not available in Scilab, but which are available as toolboxes (see the previous section),
• features which are available neither in Scilab nor in toolboxes.

Here is a list of features which are not available in Scilab, but are available in toolboxes:

• linear objective with sparse matrices : currently available in LIPSOL, based on interior point methods,
• integer parameters with linear objective solver and sparse matrices : currently available in the LPSOLVE toolbox, based on the simplex method,
• nonlinear objective and nonlinear constraints : currently available in the interface to the IPOPT toolbox, based on interior point methods,
• nonlinear objective and nonlinear constraints : currently available in the interface to the CONMIN toolbox, based on the method of feasible directions.

Notice that IPOPT is a commercial product and CONMIN is a domain-public library. Therefore the only open-source, free, nonlinear solver with nonlinear constraints available with Scilab is the interface to CONMIN.

Here is a list of features which are available neither in Scilab nor in toolboxes. These features would be to include in Scilab. Functionalities marked with a (*) would be available in Scilab if the MODULOPT library embedded in Scilab was updated:

• simplex programming method (*),
• non-linear objective with nonlinear constraints (*),
• non-linear objective with nonlinear constraints problems based on sparse linear algebra,
• quadratic objective solver with sparse objective matrix,
• enabling/disabling of unknowns or constraints,
• customization of errors for constraints.

Conclusion

Even if Scilab itself lacks some optimization functionalities, all the embedded functions are very useful to begin with. After that, by downloading and installing some toolboxes, you can easily improve your Scilab capabilities.

One of the questions we can ask is: "Why are these toolboxes not integrated in the Scilab distribution?". The answer is often a problem of license: GPL libraries can not be included in Scilab, since Scilab is not designed to become GPL.

Bibliography

[1] Andrew R. Conn, Nicholas I. M. Gould, and Philippe L. Toint. The SIF reference document. http://www.numerical.rl.ac.uk/lancelot/sif/sifhtml.html.

[2] Michael Baudin. Nelder-Mead user's manual. http://wiki.scilab.org/The_Nelder-Mead_Component.

[3] Michael Baudin and Serge Steer. Optimization with Scilab, present and future. To appear in Proceedings of the 2009 International Workshop on Open-Source Software for Scientific Computation (OSSC-2009), 2009.

[4] Frédéric Bonnans. A variant of a projected variable metric method for bound constrained optimization problems. Technical Report RR-0242, INRIA - Rocquencourt, October 1983.

[5] Joseph Frédéric Bonnans, Jean-Charles Gilbert, Claude Lemaréchal, and Claudia A. Sagastizábal. Numerical Optimization: Theoretical and Practical Aspects. Universitext. Springer-Verlag, November 2006. New revised and expanded edition, 490 pages.

[6] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan. Linear Matrix Inequalities in System and Control Theory, volume 15 of Studies in Applied Mathematics. SIAM, Philadelphia, PA, USA, June 1994.

[7] Carlos A. Coello Coello. List of references on evolutionary multiobjective optimization. http://www.lania.mx/~ccoello/EMOObib.html.

[8] coin-or. Optimization services instance language (OSiL). https://www.coin-or.org/OS/OSiL.html.

[9] Yann Collette. Personal website. http://ycollette.free.fr.

[10] AMPL company. A modeling language for mathematical programming. http://en.wikipedia.org/wiki/AMPL_programming_language.

[11] D. Goldfarb and A. Idnani. Dual and primal-dual methods for solving strictly convex quadratic programs. Lecture Notes in Mathematics, 909:226–239, 1982.

[12] D. Goldfarb and A. Idnani. A numerically stable dual method for solving strictly convex quadratic programs. Mathematical Programming, 27:1–33, 1983.

[13] Lawrence Davis. Genetic Algorithms and Simulated Annealing. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1987.

[14] Kalyanmoy Deb, Samir Agrawal, Amrit Pratap, and T. Meyarivan. A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II. 2000. http://www.lania.mx/%7Eccoello/deb00.ps.gz.

[15] Carlos M. Fonseca and Peter J. Fleming. Genetic algorithms for multiobjective optimization: Formulation, discussion and generalization. In Proceedings of the 5th International Conference on Genetic Algorithms, pages 416–423, San Francisco, CA, USA, 1993. Morgan Kaufmann Publishers Inc. http://www.lania.mx/%7Eccoello/fonseca93.ps.gz.

[16] gams. The general algebraic modeling system. http://www.gams.com.

[17] David E. Goldberg. Genetic Algorithms in Search, Optimization & Machine Learning. Addison-Wesley, 1989.

[18] Jean-Baptiste Hiriart-Urruty and Claude Lemaréchal. Convex Analysis and Minimization Algorithms I: Fundamentals. Springer, 1993.

[19] Jean-Baptiste Hiriart-Urruty and Claude Lemaréchal. Convex Analysis and Minimization Algorithms II: Advanced Theory and Bundle Methods. Springer, 1993.

[20] hsl. A lonesome SIF decoder. http://hsl.rl.ac.uk/cuter-www/sifdec/doc.ps.gz.

[21] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer, 1999.

[22] P. P. Khargonekar and M. A. Rotea. Mixed H2/H∞ control: a convex optimization approach. IEEE Transactions on Automatic Control, 36(7):824–837, July 1991.

[23] L. Vandenberghe and S. Boyd. Semidefinite programming. Internal Report, Stanford University, 1994 (submitted to SIAM Review).

[24] P. J. M. van Laarhoven. Theoretical and Computational Aspects of Simulated Annealing. Centrum voor Wiskunde en Informatica, Amsterdam, Netherlands, 1988.

[25] P. J. M. van Laarhoven and E. H. L. Aarts. Simulated annealing: theory and applications. Kluwer Academic Publishers, Norwell, MA, USA, 1987.

[26] Claude Lemaréchal. A view of line-searches. Lecture Notes in Control and Information Sciences, 30:59–78. Springer, 1981.

[27] lpsolve. MPS file format. http://lpsolve.sourceforge.net/5.5/mps-format.htm.

[28] Zbigniew Michalewicz. Genetic Algorithms + Data Structures = Evolution Programs. Springer, 1996.

[29] J. J. Moré, Burton S. Garbow, and Kenneth E. Hillstrom. Algorithm 566: Fortran subroutines for testing unconstrained optimization software [C5], [E4]. ACM Transactions on Mathematical Software, 7(1):136–140, 1981.

[30] B. A. Murtagh. Advanced Linear Programming: Computation and Practice. McGraw-Hill, 1981.

[31] Nicholas I. M. Gould, Dominique Orban, and Philippe L. Toint. A constrained and unconstrained testing environment, revisited. http://hsl.rl.ac.uk/cuter-www/.

[32] Nicholas I. M. Gould, Dominique Orban, and Philippe L. Toint. The CUTEr test problem set. http://www.numerical.rl.ac.uk/cute/mastsif.html.

[33] R. Nikoukhah, A. S. Willsky, and B. C. Levy. Kalman filtering and Riccati equations for descriptor systems. IEEE Transactions on Automatic Control, 37(9):1325–1342, September 1992.

[34] Jorge Nocedal. Updating quasi-Newton matrices with limited storage. Mathematics of Computation, 35(151):773–782, 1980.

[35] optimizationservices. Optimization services. http://www.optimizationservices.org/.

[36] Robert Fourer, Jun Ma, and Kipp Martin. OSiL: an instance language for optimization. Computational Optimization and Applications, 2008.

[37] N. Srinivas and Kalyanmoy Deb. Multiobjective optimization using nondominated sorting in genetic algorithms. Evolutionary Computation, 2:221–248, 1994. http://www.lania.mx/%7Eccoello/deb95.ps.gz.

[38] Berwin A. Turlach. Quadprog (quadratic programming routines). http://www.maths.uwa.edu.au/~berwin/software/quadprog.html.

[39] Wikipedia. AMPL (programming language). http://en.wikipedia.org/wiki/AMPL_programming_language.

[40] Wikipedia. General Algebraic Modeling System. http://en.wikipedia.org/wiki/General_Algebraic_Modeling_System.

[41] Wikipedia. MPS (format). http://en.wikipedia.org/wiki/MPS_(format).

[42] Wikipedia. Simulated annealing. http://en.wikipedia.org/wiki/Simulated_annealing.

[43] Yann Collette and Patrick Siarry. Optimisation Multiobjectif. Collection Algorithmes. Eyrolles.
