Branch and Bound Methods

unconstrained nonconvex optimization

mixed convex-Boolean optimization

convex optimization methods are (roughly) always global, always fast

for general nonconvex problems, we have to give up one of these

local optimization methods are fast, but need not find the global solution (and even when they do, cannot certify it)

global optimization methods find the global solution (and certify it), but are not always fast (indeed, are often slow)

Branch and bound methods

methods for global optimization of nonconvex problems

nonheuristic: maintain provable lower and upper bounds on the global objective value

terminate with a certificate proving ε-suboptimality

often slow; exponential worst-case performance

but (with luck) can sometimes work well

Basic idea

rely on two subroutines that (efficiently) compute a lower and an upper bound on the optimal value over a given region

upper bound can be found by choosing any point in the region, or by a local optimization method

lower bound can be found from convex relaxation, duality, Lipschitz or other bounds, . . .

basic idea:

partition feasible set into convex sets, and find lower/upper bounds for each

form global lower and upper bounds; quit if close enough

else, refine partition and repeat

Prof. S. Boyd, EE364b, Stanford University

Unconstrained nonconvex minimization

goal: minimize a nonconvex function f over an m-dimensional rectangle Qinit, to within some prescribed accuracy ε

for a rectangle Q ⊆ Qinit, let fmin(Q) denote the minimum of f over Q; the global optimal value is f⋆ = fmin(Qinit)

we'll use lower and upper bound functions lb and ub that satisfy, for any rectangle Q ⊆ Qinit,

lb(Q) ≤ fmin(Q) ≤ ub(Q)

bounds must become tight as rectangles shrink:

∀ε > 0 ∃δ > 0 such that, for all Q ⊆ Qinit, size(Q) ≤ δ implies ub(Q) − lb(Q) ≤ ε

where size(Q) is the diameter of Q (length of its longest edge)

to be practical, ub(Q) and lb(Q) should be cheap to compute


Branch and bound algorithm

1. compute lower and upper bounds on f⋆

set L1 = lb(Qinit) and U1 = ub(Qinit)

terminate if U1 − L1 ≤ ε

2. partition (split) Qinit into two rectangles Qinit = Q1 ∪ Q2

3. compute lb(Qi) and ub(Qi), i = 1, 2

4. update lower and upper bounds on f⋆

update lower bound: L2 = min{lb(Q1), lb(Q2)}

update upper bound: U2 = min{ub(Q1), ub(Q2)}

terminate if U2 − L2 ≤ ε

5. refine partition, by splitting Q1 or Q2, and repeat steps 3 and 4
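The five steps above can be sketched in code for a toy one-dimensional case. Everything problem-specific here is an assumption for illustration, not from the slides: the objective f, its Lipschitz constant M, and the Lipschitz-based lower bound f(center) − M·(diam/2). The split rule (smallest lower bound, longest edge, in half) is the one discussed below.

```python
import heapq
import itertools
import math

def f(x):
    # example nonconvex objective on a rectangle (an assumption for illustration)
    return math.sin(3 * x[0]) + 0.1 * x[0] ** 2

M = 3.4  # a Lipschitz constant for f on Q_init (assumed known)

def center(Q):
    return [(lo + hi) / 2.0 for lo, hi in Q]

def ub(Q):
    # upper bound: f evaluated at any point of Q (here, the center)
    return f(center(Q))

def lb(Q):
    # lower bound: f cannot fall more than M * (half the diameter) below f(center)
    half_diam = math.sqrt(sum((hi - lo) ** 2 for lo, hi in Q)) / 2.0
    return ub(Q) - M * half_diam

def branch_and_bound(Q_init, eps=1e-3, max_iter=100000):
    tie = itertools.count()  # tie-breaker so the heap never compares rectangles
    heap = [(lb(Q_init), next(tie), Q_init)]  # partition, keyed by lower bound
    U = ub(Q_init)  # best upper bound found so far
    for _ in range(max_iter):
        L, _, Q = heap[0]  # smallest lower bound over the partition
        if U - L <= eps:  # certificate of eps-suboptimality
            break
        heapq.heappop(heap)
        # split the rectangle with smallest lower bound along its longest edge, in half
        i = max(range(len(Q)), key=lambda j: Q[j][1] - Q[j][0])
        lo, hi = Q[i]
        mid = (lo + hi) / 2.0
        for half in ((lo, mid), (mid, hi)):
            Qh = Q[:i] + (half,) + Q[i + 1:]
            U = min(U, ub(Qh))
            heapq.heappush(heap, (lb(Qh), next(tie), Qh))
    return heap[0][0], U  # L <= min f <= U, with U - L <= eps on success

L, U = branch_and_bound(((-2.0, 2.0),), eps=1e-3)
```

Keeping the partition in a heap keyed by lower bound makes "find the rectangle with smallest lower bound" cheap; the global lower bound L is always the heap's top key.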


at each step we have a partially developed binary tree; children correspond to the subrectangles formed by splitting the parent rectangle

leaves give the current partition of Qinit

need rules for choosing, at each step

which rectangle to split

which edge (variable) to split

where to split (what value of variable)

some good rules: split rectangle with smallest lower bound, along longest edge, in half

Example

(figure: partitioned rectangle in R2, and associated binary tree, after 3 iterations)

Pruning

can eliminate or prune any rectangle Q in tree with lb(Q) > Uk

every point in rectangle is worse than current upper bound

hence cannot be optimal

does not affect algorithm, but does reduce storage requirements

can track progress of algorithm via

total pruned (or unpruned) volume

number of pruned (or unpruned) leaves in partition
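The pruning rule is one line of code. In this minimal illustration the bounds, the incumbent U, and the rectangle labels are made up:

```python
# current leaves of the tree: (lower bound, rectangle label); labels are placeholders
leaves = [(0.5, "Q1"), (1.2, "Q2"), (0.9, "Q3"), (1.01, "Q4")]
U = 1.0  # current global upper bound

# prune every rectangle Q with lb(Q) > U: no point of Q can beat the incumbent
kept = [leaf for leaf in leaves if leaf[0] <= U]
print(kept)  # -> [(0.5, 'Q1'), (0.9, 'Q3')]
```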

Convergence analysis

number of rectangles in the partition after k iterations is k (without pruning)

total volume of these rectangles is vol(Qinit), so

min over Q in partition of vol(Q) ≤ vol(Qinit)/k

so for k large, at least one rectangle has small volume

need to show that small volume implies small size

this will imply that one rectangle has ub(Q) − lb(Q) small

hence Uk − Lk is small


condition number of rectangle Q = [l1, u1] × · · · × [lm, um] is

cond(Q) = max_i(ui − li) / min_i(ui − li)

splitting rectangles along their longest edge, in half, guarantees

cond(Q) ≤ max{cond(Qinit), 2}

for any rectangle in the partition

other rules (e.g., cycling over variables) also guarantee a bound on cond(Q)


since each edge of Q is at least min_i(ui − li), and the longest edge is size(Q) = max_i(ui − li),

vol(Q) = ∏_i (ui − li) ≥ max_i(ui − li) · (min_i(ui − li))^(m−1) = size(Q)^m / cond(Q)^(m−1)

hence

size(Q) ≤ cond(Q) · vol(Q)^(1/m)

therefore if cond(Q) is bounded and vol(Q) is small, size(Q) is small
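A quick numerical check of the chain above (the random edge lengths and dimension are arbitrary test data): for rectangles anchored at the origin, size(Q) never exceeds cond(Q) · vol(Q)^(1/m).

```python
import random

random.seed(1)

def edge_lengths(Q):
    return [hi - lo for lo, hi in Q]

def size(Q):  # length of the longest edge
    return max(edge_lengths(Q))

def cond(Q):  # longest edge over shortest edge
    e = edge_lengths(Q)
    return max(e) / min(e)

def vol(Q):
    v = 1.0
    for e in edge_lengths(Q):
        v *= e
    return v

m = 4
for _ in range(1000):
    Q = [(0.0, random.uniform(0.1, 5.0)) for _ in range(m)]
    # vol(Q) >= size(Q)^m / cond(Q)^(m-1) implies the bound below
    assert size(Q) <= cond(Q) * vol(Q) ** (1.0 / m) + 1e-9
```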


Mixed Boolean-convex problem

minimize f0(x, z)
subject to fi(x, z) ≤ 0, i = 1, . . . , m
zj ∈ {0, 1}, j = 1, . . . , n

x ∈ Rp is called the continuous variable

z ∈ {0, 1}^n is called the Boolean variable

f0, . . . , fm are convex in x and z

optimal value denoted p⋆

for each fixed z ∈ {0, 1}^n, reduced problem (with variable x) is convex


Solution methods

brute force: solve convex problem for each of the 2^n possible values of z ∈ {0, 1}^n

possible for n ≤ 15 or so, but not for n ≥ 20

branch and bound:

in worst case, we end up solving all 2^n convex problems

hope is that branch and bound will actually work much better
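A brute-force sketch on a toy instance. The problem data and the form of f0 are assumptions, chosen so the reduced problem in x has a closed form: minimize (x − aᵀz)² + cᵀz subject to x ≥ 1, z ∈ {0,1}ⁿ; for fixed z the reduced problem is convex in x and solved by projecting aᵀz onto x ≥ 1.

```python
import itertools

# toy data (assumptions for illustration)
a = [0.3, 0.8, 0.5]
c = [0.2, -0.1, 0.4]
n = len(a)

def solve_reduced(z):
    # for fixed z, minimize (x - a'z)^2 + c'z over x >= 1: convex, closed form
    az = sum(ai * zi for ai, zi in zip(a, z))
    x = max(az, 1.0)  # projection of the unconstrained minimizer onto x >= 1
    return (x - az) ** 2 + sum(ci * zi for ci, zi in zip(c, z)), x

# brute force: solve the convex reduced problem for each of the 2^n values of z
best_val, best_x, best_z = min(
    solve_reduced(z) + (z,) for z in itertools.product((0, 1), repeat=n)
)
# best_z is (0, 1, 0) with value -0.06 for this data
```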


Lower bound via convex relaxation

minimize f0(x, z)
subject to fi(x, z) ≤ 0, i = 1, . . . , m
0 ≤ zj ≤ 1, j = 1, . . . , n

convex with (continuous) variables x and z, so easily solved

optimal value (denoted L1) is a lower bound on p⋆, the optimal value of the original problem

L1 can be +∞ (which implies the original problem is infeasible)


Upper bounds

can find an upper bound (denoted U1) on p⋆ in several ways

simplest method: round each relaxed Boolean variable zi to 0 or 1

more sophisticated method: round each Boolean variable, then solve the resulting convex problem in x

randomized method:

generate random zi ∈ {0, 1}, with Prob(zi = 1) equal to the relaxed value of zi

(optionally, solve for x again)

take the best result out of some number of samples

upper bound can be +∞ (method failed to produce a feasible point)

if U1 − L1 ≤ ε we can quit
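A sketch of the rounding and randomized-rounding upper bounds. The relaxed point z_rlx and the feasibility/objective rule below are made-up stand-ins for a real relaxation solution and for "fix z, solve the convex problem in x":

```python
import random

random.seed(0)

z_rlx = [0.2, 0.9, 0.5]  # relaxed solution (assumed to come from the convex relaxation)

def value_for(z):
    # stand-in for fixing z and solving the convex problem in x:
    # objective 1'z, feasible iff z[1] + z[2] >= 1 (made-up rule)
    return sum(z) if z[1] + z[2] >= 1 else float("inf")  # inf = rounding failed

# simplest method: round each relaxed variable to 0 or 1
U = value_for([int(zi >= 0.5) for zi in z_rlx])

# randomized method: Prob(z_i = 1) = z_rlx[i]; keep the best of several samples
for _ in range(20):
    U = min(U, value_for([int(random.random() < zi) for zi in z_rlx]))
```

Deterministic rounding gives z = [0, 1, 1] with value 2 here; the randomized samples can only improve on that.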


Branching

pick any index k, and form two subproblems

first problem:

minimize f0(x, z)
subject to fi(x, z) ≤ 0, i = 1, . . . , m
zj ∈ {0, 1}, j = 1, . . . , n
zk = 0

second problem:

minimize f0(x, z)
subject to fi(x, z) ≤ 0, i = 1, . . . , m
zj ∈ {0, 1}, j = 1, . . . , n
zk = 1


optimal value of the original problem is the smaller of the optimal values of the two subproblems

can solve convex relaxations of the subproblems to obtain lower and upper bounds on their optimal values


let L(0), U(0) be lower, upper bounds for the subproblem with zk = 0

let L(1), U(1) be lower, upper bounds for the subproblem with zk = 1

can assume w.l.o.g. that L1 ≤ min{L(0), L(1)} and min{U(0), U(1)} ≤ U1

thus, we have new bounds on p⋆:

L2 = min{L(0), L(1)} ≤ p⋆ ≤ U2 = min{U(0), U(1)}


continue to form a binary tree by splitting, relaxing, and calculating bounds on subproblems

convergence proof is trivial: cannot go more than 2^n steps before U = L

can prune nodes with L exceeding the current Uk

common strategy is to pick a node with smallest L

can pick the variable to split several ways:

least ambivalent: choose k for which the relaxed zk is 0 or 1, with largest Lagrange multiplier

most ambivalent: choose k for which |zk − 1/2| is minimum
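To make the whole loop concrete, here is a sketch for one special case where the convex relaxation has a closed form: a Boolean LP, minimize cᵀz subject to aᵀz ≥ b with a, c > 0, whose LP relaxation is a fractional covering problem solved exactly by a greedy rule. The data, tolerances, and this greedy relaxation are assumptions for the toy class, not the general method; the node selection (smallest L) and most-ambivalent branching rules are the ones above.

```python
import heapq
import itertools

# toy Boolean LP (data are assumptions for illustration):
#   minimize c'z  subject to a'z >= b,  z in {0,1}^n,  with a, c > 0
a = [4.0, 3.0, 5.0, 2.0, 6.0]
c = [3.0, 2.5, 3.0, 2.0, 4.0]
b = 9.0
n = len(a)

def relax(fixed):
    """LP relaxation with coordinates in `fixed` pinned to 0/1, free z_i in [0, 1].
    For this covering structure the greedy (fractional knapsack) solution is exact.
    Returns (lower bound, relaxed z), with bound = inf if infeasible."""
    z = [float(fixed.get(i, 0)) for i in range(n)]
    need = b - sum(a[i] for i in fixed if fixed[i] == 1)
    val = sum(c[i] for i in fixed if fixed[i] == 1)
    for i in sorted((i for i in range(n) if i not in fixed), key=lambda i: c[i] / a[i]):
        if need <= 0:
            break
        z[i] = min(1.0, need / a[i])  # cheapest coverage first; last one fractional
        val += c[i] * z[i]
        need -= a[i] * z[i]
    return (float("inf"), None) if need > 1e-9 else (val, z)

def round_up(z):
    # upper bound: round every positive z_i up to 1 (keeps a'z >= b)
    zr = [1 if zi > 0 else 0 for zi in z]
    return sum(c[i] for i in range(n) if zr[i]), zr

def branch_and_bound():
    tie = itertools.count()
    L0, z0 = relax({})
    U, z_best = round_up(z0)  # initial incumbent
    heap = [(L0, next(tie), {})]  # nodes keyed by lower bound
    while heap:
        L, _, fixed = heapq.heappop(heap)
        if L >= U - 1e-9:  # every remaining node is pruned
            break
        _, z = relax(fixed)  # recover the relaxed point for this node
        frac = [i for i in range(n) if 0.0 < z[i] < 1.0]
        if not frac:  # relaxed point is already Boolean: L is achievable
            U, z_best = L, [int(round(zi)) for zi in z]
            continue
        k = min(frac, key=lambda i: abs(z[i] - 0.5))  # most ambivalent variable
        for v in (0, 1):  # branch: subproblems with z_k = 0 and z_k = 1
            child = dict(fixed)
            child[k] = v
            Lc, zc = relax(child)
            if Lc < U - 1e-9:  # prune children with Lc >= U
                Uc, z_round = round_up(zc)
                if Uc < U:
                    U, z_best = Uc, z_round
                heapq.heappush(heap, (Lc, next(tie), child))
    return U, z_best

U, z_opt = branch_and_bound()
```

On this instance the optimum (value 6, z = [1, 0, 1, 0, 0]) is found after a handful of node expansions, far fewer than the 2^5 brute-force solves.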


Small example

nodes show lower and upper bounds for a three-variable Boolean LP

(figure: partially developed binary tree, branching on z1 = 0/1 and then z2 = 0/1; node bounds include [0.143, ∞), [0.2, ·], and [1, 1])


find sparsest x satisfying linear inequalities

minimize card(x)
subject to Ax ≤ b

equivalent to the mixed Boolean-LP:

minimize 1ᵀz
subject to Li zi ≤ xi ≤ Ui zi, i = 1, . . . , n
Ax ≤ b
zi ∈ {0, 1}, i = 1, . . . , n

with variables x and z, and lower and upper bounds L and U on x


Bounding x

Li is the optimal value of the LP

minimize xi
subject to Ax ≤ b

Ui is the optimal value of the LP

maximize xi
subject to Ax ≤ b

solve 2n LPs to get all the bounds

if Li > 0 or Ui < 0, we can just set zi = 1


Relaxation problem

relaxed problem is

minimize 1ᵀz
subject to Li zi ≤ xi ≤ Ui zi, i = 1, . . . , n
Ax ≤ b
0 ≤ zi ≤ 1, i = 1, . . . , n

(assuming Li < 0, Ui > 0) equivalent to

minimize Σ_{i=1}^n ( (1/Ui)(xi)+ + (1/|Li|)(xi)− )
subject to Ax ≤ b

where (xi)+ = max{xi, 0} and (xi)− = max{−xi, 0}

objective is an asymmetric weighted ℓ1-norm
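The elimination of z can be sanity-checked numerically: for fixed xi, the smallest feasible zi in the relaxation should equal (xi)+/Ui + (xi)−/|Li|. The bounds and the brute-force grid search below are made up for the check:

```python
Li, Ui = -2.0, 4.0  # assumed bounds with Li < 0 < Ui

def z_min_grid(x, steps=100000):
    # smallest z in [0, 1] with Li*z <= x <= Ui*z, found by grid search
    for k in range(steps + 1):
        z = k / steps
        if Li * z <= x <= Ui * z:
            return z

def z_formula(x):
    # (x)+ / Ui + (x)- / |Li|
    return max(x, 0.0) / Ui + max(-x, 0.0) / (-Li)

for x in (-1.5, -0.3, 0.0, 1.0, 3.9):
    assert abs(z_min_grid(x) - z_formula(x)) < 1e-4
```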


the relaxed problem is a well-known heuristic for finding a sparse solution, so we take card(x) of its solution as our upper bound

for the lower bound, we can replace the LP optimal value L with ⌈L⌉, since card(x) is integer valued

at each iteration, split the node with the lowest lower bound

split the most ambivalent variable


Small example

random problem with 30 variables, 100 constraints

2^30 ≈ 10^9 possible values of z

takes 8 iterations to find a point with globally minimum cardinality (19), but 124 iterations to prove the minimum cardinality is 19

requires 309 LP solves (including 60 to calculate lower and upper bounds on each variable)


Algorithm progress

(figure: tree after 3 iterations (top left), 5 iterations (top right), 10 iterations (bottom left), and 124 iterations (bottom right))


(figure: upper bound, lower bound, and ⌈lower bound⌉ on cardinality versus iteration)



Larger example

random problem with 50 variables, 100 constraints

2^50 ≈ 10^15 possible values of z

took 3665 iterations (1300 to find an optimal point); minimum cardinality is 31

same example as used in the ℓ1-norm methods lecture:

basic ℓ1-norm relaxation (1 LP) gives x with card(x) = 44

iterated weighted ℓ1-norm heuristic (4 LPs) gives x with card(x) = 36


(figure: upper bound, lower bound, and ⌈lower bound⌉ on cardinality versus iteration)



Even larger example

random problem with 200 variables, 400 constraints

2^200 ≈ 1.6 × 10^60 possible values of z

we quit after 10000 iterations (50 hours on a single processor machine with 1 GB of RAM)

we only know that the optimal cardinality is between 135 and 179, but we have reduced the number of possible sparsity patterns by a factor of 10^12


(figure: upper bound, lower bound, and ⌈lower bound⌉ on cardinality versus iteration)


