
http://ocw.mit.edu

Spring 2008

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

16.323 Lecture 2

Nonlinear Optimization

- Constrained nonlinear optimization
- Lagrange multipliers
- Penalty/barrier functions are also often used, but will not be discussed here.

Constrained Optimization

Consider a problem with equality constraints:

min_y F(y)   subject to   f(y) = 0

where f is a vector of n constraints.

One solution approach: the p-vector y is separated into a decision m-vector u and a state n-vector x related to the decision variables through the constraints. The problem now becomes:

min_u F(x, u)   subject to   f(x, u) = 0

Assume that p > n, otherwise the problem is completely specified by the constraints (or over-specified).

Direct substitution approach:
- Solve for x in terms of u using f.
- Substitute this expression into F and solve for u using an unconstrained optimization.
- Works best if f is linear (the assumption is that not both of f and F are linear).

Example: minimize F = x1² + x2² subject to

x1 + x2 + 2 = 0

Clearly the unconstrained minimum is at x1 = x2 = 0.

Substitution in this case gives equivalent problems:

min_{x2} F2 = (−2 − x2)² + x2²

or

min_{x1} F1 = x1² + (−2 − x1)²

x1

[Figure: contours of F in the (x1, x2) plane with the constraint line x1 + x2 + 2 = 0.]

Bottom line: substitution works well for linear constraints, but the process is hard to generalize for larger systems/nonlinear constraints.
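The substitution step for this example can also be carried out numerically; a minimal sketch in Python (an illustration, not from the notes), eliminating x1 and then minimizing the remaining one-dimensional function by simple gradient descent:

```python
# Substitute x1 = -2 - x2 (from the constraint x1 + x2 + 2 = 0)
# into F = x1^2 + x2^2, then minimize the 1-D function F2.
F2 = lambda x2: (-2 - x2) ** 2 + x2 ** 2
dF2 = lambda x2: 4 * x2 + 4          # derivative of F2

x2 = 0.0
for _ in range(100):                 # fixed-step gradient descent
    x2 -= 0.1 * dF2(x2)
x1 = -2 - x2
print(x1, x2)                        # both converge to -1
```

Both coordinates converge to the constrained minimum at (−1, −1), which is what any substitution-based method should recover here.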

Lagrange Multipliers

Since f(x, u) = 0, we can adjoin it to the cost with constant multipliers λᵀ = [λ1 . . . λn] without changing the function value along the constraint, creating the Lagrangian function

L(x, u, λ) = F(x, u) + λᵀ f(x, u)

Given values of x and u for which f(x, u) = 0, consider differential changes to the Lagrangian from differential changes to x and u:

dL = (∂L/∂x) dx + (∂L/∂u) du

where ∂L/∂u = [∂L/∂u1 . . . ∂L/∂um] (a row vector).

Now choose λ to eliminate the dependence of dL on dx:

∂L/∂x = ∂F/∂x + λᵀ ∂f/∂x ≡ 0    (2.1)

⟹ λᵀ = −(∂F/∂x)(∂f/∂x)⁻¹    (2.2)

To proceed, must determine what changes are possible to the cost while keeping the equality constraint satisfied.

Changes to x and u must be such that f(x, u) = 0 remains true, so

df = (∂f/∂x) dx + (∂f/∂u) du ≡ 0    (2.3)

⟹ dx = −(∂f/∂x)⁻¹ (∂f/∂u) du    (2.4)
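The elimination in (2.4) can be illustrated numerically on a toy scalar constraint (an illustration, not from the notes): with f(x, u) = x − u², so ∂f/∂x = 1 and ∂f/∂u = −2u, the constrained first-order change is dx = 2u du, which keeps f zero to first order:

```python
# Toy scalar constraint f(x, u) = x - u^2 = 0.
# From (2.4): dx = -(df/dx)^-1 (df/du) du = 2*u*du.
def f(x, u):
    return x - u**2

u, du = 1.0, 1e-4
x = u**2                    # start on the constraint
dx = 2 * u * du             # first-order constrained change in x

# f after the step is zero to first order; the error is O(du^2)
err = f(x + dx, u + du)
print(err)                  # ≈ -du^2 = -1e-08
```

The residual after the step is of order du², confirming that (2.4) characterizes the directions that preserve the constraint to first order.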


Then the cost change is

dF = (∂F/∂x) dx + (∂F/∂u) du    (2.5)
   = [−(∂F/∂x)(∂f/∂x)⁻¹(∂f/∂u) + ∂F/∂u] du    (2.6)
   = [∂F/∂u + λᵀ (∂f/∂u)] du = (∂L/∂u) du    (2.7)

So the gradient of F with respect to u while keeping the constraint f(x, u) = 0 satisfied is just ∂L/∂u, and we need this gradient to be zero to have a stationary point, so that dF = 0 for all du ≠ 0.

Thus the necessary conditions for a stationary value of F are

∂L/∂x = 0    (2.8)
∂L/∂u = 0    (2.9)
∂L/∂λ = f(x, u) = 0    (2.10)

which gives 2n + m equations in the 2n + m unknowns x, u, and λ.

In terms of the full variable y, these conditions can be written compactly as

∂L/∂y = 0    (2.11)
∂L/∂λ = 0    (2.12)

Intuition

Can develop the intuition that the constrained solution will be a point of tangency of the constant cost curves and the constraint function: no further improvements are possible while satisfying the constraints.

Equivalent to saying that the gradient of the cost function (normal to the constant cost curve) ∂F/∂y [black lines] must lie in the space spanned by the constraint gradients ∂f/∂y [red lines].

Means the cost cannot be improved without violating the constraints.

In the 2D case, this corresponds to ∂F/∂y being collinear to ∂f/∂y.

Can see that at the points on the constraint above and below the optimal value of x2, it is possible to move in the negative of the direction of the component of the cost gradient orthogonal to the constraint gradient, thereby reducing the cost and still satisfying the constraint.


Figure 2.9: Minimization with equality constraints: shows that the constraint and cost gradients are nearly collinear near the optimal point, and clearly not collinear far away. Here f(x1, x2) = x2 − ((x1)³ − (x1)² + (x1) + 2) = 0 and F is a quadratic cost of the form gᵀx + ½ xᵀGx (the g and G used in the code below).


Figure 2.11: Changed constraint. Note that the cost and constraint gradients are still collinear, but now aligned.

This generalizes to the notion that the cost gradient must lie in the space spanned by the constraint gradients.

Equivalent to saying that it is possible to express the cost gradient as a linear combination of the constraint gradients; again, if this were not the case, then improvements could be made to the cost without violating the constraints.

This implies that the cost gradient satisfies:

∂F/∂y = −λ1 ∂f1/∂y − λ2 ∂f2/∂y − · · · − λn ∂fn/∂y = −λᵀ ∂f/∂y    (2.13)

or equivalently that

∂F/∂y + λᵀ ∂f/∂y = 0    (2.14)

which is, of course, the same as Eq. 2.11.
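The condition in Eq. 2.14 can be checked numerically at a known solution by solving for λ in the least-squares sense and confirming the residual is zero. A minimal sketch (Python with numpy, an illustration not from the notes) using the earlier example F = x1² + x2² with constraint x1 + x2 + 2 = 0, whose constrained optimum is at (−1, −1):

```python
import numpy as np

# Gradients at the constrained optimum y* = (-1, -1) of
# F = x1^2 + x2^2 subject to f = x1 + x2 + 2 = 0.
y = np.array([-1.0, -1.0])
gradF = 2 * y                      # dF/dy = (2*x1, 2*x2)
gradf = np.array([1.0, 1.0])       # df/dy

# Solve gradF + lam * gradf = 0 for lam in the least-squares sense.
lam, *_ = np.linalg.lstsq(gradf.reshape(-1, 1), -gradF, rcond=None)
residual = gradF + lam[0] * gradf
print(lam[0], np.linalg.norm(residual))   # lam ≈ 2, residual ≈ 0
```

A near-zero residual confirms that the cost gradient lies in the span of the constraint gradient, with λ = 2 as the multiplier.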

Constrained Example

Returning to the example F(x1, x2) = x1² + x2² with constraint f(x1, x2) = x1 + x2 + 2 = 0, form the Lagrangian

L ≜ F(x1, x2) + λ f(x1, x2) = x1² + x2² + λ(x1 + x2 + 2)

where λ is the Lagrange multiplier.

The solution approach without constraints is to find the stationary point of F(x1, x2) (∂F/∂x1 = ∂F/∂x2 = 0); with constraints we find the stationary points of L with respect to y = [x1 x2]ᵀ and λ:

∂L/∂y = 0,   ∂L/∂λ = 0

which gives

∂L/∂x1 = 2x1 + λ = 0
∂L/∂x2 = 2x2 + λ = 0
∂L/∂λ = x1 + x2 + 2 = 0

so that λ = 2 and x1 = x2 = −1.

The key point here is that, due to the constraint, the selections of x1 and x2 during the minimization are not independent; the Lagrange multiplier captures this dependency.

The difficulty can be solving the resulting equations for the optimal points (they can be ugly nonlinear equations).
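For this particular example the stationarity conditions are linear, so they can be solved as a single linear system; a quick numerical check (Python with numpy, an illustration not from the notes):

```python
import numpy as np

# Stationarity conditions for L = x1^2 + x2^2 + lam*(x1 + x2 + 2):
#   2*x1        + lam = 0
#         2*x2  + lam = 0
#   x1 + x2           = -2
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([0.0, 0.0, -2.0])

x1, x2, lam = np.linalg.solve(A, b)
print(x1, x2, lam)   # x1 = x2 = -1, lam = 2
```

For nonlinear constraints the same conditions would instead require a nonlinear root-finding step, which is where the difficulty mentioned above arises.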

Inequality Constraints

Now consider the problem

min F(y)    (2.15)
subject to f(y) ≤ 0    (2.16)

where the number of constraints n can be large with respect to the state dimension p, since not all inequality constraints will limit a degree of freedom of the solution.

Have a similar picture as before, but now not all constraints are active:

The black line at the top is inactive, since x1 + x2 − 1 < 0 at the optimal value x* = [1, −0.60]ᵀ; it does not limit a degree of freedom in the problem.

The blue constraint is active: the cost is lower to the left, but f1 > 0 there.


Intuition in this case is that at the minimum, the cost gradient must lie in the space spanned by the active constraints, so split the sum as:

∂F/∂y = − Σ(i active) λi ∂fi/∂y − Σ(j inactive) λj ∂fj/∂y    (2.17)

with λj = 0 for the inactive constraints.

For the active constraints, the cost and constraint gradients need not be collinear; they could be in any orientation with respect to the allowable changes in the state.

Must restrict condition 2.17 so that the cost gradient points in the direction of the allowable side of the constraint (f < 0), so that the cost cannot be reduced without violating the constraint:

The cost and constraint function gradients must point in opposite directions.

Given 2.17, this requires that λi ≥ 0 for the active constraints.


The necessary conditions for optimality are then

∂L/∂y = 0    (2.18)

λi ≥ 0  and  λi ∂L/∂λi = λi fi = 0    (2.19)

where the second property applies to all constraints i:

For an active constraint, λi > 0 and ∂L/∂λi = fi = 0.

For an inactive constraint, λi = 0 and ∂L/∂λi = fi < 0.

Equations 2.18 and 2.19 are the essence of the Kuhn-Tucker theorem in nonlinear programming; more precise statements are available with more careful specification of the constraint properties.

Must also be careful in specifying the second-order conditions for a stationary point to be a minimum; see Bryson and Ho, sections 1.3 and 1.7.

Also require that the active constraint gradients be linearly independent for the λ's to be well defined (avoids redundancy).

Cost Sensitivity

The constraint limits are often chosen somewhat arbitrarily; there is some flexibility in the limits. Thus it would be good to establish the extent to which those choices impact the solution.

Note that at the solution point,

∂L/∂y = ∂F/∂y + λᵀ ∂f/∂y = 0  ⟹  ∂F/∂y = −λᵀ ∂f/∂y

If the state changes by δy, would expect changes in the cost and the constraint of

δF = (∂F/∂y) δy
δf = (∂f/∂y) δy

so that

δF = −λᵀ (∂f/∂y) δy = −λᵀ δf  ⟹  dF/df = −λᵀ

Sensitivity of the cost to changes in the constraint function is given by the Lagrange multipliers.

Makes sense: if a constraint is active, then allowing f to increase will move the constraint boundary in the direction of reducing F.

Correctly predicts that inactive constraints (λi = 0) will not have an impact.


Revise the constraints so that they are of the form f ≤ c, where c ≥ 0 is a constant that is nominally 0.

The constraints can be rewritten as f − c ≤ 0, which leaves the gradient with respect to y unchanged:

∂(f − c)/∂y = ∂f/∂y

Assuming the constraint remains active as we change c, then f(y(c)) − c = 0 along the solution, so

∂f/∂c − I = 0  ⟹  ∂f/∂c = I

At the solution point,

∂L/∂y = 0  ⟹  ∂F/∂y = −λᵀ ∂(f − c)/∂y = −λᵀ ∂f/∂y

and the sensitivity of the cost to c follows from the chain rule:

∂F/∂c = (∂F/∂y)(∂y/∂c) = −λᵀ (∂f/∂y)(∂y/∂c) = −λᵀ ∂f/∂c = −λᵀ
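This prediction can be checked by finite differences on the inequality-constrained example worked on the following pages (F = x1² + x1x2 + x2² with f1 = 1 − x2 active and λ1 = 3/2 at the optimum). A minimal sketch in Python (an illustration, not from the notes), using the fact that with f1 active the optimum satisfies x2 = 1 − c and 2x1 + x2 = 0:

```python
# Sensitivity check for min x1^2 + x1*x2 + x2^2
# subject to f1 = 1 - x2 <= c (active), f2 = x1 + x2 - 3 <= 0 (inactive).
def F(x1, x2):
    return x1**2 + x1 * x2 + x2**2

def solve(c):
    x2 = 1.0 - c          # active constraint boundary f1 = c
    x1 = -x2 / 2.0        # stationarity: dL/dx1 = 2*x1 + x2 = 0
    return F(x1, x2)

lam1 = 1.5                # multiplier at c = 0, computed in the text
dc = 1e-6
dFdc = (solve(dc) - solve(0.0)) / dc
print(dFdc)               # ≈ -1.5 = -lam1
```

The finite-difference slope matches −λ1, as the sensitivity result predicts.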

Figure 2.14: Shows that changes to the constraint impact cost in a way that can be predicted from the Lagrange Multiplier.

Example: minimize F(x1, x2) = x1² + x1x2 + x2² subject to f1 = 1 − x2 ≤ 0 and f2 = x1 + x2 − 3 ≤ 0.

Form the Lagrangian

L = x1² + x1x2 + x2² + λ1(1 − x2) + λ2(x1 + x2 − 3)

Form the necessary conditions:

∂L/∂x1 = 2x1 + x2 + λ2 = 0
∂L/∂x2 = x1 + 2x2 − λ1 + λ2 = 0
λ1 ∂L/∂λ1 = λ1(1 − x2) = 0
λ2 ∂L/∂λ2 = λ2(x1 + x2 − 3) = 0

Assume λ1 = λ2 = 0 (both inactive):

∂L/∂x1 = 2x1 + x2 = 0
∂L/∂x2 = x1 + 2x2 = 0

gives the solution x1 = x2 = 0 as expected, but this does not satisfy all the constraints (f1 = 1 > 0).

Assume λ1 = 0 (inactive), λ2 ≠ 0 (active):

∂L/∂x1 = 2x1 + x2 + λ2 = 0
∂L/∂x2 = x1 + 2x2 + λ2 = 0
λ2 ∂L/∂λ2 = λ2(x1 + x2 − 3) = 0

which gives the solution x1 = x2 = 3/2. This satisfies the constraints, but F = 6.75 and λ2 = −9/2 < 0, which violates the sign condition, so this case must be rejected.


Assume λ1 ≠ 0 (active), λ2 = 0 (inactive):

∂L/∂x1 = 2x1 + x2 = 0
∂L/∂x2 = x1 + 2x2 − λ1 = 0
λ1 ∂L/∂λ1 = λ1(1 − x2) = 0

gives the solution x1 = −1/2, x2 = 1, λ1 = 3/2 ≥ 0, which satisfies the constraints, and F = 0.75.
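The case-by-case search above is an active-set enumeration, and it can be automated for small problems; a sketch in Python with numpy (an illustration, not from the notes), which solves the linear KKT system for each candidate active set and keeps the feasible candidates with nonnegative multipliers:

```python
import itertools
import numpy as np

# Active-set enumeration for
#   min F = x1^2 + x1*x2 + x2^2
#   s.t. f1 = 1 - x2 <= 0,  f2 = x1 + x2 - 3 <= 0
# Here F = 0.5 x'Hx and the constraints are f = A x + d <= 0.
H = np.array([[2.0, 1.0], [1.0, 2.0]])
A = np.array([[0.0, -1.0], [1.0, 1.0]])
d = np.array([1.0, -3.0])

best = None
for k in range(3):
    for S in itertools.combinations([0, 1], k):
        S = list(S)
        m = len(S)
        # KKT system: stationarity H x + A_S' lam = 0, active A_S x + d_S = 0.
        K = np.zeros((2 + m, 2 + m))
        K[:2, :2] = H
        if m:
            K[:2, 2:] = A[S].T
            K[2:, :2] = A[S]
        rhs = np.concatenate([np.zeros(2), -d[S]])
        try:
            sol = np.linalg.solve(K, rhs)
        except np.linalg.LinAlgError:
            continue
        x, lam = sol[:2], sol[2:]
        # Keep candidates that are feasible with nonnegative multipliers.
        if np.all(A @ x + d <= 1e-9) and np.all(lam >= -1e-9):
            Fx = 0.5 * x @ H @ x
            if best is None or Fx < best[1]:
                best = (x, Fx)

x_opt, F_opt = best
print(x_opt, F_opt)   # x ≈ [-0.5, 1.0], F ≈ 0.75
```

The enumeration rejects the unconstrained candidate (infeasible) and the f2-active candidate (λ2 < 0), leaving the f1-active solution found above. This brute-force approach is exponential in the number of constraints, which is why practical solvers manage the active set incrementally.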

MATLAB code used to generate the equality-constraint figures:

global g G f    % g (2x1) and G (2x2) define the quadratic cost; set before running

testcase=0;
if testcase
    f=inline('(1*(x1+1).^3-1*(x1+1).^2+1*(x1+1)+2)');
    dfdx=inline('(3*1*(x1+1).^2-2*1*(x1+1)+1)');
else
    f=inline('(1*(x1-2).^3-1*(x1-2).^2+1*(x1-2)+2)');
    dfdx=inline('(3*1*(x1-2).^2-2*1*(x1-2)+1)');
end

% evaluate the quadratic cost on a grid for the contour plot
x1=-3:.01:5; x2=-4:.01:4;
for ii=1:length(x1)
    for jj=1:length(x2)
        X=[x1(ii); x2(jj)];
        F(ii,jj)=g'*X+X'*G*X/2;
    end
end
figure(1); clf
contour(x1,x2,F,[min(min(F)) .05 .1 .2 .29 .4 .5 1:1:max(max(F))]);
xlabel('x_1'); ylabel('x_2')
hold on;
plot(x1,f(x1),'LineWidth',2);

% X=FMINCON(FUN,X0,A,B,Aeq,Beq,LB,UB,NONLCON,OPTIONS)
xx=fmincon('meshf',[0;0],[],[],[],[],[],[],'meshc');
hold on
plot(xx(1),xx(2),'m*','MarkerSize',12)
axis([-3 5 -4 4]);

% cost and constraint gradients at and near the constrained optimum
Jx=[];
[kk,II1]=min(abs(x1-xx(1)));
[kk,II2]=min(abs(x1-1.1*xx(1)));
[kk,II3]=min(abs(x1-0.9*xx(1)));
ll=[II1 II2 II3];
gam=.8; % line scaling
for ii=1:length(ll)
    X=[x1(ll(ii));f(x1(ll(ii)))];
    Jx(ii,:)=(g+G*X)';
    X2=X+Jx(ii,:)'*gam/norm(Jx(ii,:));

    Nx1=X(1);
    df=[-dfdx(Nx1);1];

    X3=[Nx1;f(Nx1)];
    X4=X3+df*gam/norm(df);

    plot(X2(1),X2(2),'ko','MarkerSize',12)
    plot(X(1),X(2),'ks','MarkerSize',12)
    plot([X(1);X2(1)],[X(2);X2(2)],'k-','LineWidth',2)
    plot(X4(1),X4(2),'ro','MarkerSize',12)
    plot(X3(1),X3(2),'rs','MarkerSize',12)
    plot([X4(1);X3(1)],[X4(2);X3(2)],'r-','LineWidth',2)
    if ii==1
        text([1.25*X2(1)],[X2(2)],'\partial F/\partial y')
        text([X4(1)-.75],[0*X4(2)],'\partial f/\partial y')
    end
end
hold off

%%%%%%%%%%%%%%%%%%%%%%%%
% inequality-constraint version: add a second constraint f2

f2=inline('-1*x1-1'); global f2
df2dx=inline('-1*ones(size(x))');

figure(3); gam=2;
xlabel('x_1'); ylabel('x_2')

xx=fmincon('meshf',[0;0],[],[],[],[],[],[],'meshc2');
hold on
Jx=(g+G*xx);
X2=xx+Jx*gam/norm(Jx);
plot(xx(1),xx(2),'m*','MarkerSize',12)
plot(X2(1),X2(2),'mo','MarkerSize',12);
plot([xx(1);X2(1)],[xx(2);X2(2)],'m-','LineWidth',2)
text([X2(1)],[X2(2)],'\partial F/\partial y')
hold off

hold on;
plot(x1,f(x1),'LineWidth',2);
text(-1,1,'f_2 > 0')
text(-2.5,0,'f_2 < 0')
plot(x1,f2(x1),'k-','LineWidth',2);
text(3,2,'f_1 < 0')
if testcase
    text(0,3,'f_1 > 0')
else
    text(1,3,'f_1 > 0')
end

% constraint gradients at selected points
dd=[xx(1) 0 xx(1)];
X=[dd' f(dd)'];
df=[-dfdx(dd)' 1*ones(size(dd))'];
X2=X+gam*df/norm(df);
for ii=3
    plot([X(ii,1);X2(ii,1)],[X(ii,2);X2(ii,2)],'LineWidth',2)
    text([X2(ii,1)-1],[X2(ii,2)],'\partial f/\partial y')
end
X=[dd' f2(dd)'];
df2=[-df2dx(dd)' 1*ones(size(dd))'];
X2=X+gam*df2/norm(df2);
%for ii=1:length(X)
for ii=1
    plot([X(ii,1);X2(ii,1)],[X(ii,2);X2(ii,2)],'k','LineWidth',2)
    text([X2(ii,1)],[X2(ii,2)],'\partial f/\partial y')
end
hold off

%%%%%%%%%%%%%%%%%%%%%%
% changed second constraint

f2=inline('-1*x1+1'); global f2
df2dx=inline('-1*ones(size(x))');

figure(4); clf; gam=2;
contour(x1,x2,F,[min(min(F)) .05 .1 .2 .3 .4 .5 1:1:max(max(F))]);
xlabel('x_1'); ylabel('x_2')

xx=fmincon('meshf',[1;-1],[],[],[],[],[],[],'meshc2');
hold on
Jx=(g+G*xx);
X2=xx+Jx*gam/norm(Jx);
plot(xx(1),xx(2),'m*','MarkerSize',12)
plot(X2(1),X2(2),'mo','MarkerSize',12);
plot([xx(1);X2(1)],[xx(2);X2(2)],'m-','LineWidth',2)
text([X2(1)],[X2(2)],'\partial F/\partial y')
hold off

hold on;
plot(x1,f(x1),'LineWidth',2);
text(-1,3,'f_2 > 0')
text(-2.5,2,'f_2 < 0')
plot(x1,f2(x1),'k-','LineWidth',2);
text(3,2,'f_1 < 0')
if testcase
    text(0,3,'f_1 > 0')
else
    text(1,3,'f_1 > 0')
end

dd=[xx(1) 0 xx(1)];
X=[dd' f(dd)'];
df=[-dfdx(dd)' 1*ones(size(dd))'];
X2=X+gam*df/norm(df);
for ii=3
    plot([X(ii,1);X2(ii,1)],[X(ii,2);X2(ii,2)],'LineWidth',2)
    text([X2(ii,1)-1],[X2(ii,2)],'\partial f/\partial y')
end
X=[dd' f2(dd)'];
df2=[-df2dx(dd)' 1*ones(size(dd))'];
X2=X+gam*df2/norm(df2);
%for ii=1:length(X)
for ii=1
    plot([X(ii,1);X2(ii,1)],[X(ii,2);X2(ii,2)],'k','LineWidth',2)
    text([X2(ii,1)],[X2(ii,2)],'\partial f/\partial y')
end
hold off

%%%%%%%%%%%%%%%%%%%%%%%%%

if testcase
    figure(1)
    axis([-4 0 -1 3]);
    figure(3)
    figure(4)
else
    figure(1)
    axis([-.5 4 -2 2]);
    figure(3)
    figure(4)
end

% sensitivity study
% changes are made to the constraint so that x_2 - f(x_1) <= alp, alp > 0
figure(5); clf
xlabel('x_1'); ylabel('x_2')
hold on;
f=inline('(1*(x1-2).^3-1*(x1-2).^2+1*(x1-2)+2)');
dfdx=inline('(3*1*(x1-2).^2-2*1*(x1-2)+1)');
plot(x1,f(x1),'k-','LineWidth',2);
alp=1;
plot(x1,f(x1)+alp,'k--','LineWidth',2);

global alp
% meshc3 applies the constraint with the offset alp (definition not shown)
[xx1,temp,temp,temp,lam1]=fmincon('meshf',[0;0],[],[],[],[],[],[],'meshc3');
alp=0;
[xx0,temp,temp,temp,lam0]=fmincon('meshf',[0;0],[],[],[],[],[],[],'meshc3');

hold on
plot(xx0(1),xx0(2),'mo','MarkerSize',12,'MarkerFaceColor','m')
plot(xx1(1),xx1(2),'md','MarkerSize',12,'MarkerFaceColor','m')

text(xx0(1)+.5,xx0(2),['\lambda_0 = ',num2str(lam0.ineqnonlin)])

function F=meshf(X)
global g G
F=g'*X+X'*G*X/2;
end

function [c,ceq]=meshc(X)
global f
c=[];
%ceq=f(X(1))-X(2);
ceq=X(2)-f(X(1));
return

function [c,ceq]=meshc2(X)
global f f2
%c=[f(X(1))-X(2);f2(X(1))-X(2)];
c=[X(2)-f(X(1));X(2)-f2(X(1))];
ceq=[];
return

MATLAB code for the mesh plot of the constrained example:

figure(1); clf
xx=[-3:.1:3];
for ii=1:length(xx)
    for jj=1:length(xx)
        FF(ii,jj)=xx(ii)^2+xx(ii)*xx(jj)+xx(jj)^2;
    end
end
hh=mesh(xx,xx,FF);
hold on;

% cost along each constraint boundary
plot3(xx,ones(size(xx)),xx.^2+1+xx,'m-','LineWidth',2);
plot3(xx,3-xx,xx.^2+(3-xx).^2+xx.*(3-xx),'g-','LineWidth',2);

xlabel('x_1'); ylabel('x_2');
hold off; axis([-3 3 -3 3 0 20])
hh=get(gcf,'children');
set(hh,'View',[-109 74],'CameraPosition',[-26.5555 13.5307 151.881]);

xx=fmincon('simplecaseF',[0;0],[],[],[],[],[],[],'simplecaseC');
hold on
plot3(xx(1),xx(2),xx(1).^2+xx(2).^2+xx(1).*xx(2),'rs','MarkerSize',20,'MarkerFace','r')
xx(1).^2+xx(2).^2+xx(1).*xx(2)

function F=simplecaseF(X)
F=X(1)^2+X(1)*X(2)+X(2)^2;
return

function [c,ceq]=simplecaseC(X)
c=[1-X(2);X(1)+X(2)-3];
ceq=[];
return
