Lecture Notes 7-10


7. One Dimensional

Unconstrained Optimization

Definition of Optimization

Given a function f(x), optimization is the process of finding x to minimize or maximize the function.

The function f(x) is called the objective function or cost function.

The point x* is called a minimizer (denoted as x* = argmin f(x)) or a maximizer (denoted as x* = argmax f(x)) of the objective function.

The value f(x*) is called a minimum or maximum of the objective function, i.e., f(x*) = min f(x) or f(x*) = max f(x). Or we can simply say f(x*) is an optimum of the objective function.

The point x* is called a global minimizer of the objective function if its function value is no larger than any other function value, i.e., ∀x, f(x*) ≤ f(x). Similarly, the point x* is called a global maximizer if its function value is no smaller than any other function value, i.e., ∀x, f(x*) ≥ f(x).

The point x* is called a local minimizer of the objective function if its function value is no larger than the function values of the points in its neighborhood, i.e., ∀x ∈ N(x*), f(x*) ≤ f(x), where N(x*) is a local neighborhood around x*. Similarly, the point x* is called a local maximizer if ∀x ∈ N(x*), f(x*) ≥ f(x).

Note: a global minimizer/maximizer is always a local minimizer/maximizer; but the converse is NOT always true.

Lecture Notes for MBE 2036 Jia Pan

Note: maximizing f(x) is equivalent to minimizing −f(x), so a maximization problem can always be converted into a minimization problem, and vice versa.

For functions with more than one variable, e.g., f(x, y), the optimization becomes a high-dimensional optimization. Figure 2 shows the graph of a 2D function f(x, y), where the z-axis is the function value. We can observe that it also has global/local minimizers.


Convex functions

From Note 1, we know that a local optimum may not be a global optimum. However, for a special type of function called a convex function, a local optimum must be a global optimum.

Figure 3 shows a convex 1D function. Intuitively, a function is convex if, when connecting any two points on the curve, the segment always lies above the curve. More formally,

∀x₁, x₂, ∀λ ∈ [0, 1], f(λx₁ + (1 − λ)x₂) ≤ λf(x₁) + (1 − λ)f(x₂).

This property is called the convexity property.

If f(x) is a convex function, then −f(x) is called a concave function.

The convexity property can be generalized to high-dimensional functions. For instance, for a 2D convex function, we must have

∀(x₁, y₁), (x₂, y₂), ∀λ ∈ [0, 1], f(λ(x₁, y₁) + (1 − λ)(x₂, y₂)) ≤ λf((x₁, y₁)) + (1 − λ)f((x₂, y₂)).

A 2D convex function is shown in Figure 4.
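To make the convexity inequality concrete, here is a small numerical spot-check (a sketch; f(x) = x² and f(x) = x³ are assumed examples, not taken from the notes):

```python
import random

def is_convex_on_samples(f, lo, hi, trials=1000, tol=1e-9):
    """Spot-check the convexity inequality
    f(l*x1 + (1-l)*x2) <= l*f(x1) + (1-l)*f(x2)
    on random points x1, x2 in [lo, hi] and random l in [0, 1]."""
    random.seed(0)
    for _ in range(trials):
        x1 = random.uniform(lo, hi)
        x2 = random.uniform(lo, hi)
        lam = random.random()
        lhs = f(lam * x1 + (1 - lam) * x2)
        rhs = lam * f(x1) + (1 - lam) * f(x2)
        if lhs > rhs + tol:
            return False          # found a violating triple: not convex
    return True

print(is_convex_on_samples(lambda x: x * x, -5, 5))    # x^2 is convex
print(is_convex_on_samples(lambda x: x ** 3, -5, 5))   # x^3 is not convex here
```

Random sampling can only refute convexity, never prove it; it is merely a quick sanity check of the inequality.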


Given a differentiable function f(x), a necessary condition for a point x* to be an optimizer is f′(x*) = 0. Such points are called the critical points. If f″(x*) > 0, then the point is a local minimizer; if f″(x*) < 0, then the point is a local maximizer; if f″(x*) = 0, we cannot make a decision (and need more knowledge about the function's higher-order derivatives).

To find all the optimizers of a function f, we need to first solve the root-finding problem f′(x) = 0 to obtain all the critical points {xᵢ}. Next, we need to evaluate f″(xᵢ) to determine whether each is a local maximizer or minimizer.

However, the root-finding problem is difficult and cannot always be done easily. In addition, there are many functions that are not differentiable.

Numerical methods are therefore needed in such cases.

Note: A brief proof of the above conclusion. For any x locally around x*, Taylor's formula tells us: f(x) = f(x*) + f′(x*)(x − x*) + ½f″(x*)(x − x*)² + O((x − x*)³). Thus if f′(x*) = 0, then f(x) = f(x*) + ½f″(x*)(x − x*)² + O((x − x*)³). If f″(x*) > 0, then f(x) > f(x*); if f″(x*) < 0, then f(x) < f(x*); if f″(x*) = 0, then we need more information.

Note: Based on f′(x*) = 0 and f″(x*) > 0, we can only say x* is a local minimizer. It is possible that x* is not a global minimizer, unless we perform some further checks (e.g., checking whether f is a convex function).

Note: For example, for f(x) = x³ − 3x, the roots of f′(x) = 3x² − 3, namely x = 1 and x = −1, are two critical points. Since f″(1) = 6 > 0 and f″(−1) = −6 < 0, x = 1 is a local minimizer and x = −1 is a local maximizer; but neither of them is a global minimizer or maximizer.

Note: Suppose the function is f(x) = x³. At x = 0 there is f′(x) = 0 and f″(x) = 0, and thus we cannot tell anything about x = 0. Actually, x = 0 is neither a local minimizer nor a local maximizer.
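The analytical recipe — solve f′(x) = 0, then check the sign of f″ at each root — can be sketched in code. The function f(x) = x³ − 3x is an assumed example whose critical points ±1 are known in closed form:

```python
def classify_critical_point(f2, x, tol=1e-12):
    """Classify a critical point x (where f'(x) = 0) by the sign of f''(x)."""
    c = f2(x)
    if c > tol:
        return "local minimizer"
    if c < -tol:
        return "local maximizer"
    return "inconclusive"  # need higher-order derivatives

# Assumed example: f(x) = x^3 - 3x, so f'(x) = 3x^2 - 3 and f''(x) = 6x.
f2 = lambda x: 6.0 * x
for x in (-1.0, 1.0):          # roots of f'(x) = 0
    print(x, classify_critical_point(f2, x))
```

Running this reports x = −1 as a local maximizer and x = 1 as a local minimizer, matching the hand analysis.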

Golden Ratio

Before discussing a numerical method for optimization, we first deviate a bit from the optimization topic and discuss the Golden ratio.

In mathematics, two quantities are in the golden ratio if their ratio is

the same as the ratio of their sum to the larger of the two quantities.

Figure 5 illustrates the geometric relationship. Expressed algebraically, with a the longer and b the shorter part, b/a = a/(a + b) = r, where r is the golden ratio. Then 1/r = 1 + r, and thus r = (√5 − 1)/2 ≈ 0.618.

Figure 5: Geometric Relationship of Golden ratio. The point A is called the Golden Ratio Point of the

segment.

In Figure 5, the point A splits the segment into two sub-segments with lengths a and b satisfying the Golden ratio relationship, and A is called the Golden ratio point of the segment.

By symmetry, there are two Golden ratio points on a segment AB: in Figure 6, we denote them as point C and point D respectively. In addition, we note one important and interesting property: the point C, which is a Golden ratio point of the segment AB, is also a Golden ratio point of the sub-segment BD. This is due to the relationship BC/BD = BD/AB = r. Similarly, the point D, which is a Golden ratio point of the segment AB, is also a Golden ratio point of the sub-segment AC. We will see that this property plays an important role in the numerical algorithm discussed below.


Figure 6: Two Golden ratio points C and D on the segment AB. Note that the point C is also the Golden ratio point of the sub-segment BD. Similarly, the point D is also the Golden ratio point of the sub-segment AC.
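The two numerical facts used later — r = (√5 − 1)/2 ≈ 0.618 and the nesting property of the two golden points — can be checked directly (a quick sketch):

```python
import math

r = (math.sqrt(5) - 1) / 2            # golden ratio of shorter to longer part
assert abs(1 / r - (1 + r)) < 1e-12   # defining relation 1/r = 1 + r

# Segment AB = [0, 1]; its two golden ratio points:
D, C = 1 - r, r                       # D ~ 0.382, C ~ 0.618

# C is also a golden ratio point of the sub-segment [D, 1] (= BD):
assert abs(C - (D + (1 - r) * (1 - D))) < 1e-12
# D is also a golden ratio point of the sub-segment [0, C] (= AC):
assert abs(D - r * C) < 1e-12
print(round(r, 3))                    # 0.618
```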

Iterative search

Iterative search is a technique for finding the minimum or maximum of a strictly unimodal function by successively narrowing the range of values inside which the minimum or maximum is known to exist. Unlike root finding, where two function evaluations with opposite signs are sufficient to bracket a root, when searching for a minimum or maximum three values are necessary.

Definition: A function f(x) is a unimodal function if it belongs to one of the following two cases. Case 1: for some value m, f(x) is monotonically increasing for x ≤ m and monotonically decreasing for x ≥ m. Case 2: for some value m, f(x) is monotonically decreasing for x ≤ m and monotonically increasing for x ≥ m. In Case 1, the maximum value of f(x) is f(m) and there are no other local maxima; in Case 2, the minimum value of f(x) is f(m) and there are no other local minima. As a result, we call Case 1 the unimodal maximum case, and Case 2 the unimodal minimum case. A function that is not unimodal is called a multimodal function. Figure 7 shows the difference between unimodal and multimodal functions.

Note that a unimodal function may not be convex or concave, as shown by the two examples in Figure 8.

Figure 8: Two examples of functions that are unimodal but neither convex nor concave.

Next, we can look into the details of the iterative search. Let's first consider the unimodal maximum case (i.e., Case 1). Suppose we want to find the maximum of a unimodal function within the interval [x_l, x_u]; how can we successively narrow this search range?

One solution is as follows: first, we choose two points x_1 and x_2 inside the segment [x_l, x_u] with x_1 < x_2. These two points divide the segment [x_l, x_u] into three sub-segments I, II, and III, as shown in Figure 9.


Then we evaluate the function values f(x_1) and f(x_2) at the points x_1 and x_2. According to the relative magnitude of these two values, there are two different cases.

Case 1 is when f(x_1) ≤ f(x_2), as shown in Figure 9. In which regions among I, II, and III can the maximizer x* of a unimodal function appear?

Figure 9: Case 1 for a unimodal maximum function, where f(x_1) ≤ f(x_2).

Can the maximizer x* lie in regions II and III? Yes, it can; two sample unimodal functions are shown in the first two sub-figures of Figure 10, corresponding to the situations where the maximizer lies in region II and region III respectively.

Can the maximizer x* lie in region I? No, it is impossible; otherwise the function could not be a unimodal function, as shown in the last sub-figure of Figure 10. Why? Because x_1 < x_2 but f(x_1) ≤ f(x_2), and thus to the right of x_1 the function is not monotonically decreasing. This contradicts the property of the unimodal maximum function and thus is not possible.


Figure 10: How the function would look if its maximizer lay in region II, III, and I, respectively.

Since the maximizer can only lie in regions II and III, we can narrow the search in the next iteration. In particular, in the next iteration the lower bound should be updated from x_l^(i) to x_l^(i+1) = x_1^(i); the upper bound is unchanged, i.e., x_u^(i+1) = x_u^(i); and we need to generate two new points x_1^(i+1) and x_2^(i+1) within the interval [x_l^(i+1), x_u^(i+1)].

Case 2 is when f(x_1) ≥ f(x_2), and the discussion is similar. We can conclude that the maximizer can only lie in regions I and II. As a result, in the next iteration the lower bound is unchanged, i.e., x_l^(i+1) = x_l^(i), and the upper bound should be updated from x_u^(i) to x_u^(i+1) = x_2^(i). We also need to generate two new points x_1^(i+1) and x_2^(i+1) within the interval [x_l^(i+1), x_u^(i+1)].

Okay, we have finished discussing the main part of the algorithm for the unimodal maximum function, which can be briefly summarized as follows. We initialize with four points x_l^(0) < x_1^(0) < x_2^(0) < x_u^(0) and then go into iterations. In the i-th step, we first evaluate the values of f(x_1^(i)) and f(x_2^(i)). If f(x_1^(i)) ≤ f(x_2^(i)) (Case 1), we perform the update:

x_l^(i+1) ← x_1^(i) and x_u^(i+1) ← x_u^(i);

If f(x_1^(i)) ≥ f(x_2^(i)) (Case 2), we perform the update:

x_l^(i+1) ← x_l^(i) and x_u^(i+1) ← x_2^(i).

Then we generate two points x_1^(i+1) < x_2^(i+1) in the narrowed segment [x_l^(i+1), x_u^(i+1)]. With the new set of four points x_l^(i+1) < x_1^(i+1) < x_2^(i+1) < x_u^(i+1), we can start the next step of the iteration.

The iteration stops when the stopping criterion is satisfied, which we will discuss later.

For the unimodal minimum function, there are some small differences from the algorithm presented above for the unimodal maximum function. The details of these differences are left as exercises. [Hint: Case 1 should now become f(x_1^(i)) ≥ f(x_2^(i)), where x* is in regions II and III, and Case 2 should become f(x_1^(i)) ≤ f(x_2^(i)), where x* is in regions I and II.]
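A minimal sketch of the iterative search for the unimodal maximum case. The interior points are placed at fixed fractions 1/3 and 2/3 of the interval — an arbitrary illustrative choice, not the placement the notes recommend next; the test function is an assumed example:

```python
def iterative_search_max(f, xl, xu, tol=1e-8, max_iter=1000):
    """Narrow [xl, xu] around the maximizer of a unimodal-maximum function.

    Case 1 (f(x1) <= f(x2)): maximizer lies in regions II/III, so raise xl.
    Case 2 (f(x1) >  f(x2)): maximizer lies in regions I/II, so lower xu.
    Costs two function evaluations per iteration."""
    for _ in range(max_iter):
        if xu - xl < tol:
            break
        x1 = xl + (xu - xl) / 3
        x2 = xl + 2 * (xu - xl) / 3
        if f(x1) <= f(x2):
            xl = x1
        else:
            xu = x2
    return (xl + xu) / 2

# Assumed example: f(x) = -(x - 2)^2 is unimodal with maximizer x* = 2.
print(iterative_search_max(lambda x: -(x - 2) ** 2, 0.0, 5.0))  # ~ 2.0
```

Each iteration discards one third of the interval, so the bracket shrinks by a factor of 2/3 per step.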

Golden-Section search

In the iterative search algorithm presented above, we need to perform two function evaluations (i.e., evaluating the values of f(x_1^(i)) and f(x_2^(i))) in each step of the iteration. Suppose we run N iterations; then we need to perform 2N function evaluations. If the function is complex (e.g., involving expensive terms such as sin), the computation can be slow. Can we reduce the number of function evaluations that are necessary in the iterative search?

The answer is: yes, we can. The key point lies in the property we discussed before in Figure 6: suppose the segment AB is the segment [x_l^(i), x_u^(i)], and we choose the point x_1^(i) as the Golden ratio point D and the point x_2^(i) as the Golden ratio point C.


Suppose we meet Case 1; then the narrowed segment becomes [x_l^(i+1), x_u^(i+1)] = [x_1^(i), x_u^(i)], i.e., the sub-segment DB. We further need to generate x_1^(i+1) and x_2^(i+1) as the Golden ratio points on DB. But according to our discussion before, the point C (that is, x_2^(i)) is also a Golden ratio point of DB, i.e., x_1^(i+1) = x_2^(i). We only need to generate one new Golden ratio point x_2^(i+1). In this way, we only need to evaluate the function value f(x_2^(i+1)) in the next iteration, since the other function value f(x_1^(i+1)) = f(x_2^(i)) has already been evaluated in the i-th iteration. Similarly, for Case 2, we have x_2^(i+1) = x_1^(i), and we only need to generate a new Golden ratio point x_1^(i+1) and evaluate its value.

By using the Golden ratio points as x_1^(i), x_2^(i), given N iterations we only need to perform one function evaluation in each iteration, except the first one, where we need to evaluate both f(x_1^(0)) and f(x_2^(0)). As a result, we only need to take 2 + (N − 1) = N + 1 function evaluations in total. Compared to the general iterative search, the Golden-section search takes only about 50% of the computation, which is a great improvement.
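A sketch of the Golden-section search for the unimodal maximum case, reusing one interior point (and its stored function value) per iteration as described above. The test function f(x) = 2 sin(x) − x²/10 on [0, 4] is an assumed example:

```python
import math

R = (math.sqrt(5) - 1) / 2        # golden ratio, ~0.618

def golden_section_max(f, xl, xu, tol=1e-8):
    """Golden-section search for the maximizer of a unimodal-maximum f.
    Only one new function evaluation per iteration after the first."""
    x1 = xu - R * (xu - xl)       # left golden point (D)
    x2 = xl + R * (xu - xl)       # right golden point (C)
    f1, f2 = f(x1), f(x2)
    while xu - xl > tol:
        if f1 <= f2:              # Case 1: maximizer in [x1, xu]
            xl, x1, f1 = x1, x2, f2   # old x2 becomes the new x1
            x2 = xl + R * (xu - xl)
            f2 = f(x2)
        else:                     # Case 2: maximizer in [xl, x2]
            xu, x2, f2 = x2, x1, f1   # old x1 becomes the new x2
            x1 = xu - R * (xu - xl)
            f1 = f(x1)
    return (xl + xu) / 2

best = golden_section_max(lambda x: 2 * math.sin(x) - x * x / 10, 0.0, 4.0)
print(round(best, 4))             # ~ 1.4276
```

The identity 1 − R = R² is exactly what guarantees that the surviving interior point lands on a golden point of the shrunken interval.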

Suppose we stop at the N-th iteration, and the set of four points is x_l^(N) < x_1^(N) < x_2^(N) < x_u^(N). How can we estimate the error after this iteration? If we are in Case 1, then x* should lie in regions II and III, and we return x_2^(N) as the approximate solution to x*. The actual x* can be anywhere in regions II and III, and thus the error bound is |x* − x_2^(N)| ≤ max(x_2^(N) − x_1^(N), x_u^(N) − x_2^(N)), i.e., the larger of the lengths of intervals II and III. Similarly, if we are in Case 2, we shall return x_1^(N) as the approximate solution to x*, and the error bound is |x* − x_1^(N)| ≤ max(x_1^(N) − x_l^(N), x_2^(N) − x_1^(N)).

For the Golden ratio case, we know that x_1^(N) − x_l^(N) = x_u^(N) − x_2^(N) = (1 − r)(x_u^(N) − x_l^(N)) ≈ 0.382(x_u^(N) − x_l^(N)), and x_2^(N) − x_1^(N) = (√5 − 2)(x_u^(N) − x_l^(N)) ≈ 0.236(x_u^(N) − x_l^(N)). As a result, the error is always bounded by (1 − r)(x_u^(N) − x_l^(N)).

In each iteration, the interval is narrowed by a factor of r, and thus x_u^(N) − x_l^(N) = r^N (x_u^(0) − x_l^(0)).

As a result, the error bound when we stop at the N-th iteration would be (1 − r)(x_u^(N) − x_l^(N)) = (1 − r) r^N (x_u^(0) − x_l^(0)). This can be used to estimate the number of iterations required to achieve a given error bound.

Example: Suppose the relative error bound is ε_s, and assume we already know x*. Then we require (1 − r) r^N (x_u^(0) − x_l^(0)) / |x*| ≤ ε_s, and thus N ≥ log(ε_s |x*| / ((1 − r)(x_u^(0) − x_l^(0)))) / log r. In practice we don't know x* beforehand, so we usually use a weaker estimate, e.g., replacing |x*| with a lower bound on |x| over the initial interval.

Note: the iterative search may not find the global optimum for a non-unimodal function, and Figure 11 shows one example.


Figure 11: Iterative search may not find the global optimum for a non-unimodal function.

Note: however, the iterative search can always return a local optimum solution. Why? [Hint: are all the functions unimodal locally?]

Note: Will the iterative search algorithm work for non-continuous or non-differentiable functions as shown in Figure 12?


Newtons method

The Newton–Raphson method is an approach for finding a root of a function f, i.e., a point such that f(x) = 0. It is an iterative method, and at the i-th step there is x_{i+1} = x_i − f(x_i)/f′(x_i).

Recall that a necessary condition for x* to be an optimizer is f′(x*) = 0, which can be solved by the Newton–Raphson method: x_{i+1} = x_i − f′(x_i)/f″(x_i). This is called Newton's method.

Newton's method does not require initial guesses that bracket the optimum. Like the Newton–Raphson method, depending on the nature of the function and the quality of the initial guess, this method may diverge, i.e., it may not find the answer. However, when it works, it is faster than the Golden-section search.
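A minimal sketch of Newton's method for 1D optimization; the test function f(x) = 2 sin(x) − x²/10 and the initial guess are assumed examples, not taken from the notes:

```python
import math

def newton_optimize(f1, f2, x0, tol=1e-10, max_iter=100):
    """Newton's method for 1D optimization: iterate x <- x - f'(x)/f''(x).
    May diverge for a poor initial guess; no bracketing is needed."""
    x = x0
    for _ in range(max_iter):
        step = f1(x) / f2(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Assumed example: f(x) = 2*sin(x) - x**2/10, maximizer near x = 1.4276.
f1 = lambda x: 2 * math.cos(x) - x / 5       # f'(x)
f2 = lambda x: -2 * math.sin(x) - 1 / 5      # f''(x)
print(round(newton_optimize(f1, f2, 2.5), 4))   # ~ 1.4276
```

Note that the iteration converges to any critical point, so the sign of f″ at the result must still be checked to know whether a maximizer or minimizer was found.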

Note: How will Newton's method work for non-continuous or non-differentiable functions as shown in Figure 12? What happens with the initial guess x_0 = 0?


8. Multi-Dimensional

Unconstrained Optimization

Analytical method

Given a multi-dimensional differentiable function f(x), a necessary condition for the point x = [x_1, x_2, …, x_n]ᵀ to be a minimizer or maximizer is ∇f(x) = 0, where ∇f(x) is the gradient evaluated at the point x:

∇f(x) = [∂f/∂x_1, ∂f/∂x_2, …, ∂f/∂x_n]ᵀ.

The points satisfying ∇f(x) = 0 are also called the critical points, similar to the one-dimensional case.

To determine whether a critical point x* is a local maximizer or minimizer, we need to further check the Hessian matrix of f. The Hessian matrix of f evaluated at x* is

H = [∂²f/∂x_i∂x_j], i, j = 1, …, n.


A matrix H is positive definite if for every non-zero vector v there is vᵀHv > 0; H is negative definite if for every non-zero vector v there is vᵀHv < 0.

According to Taylor's expansion, for any point x in the neighborhood of a critical point x*, there is:

f(x) = f(x*) + ∇f(x*)ᵀ(x − x*) + ½(x − x*)ᵀH(x − x*) + O(‖x − x*‖³).

Since ∇f(x*) = 0, this becomes f(x) = f(x*) + ½(x − x*)ᵀH(x − x*) + O(‖x − x*‖³).

If H is positive definite, then for any x ≠ x*, (x − x*)ᵀH(x − x*) > 0, and thus f(x) > f(x*), i.e., x* is a local minimizer. Similarly, if H is negative definite, then for any x ≠ x*, (x − x*)ᵀH(x − x*) < 0, and thus f(x) < f(x*), i.e., x* is a local maximizer. If H is neither positive nor negative definite, then we cannot tell much about x*.

Intuitively, H being positive definite is analogous to f″(x) > 0 in the one-dimensional case, and H being negative definite is analogous to f″(x) < 0 in the one-dimensional case.

A critical point that is a local minimum along one direction but a local maximum along another direction, (x, y), is called a saddle point. For a 2D function, vᵀHv = f_xx a² + 2f_xy ab + f_yy b² for v = (a, b); this quantity is positive for every non-zero v if f_xx > 0 and f_xx f_yy − f_xy² > 0. Similarly, f_xx a² + 2f_xy ab + f_yy b² < 0 holds for every non-zero vector v if f_xx < 0 and f_xx f_yy − f_xy² > 0. These conditions let us classify the critical points and locate the optimum values.


Example: for the function considered here, the critical point is (0, 0), and the Hessian evaluated at (0, 0) is

H = [ 2 0 ; 0 2 ],

which is positive definite, and thus (0, 0) is a local minimizer. It is not a global minimizer, since f(2, 3) = −5 < 0 = f(0, 0).

In any multi-dimensional optimization search algorithm, there are two important components: 1) choosing a search direction d; and 2) choosing a step size (i.e., how far we should pursue a solution along the chosen direction). If we start from the point x_i, the iteration runs as x_{i+1} = x_i + h·d.

Gradient descent chooses the negative gradient as the search direction, i.e., d = −∇f(x_i); for maximization (gradient ascent), d = ∇f(x_i).

For the step size, one ideal but impractical way is to choose a very small step size, so that the optimization process always keeps to the steepest direction while walking the shortest distance. However, the disadvantage lies in the fact that we would need to perform a huge number of gradient computations, which would be very expensive.

The steepest ascent/descent method provides a better way of determining the step size. In particular, the step size h* is chosen as the travel distance that achieves the best function value along the search direction d. For a maximum optimization max f(x), we choose the step size as h* = argmax_h f(x_i + h·d); for a minimum optimization, h* = argmin_h f(x_i + h·d).

Example: use the steepest ascent method to find the maximum point of f(x, y) = 2xy + 2x − x² − 2y², with the initial guess (x_0, y_0) = (−1, 1).

Solution: ∇f = [∂f/∂x, ∂f/∂y]ᵀ = [2y + 2 − 2x, 2x − 4y]ᵀ, and at (−1, 1) we get ∇f = [6, −6]ᵀ.

Then h* = argmax_h f((x_0, y_0) + h∇f).

f((x_0, y_0) + h∇f) = f(−1 + 6h, 1 − 6h) = −180h² + 72h − 7 = g(h).

The optimal h needs to satisfy g′(h*) = 0, and thus h* = 0.2. Then

(x_1, y_1) = (x_0, y_0) + h*∇f = (0.2, −0.2).

After several steps, we can get the solution sequence as shown in Figure 14.
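The worked example can be reproduced numerically. This sketch replaces the analytic solution of g′(h) = 0 with a crude dense scan over h (an illustrative shortcut, not the analytic argmax used in the notes), which is enough to recover h* = 0.2:

```python
def f(x, y):
    """Objective from the example: f(x, y) = 2xy + 2x - x^2 - 2y^2."""
    return 2 * x * y + 2 * x - x * x - 2 * y * y

def grad_f(x, y):
    return (2 * y + 2 - 2 * x, 2 * x - 4 * y)

def line_search_max(x, y, gx, gy, h_max=1.0, n=100000):
    """Pick h in [0, h_max] maximizing f((x, y) + h * grad) by dense scan."""
    best_h, best_v = 0.0, f(x, y)
    for i in range(1, n + 1):
        h = h_max * i / n
        v = f(x + h * gx, y + h * gy)
        if v > best_v:
            best_h, best_v = h, v
    return best_h

x, y = -1.0, 1.0                 # initial guess from the example
gx, gy = grad_f(x, y)            # (6, -6)
h = line_search_max(x, y, gx, gy)
x, y = x + h * gx, y + h * gy
print(round(h, 4), round(x, 4), round(y, 4))   # 0.2  0.2  -0.2
```

Repeating the two-step loop (gradient, then line search) generates the zigzag sequence discussed next.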


From the above example, we can observe that consecutive search directions seem to be perpendicular to each other. Is this just a coincidence? No; actually this is always true for the steepest descent method.

Theorem: In the steepest descent method, the descent direction d_{i+1} is perpendicular to d_i, the descent direction in the previous step.

Proof: Let g(h) = f(x_i + h·d_i). According to the selection of the steepest descent step size, there is g′(h*) = 0, i.e., ∇f(x_i + h*·d_i)ᵀd_i = 0, which actually means d_{i+1}ᵀd_i = 0.

This property actually implies one important limitation of the steepest descent method: if the initial guess is not good, the solution sequence will zigzag and converge very slowly, as shown in Figure 15.

Figure 15: Comparison of different initial guesses for the steepest descent algorithm while solving an optimization problem.

One solution to the above limitation of the steepest descent method is Newton's method.


Newton's method considers the second-order Taylor expansion: f(x) = f(x_i) + ∇f(x_i)ᵀ(x − x_i) + ½(x − x_i)ᵀH_i(x − x_i) + O(‖x − x_i‖³), where H_i is the Hessian matrix evaluated at the point x_i. At the optimal point of this approximation, ∇f(x) = 0, and thus ∇f(x_i) + H_i(x − x_i) = 0. As a result, we can choose the next point as x_{i+1} = x_i − H_i⁻¹∇f(x_i).

Newton's method is more complex than the steepest descent method, but usually it converges much faster, as shown in Figure 16.

Figure 16: Comparison between the steepest descent method and Newton's method while maximizing a function of the form f(x, y) = … + 5 log(1 + …), where the black curve is the steepest descent and the blue curve is Newton's method.
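A sketch of one multi-dimensional Newton update x_{i+1} = x_i − H⁻¹∇f(x_i) for the 2D case, applied to the earlier quadratic example f(x, y) = 2xy + 2x − x² − 2y². Since f is quadratic (its Hessian is constant and negative definite), a single Newton step lands exactly on its maximizer (2, 1):

```python
def newton_step_2d(grad, hess, x, y):
    """One Newton update: (x, y) - H^{-1} grad, via a hand-rolled 2x2 solve."""
    gx, gy = grad(x, y)
    (a, b), (c, d) = hess(x, y)
    det = a * d - b * c
    # Solve H s = g for the step s by Cramer's rule.
    sx = (d * gx - b * gy) / det
    sy = (a * gy - c * gx) / det
    return x - sx, y - sy

grad = lambda x, y: (2 * y + 2 - 2 * x, 2 * x - 4 * y)
hess = lambda x, y: ((-2.0, 2.0), (2.0, -4.0))     # constant Hessian
print(newton_step_2d(grad, hess, -1.0, 1.0))        # (2.0, 1.0)
```

Compare this single step with the several zigzagging steepest ascent steps needed for the same function.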

Regression

Polynomial curve fitting

Suppose we observe a real-valued input variable x and we wish to use this observation to predict the value of a real-valued target variable t. Now suppose that we are given a training set comprising N observations of x, denoted x_1, x_2, …, x_N, together with corresponding observations of the values of t, denoted t_1, t_2, …, t_N. Our goal is to exploit the given data set in order to make predictions of the value of the target variable for some new value x̂. The simplest approach to achieve this is via curve fitting. In particular, we shall fit the data using a polynomial function of the form y(x, w) = w_0 + w_1 x + w_2 x² + … + w_M x^M = Σ_{j=0}^{M} w_j x^j, where M is the order of the polynomial.

The values of the coefficients w will be determined by fitting the polynomial to the training data. This can be done by minimizing an error function that measures the misfit between the function y(x, w), for any given value of w, and the given data set points. One simple choice of error function, which is widely used, is given by the sum of the squares of the errors between the predictions y(x_n, w) for each data point x_n and the corresponding target value t_n, so that we minimize

E(w) = Σ_{n=1}^{N} (y(x_n, w) − t_n)².

In this way, the curve fitting problem can be converted into a multi-dimensional optimization problem min_w E(w).


Figure 17 shows the fitting results for different orders of polynomials. We can observe that: if the polynomial order is too small (e.g., M = 0, 1 in Figure 17), the fitted curve cannot fit the points well; if the polynomial order is too large (e.g., M = 9 in Figure 17), the fitted curve can pass through every data point, but it may not fit well with the underlying real function and thus cannot provide a high-quality prediction. Only when the polynomial order is appropriate (e.g., M = 3 in Figure 17) can the fitted curve fit the given data points well and also provide a good prediction.

Figure 17: Polynomials with different orders M, shown as red curves, fitted to the data set shown as blue dots.

Two important special cases of polynomial fitting are the linear regression and the quadratic regression.


Linear regression: in this special case M = 1, y(x, w_0, w_1) = w_0 + w_1 x. Then the error function is E(w_0, w_1) = Σ_{n=1}^{N} (w_0 + w_1 x_n − t_n)². Setting ∂E/∂w_0 = 0 and ∂E/∂w_1 = 0, from these two equations we obtain

w_1 = (N Σ x_n t_n − Σ x_n Σ t_n) / (N Σ x_n² − (Σ x_n)²) and w_0 = t̄ − w_1 x̄,

where x̄ and t̄ are the means of the x_n and the t_n.

Quadratic regression: in this special case M = 2, y(x, w_0, w_1, w_2) = w_0 + w_1 x + w_2 x². Then the error function is E(w_0, w_1, w_2) = Σ_{n=1}^{N} (w_0 + w_1 x_n + w_2 x_n² − t_n)². Setting the three partial derivatives ∂E/∂w_0, ∂E/∂w_1, ∂E/∂w_2 to zero and solving these three equations, we obtain the linear system

[ N, Σx_n, Σx_n² ; Σx_n, Σx_n², Σx_n³ ; Σx_n², Σx_n³, Σx_n⁴ ] [w_0; w_1; w_2] = [Σt_n; Σx_n t_n; Σx_n² t_n],

whose solution gives the quadratic regression.

For general polynomial fitting, the error function can be formulated

as


E(w) = Σ_{n=1}^{N} (y(x_n, w) − t_n)².

Let

A = [ 1, x_1, x_1², …, x_1^M ; 1, x_2, x_2², …, x_2^M ; … ; 1, x_N, x_N², …, x_N^M ], w = [w_0, w_1, …, w_M]ᵀ, t = [t_1, t_2, …, t_N]ᵀ.

Then the above function can be reformulated as E(w) = (Aw − t)ᵀ(Aw − t). The optimal w needs to satisfy ∇E(w) = 0, which results in w = (AᵀA)⁻¹Aᵀt.
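The normal-equation solution w = (AᵀA)⁻¹Aᵀt can be sketched without any libraries; the data set below (noiseless samples of t = 1 + 2x − x²) is an assumed example:

```python
def polyfit_normal_equations(xs, ts, M):
    """Fit an order-M polynomial by the normal equations (A^T A) w = A^T t,
    where row n of A is [1, x_n, x_n^2, ..., x_n^M].
    Solved with plain Gaussian elimination (no libraries)."""
    A = [[x ** j for j in range(M + 1)] for x in xs]
    n = M + 1
    # Build G = A^T A and b = A^T t.
    G = [[sum(A[k][i] * A[k][j] for k in range(len(xs))) for j in range(n)]
         for i in range(n)]
    b = [sum(A[k][i] * ts[k] for k in range(len(xs))) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(G[r][col]))
        G[col], G[piv] = G[piv], G[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = G[r][col] / G[col][col]
            for c in range(col, n):
                G[r][c] -= m * G[col][c]
            b[r] -= m * b[col]
    # Back substitution.
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        w[i] = (b[i] - sum(G[i][j] * w[j] for j in range(i + 1, n))) / G[i][i]
    return w

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ts = [1 + 2 * x - x * x for x in xs]     # noiseless data from t = 1 + 2x - x^2
print([round(w, 6) for w in polyfit_normal_equations(xs, ts, 2)])  # [1.0, 2.0, -1.0]
```

For high orders M the matrix AᵀA becomes ill-conditioned, which is why production code prefers QR or SVD based solvers over the raw normal equations.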

A function f(t) is periodic if there exists T such that f(t + T) = f(t). Here T is called the period of the function, and ω = 2π/T is the (angular) frequency of the function.

Any periodic function with frequency ω (and some other appropriate properties) can be represented by the Fourier series, i.e.,

f(t) = a_0 + a_1 cos(ωt) + b_1 sin(ωt) + a_2 cos(2ωt) + b_2 sin(2ωt) + … + a_k cos(kωt) + b_k sin(kωt) + …,

or f(t) = a_0 + Σ_{k=1}^{∞} [a_k cos(kωt) + b_k sin(kωt)].

In other words, we can approximate f(t) by a set of basis functions B = {1, sin(ωt), cos(ωt), …, sin(kωt), cos(kωt), …}, and we need to find the corresponding Fourier coefficients a_0, a_1, b_1, a_2, b_2, …


The basis functions have the following orthogonality properties over one period:

1. Given any two different basis functions u, v ∈ B, we have ∫₀ᵀ u(t)v(t) dt = 0.

2. For any basis function u, ∫₀ᵀ u(t)² dt ≠ 0. In particular, for u ≠ 1, ∫₀ᵀ u(t)² dt = T/2, and ∫₀ᵀ 1² dt = T.

Proof: simple integration math.

Using these orthogonality properties, the Fourier coefficients can be computed easily.

Integrating both sides over one period: ∫₀ᵀ f(t) dt = ∫₀ᵀ [a_0 + a_1 cos(ωt) + b_1 sin(ωt) + a_2 cos(2ωt) + b_2 sin(2ωt) + … + a_k cos(kωt) + b_k sin(kωt) + …] dt = a_0 T, and thus a_0 = (1/T) ∫₀ᵀ f(t) dt.

Multiplying both sides by cos(kωt) and integrating: ∫₀ᵀ f(t) cos(kωt) dt = ∫₀ᵀ cos(kωt)[a_0 + a_1 cos(ωt) + b_1 sin(ωt) + a_2 cos(2ωt) + b_2 sin(2ωt) + …] dt = a_k T/2, and thus a_k = (2/T) ∫₀ᵀ f(t) cos(kωt) dt.

Similarly, b_k = (2/T) ∫₀ᵀ f(t) sin(kωt) dt.

More details can be found in the lecture slides.

For some special functions, we can actually compute their Fourier coefficients using this concept instead of complex calculation.

Example: compute the Fourier coefficients for the function f(t) = sin(t/2) + cos(t/3).

27

Lecture Notes for MBE 2036 Jia Pan

In principle we could apply the integral formulas to compute the Fourier coefficients. But notice that this function is already a sum of trigonometric functions, so maybe we can find a more lightweight solution. First, the period of sin(t/2) is 4π and the period of cos(t/3) is 6π, so the period of f(t) is 12π, the lowest common multiple of 6π and 4π. We can verify this by checking f(t + 12π) = f(t), and there is no smaller T that can satisfy f(t + T) = f(t). Thus ω = 2π/T = 1/6. The function can then be written in the standard format of a Fourier series: f(t) = sin(t/2) + cos(t/3) = sin(3ωt) + cos(2ωt). This means b_3 = 1 and a_2 = 1, and all other a_k, b_k are zero.
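The coefficient formulas can be checked numerically. This sketch estimates a_k and b_k (k ≥ 1) with the trapezoidal rule, assuming the example function f(t) = sin(t/2) + cos(t/3) with T = 12π, and recovers a_2 = b_3 = 1:

```python
import math

def fourier_coeff(f, T, k, kind, n=20000):
    """Estimate a_k (kind='cos') or b_k (kind='sin'), k >= 1, of a T-periodic
    function f via the trapezoidal rule on (2/T) * integral over one period."""
    w = 2 * math.pi / T
    trig = math.cos if kind == "cos" else math.sin
    h = T / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        weight = 0.5 if i in (0, n) else 1.0   # trapezoid endpoint weights
        total += weight * f(t) * trig(k * w * t)
    return (2 / T) * total * h

f = lambda t: math.sin(t / 2) + math.cos(t / 3)
T = 12 * math.pi
print(round(fourier_coeff(f, T, 2, "cos"), 6))  # a_2 ~ 1.0
print(round(fourier_coeff(f, T, 3, "sin"), 6))  # b_3 ~ 1.0
print(round(fourier_coeff(f, T, 1, "cos"), 6))  # a_1 ~ 0.0
```

For smooth periodic integrands the trapezoidal rule over a full period is extremely accurate, so the estimates match the analytic coefficients essentially to machine precision.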

Note: this observation may help you greatly simplify the computation when the function involves trigonometric components.

Try by yourself: f(t) = { 1 for −T/2 < t < 0; 0 for 0 < t < T/2 }, where T = 24π. What are the Fourier coefficients of f(t)? What about f(t) + sin(t/2) + cos(t/3)? [Hint: the period now should be the lowest common multiple of 24π (the period of f(t)) and 12π (the period of sin(t/2) + cos(t/3)).]
