
Optimization problems may be classified into three categories:
1. One-dimensional unconstrained problems
2. Multidimensional unconstrained problems
3. Multidimensional constrained problems

Problems of the first class are the easiest to solve, whereas those of the third class are the most difficult. In practice, multidimensional constrained problems are usually reduced to multidimensional unconstrained problems which, in turn, are reduced to one-dimensional unconstrained problems. In effect, most of the available nonlinear programming algorithms are based on the minimization of a function of a single variable without constraints. Efficient one-dimensional optimization algorithms are therefore required if efficient multidimensional unconstrained and constrained algorithms are to be constructed.

The one-dimensional optimization problem is

minimize F = f(x)

where f(x) is a function of one variable. This problem has a solution if f(x) is unimodal in some range of x, i.e., f(x) has only one minimum in some range xL <= x <= xU, where xL and xU are the lower and upper limits of the minimizer x*.

Two general classes of one-dimensional optimization methods are available, namely, search methods and approximation methods. In search methods, an interval [xL, xU] containing x*, known as a bracket, is established and is then repeatedly reduced on the basis of function evaluations until a reduced bracket [xL,k, xU,k] is obtained which is sufficiently small. The minimizer can be assumed to be at the center of the interval [xL,k, xU,k]. These methods can be applied to any function; differentiability of f(x) is not essential. In approximation methods, the function is approximated by a low-order polynomial, usually a second- or third-order polynomial. This is then analyzed using elementary calculus and an approximate value of x* is deduced. The interval [xL, xU] is then reduced and the process is repeated several times until a sufficiently precise value of x* is obtained. In these methods, f(x) is required to be continuous and differentiable, i.e., f(x) in C1.
Several one-dimensional optimization approaches are as follows:
1. Dichotomous search
2. Fibonacci search
3. Golden-section search

Dichotomous Search

Consider a unimodal function which is known to have a minimum in the interval [xL, xU]. This interval is said to be the range of uncertainty. The minimizer x* of f(x) can be located by progressively reducing the range of uncertainty until a sufficiently small range is obtained. In search methods, this is achieved by using the values of f(x) at suitable points.

If the value of f(x) is known at a single point xa in the range xL < xa < xU, point x* is equally likely to be in the range xL to xa or xa to xU, as depicted in Fig. 1(a). Consequently, the information available is not sufficient to allow a reduction of the range of uncertainty. However, if the value of f(x) is known at two points, say xa and xb, an immediate reduction is possible. Three possibilities may arise, namely,
(a) f(xa) < f(xb)
(b) f(xa) > f(xb)
(c) f(xa) = f(xb)
In case (a), x* may be located in the range xL < x* < xa or xa < x* < xb, that is, xL < x* < xb, as in Fig. 1(b). The possibility xb < x* < xU is definitely ruled out since this would imply that f(x) has two minima: one to the left of xb and one to the right of xb. Similarly, for case (b), we must have xa < x* < xU, as in Fig. 1(c). For case (c), both inequalities xL < x* < xb and xa < x* < xU must be satisfied, that is, we must have xa < x* < xb.


The Dichotomous Search Method computes the midpoint (a+b)/2 and then moves slightly to either side of the midpoint to compute two test points, (a+b)/2 + ε/2 and (a+b)/2 − ε/2, where ε is a very small number, such as 0.01. The objective is to place the two test points as close together as possible. The procedure continues until it gets within some small interval containing the optimal solution.

Dichotomous Search Algorithm to maximize f(x) over the interval [a, b]:
STEP 1: Initialize: choose a small number ε > 0, such as 0.01. Select a small t such that 0 < t < b − a, called the length of uncertainty for the search. Calculate the number of iterations n as the smallest integer for which (b − a)/2^n + ε(1 − 2^−n) ≤ t.
STEP 2: For k = 1 to n, do Steps 3 and 4.
STEP 3: Compute the two test points x1 = (a+b)/2 − ε/2 and x2 = (a+b)/2 + ε/2.
STEP 4: (For a maximization problem) If f(x1) ≥ f(x2), then b = x2, else a = x1. Set k = k + 1 and return to Step 3.
STEP 5: Let x* = (a+b)/2, the center of the final interval.

Instead of determining the number of iterations in advance, we may wish to continue until the change in the dependent variable is less than some predetermined amount, say Δ; that is, continue to iterate until f(a) − f(b) ≤ Δ. To minimize a function y = f(x), either maximize −y or switch the directions of the inequality signs in Step 4.

Example: Maximize f(x) = −x^2 − 2x over the interval −3 ≤ x ≤ 6. Assume the optimal tolerance is to be less than 0.2, and choose ε = 0.01. The required number of iterations n then follows from this condition.
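A minimal sketch of the algorithm above (code not from the original text), run here with the interval-length stopping rule rather than a precomputed iteration count:

```python
def dichotomous_search(f, a, b, eps=0.01, t=0.2):
    """Maximize a unimodal f on [a, b]; stop once the bracket is shorter than t."""
    while (b - a) > t:
        mid = (a + b) / 2
        x1, x2 = mid - eps / 2, mid + eps / 2   # two test points around the midpoint
        if f(x1) >= f(x2):
            b = x2                               # maximum lies in [a, x2]
        else:
            a = x1                               # maximum lies in [x1, b]
    return (a + b) / 2                           # center of the final interval

# worked example: f(x) = -x**2 - 2x on [-3, 6]; the true maximizer is x = -1
x_star = dichotomous_search(lambda x: -(x**2) - 2 * x, -3.0, 6.0)
```

Because each comparison keeps the maximizer inside the shrinking bracket, the returned midpoint lies within t/2 of the true maximizer x = −1.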

Fibonacci Search

Fibonacci search is a univariate search technique that can be used to find the maximum (or minimum) of an arbitrary unimodal, univariate objective function. The name Fibonacci search has been attributed to this technique because of the search procedure's dependence on the numerical sequence of Fibonacci numbers, defined by

Fn = Fn−1 + Fn−2, n = 2, 3, ..., with F0 = 1 and F1 = 1.

The Fibonacci sequence:
n  :  0  1  2  3  4  5   6   7   8   9   10   11   12   13   14   15
Fn :  1  1  2  3  5  8  13  21  34  55  89  144  233  377  610  987

Step-1: Define the end points of the search, A and B.

Step-2: Define the number of functional evaluations, N, that are to be used in the search.
Step-3: Define the minimum resolution parameter, ε.
Step-4: Define the initial interval and first interval of uncertainty as L0 = L1 = (B − A).
Step-5: Define the second interval of uncertainty as
L2 = (FN−1 / FN) L1 + ((−1)^N / FN) ε
Step-6: Locate the first two functional evaluations at the two symmetric points X1 and X2, defined as X1 = A + L2 and X2 = B − L2.
Step-7: Calculate f(X1) and f(X2), and eliminate the interval in which the optimum cannot lie.
Step-8: Use the relationship Ln = Ln−2 − Ln−1 to locate the subsequent points of evaluation within the remaining interval of uncertainty. Continue to repeat Steps 7 and 8 until N functional evaluations have been executed. The final solution can either be the average of the last two points evaluated (XN and XN−1) or the best (max/min) functional evaluation.

Example: Maximize the function f(x) = −3x^2 + 21.6x + 1, with a minimum resolution of 0.5 over six functional evaluations. The optimal value of f(x) is assumed to lie in the range (0, 25).

Solution: L0 = L1 = 25, and L2 = (F5/F6)(25) + (1/13)(0.5) = (8/13)(25) + 0.0385 = 15.4231. The first two functional evaluations are therefore conducted over the range 0 to 25, at
X1 = 0 + 15.4231 = 15.4231, f(X1) = −379.48
X2 = 25 − 15.4231 = 9.5769, f(X2) = −67.29
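As a rough sketch (code not from the original text, and using the standard Fibonacci placement without the small resolution correction ε, so its numbers differ slightly from the worked example), Steps 1–8 can be coded as:

```python
def fibonacci_search(f, a, b, n):
    """Maximize a unimodal f on [a, b] using exactly n function evaluations."""
    # Fibonacci numbers F0..Fn with F0 = F1 = 1
    F = [1, 1]
    while len(F) <= n:
        F.append(F[-1] + F[-2])
    # first two symmetric interior points (Step 6), without the ε correction
    x1 = a + F[n - 2] / F[n] * (b - a)
    x2 = a + F[n - 1] / F[n] * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(n, 2, -1):            # one new evaluation per pass (Steps 7-8)
        if f1 < f2:                      # maximum cannot lie in [a, x1]
            a, x1, f1 = x1, x2, f2
            x2 = a + F[k - 2] / F[k - 1] * (b - a)
            f2 = f(x2)
        else:                            # maximum cannot lie in [x2, b]
            b, x2, f2 = x2, x1, f1
            x1 = a + F[k - 3] / F[k - 1] * (b - a)
            f1 = f(x1)
    return (x1, f1) if f1 >= f2 else (x2, f2)

x_best, f_best = fibonacci_search(lambda x: -3 * x**2 + 21.6 * x + 1, 0.0, 25.0, 6)
```

On the worked example this returns a point near x = 3.85 with f ≈ 39.7; the tutorial's own estimate differs slightly because of the omitted ε term in the first placement.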

Hence the region to the right of X1 = 15.42 can be eliminated. Each subsequent step places one new point symmetrically in the remaining interval, eliminates the region in which the maximum cannot lie, and shrinks the interval of uncertainty according to Ln = Ln−2 − Ln−1. After the sixth functional evaluation, the best estimate of the optimal solution is X5* = 3.731 with f(X5*) = 39.83.

Golden Section Search Method

In performing the Fibonacci search, the two primary drawbacks are the a priori specification of the resolution factor and of the number of experiments to be performed. As N grows, the ratio of successive Fibonacci numbers FN−1/FN approaches the limit 0.618. This is known as the golden ratio or golden section.

Step-1: Define the initial interval of uncertainty as L0 = B − A, where B is the upper bound and A is the lower bound.
Step-2: Determine the first two functional evaluations at points X1 and X2 defined by
X1 = A + 0.618 (B − A)
X2 = B − 0.618 (B − A)
Step-3: Eliminate the appropriate region in which the optimum cannot lie.
Step-4: Determine the region of uncertainty defined by
Lj+1 = Lj−1 − Lj, j = 2, 3, 4, ...

where L0 = (B − A) and L2 = X1 − A or L2 = B − X2, depending upon the region eliminated in Step-3.
Step-5: Establish a new functional evaluation using the result of Step-4, evaluate f(x) at this point, and then go to Step-3. Repeat this procedure until a specified convergence criterion is satisfied.

Example: Minimize f(x) = x^4 − 15x^3 + 72x^2 − 135x, terminating the search when |f(Xn) − f(Xn−1)| falls below the specified tolerance. The initial range of x is 1 ≤ x ≤ 15.

Solution: The first two points are placed symmetrically within the interval 1 ≤ x ≤ 15. The golden-section ratio places them at
X1 = 1 + 0.618 (15 − 1) = 9.652, f(X1) = 595.7
X2 = 15 − 0.618 (15 − 1) = 6.348, f(X2) = −168.82
Since f(X1) > f(X2), the region to the right of X = 9.652 can be eliminated, and the interval of uncertainty after two functional evaluations is 9.652 ≥ X ≥ 1.

Successive golden-section steps each place one new point in the remaining interval and eliminate the region in which the minimum cannot lie. At the ninth functional evaluation, the last two function values agree to within the specified tolerance, |f(X9) − f(X8)| ≤ tolerance. Since the termination criterion is satisfied, the golden-section search stops at this point. The best answer is given by X* = 6.53 with f(X*) = −169.8.
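A minimal sketch of the golden-section loop (code not from the original text), applied to the worked example:

```python
import math

def golden_section_minimize(f, a, b, tol=1e-5):
    """Minimize a unimodal f on [a, b] by golden-section search."""
    r = (math.sqrt(5) - 1) / 2               # 0.6180..., the golden-section ratio
    x1, x2 = b - r * (b - a), a + r * (b - a)
    f1, f2 = f(x1), f(x2)
    while (b - a) > tol:
        if f1 < f2:                           # minimum cannot lie right of x2
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)
            f1 = f(x1)
        else:                                 # minimum cannot lie left of x1
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = f(x2)
    return (a + b) / 2

f = lambda x: x**4 - 15 * x**3 + 72 * x**2 - 135 * x
x_star = golden_section_minimize(f, 1.0, 15.0)
```

The loop reuses one of the two interior points at every step, so each iteration costs a single new function evaluation and shrinks the interval by the constant factor 0.618; no resolution factor or evaluation count has to be fixed in advance.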

REGULA-FALSI METHOD

The convergence of the bisection method is very slow. It depends only on the choice of end points of the interval [a, b]; the function f(x) does not have any role in finding the point c (which is just the mid-point of a and b). It is used only to decide the next smaller interval, [a, c] or [c, b]. A better approximation to c can be obtained by taking the straight line L joining the points (a, f(a)) and (b, f(b)) and intersecting it with the x-axis. To obtain the value of c we can equate the two expressions for the slope m of the line L:

m = (f(b) − f(a)) / (b − a) = (0 − f(b)) / (c − b)

Solving for c gives

c = b − f(b) (b − a) / (f(b) − f(a))

Selecting c by the above expression is called the Regula-Falsi method, or false position method. The next smaller interval which brackets the root is obtained by checking the sign of f(a) f(c): if f(a) f(c) < 0 then b = c; if it is > 0 then a = c; if it is = 0 then c is the root.

False Position Scheme. Given a function f(x) continuous on an interval [a, b] such that f(a) f(b) < 0:

do
    c = (a f(b) − b f(a)) / (f(b) − f(a))
    if f(a) f(c) < 0 then b = c
    else a = c
while (none of the convergence criteria C1, C2 or C3 is satisfied)

The false position method is again bound to converge because it brackets the root in the whole of its convergence process.

Numerical Example: Find a root of 3x + sin(x) − exp(x) = 0. The graph of this equation is given in the figure; from it, it is clear that there is a root between 0 and 0.5 and another root between 1.5 and 2.0. Now let us consider the function f(x) in the interval [0, 0.5], where f(0) f(0.5) is less than zero, and use the regula-falsi scheme to obtain the zero of f(x) = 0.

Starting from a = 0 and b = 0.5, the successive false-position points are c = 0.376, 0.362, 0.360; the sign test f(a) f(c) keeps the root bracketed at each step. So one of the roots of 3x + sin(x) − exp(x) = 0 is approximately 0.36.

Example 2: Find the root of x cos(x/(x − 2)) = 0 for a = 1 and b = 1.5. Six iterations of the scheme reduce |f(a) f(c)| to about 9.8E-9. So one of the roots of x cos(x/(x − 2)) = 0 is approximately 1.222.

Example 3: Find the root of x^2 = (exp(−2x) − 1)/x for a = −0.5 and b = 0.5. After eight iterations, one of the roots of x^2 = (exp(−2x) − 1)/x is approximately 8.83E-4.

Example 4: Find the root of exp(x^2 − 1) + 10 sin(2x) − 5 = 0 for a = 0 and b = 0.5. Four iterations of the scheme give c = 0.24, so one of the roots of exp(x^2 − 1) + 10 sin(2x) − 5 = 0 is approximately 0.24.

Example 5: Find the root of exp(x) − 3x^2 = 0 for a = 3 and b = 4. Six iterations of the scheme give c = 3.733, so one of the roots of exp(x) − 3x^2 = 0 is approximately 3.733.

Example 6: Find the root of tan(x) − x − 1 = 0 for a = 0.5 and b = 1.5. Convergence here is slow (34 iterations) because the end point b = 1.5 remains fixed while a creeps up through 0.576, 0.644, 0.705, 0.76, 0.808, 0.851, ... So one of the roots of tan(x) − x − 1 = 0 is approximately 1.129.

Example 7: Find the root of sin(2x) − exp(x − 1) = 0 for a = 0 and b = 0.5. The end point a = 0 remains fixed while c converges through 0.305, 0.254, 0.246 to 0.245. So one of the roots of sin(2x) − exp(x − 1) = 0 is approximately 0.245.

Example 8: Find the root between (2, 3) of x^3 − 2x − 5 = 0 by using the regula-falsi method.
Given f(x) = x^3 − 2x − 5,
f(2) = 2^3 − 2(2) − 5 = −1 (negative)
f(3) = 3^3 − 2(3) − 5 = 16 (positive)
Let us take a = 2 and b = 3.

The first approximation to the root is

x1 = (a f(b) − b f(a)) / (f(b) − f(a)) = (2 x 16 − 3 x (−1)) / (16 − (−1)) = (32 + 3)/17 = 35/17 = 2.058

Now f(2.058) = 2.058^3 − 2(2.058) − 5 = 8.716 − 4.116 − 5 = −0.4, so the root lies between 2.058 and 3. Taking a = 2.058 and b = 3, the second approximation to the root is

x2 = (2.058 x 16 − 3 x (−0.4)) / (16 − (−0.4)) = 34.128/16.4 = 2.081

Now f(2.081) = 2.081^3 − 2(2.081) − 5 = −0.15, so the root lies between 2.081 and 3. Taking a = 2.081 and b = 3, the third approximation to the root is

x3 = (2.081 x 16 − 3 x (−0.15)) / (16 − (−0.15)) = 33.746/16.15 = 2.089

Now f(2.089) = −0.062, so the root lies between 2.089 and 3. Taking a = 2.089 and b = 3, the fourth approximation is

x4 = (2.089 x 16 − 3 x (−0.062)) / (16 − (−0.062)) = 2.093

The required root is approximately 2.093.

Practice Problems
- Find the approximate value of the real root of x log10 x = 1.2 by the regula-falsi method.
- Find the root of x e^x = 3 by the regula-falsi method, correct to three decimal places.
- Using the regula-falsi method, find a root which lies between 1 and 2 of f(x) = x^3 + 2x^2 + 10x − 20 (Leonardo's Equation).
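The false-position scheme above can be sketched as follows (code not from the original text); run on Example 8 it reproduces the iterates 2.059, 2.081, ... converging to the root of x^3 − 2x − 5:

```python
def regula_falsi(f, a, b, tol=1e-6, max_iter=100):
    """Root of f in [a, b] by false position; f(a) and f(b) must differ in sign."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("root is not bracketed by [a, b]")
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # x-intercept of the chord through (a,fa),(b,fb)
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:                      # root lies in [a, c]
            b, fb = c, fc
        else:                                # root lies in [c, b]
            a, fa = c, fc
    return c

root = regula_falsi(lambda x: x**3 - 2 * x - 5, 2.0, 3.0)
```

Note that, as in Examples 6 and 7, one end point typically stays fixed, so convergence is linear rather than superlinear; the bracket, however, always contains the root.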

Newton-Raphson Method

The Newton-Raphson method considers a linear approximation to the first derivative of the function, obtained from the Taylor series expansion; this expression is then equated to zero to find the next iterate. If the current point at iteration t is X(t), the point in the next iteration is governed by

X(t+1) = X(t) − f'(X(t)) / f''(X(t))

The iteration process given by the above equation is assumed to have converged when the derivative, f'(X(t+1)), is close to zero.

That is, the process terminates when |f'(X(t+1))| ≤ ε, where ε is a small quantity. The following figure depicts the convergence process in the Newton-Raphson method, where x* is the true solution.

Example: Find the minimum of the function using the Newton-Raphson method with the starting point x = 0.1. Use ε = 0.01 in equation (3) for checking the convergence.

Solution: The first and second derivatives of the function f(x) are given by (1).
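The example's function is only referenced by its (unreproduced) derivatives, so here is a minimal sketch of the update X(t+1) = X(t) − f'(X(t))/f''(X(t)) on a hypothetical test function f(x) = x^2 + e^(−x) (an assumption for illustration only):

```python
import math

def newton_raphson_minimize(df, d2f, x0, eps=0.01, max_iter=50):
    """Minimize f given its first and second derivatives:
    iterate x <- x - f'(x)/f''(x) until |f'(x)| <= eps."""
    x = x0
    for _ in range(max_iter):
        g = df(x)
        if abs(g) <= eps:          # convergence test on the derivative
            break
        x -= g / d2f(x)
    return x

# hypothetical f(x) = x**2 + exp(-x): f'(x) = 2x - exp(-x), f''(x) = 2 + exp(-x)
x_star = newton_raphson_minimize(lambda x: 2 * x - math.exp(-x),
                                 lambda x: 2 + math.exp(-x),
                                 x0=0.1, eps=1e-6)
```

Starting from x = 0.1, the iterates converge in a few steps to x ≈ 0.3517, where the first derivative vanishes; unlike the search methods above, Newton-Raphson needs both derivatives but converges quadratically near the minimum.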
