
ASSIGNMENT

OPTIMIZATION TECHNIQUES
Submitted to:- R.K. Shahu (Asstt. Prof.)

Submitted by:-____________________________

Entry Number:-___________________________

M. Tech (Manufacturing & Automation), Sem-I


Q1. What is dynamic programming in mathematical optimization?
Dynamic programming usually refers to simplifying a decision by breaking it down into a
sequence of decision steps over time. This is done by defining a sequence of value
functions V1, V2, ..., Vn, each taking an argument y representing the state of the system at
times i from 1 to n. The definition of Vn(y) is the value obtained in state y at the last time n.
The values Vi at earlier times i = n-1, n-2, ..., 2, 1 can be found by working backwards, using
a recursive relationship called the Bellman equation. For i = 2, ..., n, Vi-1 at any state y is
calculated from Vi by maximizing a simple function (usually the sum) of the gain from
decision i-1 and the function Vi at the new state of the system if this decision is made. Since
Vi has already been calculated for the needed states, the above operation yields Vi-1 for
those states. Finally, V1 at the initial state of the system is the value of the optimal solution.
The optimal values of the decision variables can be recovered, one by one, by tracing back
the calculations already performed.
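To make the backward recursion concrete, here is a minimal Python sketch of the procedure
described above. The state space, the feasible decisions, the gain function and the transition
function are all hypothetical placeholders, not fixed by the text; any finite-horizon problem
supplying them could be plugged in.

```python
# Minimal sketch of backward induction via the Bellman equation.
# Hypothetical inputs: `states` is the finite set of system states,
# `actions(y)` lists the feasible decisions in state y, `gain(y, a, i)` is
# the reward of decision a at time i, `step(y, a)` gives the next state,
# and `terminal_value(y)` defines V_n(y).

def backward_induction(states, actions, gain, step, terminal_value, n):
    # V[i][y] holds the value function V_i(y); start from the last stage n.
    V = {n: {y: terminal_value(y) for y in states}}
    policy = {}
    for i in range(n - 1, 0, -1):        # work backwards: i = n-1, ..., 1
        V[i] = {}
        policy[i] = {}
        for y in states:
            # Bellman equation: V_i(y) = max_a [ gain + V_{i+1}(next state) ]
            best_a, best_v = max(
                ((a, gain(y, a, i) + V[i + 1][step(y, a)]) for a in actions(y)),
                key=lambda t: t[1],
            )
            V[i][y] = best_v
            policy[i][y] = best_a        # remember the decision for traceback
    return V, policy
```

The returned `policy` table is what the traceback mentioned above walks through, recovering
the optimal decision at each stage starting from the initial state.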

Dynamic programming in computer programming

There are two key attributes that a problem must have in order for dynamic programming
to be applicable: optimal substructure and overlapping subproblems which are only slightly
smaller. When the subproblems are, say, half the size of the original problem, the strategy
is called "divide and conquer" rather than "dynamic programming". This is why mergesort,
quicksort, and finding all matches of a regular expression are not classified as dynamic
programming problems.

Optimal substructure means that the solution to a given optimization problem can be
obtained by the combination of optimal solutions to its subproblems. Consequently, the first
step towards devising a dynamic programming solution is to check whether the problem
exhibits such optimal substructure. Such optimal substructures are usually described by
means of recursion. For example, given a graph G=(V,E), the shortest path p from a vertex u
to a vertex v exhibits optimal substructure: take any intermediate vertex w on this shortest
path p. If p is truly the shortest path, then the path p1 from u to w and p2 from w to v are
indeed the shortest paths between the corresponding vertices (by the simple cut-and-paste
argument described in CLRS). Hence, one can easily formulate the solution for finding
shortest paths in a recursive manner, which is what the Bellman-Ford algorithm does.
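As an illustration of how this recursive substructure is exploited, the following is a brief
sketch of the Bellman-Ford relaxation loop. The edge-list representation (a list of
(u, v, weight) tuples) is an assumed input format chosen for simplicity.

```python
# Sketch of Bellman-Ford: relax every edge |V|-1 times, relying on the
# optimal substructure of shortest paths. `edges` is a list of
# (u, v, weight) tuples and `n` is the number of vertices.

def bellman_ford(n, edges, source):
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):              # a shortest path uses at most n-1 edges
        for u, v, w in edges:
            if dist[u] + w < dist[v]:   # relax: routing to v through u is shorter
                dist[v] = dist[u] + w
    return dist
```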

Overlapping subproblems means that the space of subproblems must be small; that is, any
recursive algorithm solving the problem should solve the same subproblems over and over,
rather than generating new subproblems. For example, consider the recursive formulation
for generating the Fibonacci series: Fi = Fi-1 + Fi-2, with base case F1 = F2 = 1. Then F43 = F42 + F41,
and F42 = F41 + F40. Now F41 is being solved in the recursive subtrees of both F43 and F42.
Even though the total number of subproblems is actually small (only 43 of them), we end up
solving the same problems over and over if we adopt a naive recursive solution such as this.
Dynamic programming takes account of this fact and solves each subproblem only once.
Note that the subproblems must be only 'slightly' smaller (typically a constant additive
factor smaller than the larger problem); when they are a multiplicative factor smaller, the
problem is no longer classified as dynamic programming.
Figure 2. The subproblem graph for the Fibonacci sequence. The fact that it is not a tree
indicates overlapping subproblems.

This can be achieved in either of two ways:

 Top-down approach: This is the direct fall-out of the recursive formulation of any
problem. If the solution to any problem can be formulated recursively using the
solution to its subproblems, and if its subproblems are overlapping, then one can
easily memoize or store the solutions to the subproblems in a table. Whenever we
attempt to solve a new subproblem, we first check the table to see if it is already
solved. If a solution has been recorded, we can use it directly, otherwise we solve
the subproblem and add its solution to the table.

 Bottom-up approach: This is the more interesting case. Once we formulate the
solution to a problem recursively in terms of its subproblems, we can try
reformulating the problem in a bottom-up fashion: try solving the subproblems first
and use their solutions to build on and arrive at solutions to bigger subproblems.
This is also usually done in tabular form, by iteratively generating solutions to bigger
and bigger subproblems using the solutions to small subproblems. For example, if
we already know the values of F41 and F40, we can directly calculate the value of
F42 (both approaches are sketched below).
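The following sketch shows both approaches applied to the Fibonacci example above, using
the base case F1 = F2 = 1 from the text. The memoized version follows the top-down route;
the tabulated version builds solutions bottom-up.

```python
from functools import lru_cache

# Top-down: memoize the naive recursion so each F_i is computed only once.
@lru_cache(maxsize=None)
def fib_top_down(i):
    if i <= 2:               # base case from the text: F1 = F2 = 1
        return 1
    return fib_top_down(i - 1) + fib_top_down(i - 2)

# Bottom-up: tabulate from the small subproblems upward.
def fib_bottom_up(n):
    table = [0, 1, 1]        # table[i] holds F_i (index 0 unused)
    for i in range(3, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]

assert fib_top_down(43) == fib_bottom_up(43) == 433494437
```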

Uses of dynamic programming

1. Maximum Value Contiguous Subsequence. Given a sequence of n real numbers
A(1) ... A(n), determine a contiguous subsequence A(i) ... A(j) for which the sum of
elements in the subsequence is maximized.

2. Making Change. You are given n types of coin denominations of values v(1) < v(2)
< ... < v(n) (all integers). Assume v(1) = 1, so you can always make change for any
amount of money C. Give an algorithm which makes change for an amount of money
C with as few coins as possible (a sketch of this one appears after the list).

3. Longest Increasing Subsequence. Given a sequence of n real numbers A(1) ... A(n),
determine a subsequence (not necessarily contiguous) of maximum length in which
the values in the subsequence form a strictly increasing sequence.

4. Box Stacking. You are given a set of n types of rectangular 3-D boxes, where the i^th
box has height h(i), width w(i) and depth d(i) (all real numbers). You want to create a
stack of boxes which is as tall as possible, but you can only stack a box on top of
another box if the dimensions of the 2-D base of the lower box are each strictly
larger than those of the 2-D base of the higher box. Of course, you can rotate a box
so that any side functions as its base. It is also allowable to use multiple instances of
the same type of box.

5. Building Bridges. Consider a 2-D map with a horizontal river passing through its
center. There are n cities on the southern bank with x-coordinates a(1) ... a(n) and n
cities on the northern bank with x-coordinates b(1) ... b(n). You want to connect as
many north-south pairs of cities as possible with bridges such that no two bridges
cross. When connecting cities, you can only connect city i on the northern bank to
city i on the southern bank.

6. Integer Knapsack Problem (Duplicate Items Forbidden). This is the integer knapsack
problem in which it is forbidden to use more than one instance of each type of item
(the 0/1 knapsack).

7. Balanced Partition. You have a set of n integers each in the range 0 ... K. Partition
these integers into two subsets such that you minimize |S1 - S2|, where S1 and S2
denote the sums of the elements in each of the two subsets.

8. Edit Distance. Given two text strings A of length n and B of length m, you want to
transform A into B with a minimum number of operations of the following types:
delete a character from A, insert a character into A, or change some character in A
into a new character. The minimal number of such operations required to transform
A into B is called the edit distance between A and B.

9. Counting Boolean Parenthesizations. You are given a boolean expression consisting
of a string of the symbols 'true', 'false', 'and', 'or', and 'xor'. Count the number of
ways to parenthesize the expression such that it will evaluate to true. For example,
there is only 1 way to parenthesize 'true and false xor true' such that it evaluates to
true.

10. Optimal Strategy for a Game. Consider a row of n coins of values v(1) ... v(n), where n
is even. We play a game against an opponent by alternating turns. In each turn, a
player selects either the first or last coin from the row, removes it from the row
permanently, and receives the value of the coin. Determine the maximum possible
amount of money we can definitely win if we move first.

11. Two-Person Traversal of a Sequence of Cities. You are given an ordered sequence of
n cities, and the distances between every pair of cities. You must partition the cities
into two subsequences (not necessarily contiguous) such that person A visits all cities
in the first subsequence (in order), person B visits all cities in the second
subsequence (in order), and such that the sum of the total distances travelled by A
and B is minimized. Assume that person A and person B start initially at the first city
in their respective subsequences.
12. Bin Packing (Simplified Version). You have n1 items of size s1, n2 items of size s2, and
n3 items of size s3. You'd like to pack all of these items into bins each of capacity C,
such that the total number of bins used is minimized.
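As promised under item 2, here is a minimal bottom-up sketch of the Making Change
problem. The denominations used in the usage line are hypothetical; the recurrence simply
tries every coin as the last one used for each amount.

```python
# Sketch of Making Change (item 2 above) as a bottom-up DP.
# min_coins[c] = fewest coins needed to make amount c; since v(1) = 1,
# every amount from 1 to C is reachable.

def make_change(values, C):
    INF = float("inf")
    min_coins = [0] + [INF] * C
    for c in range(1, C + 1):
        # try each denomination as the last coin used for amount c
        min_coins[c] = min(min_coins[c - v] + 1 for v in values if v <= c)
    return min_coins[C]

print(make_change([1, 5, 10, 25], 63))   # -> 6 (2x25 + 1x10 + 3x1)
```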

Q2. What do you understand by graphical optimization?


Graphics optimization is the process of creating smaller, more efficient image file sizes in
online applications that will consume less bandwidth and reduce the time it takes to load
and display graphics. For most, this process simply has meant choosing to “Save/Export for
Web” in their various software applications, or increasing the compression levels of any of
their .jpg imagery. However, true graphics optimization goes far beyond these very basic
initial steps and benefits both users and businesses.
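As a concrete illustration of the basic "Save for Web" step described above, the following
sketch uses the Pillow library in Python to re-encode an image at a moderate JPEG quality.
The file names and the 1200-pixel display cap are arbitrary placeholders.

```python
from PIL import Image

# Re-encode an image for the web: resize to a sensible display size and
# save as JPEG with moderate quality. File names are placeholders.
img = Image.open("photo.png").convert("RGB")
img.thumbnail((1200, 1200))      # shrink so the longest side is at most 1200 px
img.save("photo_web.jpg", "JPEG", quality=75, optimize=True)
```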

Financial Benefits: Smaller image size means less bandwidth needed to process information.
For many of the smaller companies who are often paying by the Megabyte or Gigabyte for
their site’s hosting services, those costs can add up quickly. Smaller imagery also has the
added benefit of requiring less storage space for the same amount of information —
resulting in less server space being consumed. And, most importantly, faster page and
image loads will almost certainly lead to a more positive experience for website visitors — a
more positive experience generally leads to repeat visits and higher sales.

Environmental Benefits: Less server space needed to store and process information and less
time needed to access that same information can result in a significant decrease in
electricity consumption. Faster processing of information can also work to extend the life of
older equipment, thereby reducing the amount of hazardous waste being introduced into
the environment.

Societal Benefits: Many individuals in developed countries still do not have access to high-
speed Internet connections, with even fewer having access in still-developing nations. Non-
optimized graphics that might go unnoticed on a T1 connection can add 30 to 60 seconds or
more to page loading times on a dial-up. Webmasters who employ graphics optimization
techniques will help to better serve populations which cannot afford or do not have access
to broadband connections.

Q3. What do you understand by concurrent design and how is it related to design
editing?
The concurrent engineering method is still a relatively new design management system, but
has had the opportunity to mature in recent years to become a well-defined systems
approach towards optimizing engineering design cycles. [1] Because of this, concurrent
engineering has gathered much attention from industry and has been implemented in a
multitude of companies, organizations and universities, most notably in the aerospace
industry.

The basic premise of concurrent engineering revolves around two concepts. The first is the
idea that all elements of a product's life-cycle, from functionality, producibility, assembly,
testability, maintenance and environmental impact through to final disposal and recycling,
should be taken into careful consideration in the early design phases. [2]

The second concept is that the preceding design activities should all be occurring at the
same time, or concurrently. The overall goal is that the concurrent nature of these
processes significantly increases productivity and product quality, aspects that are obviously
important in today's fast-paced market. [3] This philosophy is key to the success of concurrent
engineering because it allows for errors and redesigns to be discovered early in the design
process, when the project is still in a more abstract and possibly digital realm. By locating
and fixing these issues early, the design team can avoid what often become costly errors as
the project moves to more complicated computational models and eventually into the
physical realm. [4]

As mentioned above, part of the design process is to ensure that the entire product's life
cycle is taken into consideration. This includes establishing user requirements, propagating
early conceptual designs, running computational models, creating physical prototypes and
eventually manufacturing the product. Included in the process is taking into full account
funding, work force capability and time, subject areas that are extremely important factors
in the success of a concurrent engineering system. As before, the extensive use of forward
planning allows for unforeseen design problems to be caught early so that the basic
conceptual design can be altered before actual physical production commences. The
amount of money that can be saved by doing this correctly has proven to be significant and
is generally the deciding factor for companies moving to a concurrent design framework. [3]

One of the most important reasons for the huge success of concurrent engineering is that by
definition it redefines the basic design process structure that was commonplace for
decades. This was a structure based on a sequential design flow, sometimes called the
‘Waterfall Model’.[5][6] Concurrent engineering significantly modifies this outdated method
and instead opts to use what has been termed an iterative or integrated development
method.[7] The difference between these two methods is that the ‘Waterfall’ method moves
in a completely linear fashion by starting with user requirements and sequentially moving
forward to design, implementation and additional steps until you have a finished product.
The problem here is that the design system does not look backwards or forwards from the
step it is on to fix possible problems. In the case that something does go wrong, the design
usually must be scrapped or heavily altered. On the other hand, the iterative design process
is more cyclic in that, as mentioned before, all aspects of the life cycle of the product are
taken into account, allowing for a more evolutionary approach to design. [8] The difference
between the two design processes can be seen graphically in Figure 1.
Fig. 1 – “Waterfall” or Sequential Development Method vs. Iterative Development Method

A significant part of this new method is that the individual engineer is given much more say
in the overall design process due to the collaborative nature of concurrent engineering.
Giving the designer ownership plays a large role in the productivity of the employee and
quality of the product that is being produced. This stems from the fact that people given a
sense of gratification and ownership over their work tend to work harder and design a more
robust product, as opposed to an employee that is assigned a task with little say in the
general process.[4]

By making this sweeping change, many organizational and managerial challenges arise that
must be taken into special consideration when companies and organizations move towards
such a system. From this standpoint, issues such as the implementation of early design
reviews, enabling communication between engineers, software compatibility and opening
the design process up to allow for concurrency create problems of their own.[9] Similarly,
there must be a strong basis for teamwork since the overall success of the method relies on
the ability of engineers to effectively work together. Often this can be a difficult obstacle,
but it is something that must be tackled early to avoid later problems.[10]

Similarly, now more than ever, software is playing a huge role in the engineering design
process. From CAD packages to finite element analysis tools, the ability to quickly and
easily modify digital models to predict future design problems is hugely important no matter
what design process you are using. In concurrent engineering, however, software's role
becomes much more significant, since the collaborative nature of the method requires that
each engineer's design models be able to 'talk' to each other in order to successfully
utilize the concepts of concurrent engineering.
Q5. How would you explain mixture design for composite materials?
Mixture design selection-

Use a mixture design when your response changes only as a function of the proportions of
the component ingredients. For example, the flavor of lemonade depends on the proportion
of lemons to water, not the amount. For an in-depth treatment of this subject we
recommend Cornell's Experiments with Mixtures.

Design-Expert handles mixture problems with 2 to 24 components. The following table
shows which designs you can choose, depending on the number of components and the
order of the polynomial. In most cases, fewer components can be accommodated when you
ask for a cubic (or special cubic) model.

Design              Number of components (q)
Simplex lattice     2 ≤ q ≤ 24 (12 for cubic)
Simplex centroid    3 ≤ q ≤ 8
Screening           6 ≤ q ≤ 24 (12 for cubic)
D-optimal           2 ≤ q ≤ 24 (12 for cubic)
Distance-based      2 ≤ q ≤ 24 (12 for cubic)
Modified distance   2 ≤ q ≤ 24 (12 for cubic)
User-defined        2 ≤ q ≤ 24 (12 for cubic)

We will discuss the design selection criteria separately for each general type of design.

CONSTRAINTS-

You will often run up against constraints on mixture components. For example, you may
need some minimal amount of sweetener to make your lemonade drinkable. The design
builder feature allows you to enter names for each component, along with lower and upper
limits for each, and/or multiple linear constraints. If you work in percentage units then your
lower and upper limits, as well as the total, will be expressed in percentages. The program
permits you to enter your limits and total in whatever scale is convenient for you.
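As a small illustration of such constraints, the sketch below checks a candidate blend against
lower and upper limits and the fixed total. The components and limits are hypothetical
numbers for the lemonade example, not values from any actual design.

```python
# Sketch of checking a candidate blend against mixture constraints.
# Components, limits and total are hypothetical (a lemonade blend in
# percentage units); a valid blend must respect each range and sum to 100.

limits = {"lemon": (10, 40), "sweetener": (5, 20), "water": (50, 85)}
TOTAL = 100.0

def is_feasible(blend):
    in_range = all(lo <= blend[c] <= hi for c, (lo, hi) in limits.items())
    return in_range and abs(sum(blend.values()) - TOTAL) < 1e-9

print(is_feasible({"lemon": 25, "sweetener": 10, "water": 65}))  # True
```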
Factorial design selection

Design expert offer’s six design types on the factorial tab:

1. Two-level factorial (2-15 factors)
2. Irregular fractions (4-9 factors)
3. General factorial (1-12 factors)
4. D-optimal design (an option to the full general factorial) (2-14 factors)
5. Plackett-Burman (11, 19, 23, 27 or 31 factors)
6. Taguchi OA (orthogonal arrays for up to 63 factors)

STANDARD TWO-LEVEL FACTORIALS

The two-level factorial selection offers standard two-level full factorial and fractional
factorial designs. You can investigate from 2 to 15 factors in 4, 8, 16, 32, 64, 128 or 256
runs. This collection of designs provides an effective means for screening through many
factors to find the critical few.

Full two-level factorial designs may be run for up to eight factors. These designs permit
estimation of all main effects and all interaction effects (except those confounded with
blocking). Design-Expert offers an option to completely replicate these designs up to 100
times. (Fractional factorials can also be replicated, but it would not make sense to do so.)
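To illustrate what a standard two-level design looks like, here is a short sketch that
generates the coded (-1/+1) design matrix of a full factorial, with optional replication. This
is only a generic construction, not Design-Expert's own output.

```python
from itertools import product

# Sketch: build the design matrix of a two-level full factorial in coded
# units (-1/+1). For k factors this yields 2**k runs; replication simply
# repeats the run list.

def full_factorial(k, replicates=1):
    runs = list(product([-1, +1], repeat=k))
    return runs * replicates

for run in full_factorial(3):
    print(run)        # 8 runs: (-1, -1, -1) ... (+1, +1, +1)
```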

You will find the resolution of each fractional factorial by looking at the colours on the
two-level factorial design display. They are set up like a stop light.

Red: Resolution III designs. Stop and think. One or more main effects will be aliased with
at least one two-factor interaction. Resolution III designs can be misleading when
two-factor interactions significantly affect the response.

Yellow: Resolution IV designs. Proceed with caution. One or more two-factor interactions
will be aliased with at least one other two-factor interaction. The main effects will be clear
of two-factor interactions, so resolution IV designs can be a good choice for a screening
design.

Green: Resolution V designs. Go ahead. Assuming that no three-factor (and higher)
interactions occur, all the main effects and two-factor interactions can be estimated.
Resolution V designs work very well for screening, and they are more run-efficient than
full factorials.

BLOCKING:- Design-Expert provides various options for blocking standard two-level
factorials, depending on how many runs you choose to perform and the number of factors.
For example, in a full factorial experiment with 16 runs, you may choose to carry out the
experiment in 1, 2, 4 or 8 blocks. Keep the default selection of 1 block if you want no
blocking. A selection of two blocks might be particularly helpful if, for some reason, you
must do half the runs on one day and the other half on another day. In this case any
day-to-day variation will be removed by blocking.

CENTER POINTS

A useful extension of two-level factorial and fractional factorial designs incorporates
center points into the factorial structure. If you have at least one numeric factor, you can
choose to add center points to your design. Data from the center points provide:

 Estimates of pure error

 Estimates of curvature

If there is curvature of the response surface in the region of the design, the center points
will be either higher or lower than predicted by the factorial points. Curvature of the
surface may indicate that the design is in the region of an optimum.

Design-Expert automatically accounts for the presence of the center points, constructing
the estimate of pure error as well as the test for curvature. For factorial designs, it uses the
average of the center-point values, rather than the polynomial model, to predict the
center-point response. This excludes curvature from the lack-of-fit test and the residual,
thus providing more information about the fit of the model.

RESPONSE SURFACE DESIGN SELECTION:-

Response surface methodology quantifies relationships among one or more measured
responses and a number of input factors. It provides sophisticated maps from which you
can identify peak performance. Design-Expert offers many RSM designs. The options
depend on the number of design factors, which can range from 1 to 10. The designs offered
by the software under the response surface tab can be seen below. We highly recommend
the central composite design (CCD) or the Box-Behnken design. However, if you require
multilinear constraints for the factors, we suggest you use the D-optimal design.
Design                                         Factors
One factor                                     1
Pentagonal, hexagonal                          2
Three-level factorial                          2-4
Central composite (small composite)            2-10 (3-10)
Hybrid                                         3, 4, 6, 7
Box-Behnken                                    3, 4, 5, 6, 7, 9, 10
D-optimal, distance-based, modified distance   2-10
User-defined                                   2-10

Central composite design:- The central composite design (CCD) is the most frequently used
RSM design. Refer to the response surface methods tutorials for a detailed example of how
to set up this type of design with Design-Expert. A CCD can be broken down into three
parts:

1. Two-level full or fractional design
2. Axial points (outside the core)
3. Center points

The two-level factorial part of the design consists of all possible combinations of the plus
and minus one levels of the factors. The axial points, often represented by stars, emanate
from the center point, with all but one of the factors set to zero. The coded distance of the
axial points from the center is represented as plus or minus alpha.

It is desirable to set alpha at a level that creates rotatability in the design. Designs with this
property, such as the two-factor CCD with an alpha value of 1.414, exhibit circular contours
on the standard error plot.
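The three parts of a CCD are easy to generate directly. The sketch below builds a two-factor
rotatable CCD in coded units with alpha = sqrt(2) ≈ 1.414, as mentioned above; the choice of
five center points is an arbitrary placeholder.

```python
import math

# Sketch: generate the three parts of a two-factor rotatable CCD in coded
# units. alpha = sqrt(2) gives rotatability for two factors; the number of
# center points (here 5) is an arbitrary placeholder.

def ccd_two_factor(n_center=5):
    alpha = math.sqrt(2)
    factorial = [(-1, -1), (-1, +1), (+1, -1), (+1, +1)]    # 2^2 core
    axial = [(-alpha, 0), (+alpha, 0), (0, -alpha), (0, +alpha)]
    center = [(0.0, 0.0)] * n_center
    return factorial + axial + center

print(len(ccd_two_factor()))   # 13 runs in total
```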

Central composite design options:- Full central composite designs include factorial points
from a full factorial (2^k), axial points and center points. Design-Expert makes the full CCD
available for up to 7 factors, above which the number of runs becomes excessive.

For RSM experiments with 5 or more factors, Design-Expert offers one or more options for
resolution V factorial cores for the CCD. (A resolution V factorial allows estimation of all
main effects and two-factor interactions, sufficient for a second-order model.) You will find
these designs to be much more efficient than full CCDs with little or no loss of information.
