
4. Why is multiobjective optimization considered a better approach than single-objective optimization in dealing with real-world situations?

A) Single vs. Multiobjective Optimisation

Many real-world decision-making problems need to achieve several objectives:
minimise risks, maximise reliability, minimise deviations from desired levels,
minimise cost, etc. The main goal of single-objective (SO) optimisation is to find
the “best” solution, which corresponds to the minimum or maximum value of a
single objective function that lumps all the different objectives into one. This type of
optimisation is useful as a tool which should provide decision makers with insights
into the nature of the problem, but it usually cannot provide a set of alternative
solutions that trade different objectives against each other. In contrast, in
multiobjective optimisation with conflicting objectives there is no single optimal
solution. The interaction among the different objectives gives rise to a set of
compromise solutions, widely known as trade-off, nondominated, noninferior
or Pareto-optimal solutions.
Consideration of multiple objectives promotes more appropriate roles for the
participants in the planning and decision-making processes: the “analyst” or
“modeller”, who generates alternative solutions, and the “decision maker”, who uses
the solutions generated by the analyst to make informed decisions.
Models of a problem will be more realistic if many objectives are considered.
Single-objective optimisation identifies a single optimal alternative; however, it
can still be used within the multiobjective framework. This does not involve
aggregating the different objectives into a single objective function but, for example,
entails setting all except one of them as constraints in the optimisation process.
Multiobjective methodologies are thus more likely to identify a wider range of
alternatives, since they do not need to prespecify the level of one objective at which
a single optimal solution is obtained for another.
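For instance, a minimal sketch of this constraint-based formulation (the well-known ε-constraint method; the symbols below are generic and not taken from the source):

```latex
% epsilon-constraint method (illustrative): optimise one objective, bound the rest
\min_{x \in X} \; f_1(x)
\quad \text{subject to} \quad f_j(x) \le \varepsilon_j, \qquad j = 2, \dots, k
% varying the bounds \varepsilon_j traces out different Pareto-optimal solutions
```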
Thus, single-objective approaches place the burden of decision making squarely on
the shoulders of the analyst, while multiobjective approaches leave the responsibility
of assigning relative values to the objectives with the decision maker.
5. Explain Pareto dominance in the case of multiobjective optimization.

A) In multi-objective optimization, fitness assignment and selection have to take
into account all the different objectives. Among the different fitness assignment
strategies, the most commonly used are those based on aggregation, single
objectives, and Pareto dominance.
The Pareto dominance (or ranking) strategy is based on exploiting the partial
order on the population. Some approaches use the dominance rank, i.e., the
number of individuals by which an individual is dominated, to determine the
fitness values; others make use of the dominance depth, where the population
is divided into several fronts and the depth reflects to which front an
individual belongs.
Pareto ranking alone, however, does not guarantee that the population will
spread uniformly over the set of non-dominated points. It is known that, in the
case of Pareto ranking-based selection schemes, finite populations converge to
a single optimum, a phenomenon known as genetic drift (Goldberg and
Segrest, 1987), implying convergence to small regions of the Pareto-optimal
set.
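To make this concrete, here is a minimal Python sketch (my own illustration, with hypothetical objective vectors) of a minimisation-style dominance check and the dominance-rank computation described above:

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def dominance_rank(population):
    """Dominance rank of each individual: the number of individuals
    by which it is dominated (0 for non-dominated points)."""
    return [sum(dominates(q, p) for q in population) for p in population]

# Hypothetical two-objective population (e.g. cost vs. risk, both minimised)
pop = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
print(dominance_rank(pop))  # -> [0, 0, 0, 1]: only (3,3) is dominated, by (2,2)
```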
7. “Evolutionary algorithms are naturally suitable for solving multiobjective optimization problems.” Why / Why not?

Evolutionary optimization (EO) algorithms use a population-based approach in which more than
one solution participates in each iteration, and a new population of solutions is evolved at every
iteration. The reasons for their popularity are many:
(i) EOs do not require any derivative information
(ii) EOs are relatively simple to implement and
(iii) EOs are flexible and have widespread applicability.

While in solving single-objective optimization problems, particularly in finding a single optimal
solution, the use of a population of solutions may sound redundant, in solving multi-objective
optimization problems an EO procedure is a perfect choice. Multi-objective optimization
problems, by nature, give rise to a set of Pareto-optimal solutions which need further
processing to arrive at a single preferred solution. For the first task it becomes quite a
natural proposition to use an EO, because the use of a population in each iteration helps an EO to
simultaneously find multiple non-dominated solutions, which portray a trade-off among
objectives, in a single simulation run.
It is clear that evolutionary multi-objective optimization (EMO) is not only proving useful in
solving multi-objective optimization problems; it is also helping to solve other kinds of
optimization problems better than they are traditionally solved.

8. Briefly describe the working of Multiobjective Evolutionary Algorithms with suitable diagrams and pseudocode.

A)
The MEGA algorithm
The standard version of the MEGA algorithm operates on one population set, referred to as the
working population. The population of a single generation consists of the individuals subjected
to objective performance calculation and obtained through evolution in a single iteration. Note
that solution chromosomes are represented as graphs.
The first phase of the algorithm applies the objectives to the working population to obtain a list
of scores for each individual. The list of scores may be used for the elimination of solutions with
values outside the range allowed by the corresponding active hard filters. In the next step, the
individuals' lists of scores are subjected to a Pareto-ranking procedure to set the rank of each
individual. Non-dominated individuals are assigned rank order 1.
The algorithm then proceeds to calculate the efficiency score for each individual, which is used
to select a subset of parents via a roulette-like method [18] that favours individuals with high
efficiency score, i.e. low domination rank and high chromosome graph diversity.
The parents are then subjected to graph-specific mutation and crossover according to the
probabilities indicated by the user. The new working population is formed by merging the
original working population and the newly produced offspring. The process iterates as shown in
Figure 10. The execution of the algorithm completes when the user defined termination
conditions, typically a maximum number of iterations, are fulfilled.
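No pseudocode survives in this extract, so the following is a hedged Python sketch of such a loop. It follows the steps described above (evaluate, Pareto-rank, roulette-like selection by efficiency, mutation/crossover, merge), but it operates on simple real-valued chromosomes rather than graphs, approximates the efficiency score by rank alone (ignoring graph diversity), and truncates the merged population to a fixed size; none of these details are the authors' original MEGA code.

```python
import random

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_ranks(scores):
    """Dominance depth: rank 1 = non-dominated front, then peel off fronts."""
    ranks, remaining, front_no = [0] * len(scores), set(range(len(scores))), 0
    while remaining:
        front_no += 1
        front = {i for i in remaining
                 if not any(dominates(scores[j], scores[i])
                            for j in remaining if j != i)}
        for i in front:
            ranks[i] = front_no
        remaining -= front
    return ranks

def mega_like(pop, evaluate, mutate, crossover,
              p_mut=0.2, p_xo=0.8, pop_size=20, max_iter=50):
    """MEGA-style loop sketch: evaluate, Pareto-rank, roulette-select parents
    by efficiency, apply variation, merge parents and offspring, truncate."""
    for _ in range(max_iter):                    # user-defined termination
        scores = [evaluate(ind) for ind in pop]
        ranks = pareto_ranks(scores)
        eff = [1.0 / r for r in ranks]           # high efficiency = low rank
        parents = random.choices(pop, weights=eff, k=pop_size)  # roulette-like
        offspring = []
        for a, b in zip(parents[::2], parents[1::2]):
            child = crossover(a, b) if random.random() < p_xo else list(a)
            if random.random() < p_mut:
                child = mutate(child)
            offspring.append(child)
        merged = pop + offspring                 # merge original + offspring
        merged_ranks = pareto_ranks([evaluate(i) for i in merged])
        pop = [ind for _, ind in sorted(zip(merged_ranks, merged),
                                        key=lambda t: t[0])][:pop_size]
    return pop

# Tiny demo on a two-objective toy problem: minimise (x^2, (x - 2)^2)
evaluate = lambda ind: (ind[0] ** 2, (ind[0] - 2) ** 2)
mutate = lambda ind: [ind[0] + random.gauss(0, 0.1)]
crossover = lambda a, b: [(a[0] + b[0]) / 2]
final = mega_like([[random.uniform(-4, 4)] for _ in range(20)],
                  evaluate, mutate, crossover)
```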
12. Machine Learning is motivated by learning from experience. How is training performed
in Machine Learning? Illustrate it with a suitable example from load forecasting.
A)
Machine Learning is motivated by learning from experience. Tom M. Mitchell provided a
widely quoted, more formal definition: "A computer program is said to learn from experience
E with respect to some class of tasks T and performance measure P if its performance at tasks
in T, as measured by P, improves with experience E".[9] This definition is notable for
defining machine learning in fundamentally operational rather than cognitive terms.
Machine learning explores the study and construction of algorithms that can learn from and
make predictions on data.[3] Such algorithms operate by building a model from example
inputs in order to make data-driven predictions or decisions expressed as outputs.

Machine learning tasks are typically classified into three broad categories:[11]

• Supervised learning: the computer is trained by a "teacher" who provides example inputs and their desired outputs.
• Unsupervised learning: the algorithm is left on its own to find structure in its input.
• Reinforcement learning: the program learns without a teacher explicitly telling it whether it has come close to its goal.

Training ML models: The process of training an ML model involves providing an ML
algorithm (that is, the learning algorithm) with training data to learn from. The term ML
model refers to the model artifact that is created by the training process.

The training data must contain the correct answer, which is known as a target or target
attribute. The learning algorithm finds patterns in the training data that map the input data
attributes to the target, and it outputs an ML model that captures these patterns. After data
collection, the available data is pre-processed so that the ANN can be trained more efficiently.
For a daily consumption forecast one can use, say, a total of 1,581 data points. 50% of those
data, making up the first subset, were used for ANN training and validation. The remaining 50%
of the original data, making up the second subset, were used to evaluate the prediction capacity
of the developed ANN model. The training data from the first subset were used for computing
the ANN weights and biases, and the validation data were used to test the accuracy of the ANN
model.
The first step of ANN training was to take 85% of the data from the first subset mentioned
above for training; during training, the remaining 15% were used to validate the forecast
capacity of the developed ANN model.

To achieve the mentioned model, the ANN approach uses 16 neurons in the input layer, 20 neurons
in the hidden layer and 1 neuron in the output layer. In the learning/training process a neural
network builds an input-output mapping, adjusting the weights and biases at each iteration based
on the minimization of some error measure between the output produced and the desired output.
Thus, learning entails an optimization process. The error minimization process is repeated until
an acceptable criterion for convergence is reached.
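A minimal, hedged illustration of such a 16-20-1 network: the layer sizes and data split follow the text, but the data are synthetic stand-ins and scikit-learn's MLPRegressor replaces whatever tool the original study used.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical data: 1,581 daily samples with 16 input features
# (e.g. lagged loads, temperature, calendar flags) and 1 output (next-day load).
X = rng.normal(size=(1581, 16))
y = X @ rng.normal(size=16) + 0.1 * rng.normal(size=1581)  # synthetic target

# First subset (50%) for training/validation; second subset (50%) for final test
X_sub1, X_test, y_sub1, y_test = train_test_split(X, y, test_size=0.5, shuffle=False)
# 85% of the first subset for training, the remaining 15% for validation
X_tr, X_val, y_tr, y_val = train_test_split(X_sub1, y_sub1, test_size=0.15, shuffle=False)

# 16-20-1 architecture: training minimises the squared error between the
# output produced by the network and the desired output.
ann = MLPRegressor(hidden_layer_sizes=(20,), activation="tanh",
                   solver="adam", max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)

print("validation R^2:", ann.score(X_val, y_val))
print("test R^2:", ann.score(X_test, y_test))
```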
13. What is a Support vector machine? Briefly discuss its application in power systems.
A) A “Support Vector Machine” (SVM) is a supervised machine learning algorithm which can
be used for both classification and regression challenges. However, it is mostly
used in classification problems. In this algorithm, we plot each data item as a point in n-
dimensional space (where n is the number of features) with the value of each feature
being the value of a particular coordinate. Then, we perform classification by finding the
hyper-plane that differentiates the two classes best.
HOW IT WORKS:

Linear SVM
Identify the right hyper-plane: given several candidate hyper-planes (say A, B and C) that
separate the two classes, the right hyper-plane is the one that maximizes the distance between
the nearest data point of either class and the hyper-plane. This distance is called the margin.
Non-linear SVMs:

When no linear hyper-plane can separate the two classes, the classes are separated by functions
which take the low-dimensional input space and transform it into a higher-dimensional space,
i.e. they convert a non-separable problem into a separable problem; these functions are called
kernels. For points in the (x, y) plane, for example, we can add a new feature z = x^2 + y^2.
This is mostly useful in non-linear separation problems: the original input space can always be
mapped to some higher-dimensional feature space where the training set is separable.
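A small sketch of this kernel idea on synthetic concentric-circle data (all names and parameter values are illustrative): appending the feature z = x^2 + y^2 makes the two classes linearly separable.

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import LinearSVC

# Two concentric classes: not linearly separable in the (x, y) plane
X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

# Explicit feature map: append z = x^2 + y^2 as a third coordinate
z = (X ** 2).sum(axis=1, keepdims=True)
X_mapped = np.hstack([X, z])

acc_2d = LinearSVC(max_iter=10000).fit(X, y).score(X, y)               # poor
acc_3d = LinearSVC(max_iter=10000).fit(X_mapped, y).score(X_mapped, y)  # ~1.0
print(f"accuracy in 2-D: {acc_2d:.2f}, with z = x^2 + y^2: {acc_3d:.2f}")
```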
Application in power systems:
SVM has a wide range of applications in power systems:
• Support vector machines for transient stability analysis of large-scale power systems.
• Support vector machines in machine condition monitoring.
• Fault diagnosis of power transformers based on support vector machines with genetic algorithms.
• Support vector machines with simulated annealing algorithms in electricity load forecasting.
• Classification of power system disturbances using support vector machines.
15. Briefly describe power system transient stability. How can Machine Learning help in the
assessment of power system transient stability?

Ans:- Power system transient stability phenomena are associated with the operation of
synchronous machines in parallel, and become important with long-distance heavy power
transmissions. From a physical viewpoint, transient stability may be defined as the ability of a power
system to maintain the machines' synchronous operation when subjected to large disturbances. From
the system theory viewpoint, power system transient stability is a strongly nonlinear, high-
dimensional problem.

Historically, time-domain (T-D) methods started being used before the advent of numerical
computers: calculations on very simplified (and hence of reduced dimensionality) versions of the
system dynamic equations were carried out manually to compute the machines' "swing curves",
i.e. the evolution of the machines' rotor angles with time [Park and Bancker, 1929]. Another way
of tackling transient stability is a graphical method, popularized in the thirties, called the
"equal-area criterion" (EAC). This method deals with a one-machine system connected to an
"infinite" bus and studies its stability by using the concept of energy, which removes the need to
plot swing curves. EAC has been, and still is, considered an extraordinarily powerful tool for
assessing stability margins and limits, for evaluating the influence of various system parameters,
and more generally for providing insight into the physical transient stability phenomena.

Actually, the EAC energy concept is a particular case of Lyapunov's general theory: the
Lyapunov energy-type function applied to a one-machine infinite-bus system.
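To make the swing-curve idea concrete, here is a hedged numerical sketch (my own illustration with assumed per-unit parameters, not from the source) that integrates the classical one-machine infinite-bus swing equation through a fault and applies a crude stability check on the resulting swing curve:

```python
import numpy as np

# Classical one-machine infinite-bus swing equation (per unit):
#   (2H / w_s) * d2(delta)/dt2 = Pm - Pmax * sin(delta)
H, ws = 5.0, 2 * np.pi * 50           # inertia constant [s], synchronous speed (50 Hz)
Pm = 0.8                              # mechanical input power [pu]
Pmax_fault, Pmax_post = 0.4, 1.8      # assumed transfer capacities during/after fault
Pmax_pre = 2.0                        # pre-fault transfer capacity

def swing_curve(t_clear, t_end=3.0, dt=1e-3):
    """Integrate the swing equation through a fault cleared at t_clear [s]."""
    delta = np.arcsin(Pm / Pmax_pre)  # pre-fault equilibrium angle
    omega = 0.0                       # rotor speed deviation from synchronous
    angles = []
    for k in range(int(t_end / dt)):
        Pmax = Pmax_fault if k * dt < t_clear else Pmax_post
        omega += dt * (ws / (2 * H)) * (Pm - Pmax * np.sin(delta))  # Euler step
        delta += dt * omega
        angles.append(delta)
    return np.array(angles)

curve = swing_curve(t_clear=0.15)
stable = curve.max() < np.pi          # crude check: first-swing angle stays bounded
print("max angle %.2f rad -> %s" % (curve.max(), "stable" if stable else "unstable"))
```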

Electric power systems are large-scale non-linear systems in which many kinds of stability problems
arise; one of them is transient stability, defined above as the ability of a power system to maintain
synchronism after severe disturbances. The purpose of transient stability assessment (TSA) is to
determine whether a contingency may drive the power system into angle instability, that is, to predict
whether the power system can maintain synchronous operation of its generators when subjected to
large disturbances such as faults, load loss, capacity loss, etc.

Machine learning methods are promising tools for the transient stability assessment (TSA) of power
systems. A support vector machine (SVM), for example, can be used to assess the transient stability
of the power system after faults occur on transmission lines. One of the conventional methods used
for TSA is time-domain numerical simulation. This method consists of simulating the during-fault
and post-fault behavior of the system for a given disturbance and observing its electromechanical
angular swings during a few seconds. It is usually used to estimate the stability status and to provide
detailed operating information about the faulted system as a benchmark. However, the simulation
method is infeasible for on-line TSA, mainly due to its time-consuming computation.
In transient stability assessment using machine learning methods, the key step is the selection of
system variables. In power system simulation, generators with detailed models provide sufficient
information on power system stability and control. Single-machine attributes can effectively predict
the system state in transient stability assessment, and the attributes of generators with smaller
inertia coefficients can already give satisfying results.
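Purely as an illustrative sketch (the features, labels and parameters below are synthetic assumptions, not a validated TSA feature set), an SVM classifier trained on per-generator attributes extracted from simulations could look like this:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in for simulation results: one row per contingency, with
# per-generator attributes sampled at fault clearing (e.g. rotor angle, speed
# deviation, accelerating power); labels 1 = unstable, 0 = stable.
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.8 * X[:, 1] + 0.3 * rng.normal(size=500) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# RBF-kernel SVM: a common choice for a non-linear TSA decision boundary
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```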
16. What is the function of power system stabilizers? How can ANN and fuzzy logic help
in the design and implementation of power system stabilizers?

The Power System Stabilizer (PSS) is a supplementary excitation controller used to damp generator
electro-mechanical oscillations in order to protect the shaft line and stabilize the grid. System
disturbances due to sudden load changes or faults lead to an imbalance between electrical power
delivered by the generator and the mechanical power being produced by the turbine. The imbalance
results in a shaft torque with an accelerating or decelerating effect on the shaft line. The basic function
of a Power System Stabilizer (PSS) is to damp such power oscillations, by producing electrical torque
using the excitation system. The integration of huge amounts of volatile renewables (solar PV, wind)
means that power system operation is increasingly characterized by a wide range of operating
conditions, random load changes and various unpredictable disturbances. Adaptive PSS systems
address these challenges by providing effective damping over the entire generating range.

FUZZY
A fuzzy logic power system stabilizer (FLPSS) is basically a fuzzy logic controller. The speed
deviation and the active power deviation (or the derivative of the speed deviation) of the
synchronous machine are chosen as the FLPSS inputs. The output control signal is the input to the
automatic voltage regulator (AVR). Each of the FLPSS input and output variables is scaled through
input gains and interpreted into seven linguistic fuzzy subsets varying from Negative Big (NB) to
Positive Big (PB). Each subset is associated with a triangular membership function, forming a set
of seven normalized and symmetrical triangular membership functions for each fuzzy variable. A
symmetrical set of "if-then" fuzzy rules is used to describe the FLPSS behavior.
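For concreteness, a minimal sketch of seven normalized, symmetrical triangular membership functions and the firing of one "if-then" rule (the universe of discourse and the rule itself are illustrative assumptions):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Seven normalized, symmetrical triangular sets on [-1, 1]: NB ... PB
labels = ["NB", "NM", "NS", "ZE", "PS", "PM", "PB"]
peaks = np.linspace(-1.0, 1.0, 7)
width = peaks[1] - peaks[0]
mf = {lab: (p - width, p, p + width) for lab, p in zip(labels, peaks)}

def mu(label, x):
    a, b, c = mf[label]
    return tri(np.asarray(x, dtype=float), a, b, c)

# One illustrative rule with min (AND) inference:
# IF speed deviation is PS AND power deviation is NS THEN output is ZE
dw, dp = 0.25, -0.40                  # scaled crisp inputs
firing = min(mu("PS", dw), mu("NS", dp))
print("rule firing strength:", float(firing))  # -> 0.75
```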

ANN
The trained and optimized GA-ANN-based PSS has been tested on non-linear power system dynamics
under different operating conditions, various disturbances and faults in the power system. Training
input data are presented to the network and the network computes its output; the error between the
system's output and the desired output is calculated and then back-propagated through the whole
network to adjust the network parameters so as to reduce the output error at each step. The GA-ANN-
based PSS has been designed to improve the damping of small-signal oscillations and the transient
stability of a power system with long transmission lines and a generating unit equipped with a
high-gain, fast-acting excitation system.
17. Briefly discuss the control of a synchronous generator when subjected to a
transient three-phase short circuit, using Artificial Intelligence techniques.

ANSWER:- A turn fault in the stator winding of a generator causes a large circulating current to flow
in the shorted turns. If left undetected, turn faults can propagate, leading to phase-ground or phase-
phase faults. Incipient detection of turn faults is essential to avoid hazardous operating conditions
and reduce downtime. At present, synchronous generators are protected against almost all kinds of
faults using differential methods of protection. All kinds of faults develop into inter-winding faults
by damaging the inter-winding insulation, so protecting the synchronous generator from inter-winding
faults effectively represents protection against all kinds of faults. There are different techniques
for analyzing generator incipient/inter-turn faults on the stator side: circuit-based, field-based,
wavelet-based, artificial-intelligence-based, fuzzy-based and artificial-neural-network-based.
Machine performance characteristics that could be monitored to diagnose a stator inter-turn fault
in a generator include line current, terminal voltage, torque pulsations, temperature rise due to
excessive losses, shaft vibrations, air-gap flux and speed ripples. The main objective is therefore
to develop a mathematical model or method based on an online/offline condition monitoring system
that analyzes various conditions and samples of voltage and current (normal and abnormal) for the
protection of generators against incipient/inter-turn faults on the stator side.

Finite element analysis (FEA) techniques are useful to obtain an accurate characterization of the
electromagnetic behavior of magnetic components such as synchronous generators. Once a model of
the magnetic component is defined, the values of its parameters can be calculated using FEA
techniques. Finite element analysis has typically been coupled to circuit simulation using
time-stepped field solutions. This approach can be very accurate, but it involves long simulation
times. Nevertheless, the use of the finite element method in modeling a short-circuit fault in a
synchronous generator provides a significant advance in the degree of accuracy.
21. Briefly discuss the application of fuzzy logic in the control of robotic manipulators
under uncertainty, highlighting its advantages and disadvantages over a conventional
controller.

Traditional AI approaches decompose robotic behaviors into a sense-model-plan-act type of hierarchy.
The sensors provide perceptual information, which is used to build a model of the current
environment. The planner generates a plan that enables the robot to accomplish the given task. A
controller executes the actions commanded by the planner without taking novel sensor information
into account. The utility of this model-based reasoning approach for the design of intelligent robots is
limited due to the uncertainties inherent in unstructured environments, unreliable and incomplete
perceptual information and imprecise actuators. Fuzzy systems employ a mode of approximate
reasoning which enables them to make robust and meaningful decisions under uncertainty and partial
knowledge. Therefore, the difficulties arising from the lack of precise and complete information on
the environment make fuzzy control a suitable method to implement the behavior of a mobile robot.

The robotic behavior is constituted by a set of fuzzy rules, which can be designed without requiring
the complexity and precision of mathematical or logical models. The fuzzy rules describe the relation
between the external and internal states of the robot and the set of possible actions. The
performance might deteriorate if the robot is transferred to an environment that differs from the
prototype. In order to achieve true autonomy a robot depends on the ability to adapt its behavior to
changes in the environment. Training the robot with noisy and incomplete sensor information
enhances the robustness of the behavior. A robot that learns from past experience can exploit
particular regularities of its environment and perceptual apparatus and thereby improve its
competence to achieve its goals. An evolutionary algorithm can in principle design a robust fuzzy
control system that is able to cope with the uncertainty and imprecision inherent to real-world
situations.
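As a toy illustration of behavior encoded as fuzzy rules (the rule base, membership functions and output values below are invented for this sketch, not taken from the source):

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def steer(distance_left, distance_right):
    """Toy fuzzy behavior: steer away from the nearer obstacle.
    Rules: IF left is NEAR THEN turn RIGHT; IF right is NEAR THEN turn LEFT."""
    near_left = tri(distance_left, -0.5, 0.0, 1.0)   # NEAR over [0, 1] m
    near_right = tri(distance_right, -0.5, 0.0, 1.0)
    # Output singletons [deg]: RIGHT = -30, LEFT = +30;
    # weighted-average (centroid-like) defuzzification over rule firings
    w = near_left + near_right
    return 0.0 if w == 0 else (near_left * -30.0 + near_right * 30.0) / w

print(steer(0.3, 2.0))   # obstacle close on the left -> steer right (-30.0)
```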

Advantages and disadvantages over a conventional controller

Tuning a traditional P controller can be a challenge, especially if auto-tuning capabilities to help
find the optimal P constants are desired. However, the theory of P control is very well known and
widely used in many other control applications. On the other hand, fuzzy control seems to accomplish
better control quality with less complexity. A fuzzy logic controller (FLC) performs much better than
the P controller; the response obtained shows that the robot reaches a steady state much earlier with
an FLC than with a P controller.
22. What is load frequency control of an interconnected power system? Discuss the application of
artificial intelligence techniques like ANN and fuzzy logic in the load frequency control of an
interconnected power system.

Ans:- Automatic Generation Control (AGC), or Load Frequency Control, is a very important issue in
power system operation and control for supplying sufficient and reliable electric power of good
quality. AGC is a feedback control system that adjusts generator output power so as to keep the
frequency at its defined value. The interconnected power system is divided into three control areas,
and all generators within an area are assumed to form a coherent group. Load Frequency Control (LFC)
has been used for many years as part of the Automatic Generation Control (AGC) scheme in electric
power systems.

One of the objectives of AGC is to maintain the system frequency at its nominal value (50 Hz). In the
steady-state operation of a power system, an increase or decrease in load demand is first met from the
kinetic energy stored in the generator prime-mover set, which causes the speed, and hence the
frequency, to vary accordingly. Therefore, the control of load frequency is essential for the safe
operation of the power system. A control strategy is needed that not only maintains constancy of
frequency and the desired tie-line power flow but also achieves zero steady-state error and zero
inadvertent interchange.

Artificial intelligence controllers such as fuzzy and neural control approaches are more suitable in
this respect. Fuzzy load frequency control, applied together with PI controllers to a three-area
interconnected hydro-thermal reheat power system, has been used on load frequency control problems
with rather promising results.

The salient feature of these techniques is that they provide a model-free description of control
systems and do not require model identification. The fuzzy controller offers better performance than
conventional controllers, especially in complex systems with associated nonlinearities. However, it
demonstrates good dynamics only when a specific number of membership functions is selected, so the
method has its limitations. To overcome this, an Artificial Neural Network (ANN) controller, which is
an advanced adaptive control configuration, can be used, because this controller provides faster
control than the others.
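As a rough illustration, a deliberately simplified single-area linear LFC model with a conventional integral controller can be simulated as below (all parameter values are assumptions, and this single-area model stands in for the three-area system discussed above; a fuzzy or ANN controller would replace the line computing the supplementary control signal):

```python
# Simplified single-area LFC model (per unit), explicit Euler integration.
M, D = 10.0, 0.8        # inertia and load-damping constants (assumed)
Tg, Tt = 0.2, 0.5       # governor and turbine time constants [s] (assumed)
R, Ki = 0.05, 0.3       # droop and integral controller gain (assumed)
dPL = 0.1               # step load increase at t = 0 [pu]

dt, t_end = 1e-3, 30.0
df = dPm = dPg = integ = 0.0   # frequency dev., turbine/governor outputs, integral
worst = 0.0
for _ in range(int(t_end / dt)):
    integ += df * dt                       # integral of frequency deviation
    dPc = -Ki * integ                      # supplementary (integral) control law
    dPg += dt * (dPc - df / R - dPg) / Tg  # governor dynamics with droop feedback
    dPm += dt * (dPg - dPm) / Tt           # turbine dynamics
    df += dt * (dPm - dPL - D * df) / M    # swing (frequency) dynamics
    worst = min(worst, df)

# Integral action drives the steady-state frequency deviation back to zero
print("peak frequency dip: %.4f pu, final deviation: %.5f pu" % (worst, df))
```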
24. Briefly discuss the application of fuzzy logic in process control under uncertainty, highlighting
its advantages and disadvantages over a conventional controller.

Ans:- In the industrial world, fuzzy logic applications have been in use for a long time. Fuzzy
logic's expert, high-grade decision-making ability has allowed it to be used in areas such as flow
process plants, power plants, thermal process plants, oil refineries, diagnosing medical problems,
etc. Chemical processes are well known for difficulties such as large variations in output responses
and non-linearity. Thus it is difficult to control these processes using conventional regulating
mechanisms.

One of the applications of automatic control systems is process control. Fuzzy logic has been
successfully applied to various processes in chemical engineering, especially flow control processes.
Any product, whether a chemical (for example gasoline or gas agents) or a consumable (for example a
food product), can only be manufactured by passing the raw materials through a process. Chemical
engineering largely involves the design, improvement and maintenance of processes involving
chemical or biological transformations for large-scale manufacture. Chemical engineers ensure the
processes are operated safely, sustainably and economically. Complex industrial processes such as
batch chemical reactors, blast furnaces, cement kilns and basic oxygen steel making are difficult to
control automatically.

Fuzzy controllers have advantages over conventional PID controllers, as they provide a linguistic
control strategy based on the high-grade judging knowledge of an expert. Conventional PID
controllers, by contrast, have poor capabilities for controlling processes which are nonlinear in
nature, and PID controller tuning is difficult because knowledge of the process parameters is
insufficient in the case of nonlinear processes. Given these advantages over the conventional PID
controller, it is evident that many efforts have been made to replace the PID controller or to
combine both controller designs.
