
MADHAV INSTITUTE OF TECHNOLOGY & SCIENCE, GWALIOR

Deemed to be University
(Declared under Distinct Category by Ministry of Education, Government of India)

NAAC ACCREDITED WITH A++ GRADE

Practical File

on

Soft Computing (290602)

Submitted By:
Ashish Narawariya
0901CD211013

Faculty Mentor:
Dr. Mahesh Parmar
Assistant Professor, CSE

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING


MADHAV INSTITUTE OF TECHNOLOGY & SCIENCE
GWALIOR - 474005 (MP) est. 1957

JAN-JUNE 2024
INDEX

S.No.   Experiment Name                                               Date   Sign

1.      Implementation of Simple Neural Network (McCulloch-Pitts model)
2.      Implementation of Perceptron Learning Algorithm
3.      Implementation of Unsupervised Learning Algorithm
4.      Implementation of Fuzzy Operations
5.      Implementation of Fuzzy Relations (Max-min Composition)
6.      Implementation of Fuzzy Controller (Washing Machine)
7.      Implementation of Simple Genetic Application
8.      Study of ANFIS Architecture
9.      Study of Derivative-free Optimization
10.     Study of research paper on Soft Computing


Q1- Implementation of Simple Neural Network (McCulloch-Pitts model).

Source-Code:
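The following is a minimal illustrative sketch in Python of a McCulloch-Pitts neuron realising the AND and OR logic gates; binary inputs, unit weights, and a fixed firing threshold are assumed.

# McCulloch-Pitts neuron: fires (outputs 1) when the weighted sum of its
# binary inputs reaches the threshold, otherwise outputs 0.
def mcculloch_pitts(inputs, weights, threshold):
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= threshold else 0

# AND gate: both inputs must be active, so threshold = 2
print("AND gate")
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", mcculloch_pitts([x1, x2], [1, 1], threshold=2))

# OR gate: a single active input is enough, so threshold = 1
print("OR gate")
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", mcculloch_pitts([x1, x2], [1, 1], threshold=1))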

Output:
Q2- Implementation of Perceptron Learning Algorithm.

Source-Code:
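A minimal Python sketch of the perceptron learning rule, trained here on the AND truth table; the learning rate and number of epochs are illustrative choices.

import numpy as np

# Perceptron learning rule: predict with a step function, then nudge the
# weights by lr * (target - prediction) * input after each sample.
def train_perceptron(X, y, lr=0.1, epochs=20):
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b >= 0 else 0
            error = target - pred
            w = w + lr * error * xi   # weight update
            b = b + lr * error        # bias update
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])            # AND truth table
w, b = train_perceptron(X, y)
print("weights:", w, "bias:", b)
for xi in X:
    print(xi, "->", 1 if np.dot(w, xi) + b >= 0 else 0)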

Output:
Q3- Implementation of Unsupervised Learning Algorithm.

Source-Code:
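An illustrative Python sketch of winner-take-all competitive learning, one common unsupervised learning algorithm; the sample data, learning rate, and number of clusters are assumed for demonstration.

import numpy as np

# Competitive (winner-take-all) learning: the weight vector closest to each
# sample is moved toward that sample, so units drift to cluster centres.
def competitive_learning(X, n_clusters=2, lr=0.5, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.random((n_clusters, X.shape[1]))   # one weight vector per unit
    for _ in range(epochs):
        for x in X:
            winner = np.argmin(np.linalg.norm(W - x, axis=1))  # closest unit
            W[winner] += lr * (x - W[winner])  # move winner toward the sample
    return W

X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
W = competitive_learning(X)
print("learned cluster centres:\n", W)
for x in X:
    print(x, "-> cluster", np.argmin(np.linalg.norm(W - x, axis=1)))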

Output:
Q4- Implementation of Fuzzy Operations.
Source-Code:
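A short Python sketch of the standard fuzzy set operations (union as max, intersection as min, complement as one minus membership) on two assumed fuzzy sets A and B.

# Two fuzzy sets over the same universe {x1, x2, x3}, given as membership grades
A = {"x1": 0.2, "x2": 0.7, "x3": 1.0}
B = {"x1": 0.5, "x2": 0.3, "x3": 0.8}

union        = {x: max(A[x], B[x]) for x in A}        # A union B : max
intersection = {x: min(A[x], B[x]) for x in A}        # A intersection B : min
complement_A = {x: round(1 - A[x], 2) for x in A}     # A' : 1 - membership

print("Union:       ", union)
print("Intersection:", intersection)
print("Complement A:", complement_A)
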
Output:
Q5- Implementation of Fuzzy Relations (Max-min Composition).

Source-Code:
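A short Python sketch of max-min composition, where (R o S)(x, z) = max over y of min(R(x, y), S(y, z)), using two small assumed relation matrices.

import numpy as np

R = np.array([[0.6, 0.3],
              [0.2, 0.9]])          # fuzzy relation on X x Y
S = np.array([[1.0, 0.5, 0.3],
              [0.8, 0.4, 0.7]])     # fuzzy relation on Y x Z

# Max-min composition: take min along each shared y, then max over y
T = np.zeros((R.shape[0], S.shape[1]))
for i in range(R.shape[0]):
    for k in range(S.shape[1]):
        T[i, k] = np.max(np.minimum(R[i, :], S[:, k]))

print("Max-min composition R o S:\n", T)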

Output:
Q6- Implementation of Fuzzy Controller (Washing Machine).

Source-Code:
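A sketch of a Mamdani-style fuzzy controller for wash time using the scikit-fuzzy library; the input ranges, membership functions, and rule base below are illustrative assumptions.

import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Fuzzy inputs (0-10 scale) and output (wash time in minutes)
dirt = ctrl.Antecedent(np.arange(0, 11, 1), 'dirt')
load = ctrl.Antecedent(np.arange(0, 11, 1), 'load')
wash_time = ctrl.Consequent(np.arange(0, 61, 1), 'wash_time')

# Triangular membership functions for each linguistic term
dirt['low']    = fuzz.trimf(dirt.universe, [0, 0, 5])
dirt['medium'] = fuzz.trimf(dirt.universe, [0, 5, 10])
dirt['high']   = fuzz.trimf(dirt.universe, [5, 10, 10])
load['small']  = fuzz.trimf(load.universe, [0, 0, 5])
load['medium'] = fuzz.trimf(load.universe, [0, 5, 10])
load['large']  = fuzz.trimf(load.universe, [5, 10, 10])
wash_time['short']  = fuzz.trimf(wash_time.universe, [0, 0, 30])
wash_time['medium'] = fuzz.trimf(wash_time.universe, [15, 30, 45])
wash_time['long']   = fuzz.trimf(wash_time.universe, [30, 60, 60])

# Rule base relating dirtiness and load size to wash time
rules = [
    ctrl.Rule(dirt['low'] & load['small'], wash_time['short']),
    ctrl.Rule(dirt['medium'] | load['medium'], wash_time['medium']),
    ctrl.Rule(dirt['high'] | load['large'], wash_time['long']),
]

washer = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
washer.input['dirt'] = 8      # fairly dirty clothes
washer.input['load'] = 6      # medium-large load
washer.compute()
print("Recommended wash time (minutes):", washer.output['wash_time'])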

Output:
Q7- Implementation of Simple Genetic Application.
Source-Code:
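A sketch of a simple genetic algorithm that maximises f(x) = x^2 over 5-bit chromosomes; the population size, mutation rate, and number of generations are chosen arbitrarily for illustration.

import random

POP, BITS, GENS, MUT = 6, 5, 20, 0.05
fitness = lambda chrom: int(chrom, 2) ** 2     # decode bit string, square it

def select(pop):
    # Tournament selection of size 2: keep the fitter of two random parents
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Single-point crossover of two bit strings
    point = random.randint(1, BITS - 1)
    return p1[:point] + p2[point:]

def mutate(chrom):
    # Flip each bit with probability MUT
    return ''.join(bit if random.random() > MUT else str(1 - int(bit))
                   for bit in chrom)

population = [''.join(random.choice('01') for _ in range(BITS)) for _ in range(POP)]
for _ in range(GENS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP)]

best = max(population, key=fitness)
print("Best chromosome:", best, "x =", int(best, 2), "f(x) =", fitness(best))
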
Output:
Q8- Study of ANFIS Architecture.

Adaptive Neuro-Fuzzy Inference System (ANFIS) is a hybrid intelligent
system that combines the capabilities of fuzzy logic and neural networks
to perform tasks such as modeling, prediction, and control. The ANFIS
architecture typically consists of five layers:

• Layer 1 - Fuzzification Layer: This layer serves to fuzzify the input
variables. Each node in this layer represents a membership function
for a particular input variable. The input signals are transformed into
fuzzy membership grades using these membership functions.

• Layer 2 - Rule Layer: This layer represents the fuzzy rule base. It
computes the firing strength of each rule by combining the fuzzified
inputs using the AND operation. Each node corresponds to a fuzzy
rule, and the output of each node represents the firing strength of
the rule.

• Layer 3 - Normalization Layer: In this layer, the firing strengths
obtained from Layer 2 are normalized to ensure that they sum up to
one. This normalization allows for the proper weighting of the
consequent parameters in the next layer.

• Layer 4 - Consequent Layer: This layer computes the output of each
rule by multiplying the normalized firing strength of the rule with the
consequent parameters. The output of each node in this layer
represents the contribution of a rule to the overall output.

• Layer 5 - Defuzzification Layer: The final layer aggregates the outputs
of all the rules to produce the overall system output. This layer
typically performs a weighted average or centroid defuzzification to
obtain a crisp output value.

ANFIS combines the benefits of fuzzy logic, which provides a transparent
and interpretable rule-based framework, with the learning capabilities of
neural networks to adaptively tune the parameters of the fuzzy inference
system based on input-output data. This hybrid approach allows ANFIS to
model complex nonlinear relationships between inputs and outputs
effectively.
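
A minimal numerical sketch of a forward pass through these five layers, assuming a first-order Sugeno ANFIS with two inputs and two rules; the membership and consequent parameters below are illustrative, not trained values.

import numpy as np

def gauss(v, c, s):
    # Gaussian membership function with centre c and width s
    return np.exp(-((v - c) ** 2) / (2 * s ** 2))

x, y = 3.0, 7.0

# Layer 1: fuzzification - membership grades of each input
mu_A1, mu_A2 = gauss(x, 2, 1.5), gauss(x, 5, 1.5)
mu_B1, mu_B2 = gauss(y, 6, 2.0), gauss(y, 9, 2.0)

# Layer 2: rule firing strengths (AND realised as a product)
w1 = mu_A1 * mu_B1
w2 = mu_A2 * mu_B2

# Layer 3: normalisation so the firing strengths sum to one
w1n, w2n = w1 / (w1 + w2), w2 / (w1 + w2)

# Layer 4: first-order consequents f_i = p_i*x + q_i*y + r_i
f1 = 1.0 * x + 0.5 * y + 2.0
f2 = 0.2 * x + 1.5 * y - 1.0

# Layer 5: aggregation (weighted average) gives the crisp output
output = w1n * f1 + w2n * f2
print("firing strengths:", w1, w2)
print("ANFIS output:", output)
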
Q9- Study of Derivative-free Optimization.

Derivative-free optimization (DFO) is a subfield of optimization that
focuses on finding the minimum or maximum of a function without
requiring information about its derivatives. This approach is particularly
useful when the function is complex, noisy, or non-differentiable. Here's a
brief overview of the study of derivative-free optimization:

➢ Motivation: DFO methods are motivated by scenarios where
traditional optimization methods, which rely on derivatives, are not
applicable. This could be due to the high computational cost of
computing derivatives, the lack of analytical expressions for the
function, or the presence of noise or discontinuities in the function.

➢ Challenges: The absence of derivative information poses several
challenges in derivative-free optimization. These challenges include
the need to explore the search space efficiently, handle noisy or
stochastic objective functions, and ensure convergence to a global
optimum without relying on gradient information.

➢ Methods: There are various approaches to derivative-free
optimization, including:

• Direct Search Methods: Direct search methods explore the
search space by iteratively sampling points and evaluating
the objective function without using derivative information.
Examples include pattern search, the Nelder-Mead simplex
algorithm, and genetic algorithms (a small Nelder-Mead
example is given after this list).

• Surrogate-based Optimization: Surrogate-based methods
build a surrogate model (e.g., Gaussian processes, radial
basis functions) to approximate the objective function and
guide the search process. These surrogate models are
iteratively updated based on the observed function
evaluations.

• Evolutionary Algorithms: Evolutionary algorithms, such as
genetic algorithms, differential evolution, and particle
swarm optimization, mimic the process of natural selection
to iteratively evolve a population of candidate solutions
toward better solutions.

• Metaheuristic Optimization: Metaheuristic algorithms,
such as simulated annealing, tabu search, and ant colony
optimization, are heuristic techniques that iteratively
explore the search space to find near-optimal solutions
without relying on derivatives.

• Global Optimization: Global optimization methods aim to
find the global optimum of a function by systematically
exploring the entire search space. These methods often
employ a combination of local search and global
exploration strategies.
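
As a brief illustration of the direct-search methods above, the following sketch minimises the Rosenbrock function with SciPy's Nelder-Mead simplex routine, which uses only function evaluations and no derivatives (it assumes SciPy is installed).

import numpy as np
from scipy.optimize import minimize

def rosenbrock(p):
    # Classic non-convex test function with minimum at (1, 1)
    x, y = p
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

result = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method='Nelder-Mead')
print("minimum found at:", result.x)       # expected to be close to (1, 1)
print("function value:  ", result.fun)
print("function evaluations:", result.nfev)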

➢ Applications: Derivative-free optimization has applications in various
fields, including engineering design, machine learning, finance,
logistics, and scientific computing. It is used to optimize complex
systems, tune parameters of machine learning algorithms, design
experiments, and solve real-world optimization problems where
derivative information is unavailable or impractical to compute.

➢ Evaluation: The performance of derivative-free optimization
methods is evaluated based on criteria such as convergence speed,
solution quality, robustness to noise, scalability to high-dimensional
problems, and computational efficiency.
Q10- Study of research paper on Soft Computing.

Studying a research paper on Soft Computing involves understanding the
principles, methods, and applications of soft computing techniques in
solving complex problems. Here's a general outline of how you can
approach studying a research paper on Soft Computing:

➢ Abstract and Introduction: Start by reading the abstract and
introduction sections of the paper. These sections provide an
overview of the research problem, the motivation behind the study,
and the objectives of the research. Pay attention to the key
concepts, terms, and research questions introduced in these
sections.

➢ Literature Review: Review the literature cited in the paper to
understand the background and context of the research. Identify
previous studies, methodologies, and approaches related to the
topic of soft computing. This will help you understand how the
current research contributes to the existing body of knowledge.

➢ Methodology: Study the methodology section to understand the
soft computing techniques or algorithms used in the research. This
may include neural networks, fuzzy logic, evolutionary algorithms,
swarm intelligence, or other soft computing approaches. Pay
attention to the details of how these techniques are applied to
address the research problem.

➢ Results and Analysis: Examine the results and analysis presented in
the paper. This section typically includes experimental findings,
computational results, or case studies that demonstrate the
effectiveness of the proposed soft computing approach. Evaluate
the significance of the results and how they contribute to solving
the research problem.

➢ Discussion: Read the discussion section to understand the
interpretation of the results and their implications. Authors often
discuss the strengths and limitations of their approach, as well as
future directions for research. Consider how the findings of the
study advance our understanding of soft computing and its
applications.

➢ Conclusion: Review the conclusion section to summarize the main
findings of the research and the key takeaways. Reflect on the
contributions of the study to the field of soft computing and its
potential impact on real-world problems.

➢ References: Finally, explore the references cited in the paper to
identify additional resources for further reading. This will help you
deepen your understanding of the topic and explore related
research areas.
