
1. Optimization Criteria:
- SVM optimizes by maximizing the margin between classes.
- LS-SVM optimizes by minimizing the squared error between actual and predicted outputs.
2. Training Efficiency:
- SVM training involves solving a constrained quadratic programming (QP) problem.
- LS-SVM simplifies training by replacing the inequality constraints with equality constraints and the hinge loss with a squared-error term, so the dual problem reduces to solving a system of linear equations (see the sketch after this list).

3. Kernel Trick:
- SVM and LS-SVM both support the kernel trick for nonlinear classification.
- However, LS-SVM typically requires fewer computational resources due to its simpler optimization formulation.
4. Handling Noisy Data:
- LS-SVM's squared-error loss can average out Gaussian-like noise, so it can potentially handle such noise better than a traditional SVM.
- However, because every training point contributes to the squared-error term, LS-SVM can be more sensitive to large outliers, whereas SVM's hinge loss limits the influence of points far from the margin.

5. Interpretability:
- In SVM, the decision boundary is defined by a sparse set of support vectors, making the model more interpretable.
- LS-SVM, while efficient, typically yields a non-sparse solution in which nearly every training point receives a nonzero coefficient, so its decision boundary is less straightforward to interpret in classification scenarios.
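
To make the training-efficiency point concrete, here is a minimal sketch of LS-SVM classification in NumPy; the RBF kernel and the hyperparameters gamma and sigma are illustrative assumptions, not values from these notes. Note that training amounts to solving one linear system rather than a constrained QP.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=1.0, sigma=1.0):
    # LS-SVM dual: equality constraints turn training into one linear system.
    n = len(y)
    Omega = (y[:, None] * y[None, :]) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)        # no QP solver needed
    return sol[1:], sol[0]               # alpha, b

def lssvm_predict(X_train, y, alpha, b, X_new, sigma=1.0):
    # Decision function: sign of the kernel expansion plus the bias.
    return np.sign(rbf_kernel(X_new, X_train, sigma) @ (alpha * y) + b)

# Tiny usage example on a separable toy set.
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
alpha, b = lssvm_fit(X, y)
print(lssvm_predict(X, y, alpha, b, X))  # expected: [-1. -1.  1.  1.]
```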
Artificial Neural Networks (ANNs) are computational models inspired by the human brain's
neural structure. They consist of interconnected nodes organized in layers. Here's a
simplified breakdown:
1. Neurons (Nodes): These are the basic units that process information. Each neuron
receives input, processes it using an activation function, and passes the output to the
next layer.
2. Layers: Neurons are organized in layers - an input layer, hidden layers, and an output
layer. The input layer receives data, hidden layers process it, and the output layer
provides the final result.
3. Weights and Connections: Each connection between neurons has a weight that
determines the signal's strength. During training, these weights are adjusted to
minimize errors and improve the network's performance.
4. Activation Function: This function determines if a neuron should be activated or not
based on the weighted sum of its inputs. Common activation functions include ReLU,
Sigmoid, and Tanh.

5. Forward Propagation: Input data is fed forward through the network, with each
layer processing the information until an output is generated.
6. Backpropagation: This is the process of updating the weights of the connections by
propagating errors backward through the network. It helps the network learn from
its mistakes and improve its predictions.
7. Training: During training, the network learns to map inputs to outputs by adjusting the weights based on the provided data and desired outcomes (see the sketch after this list).
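
As a concrete illustration of steps 4 through 7, here is a minimal sketch of a one-hidden-layer network trained with backpropagation in NumPy. The 2-3-1 architecture, the sigmoid activation, the learning rate, and the XOR toy data are all illustrative assumptions, not details taken from these notes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # S-shaped activation squashing any real input into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Toy task: XOR, which a network without a hidden layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for a 2-3-1 network, randomly initialized.
W1 = rng.normal(0.0, 1.0, (2, 3)); b1 = np.zeros(3)
W2 = rng.normal(0.0, 1.0, (3, 1)); b2 = np.zeros(1)
lr = 1.0  # learning rate

for epoch in range(5000):
    # Step 5, forward propagation: layer by layer to an output.
    h = sigmoid(X @ W1 + b1)   # hidden-layer activations
    y = sigmoid(h @ W2 + b2)   # network output

    # Step 6, backpropagation: push the error backward through the layers.
    # Squared-error loss; note sigmoid'(z) = s * (1 - s).
    delta2 = (y - t) * y * (1 - y)           # output-layer error signal
    delta1 = (delta2 @ W2.T) * h * (1 - h)   # hidden-layer error signal

    # Step 7, training: adjust the weights to reduce the error.
    W2 -= lr * h.T @ delta2; b2 -= lr * delta2.sum(0)
    W1 -= lr * X.T @ delta1; b1 -= lr * delta1.sum(0)

print(np.round(y.ravel(), 2))  # should approach [0, 1, 1, 0]
```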
Activation Function Types
Threshold function: - In a binary step (threshold) activation function, the neuron compares the weighted sum of its inputs to a fixed threshold value. If the input exceeds the threshold, the neuron is activated and outputs 1; otherwise it outputs 0.
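
A minimal sketch of the binary step function; the threshold of 0 is an illustrative default, not one fixed in the notes.

```python
import numpy as np

def binary_step(x, threshold=0.0):
    # Activated (1) where the input exceeds the threshold, otherwise 0.
    return np.where(x > threshold, 1.0, 0.0)

print(binary_step(np.array([-2.0, 0.0, 0.5, 3.0])))  # [0. 0. 1. 1.]
```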
Piecewise-linear activation function: - A piecewise-linear activation function is linear over a central region of its input and saturates to fixed minimum and maximum outputs outside that region, approximating a smooth nonlinearity with straight-line segments.
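
A sketch of one common piecewise-linear form, linear on (-0.5, 0.5) and saturating at 0 and 1 outside it; these breakpoints are a conventional choice, not ones given in the notes.

```python
import numpy as np

def piecewise_linear(x):
    # Slope-1 line shifted by 0.5, clipped to the saturation limits [0, 1].
    return np.clip(x + 0.5, 0.0, 1.0)

print(piecewise_linear(np.array([-1.0, -0.25, 0.0, 0.25, 1.0])))
# [0.   0.25 0.5  0.75 1.  ]
```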
Sigmoid function: - The sigmoid function has an "S"-shaped curve that asymptotes to 0 for large negative inputs and 1 for large positive inputs. It is important because it introduces non-linearity into the model and, unlike the step function, is differentiable, which backpropagation requires.
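
A sketch of the standard logistic sigmoid, sigma(x) = 1 / (1 + e^(-x)):

```python
import numpy as np

def sigmoid(x):
    # Asymptotes to 0 for large negative x and to 1 for large positive x.
    return 1.0 / (1.0 + np.exp(-x))

print(np.round(sigmoid(np.array([-5.0, 0.0, 5.0])), 3))  # [0.007 0.5   0.993]
```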
