
Report on Gradient Descent Optimization

Jaanav Mathavan
EE22B167
October 25, 2023

1 Implementation Details
The code uses if-else conditional statements to check whether the target is a 1D function or a 2D function.

f1: f(x) = x^2 + 3x + 8
This function is very simple. Since only one minimum exists, the starting point does not
matter much, and gradient descent clearly converges to the minimum point.

Figure 1
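
To make the procedure concrete, here is a minimal sketch of 1D gradient descent on f1. This is not the submitted code; the starting point, learning rate, and iteration count are illustrative choices.

def f1(x):
    return x**2 + 3*x + 8

def df1(x):
    # analytical derivative of f1
    return 2*x + 3

x = 5.0      # illustrative starting point
lr = 0.1     # illustrative learning rate
for _ in range(100):
    x = x - lr * df1(x)

print(x, f1(x))   # converges near the unique minimum x = -1.5, f1(-1.5) = 5.75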

f3: f(x, y) = x^4 − 16x^3 + 96x^2 − 256x + y^2 − 4y + 262


This function is a 2D function. The only new idea here is to update both the x and y
coordinates simultaneously, each by the product of its partial derivative (slope) and the
learning rate. Essentially this makes the point move across the surface in 3D, hence the
3D graph.

Figure 2
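
The simultaneous x/y update can be sketched as follows (again an illustration rather than the submitted code; the learning rate and starting point are assumptions). The partial derivatives below follow directly from f3:

def grad_f3(x, y):
    dfdx = 4*x**3 - 48*x**2 + 192*x - 256   # equals 4*(x - 4)**3
    dfdy = 2*y - 4
    return dfdx, dfdy

x, y = 0.0, 0.0   # illustrative starting point
lr = 0.001
for _ in range(10000):
    gx, gy = grad_f3(x, y)
    x -= lr * gx   # each coordinate moves by slope times learning rate
    y -= lr * gy

print(x, y)   # approaches the minimum at (4, 2), where f3 = 2

Since f3 can be rewritten as (x − 4)^4 + (y − 2)^2 + 2, the minimum at (4, 2) can be read off directly, which makes the descent easy to check.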

f4: f(x, y) = e^(−(x−y)^2) · sin(y)
This function is a 2D function, and it poses a new challenge: a saddle point. The surface
clearly has a minimum, but depending on the starting point we can end up in one of two
places: the saddle point at the center or the minimum. Otherwise this is the routine
gradient descent algorithm.

Figure 3
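
The starting-point sensitivity can be demonstrated with a small sketch. The two starting points and the step size are illustrative assumptions, not values from the submission.

import math

def grad_f4(x, y):
    u = x - y
    e = math.exp(-u**2)
    dfdx = -2*u * e * math.sin(y)
    dfdy = e * (2*u * math.sin(y) + math.cos(y))
    return dfdx, dfdy

def descend(x, y, lr=0.05, steps=5000):
    for _ in range(steps):
        gx, gy = grad_f4(x, y)
        x, y = x - lr*gx, y - lr*gy
    return x, y

print(descend(-1.0, -2.0))   # heads toward the minimum near (-pi/2, -pi/2)
print(descend(3.0, -3.0))    # gradient ~ e^(-36): effectively stuck on the flat region
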
f5: f(x) = cos(x)^4 − sin(x)^3 − 4 sin(x)^2 + cos(x) + 1
Something interesting here is that there are multiple minima. My algorithm's ability to
find the global minimum depends strongly on the starting point: the iterates get stuck in
local minima, which can be overcome by techniques that allow exploration, such as
simulated annealing.

Figure 4
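
One simple exploration technique is random restarts: run plain gradient descent from several starting points and keep the best result. The sketch below uses restarts rather than annealing, and all parameters are illustrative rather than taken from the submission.

import math, random

def f5(x):
    return math.cos(x)**4 - math.sin(x)**3 - 4*math.sin(x)**2 + math.cos(x) + 1

def df5(x):
    # term-by-term derivative of f5
    s, c = math.sin(x), math.cos(x)
    return -4*s*c**3 - 3*s**2*c - 8*s*c - s

def descend(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * df5(x)
    return x

random.seed(0)
starts = [random.uniform(-math.pi, math.pi) for _ in range(10)]
best = min((descend(x0) for x0 in starts), key=f5)
print(best, f5(best))   # the deepest minimum found among the ten runs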

Gradient Descent Function


The primary optimization function, gradient descent, is designed to handle both one-
dimensional and two-dimensional functions. It allows users to specify the learning rate,
optimization range, initial values, and the number of iterations.

1.1 Optimization Process


The code implements gradient descent, updating the current point with the negative
gradient multiplied by the learning rate. This process is repeated for a specified number
of iterations. The code also tracks the path of optimization, allowing us to visualize how
the algorithm converges to the minimum.
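
Since the submission's exact function is not reproduced in this report, the following is a minimal sketch consistent with the description above. The name gradient_descent, the signature, and the defaults are assumptions, and the 1D/2D cases are handled uniformly through arrays here rather than the if-else branches of the actual code.

import numpy as np

def gradient_descent(grad, start, lr=0.1, n_iter=1000, bounds=None):
    # grad returns the derivative (1D) or the pair of partials (2D);
    # start is a scalar or an (x, y) pair; bounds optionally clamps
    # the iterates to the optimization range.
    point = np.atleast_1d(np.asarray(start, dtype=float))
    path = [point.copy()]                       # track the trajectory for plotting
    for _ in range(n_iter):
        point = point - lr * np.atleast_1d(np.asarray(grad(*point)))
        if bounds is not None:
            point = np.clip(point, bounds[0], bounds[1])
        path.append(point.copy())
    return point, np.array(path)

# 1D example: f1, with gradient 2x + 3
xmin, path1 = gradient_descent(lambda x: 2*x + 3, start=5.0)

# 2D example: f3, updating both coordinates each step
pmin, path3 = gradient_descent(
    lambda x, y: (4*(x - 4)**3, 2*y - 4), start=(0.0, 0.0), lr=0.01)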

2 Results and Visualization


The code includes visualization components to provide a clear understanding of the op-
timization process. The primary results are visualized using Matplotlib and saved as
images.

• 1D Function Optimization: For one-dimensional functions (e.g., f1), a 1D plot shows the function and the optimization path. An image is generated to visualize the optimization process, and the final minimum point is recorded.

• 2D Function Optimization: For two-dimensional functions (e.g., f4), a 3D plot displays the function surface, and the trajectory of points shows the optimization path. An image is saved to illustrate the function surface and the optimization path, and the final minimum point is also recorded. A minimal plotting sketch follows this list.
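
As a sketch of both plot types, the snippet below assumes the path1 and path3 arrays returned by the gradient_descent sketch in Section 1.1; the file names and plot ranges are illustrative.

import numpy as np
import matplotlib.pyplot as plt

# 1D: function curve with the optimization path overlaid
xs = np.linspace(-6.0, 6.0, 400)
plt.plot(xs, xs**2 + 3*xs + 8)
plt.plot(path1[:, 0], path1[:, 0]**2 + 3*path1[:, 0] + 8, "ro-", markersize=3)
plt.savefig("f1_descent.png")
plt.close()

# 2D: function surface with the 3D trajectory of iterates
X, Y = np.meshgrid(np.linspace(-1, 8, 100), np.linspace(-3, 6, 100))
Z = (X - 4)**4 + (Y - 2)**2 + 2          # f3 in its factored form
ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(X, Y, Z, alpha=0.5)
zs = (path3[:, 0] - 4)**4 + (path3[:, 1] - 2)**2 + 2
ax.plot(path3[:, 0], path3[:, 1], zs, "r.-")
plt.savefig("f3_descent.png")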

3 Conclusion
In this assignment, we successfully implemented gradient descent-based optimization for
various mathematical functions. The code allows optimization of one-dimensional and
two-dimensional functions, providing flexibility for different optimization tasks. The pro-
vided visualizations help in understanding the optimization process and the convergence
to the minimum.
The code and the associated functions are valuable tools for solving optimization
problems in different domains, and the flexibility to adapt to various functions and ranges
makes it a versatile solution.
