
Lab Assignment 5

Numerical Computation

COMSATS UNIVERSITY ISLAMABAD

Submitted By:
Abdul Ahad Naeem
FA20-BEE-003

Submitted to:
Dr. Iftikhar Ahmed Shb

Jacobi Method Iterations
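For reference, at each iteration k the method updates every component of the solution vector using only the values from the previous iteration. In standard notation (not part of the original handout), the update rule is

x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j \neq i} a_{ij} x_j^{(k)} \right), \quad i = 1, \ldots, n

This independence of the components is what the advantages below refer to.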

Advantages of the Jacobi method:


1. Simplicity: The Jacobi method is relatively easy to understand and implement,
making it a good choice for introductory numerical linear algebra courses.
2. Convergence: Under certain conditions, for example when the coefficient matrix
is strictly diagonally dominant, the Jacobi method is guaranteed to converge to
the exact solution of the linear system.
3. Parallelism: The method can be parallelized because each component of the
solution vector is updated independently from the previous iterate, making it
suitable for parallel computing (see the vectorized sketch after this list).
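To make point 3 concrete, here is a minimal vectorized sketch of one Jacobi sweep (my own illustration; the function name jacobi_sweep is not from the assignment code):

import numpy as np

def jacobi_sweep(A, b, x):
    # One Jacobi iteration: every component of the result depends only
    # on the previous iterate x, so the updates are independent and
    # could be computed in parallel.
    d = np.diag(A)            # diagonal entries a_ii
    off_diag = A @ x - d * x  # sum over j != i of a_ij * x_j
    return (b - off_diag) / d

NumPy evaluates the whole update as array operations, which is exactly the data-parallel structure this advantage describes.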

Disadvantages of the Jacobi method:


1. Slow Convergence: The Jacobi method can converge slowly, particularly when the
spectral radius of its iteration matrix is close to 1, as often happens for
ill-conditioned systems. Other iterative methods such as the Gauss-Seidel method
(a short sketch appears after this list) or more advanced techniques like the
Conjugate Gradient method often converge faster in such cases.
2. Requirement of Diagonal Dominance: The method works best when the matrix A is
strictly diagonally dominant, meaning that in every row the absolute value of the
diagonal element exceeds the sum of the absolute values of the off-diagonal
elements (|a_ii| > sum over j != i of |a_ij|). If the matrix lacks this property,
the method may not converge or may converge very slowly.
3. Dependency on Initial Guess: The choice of the initial guess affects how many
iterations are needed. When the method converges at all, it does so from any
starting point, but a poor initial guess can make convergence noticeably slower.
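For comparison with point 1, here is a minimal sketch of a Gauss-Seidel sweep (my own illustration, not part of the assignment). The only difference from Jacobi is that each update immediately reuses the components already updated in the same sweep:

import numpy as np

def gauss_seidel_sweep(A, b, x):
    # One Gauss-Seidel iteration: x is updated in place, so each
    # component uses the newest available values of the others.
    n = len(b)
    for i in range(n):
        sigma = np.dot(A[i, :], x) - A[i, i] * x[i]
        x[i] = (b[i] - sigma) / A[i, i]
    return x

Reusing fresh values usually lowers the iteration count on diagonally dominant systems, at the cost of the easy parallelism that Jacobi offers.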

Code:
import numpy as np
import matplotlib.pyplot as plt

def is_diagonally_dominant(A):
    # Returns True if |a_ii| > sum of |a_ij| over j != i for every row.
    n = A.shape[0]
    for i in range(n):
        row_sum = np.sum(np.abs(A[i, :])) - np.abs(A[i, i])
        if np.abs(A[i, i]) <= row_sum:
            return False
    return True

def gauss_jacobi(A, b, max_iterations=4, tol=1e-6):
    n = len(b)
    x = np.zeros(n)       # initial guess: the zero vector
    x_new = np.copy(x)
    error_list = []

    if not is_diagonally_dominant(A):
        print("The coefficient matrix is not diagonally dominant. The method may not converge.")
        return None, error_list

    for _ in range(max_iterations):
        for i in range(n):
            # Sum of a_ij * x_j over j != i, using only the previous iterate
            sigma = np.dot(A[i, :n], x) - A[i, i] * x[i]
            x_new[i] = (b[i] - sigma) / A[i, i]

        # Track the change between successive iterates
        error = np.linalg.norm(x_new - x)
        error_list.append(error)

        if error < tol:
            return x_new, error_list
        x = np.copy(x_new)

    return x, error_list

A = np.array([[10, 2, 1],
[1, 7, 1],
[2, 3, 10]], dtype=float)
b = np.array([7, -8, 6], dtype=float)

solution, errors = gauss_jacobi(A, b)

if solution is not None:
    print("Solution:", solution)
    print("Number of iterations:", len(errors))

    # Plot the error convergence
    plt.figure(figsize=(10, 6))
    plt.plot(range(len(errors)), errors, marker='o', linestyle='-')
    plt.yscale('log')
    plt.xlabel("Iteration")
    plt.ylabel("Error")
    plt.title("Convergence of Gauss-Jacobi Method")
    plt.grid()
    plt.show()

Output:
Solution: [ 0.89384082 -1.38718367  0.83740408]
Number of iterations: 4
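As a quick sanity check (my addition, reusing A, b, and the printed solution from the run above), the residual norm ||Ax - b|| shows how close the 4-iteration result is to the exact solution:

import numpy as np

A = np.array([[10, 2, 1],
              [1, 7, 1],
              [2, 3, 10]], dtype=float)
b = np.array([7, -8, 6], dtype=float)
x = np.array([0.89384082, -1.38718367, 0.83740408])

# A small residual norm confirms the iterate is near the true solution.
print("Residual norm:", np.linalg.norm(A @ x - b))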
