Question 1 ) Define Initial Value Problem with a suitable example .

An Initial Value Problem (IVP) is a fundamental concept in the study of differential equations. It refers to a
differential equation that is accompanied by a condition specifying the value of the unknown function at a particular
point. The purpose of an initial value problem is to determine a specific solution from among the infinitely many
solutions of a differential equation, by providing a starting point — the "initial value."
Formal Definition:
An Initial Value Problem typically takes the form:
$$\frac{dy}{dx} = f(x, y), \qquad y(x_0) = y_0$$
Where:
 $\frac{dy}{dx} = f(x, y)$ is a first-order ordinary differential equation (ODE),
 $x_0$ is a given value of the independent variable (usually time or position),
 $y_0$ is the specified value of the function $y(x)$ at $x = x_0$,
 The goal is to find the function $y(x)$ that satisfies both the differential equation and the initial condition.
This setup is called an initial value problem because the condition is given at the initial point 𝒙𝟎 , which is often
interpreted as the "starting time" or "starting position" of a process.

Importance of Initial Value Problems:


Initial value problems are widely used in science, engineering, and mathematics to model real-life processes where
the system evolves over time starting from an initial state. Examples include:
 Population growth
 Motion of objects
 Temperature change
 Electrical circuits
In each of these cases, knowing the starting state allows us to predict the future behavior of the system.

Example:
Let’s consider a simple example to illustrate how an initial value problem is solved.
Given:
$$\frac{dy}{dx} = 3x^2, \quad \text{with initial condition } y(0) = 2$$
Step 1: Solve the differential equation
We are given the derivative of 𝒚 with respect to 𝒙, so we integrate both sides to find 𝒚:
$$y = \int 3x^2 \, dx = x^3 + C$$
where $C$ is the constant of integration, representing an entire family of solutions.
Step 2: Apply the initial condition
Now, we use the given initial condition $y(0) = 2$ to determine the constant $C$:
$$y(0) = 0^3 + C = 2 \;\Rightarrow\; C = 2$$
Step 3: Write the particular solution
Now that we know 𝑪, the particular solution to the initial value problem is:
$$y = x^3 + 2$$
Conclusion:
An initial value problem is a differential equation coupled with an initial condition that allows us to find a specific
solution tailored to a real-world situation. In our example, the function $y = x^3 + 2$ satisfies both the differential
equation $\frac{dy}{dx} = 3x^2$ and the initial condition $y(0) = 2$. This method ensures that the solution is not just general, but
unique and applicable to a specific scenario.
Understanding and solving IVPs is essential in fields like physics, biology, engineering, and economics, where
systems evolve from known starting conditions.
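For readers who want to check such computations mechanically, here is a minimal sketch of the same IVP solved symbolically; it assumes the sympy library is available (not part of the original notes):

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# ODE: dy/dx = 3*x**2 with initial condition y(0) = 2
ode = sp.Eq(y(x).diff(x), 3*x**2)
sol = sp.dsolve(ode, y(x), ics={y(0): 2})
print(sol)  # expected: Eq(y(x), x**3 + 2)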
Question 2 ) Explain clearly the method of successive approximation.
The method of successive approximation, also known as Picard’s Iteration Method, is a technique used to find an
approximate solution to a first-order differential equation with a given initial condition. This method is especially
useful when finding an exact solution is difficult or impossible.

General Form of the Problem:


We are given a first-order initial value problem:
$$\frac{dy}{dx} = f(x, y), \qquad y(x_0) = y_0$$
We want to find the function 𝒚(𝒙) that satisfies both the differential equation and the initial condition.

Basic Idea of the Method:


The method starts by rewriting the differential equation in integral form using the Fundamental Theorem of
Calculus:
$$y(x) = y_0 + \int_{x_0}^{x} f(t, y(t)) \, dt$$
Since we don't know 𝒚(𝒕) yet, we start by guessing an initial approximation and then improve it step by step.

Steps of the Method:


1. Start with the initial approximation: Let
$$y_0(x) = y_0 \quad \text{(a constant function)}$$
2. Construct successive approximations: Use the integral equation to build a sequence of functions:
$$y_1(x) = y_0 + \int_{x_0}^{x} f(t, y_0(t)) \, dt$$
$$y_2(x) = y_0 + \int_{x_0}^{x} f(t, y_1(t)) \, dt$$
$$y_3(x) = y_0 + \int_{x_0}^{x} f(t, y_2(t)) \, dt$$
And so on.
3. Continue the process: Each new approximation uses the result of the previous one, bringing it closer to
the actual solution.
4. Convergence: Under certain conditions (if 𝒇(𝒙, 𝒚) is continuous and satisfies a Lipschitz condition), this
sequence converges to the exact solution of the differential equation.
Example:
Solve the IVP:
$$\frac{dy}{dx} = x + y, \qquad y(0) = 1$$
Step 1: Integral form:
$$y(x) = 1 + \int_{0}^{x} \big(t + y(t)\big) \, dt$$
Step 2: First approximation:
$$y_0(x) = 1$$
Step 3: Second approximation:
$$y_1(x) = 1 + \int_{0}^{x} (t + 1) \, dt = 1 + \left[\frac{t^2}{2} + t\right]_0^x = 1 + x + \frac{x^2}{2}$$
Step 4: Third approximation: Use $y_1(t) = 1 + t + \frac{t^2}{2}$ in the integral:
$$y_2(x) = 1 + \int_{0}^{x} \left(t + 1 + t + \frac{t^2}{2}\right) dt = 1 + \int_{0}^{x} \left(1 + 2t + \frac{t^2}{2}\right) dt$$
Now integrate and simplify to find 𝒚𝟐 (𝒙), and so on.
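The iteration is mechanical enough to automate. A minimal sketch with sympy (assumed available, not part of the original notes):

import sympy as sp

x, t = sp.symbols('x t')

# Picard iteration for dy/dx = x + y, y(0) = 1
f = lambda t, y: t + y   # right-hand side f(t, y)
y_n = sp.Integer(1)      # y_0(x) = 1

for n in range(4):
    # y_{n+1}(x) = 1 + integral_0^x f(t, y_n(t)) dt
    y_n = 1 + sp.integrate(f(t, y_n.subs(x, t)), (t, 0, x))
    print(f"y_{n+1} =", sp.expand(y_n))

The first printed iterate reproduces $y_1(x) = 1 + x + \frac{x^2}{2}$ from above.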
Question 3 ) Prove that a continuous function may not satisfy a Lipschitz condition on a square.
A Lipschitz condition is a stronger condition than continuity. While all Lipschitz continuous functions are
continuous, the converse is not true — that is, a function can be continuous but may fail to satisfy a Lipschitz
condition.
Let us understand this through a formal statement and a clear example.

Lipschitz Condition:
A function $f(x)$ is said to satisfy a Lipschitz condition on a domain $D$ if there exists a constant $L > 0$ such that for all $x_1, x_2 \in D$:
$$|f(x_1) - f(x_2)| \le L|x_1 - x_2|$$
This means that the rate of change of the function is bounded by the constant $L$. If such an $L$ exists, the function is said to be Lipschitz continuous on $D$.

Now let’s prove that a continuous function may not satisfy this condition using an example.
Example:
Consider the function:
$$f(x) = \sqrt{x}$$
This function is continuous on the closed interval $[0, 1]$ (the one-dimensional analogue of a closed square). Let us see whether it satisfies the Lipschitz condition on $[0, 1]$.
Let us check the Lipschitz condition:
$$\left|\sqrt{x_1} - \sqrt{x_2}\right| \le L|x_1 - x_2|$$
Suppose $x_2 < x_1$, and write:
$$\sqrt{x_1} - \sqrt{x_2} = \frac{|x_1 - x_2|}{\sqrt{x_1} + \sqrt{x_2}}$$
So the condition requires:
$$\frac{|x_1 - x_2|}{\sqrt{x_1} + \sqrt{x_2}} \le L|x_1 - x_2| \;\Rightarrow\; \frac{1}{\sqrt{x_1} + \sqrt{x_2}} \le L$$
Now consider what happens as $x_1, x_2 \to 0$. Then $\sqrt{x_1} + \sqrt{x_2} \to 0$, and thus $\frac{1}{\sqrt{x_1} + \sqrt{x_2}} \to \infty$. That means no constant $L$ can be found to satisfy the inequality for all $x_1, x_2 \in [0, 1]$.

Conclusion:
Although the function 𝒇(𝒙) = √𝒙 is continuous on [𝟎, 𝟏], it does not satisfy a Lipschitz condition on that interval
because the derivative becomes unbounded near 𝒙 = 𝟎. Therefore, we have shown that a continuous function may
not satisfy a Lipschitz condition on a square (or closed region).
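A short numeric illustration of this blow-up (a sketch; the sample points are my own choice):

import math

# For f(x) = sqrt(x), the Lipschitz quotient |f(x1) - f(x2)| / |x1 - x2|
# equals 1 / (sqrt(x1) + sqrt(x2)) and is unbounded near x = 0.
for eps in [1e-2, 1e-4, 1e-6, 1e-8]:
    x1, x2 = eps, 0.0
    quotient = abs(math.sqrt(x1) - math.sqrt(x2)) / abs(x1 - x2)
    print(f"x1 = {x1:g}: quotient = {quotient:g}")  # grows like 1/sqrt(eps)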
Question 4 ) Find the eigenvalue of
$$u(x) = k \int_{0}^{1} \big(12x^2 t^4 + 6x^3 t + 2tx\big) \, u(t) \, dt$$
Solution:
We are given a linear integral equation of the form:
$$u(x) = k \int_{0}^{1} K(x, t) \, u(t) \, dt$$
where the kernel is:
$$K(x, t) = 12x^2 t^4 + 6x^3 t + 2tx$$
This is a separable kernel, meaning it can be written as a finite sum of terms of the form $f_i(x) g_i(t)$:
$$K(x, t) = f_1(x)g_1(t) + f_2(x)g_2(t) + f_3(x)g_3(t)$$
with:
 $f_1(x) = 12x^2$, $g_1(t) = t^4$
 $f_2(x) = 6x^3$, $g_2(t) = t$
 $f_3(x) = 2x$, $g_3(t) = t$
Using this, the integral equation becomes:
$$u(x) = k \left[ f_1(x) \int_0^1 g_1(t) u(t) \, dt + f_2(x) \int_0^1 g_2(t) u(t) \, dt + f_3(x) \int_0^1 g_3(t) u(t) \, dt \right]$$
Define:
 $A = \int_0^1 t^4 u(t) \, dt$
 $B = \int_0^1 t \, u(t) \, dt$
So:
$$u(x) = k\big[12x^2 A + (6x^3 + 2x)B\big]$$
This suggests $u(x)$ is a combination of $x^2$, $x^3$, and $x$, so let us assume:
$$u(x) = ax^2 + bx^3 + cx$$
Now compute:
 $A = \int_0^1 t^4 u(t) \, dt = \frac{a}{7} + \frac{b}{8} + \frac{c}{6}$
 $B = \int_0^1 t \, u(t) \, dt = \frac{a}{4} + \frac{b}{5} + \frac{c}{3}$
Now substitute into:
$$u(x) = k\big[12x^2 A + (6x^3 + 2x)B\big]$$
Matching coefficients with $u(x) = ax^2 + bx^3 + cx$, we get:
$$a = 12k\left(\frac{a}{7} + \frac{b}{8} + \frac{c}{6}\right), \qquad b = 6k\left(\frac{a}{4} + \frac{b}{5} + \frac{c}{3}\right), \qquad c = 2k\left(\frac{a}{4} + \frac{b}{5} + \frac{c}{3}\right)$$
This forms a homogeneous linear system. For non-trivial solutions (𝒂, 𝒃, 𝒄 ≠ 𝟎), the determinant of the coefficient
matrix must be zero. Solving this determinant equation yields the eigenvalues.
Final Answer:
The eigenvalues $k$ of the given integral equation are:
$$k = \frac{-752 + 2\sqrt{143581}}{21} \quad \text{and} \quad k = \frac{-752 - 2\sqrt{143581}}{21}$$
These are the only values of 𝒌 for which the equation has non-trivial solutions 𝒖(𝒙), making them the eigenvalues of
the integral operator.
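The determinant equation can be checked symbolically. A sketch with sympy (assumed available); the matrix rows encode the homogeneous system above:

import sympy as sp

k = sp.symbols('k')

# Coefficient matrix of the homogeneous system in (a, b, c); det = 0 gives k
M = sp.Matrix([
    [12*k/7 - 1, 12*k/8,    12*k/6],
    [6*k/4,      6*k/5 - 1, 6*k/3],
    [2*k/4,      2*k/5,     2*k/3 - 1],
])
print(sp.solve(sp.Eq(M.det(), 0), k))  # expected: the two eigenvalues above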
Question 5 ) Find the resolvent kernel of $K(x, t) = (10 + x)(5 - t)$, where $a = -2$, $b = 1$.
Solution:
We are given the kernel:
𝑲(𝒙, 𝒕) = (𝟏𝟎 + 𝒙)(𝟓 − 𝒕)
This is a separable (or degenerate) kernel, because it can be written in the form:
𝑲(𝒙, 𝒕) = 𝒇(𝒙)𝒈(𝒕)
with:
 𝒇(𝒙) = (𝟏𝟎 + 𝒙)
 𝒈(𝒕) = (𝟓 − 𝒕)
Let the integral equation be of the form:
$$u(x) = f(x) + \lambda \int_a^b K(x, t) u(t) \, dt$$
Then the resolvent kernel 𝑹(𝒙, 𝒕; 𝝀) is defined so that:
$$u(x) = f(x) + \lambda \int_a^b R(x, t; \lambda) f(t) \, dt$$
To compute the resolvent kernel 𝑹(𝒙, 𝒕; 𝝀), we use the formula for separable kernels:

Resolvent Kernel Formula for Rank-1 Separable Kernel:


If:
$$K(x, t) = f(x)g(t)$$
Then the resolvent kernel is:
$$R(x, t; \lambda) = \frac{f(x)g(t)}{1 - \lambda\mu}$$
where:
$$\mu = \int_a^b f(s)g(s) \, ds$$

Step 1: Identify 𝒇(𝒙) and 𝒈(𝒕)


From the given:
 𝒇(𝒙) = 𝟏𝟎 + 𝒙
 𝒈(𝒕) = 𝟓 − 𝒕
And the interval is: 𝒂 = −𝟐, 𝒃 = 𝟏
Step 2: Compute $\mu = \int_{-2}^{1} f(s)g(s) \, ds$
$$\mu = \int_{-2}^{1} (10 + s)(5 - s) \, ds$$
First expand the integrand:
(𝟏𝟎 + 𝒔)(𝟓 − 𝒔) = 𝟓𝟎 − 𝟏𝟎𝒔 + 𝟓𝒔 − 𝒔𝟐 = 𝟓𝟎 − 𝟓𝒔 − 𝒔𝟐
Now integrate:
$$\mu = \int_{-2}^{1} \left(50 - 5s - s^2\right) ds = \left[50s - \frac{5}{2}s^2 - \frac{1}{3}s^3\right]_{-2}^{1}$$
Now evaluate at $s = 1$:
$$50(1) - \frac{5}{2}(1)^2 - \frac{1}{3}(1)^3 = 50 - \frac{5}{2} - \frac{1}{3} = \frac{300 - 15 - 2}{6} = \frac{283}{6}$$
At $s = -2$:
$$50(-2) - \frac{5}{2}(-2)^2 - \frac{1}{3}(-2)^3 = -100 - 10 + \frac{8}{3} = \frac{-330 + 8}{3} = \frac{-322}{3}$$
Now compute:
$$\mu = \frac{283}{6} - \left(\frac{-322}{3}\right) = \frac{283}{6} + \frac{322}{3} = \frac{283 + 644}{6} = \frac{927}{6}$$

Step 3: Final expression for the resolvent kernel


Now plug into the resolvent kernel formula:
$$R(x, t; \lambda) = \frac{(10 + x)(5 - t)}{1 - \lambda \cdot \frac{927}{6}} = \frac{(10 + x)(5 - t)}{1 - \frac{927\lambda}{6}}$$
Final Answer:
The resolvent kernel is:
$$R(x, t; \lambda) = \frac{(10 + x)(5 - t)}{1 - \frac{927\lambda}{6}}$$
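A one-line sanity check of $\mu$ with sympy (assumed available; note $\frac{927}{6}$ reduces to $\frac{309}{2}$):

import sympy as sp

s = sp.symbols('s')

# mu = integral of f(s)*g(s) over [-2, 1] for the rank-1 kernel above
mu = sp.integrate((10 + s)*(5 - s), (s, -2, 1))
print(mu)  # expected: 927/6, printed in reduced form 309/2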
Question 6 ) Show that $y(x) = 1 - x$ is a solution of
$$x = \int_0^x e^{x - t} \, y(t) \, dt$$
Solution:
We are given:
 $y(x) = 1 - x$
 Integral equation:
$$x = \int_0^x e^{x - t} \, y(t) \, dt$$
Our goal is to verify that substituting 𝒚(𝒕) = 𝟏 − 𝒕 into the right-hand side gives exactly 𝒙.

Step 1: Substitute 𝒚(𝒕) = 𝟏 − 𝒕 into the integral


$$\int_0^x e^{x - t} y(t) \, dt = \int_0^x e^{x - t} (1 - t) \, dt$$
Factor out $e^x$ since it's constant with respect to $t$:
$$= e^x \int_0^x e^{-t} (1 - t) \, dt$$
Now compute the integral:
$$I = \int_0^x e^{-t} (1 - t) \, dt = \int_0^x e^{-t} \, dt - \int_0^x t e^{-t} \, dt$$

Step 2: Compute the integrals


First term:
$$\int_0^x e^{-t} \, dt = \left[-e^{-t}\right]_0^x = 1 - e^{-x}$$
Second term: Use integration by parts for $\int_0^x t e^{-t} \, dt$
Let:
 $u = t \Rightarrow du = dt$
 $dv = e^{-t} dt \Rightarrow v = -e^{-t}$
$$\int_0^x t e^{-t} \, dt = \left[-t e^{-t}\right]_0^x + \int_0^x e^{-t} \, dt = -x e^{-x} + \left(1 - e^{-x}\right)$$
So,
$$I = \left(1 - e^{-x}\right) - \left[-x e^{-x} + \left(1 - e^{-x}\right)\right] = x e^{-x}$$

Step 3: Multiply back by $e^x$

$$e^x \int_0^x e^{-t} (1 - t) \, dt = e^x \cdot x e^{-x} = x$$

Conclusion:
We have shown:
$$\int_0^x e^{x - t} (1 - t) \, dt = x$$
Hence,
𝒚(𝒙) = 𝟏 − 𝒙
is indeed a solution of the given integral equation.
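The verification can also be done symbolically; a minimal sketch with sympy (assumed available):

import sympy as sp

x, t = sp.symbols('x t')

# Verify that y(t) = 1 - t satisfies x = integral_0^x e^(x - t) * y(t) dt
rhs = sp.integrate(sp.exp(x - t)*(1 - t), (t, 0, x))
print(sp.simplify(rhs))  # expected: x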
Question 7 ) Convert $y'' + \sin x \cdot y' + e^x y = x$, with initial conditions $y(0) = 1$, $y'(0) = -1$, into an integral equation.
Solution:
We are given a second-order linear differential equation:
$$y''(x) + \sin x \cdot y'(x) + e^x y(x) = x$$
with initial conditions:
$$y(0) = 1, \qquad y'(0) = -1$$
We want to convert this differential equation into an integral equation. To do that, we follow the method of
successive integration.

Step 1: Express $y''(x)$

Rewriting the differential equation:
$$y''(x) = x - \sin x \cdot y'(x) - e^x y(x)$$

Step 2: First integration


We integrate both sides of the above equation from 0 to $x$:
$$\int_0^x y''(t) \, dt = \int_0^x \left[t - \sin t \cdot y'(t) - e^t y(t)\right] dt$$
We know that:
$$\int_0^x y''(t) \, dt = y'(x) - y'(0)$$
Substitute $y'(0) = -1$:
$$y'(x) + 1 = \int_0^x \left[t - \sin t \cdot y'(t) - e^t y(t)\right] dt$$
Therefore:
$$y'(x) = -1 + \int_0^x \left[t - \sin t \cdot y'(t) - e^t y(t)\right] dt$$
𝟎
Step 3: Second integration
Now integrate both sides of the expression for $y'(x)$ from 0 to $x$:
$$\int_0^x y'(t) \, dt = \int_0^x \left[-1 + \int_0^t \big(s - \sin s \cdot y'(s) - e^s y(s)\big) \, ds\right] dt$$
The left-hand side becomes:
$$\int_0^x y'(t) \, dt = y(x) - y(0)$$
Substitute $y(0) = 1$:
$$y(x) - 1 = \int_0^x \left[-1 + \int_0^t \big(s - \sin s \cdot y'(s) - e^s y(s)\big) \, ds\right] dt$$
Break it into parts:
$$y(x) = 1 - \int_0^x dt + \int_0^x \int_0^t \big(s - \sin s \cdot y'(s) - e^s y(s)\big) \, ds \, dt$$
Now compute:
$$\int_0^x dt = x$$
So finally, the expression becomes:

Final Integral Equation


$$y(x) = 1 - x + \int_0^x \int_0^t \left[s - \sin s \cdot y'(s) - e^s y(s)\right] ds \, dt$$
This is the required Volterra integral equation of the second kind equivalent to the given differential equation.
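As a remark (not part of the original derivation), the double integral can be collapsed into a single integral using the repeated-integration identity proved in Question 10 below:
$$y(x) = 1 - x + \int_0^x (x - s)\left[s - \sin s \cdot y'(s) - e^s y(s)\right] ds$$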
Question 8 ) Find iterated kernels for 𝑲(𝒙, 𝒕) = 𝒆𝒙 𝐜𝐨𝐬𝒕, where 𝒂 = 𝟎, 𝒃 = 𝝅
Solution:
We are asked to find the iterated kernels of the kernel:
𝑲(𝒙, 𝒕) = 𝒆𝒙 𝐜𝐨𝐬𝒕
on the interval [𝟎, 𝝅]
This kernel is separable (or degenerate), since it can be written as a product of functions of 𝒙 and 𝒕:
𝑲(𝒙, 𝒕) = 𝒇(𝒙)𝒈(𝒕), where 𝒇(𝒙) = 𝒆𝒙 , 𝒈(𝒕) = 𝐜𝐨𝐬𝒕

Definition: Iterated Kernels


The iterated kernels $K_n(x, t)$ are defined recursively as:
 $K_1(x, t) = K(x, t)$
 $K_{n+1}(x, t) = \int_a^b K(x, s) K_n(s, t) \, ds$
We now compute the first few iterated kernels.

Step 1: First Iteration


We are given:
$$K_1(x, t) = K(x, t) = e^x \cos t$$
Step 2: Second Iteration $K_2(x, t)$
$$K_2(x, t) = \int_0^{\pi} K(x, s) K_1(s, t) \, ds = \int_0^{\pi} e^x \cos s \cdot e^s \cos t \, ds$$
Since $e^x$ and $\cos t$ are constants with respect to $s$, factor them out:
$$K_2(x, t) = e^x \cos t \int_0^{\pi} e^s \cos s \, ds$$
Let's denote:
$$C = \int_0^{\pi} e^s \cos s \, ds$$
Then:
$$K_2(x, t) = C \cdot e^x \cos t = C \cdot K(x, t)$$

Step 3: Third Iteration 𝑲𝟑 (𝒙, 𝒕)


$$K_3(x, t) = \int_0^{\pi} K(x, s) K_2(s, t) \, ds = \int_0^{\pi} e^x \cos s \cdot C e^s \cos t \, ds$$
Factor out constants:
$$K_3(x, t) = C e^x \cos t \int_0^{\pi} e^s \cos s \, ds = C^2 e^x \cos t = C^2 K(x, t)$$

Pattern Observed:
We observe the recursive structure:
 $K_1(x, t) = K(x, t)$
 $K_2(x, t) = C \cdot K(x, t)$
 $K_3(x, t) = C^2 \cdot K(x, t)$
 ...
 $K_n(x, t) = C^{n-1} \cdot K(x, t)$
Where:
$$C = \int_0^{\pi} e^s \cos s \, ds$$

Evaluate Constant 𝑪
We now compute:
$$C = \int_0^{\pi} e^s \cos s \, ds$$
Use integration by parts or the known standard result:
$$\int e^x \cos x \, dx = \frac{e^x}{2}(\sin x + \cos x)$$
So:
$$C = \left[\frac{e^s}{2}(\sin s + \cos s)\right]_0^{\pi} = \frac{e^{\pi}}{2}(0 - 1) - \frac{1}{2}(0 + 1) = -\frac{e^{\pi} + 1}{2}$$

Final Answer:
 The first iterated kernel is:
$$K_1(x, t) = e^x \cos t$$
 The second iterated kernel is:
$$K_2(x, t) = -\frac{e^{\pi} + 1}{2} \cdot e^x \cos t$$
 The third iterated kernel is:
$$K_3(x, t) = \left(-\frac{e^{\pi} + 1}{2}\right)^2 \cdot e^x \cos t$$
 In general, the n-th iterated kernel is:
$$K_n(x, t) = \left(-\frac{e^{\pi} + 1}{2}\right)^{n-1} \cdot e^x \cos t$$
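The constant $C$ linking successive iterates can be checked symbolically; a sketch with sympy (assumed available):

import sympy as sp

s = sp.symbols('s')

# The constant C with K_{n+1} = C * K_n for the kernel e^x * cos(t)
C = sp.integrate(sp.exp(s)*sp.cos(s), (s, 0, sp.pi))
print(sp.simplify(C))  # expected: -(exp(pi) + 1)/2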

Question 9 ) Solve and find the general solution of the following system of differential equations:
$$x_1' = x_1 + x_2$$
$$x_2' = 4x_1 + x_2$$

Step 1: Write the system in matrix form


Let:
$$\mathbf{X}(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} \quad \text{and} \quad \mathbf{X}'(t) = \begin{pmatrix} x_1'(t) \\ x_2'(t) \end{pmatrix}$$
Then the given system becomes:
$$\mathbf{X}'(t) = A\mathbf{X}(t) \quad \text{where} \quad A = \begin{pmatrix} 1 & 1 \\ 4 & 1 \end{pmatrix}$$
This is a linear homogeneous system of differential equations with constant coefficients.

Step 2: Find the eigenvalues of matrix 𝑨


To solve this system, we find the eigenvalues 𝝀 by solving the characteristic equation:
𝐝𝐞𝐭(𝑨 − 𝝀𝑰) = 𝟎
Compute:
$$A - \lambda I = \begin{pmatrix} 1 - \lambda & 1 \\ 4 & 1 - \lambda \end{pmatrix}$$
Now calculate the determinant:
$$\det(A - \lambda I) = (1 - \lambda)^2 - 4 = 0 \;\Rightarrow\; (1 - \lambda)^2 = 4$$
Take the square root of both sides:
$$1 - \lambda = \pm 2 \;\Rightarrow\; \lambda = 1 \mp 2 \;\Rightarrow\; \lambda_1 = -1, \quad \lambda_2 = 3$$
These are real and distinct eigenvalues, which means the solution will involve independent exponential functions.

Step 3: Find the eigenvectors


We find an eigenvector for each eigenvalue.

For $\lambda_1 = -1$:
Solve:
$$(A + I)\mathbf{v}_1 = 0 \;\Rightarrow\; \begin{pmatrix} 2 & 1 \\ 4 & 2 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
From the first row:
$$2v_1 + v_2 = 0 \;\Rightarrow\; v_2 = -2v_1$$
So the eigenvector is:
$$\mathbf{v}_1 = \begin{pmatrix} 1 \\ -2 \end{pmatrix} \quad \text{(up to scalar multiples)}$$
For $\lambda_2 = 3$:
Solve:
$$(A - 3I)\mathbf{v}_2 = 0 \;\Rightarrow\; \begin{pmatrix} -2 & 1 \\ 4 & -2 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$
From the first row:
$$-2v_1 + v_2 = 0 \;\Rightarrow\; v_2 = 2v_1$$
So the eigenvector is:
$$\mathbf{v}_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$$

Step 4: Form the general solution


Since 𝑨 has two linearly independent eigenvectors, the general solution to the system is a linear combination of the
exponential solutions formed from each eigenpair:
$$\mathbf{X}(t) = C_1 e^{\lambda_1 t} \mathbf{v}_1 + C_2 e^{\lambda_2 t} \mathbf{v}_2$$
Substitute eigenvalues and eigenvectors:
$$\mathbf{X}(t) = C_1 e^{-t} \begin{pmatrix} 1 \\ -2 \end{pmatrix} + C_2 e^{3t} \begin{pmatrix} 1 \\ 2 \end{pmatrix}$$

Step 5: Write component-wise expressions


Now break it into components:
First component $x_1(t)$:
$$x_1(t) = C_1 e^{-t} \cdot 1 + C_2 e^{3t} \cdot 1 = C_1 e^{-t} + C_2 e^{3t}$$
Second component $x_2(t)$:
$$x_2(t) = C_1 e^{-t} \cdot (-2) + C_2 e^{3t} \cdot 2 = -2C_1 e^{-t} + 2C_2 e^{3t}$$

Final Answer :
$$x_1(t) = C_1 e^{-t} + C_2 e^{3t}$$
$$x_2(t) = -2C_1 e^{-t} + 2C_2 e^{3t}$$
Where 𝑪𝟏 and 𝑪𝟐 are arbitrary constants determined by initial conditions if given.
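A numeric cross-check of the eigenpairs; a sketch assuming numpy is available:

import numpy as np

# Eigen-decomposition of the coefficient matrix A
A = np.array([[1.0, 1.0],
              [4.0, 1.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # expected: -1 and 3 (in some order)
print(eigenvectors)  # columns proportional to (1, -2) and (1, 2)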
Question 10 ) Prove that
$$\int_a^x y(t) \, dt^n = \int_a^x \frac{(x - t)^{n-1}}{(n - 1)!} \, y(t) \, dt$$

Understanding the Statement


The notation $\int_a^x y(t) \, dt^n$ refers to the n-th repeated (n-fold) integral of the function $y(t)$; the identity below is known as the Cauchy formula for repeated integration, which also underlies the definition of fractional integrals.
In general, the n-th repeated integral of $y(t)$ from $a$ to $x$ is defined as:
$$I_n(x) = \int_a^x \int_a^{t_n} \cdots \int_a^{t_2} y(t_1) \, dt_1 \, dt_2 \cdots dt_n \quad (n \text{ times})$$
We aim to prove:
$$\int_a^x y(t) \, dt^n = \int_a^x \frac{(x - t)^{n-1}}{(n - 1)!} \, y(t) \, dt$$

Proof (By Induction)


Let us prove the identity by mathematical induction on 𝒏.
Base Case: $n = 1$
We check:
$$\int_a^x y(t) \, dt^1 = \int_a^x y(t) \, dt$$
and:
$$\int_a^x \frac{(x - t)^{1-1}}{(1 - 1)!} \, y(t) \, dt = \int_a^x \frac{(x - t)^0}{0!} \, y(t) \, dt = \int_a^x y(t) \, dt$$
So the identity holds for $n = 1$.
Inductive Hypothesis
Assume the identity is true for $n = k$:
$$\int_a^x y(t) \, dt^k = \int_a^x \frac{(x - t)^{k-1}}{(k - 1)!} \, y(t) \, dt$$

To Prove for $n = k + 1$
Then:
$$\int_a^x y(t) \, dt^{k+1} = \int_a^x \left[\int_a^s y(t) \, dt^k\right] ds$$
Now apply the inductive hypothesis:
$$= \int_a^x \int_a^s \frac{(s - t)^{k-1}}{(k - 1)!} \, y(t) \, dt \, ds$$
We now interchange the order of integration: this is a standard technique when handling iterated integrals over a triangular region.
Let's write:
$$\int_a^x \int_a^s f(s, t) \, dt \, ds = \int_a^x \int_t^x f(s, t) \, ds \, dt$$
Apply it:
$$= \int_a^x \left[\int_t^x \frac{(s - t)^{k-1}}{(k - 1)!} \, ds\right] y(t) \, dt$$
Change variable inside: let $u = s - t$, so when $s = t$, $u = 0$, and when $s = x$, $u = x - t$.
Then:
$$\int_t^x \frac{(s - t)^{k-1}}{(k - 1)!} \, ds = \int_0^{x - t} \frac{u^{k-1}}{(k - 1)!} \, du = \frac{(x - t)^k}{k!}$$
Therefore:
$$\int_a^x y(t) \, dt^{k+1} = \int_a^x \frac{(x - t)^k}{k!} \, y(t) \, dt$$
This proves the formula holds for $n = k + 1$.

Conclusion:
By induction, we conclude that:
$$\int_a^x y(t) \, dt^n = \int_a^x \frac{(x - t)^{n-1}}{(n - 1)!} \, y(t) \, dt$$
This completes the proof.
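The identity is easy to spot-check for a concrete case; a sketch with sympy (assumed available), using $n = 2$, $a = 0$, and a test function of my own choice:

import sympy as sp

x, t, s = sp.symbols('x t s')
y = sp.sin(t)  # arbitrary test function

# n = 2: compare the double integral with the (x - t) kernel form
lhs = sp.integrate(sp.integrate(y, (t, 0, s)), (s, 0, x))
rhs = sp.integrate((x - t)*y, (t, 0, x))
print(sp.simplify(lhs - rhs))  # expected: 0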
Question 11) Solve $\frac{dy}{dx} = y^2$, with initial condition $y(1) = -1$
Step 1: Separate the variables
We are given a separable differential equation:
$$\frac{dy}{dx} = y^2$$
Rewriting:
$$\frac{1}{y^2} \, dy = dx$$

Step 2: Integrate both sides


Now integrate both sides:
$$\int \frac{1}{y^2} \, dy = \int dx$$
Recall:
$$\int y^{-2} \, dy = -y^{-1} + C = -\frac{1}{y} + C$$
So we get:
$$-\frac{1}{y} = x + C$$

Step 3: Solve for 𝒚


Multiply both sides by $-1$:
$$\frac{1}{y} = -x - C$$
Now take the reciprocal:
$$y = \frac{1}{-x - C}$$
Let $C' = -C$; then:
$$y = \frac{1}{C' - x}$$
Step 4: Apply initial condition
We are given:
$$y(1) = -1$$
Substitute into the solution:
$$-1 = \frac{1}{C' - 1} \;\Rightarrow\; C' - 1 = -1 \;\Rightarrow\; C' = 0$$
Final Answer:
$$y(x) = \frac{1}{-x} = -\frac{1}{x}$$
This is the solution of the differential equation $\frac{dy}{dx} = y^2$ with $y(1) = -1$.
Question 12) Find the resolvent kernel for $K(x, t) = e^{x + t}$, where $a = 0$, $b = 1$
Objective:
Given the kernel $K(x, t) = e^{x + t}$ and the interval $[a, b] = [0, 1]$, we are to find the resolvent kernel $R(x, t; \lambda)$.
The resolvent kernel is given by the Neumann series:
$$R(x, t; \lambda) = \sum_{n=1}^{\infty} \lambda^{n-1} K_n(x, t)$$
where $K_1(x, t) = K(x, t)$, and
$$K_{n+1}(x, t) = \int_0^1 K(x, s) K_n(s, t) \, ds$$
Step 1: First Iteration
$$K_1(x, t) = K(x, t) = e^{x + t}$$

Step 2: Second Iteration


$$K_2(x, t) = \int_0^1 K(x, s) K_1(s, t) \, ds = \int_0^1 e^{x + s} \cdot e^{s + t} \, ds = e^{x + t} \int_0^1 e^{2s} \, ds$$
Compute the integral:
$$\int_0^1 e^{2s} \, ds = \left[\frac{e^{2s}}{2}\right]_0^1 = \frac{1}{2}\left(e^2 - 1\right)$$
So:
$$K_2(x, t) = e^{x + t} \cdot \frac{1}{2}\left(e^2 - 1\right)$$
Step 3: Third Iteration
$$K_3(x, t) = \int_0^1 K(x, s) K_2(s, t) \, ds = \int_0^1 e^{x + s} \cdot e^{s + t} \cdot \frac{1}{2}\left(e^2 - 1\right) ds = \frac{1}{2}\left(e^2 - 1\right) e^{x + t} \int_0^1 e^{2s} \, ds$$
We already know:
$$\int_0^1 e^{2s} \, ds = \frac{1}{2}\left(e^2 - 1\right)$$
So:
$$K_3(x, t) = e^{x + t} \cdot \frac{1}{2}\left(e^2 - 1\right) \cdot \frac{1}{2}\left(e^2 - 1\right) = e^{x + t} \cdot \left[\frac{1}{2}\left(e^2 - 1\right)\right]^2$$

Step 4: General Pattern


We observe a pattern:
$$K_n(x, t) = e^{x + t} \cdot \left[\frac{1}{2}\left(e^2 - 1\right)\right]^{n-1}$$
Step 5: Resolvent Kernel (Neumann Series)
$$R(x, t; \lambda) = \sum_{n=1}^{\infty} \lambda^{n-1} K_n(x, t) = \sum_{n=1}^{\infty} \lambda^{n-1} e^{x + t} \left[\frac{1}{2}\left(e^2 - 1\right)\right]^{n-1}$$
Factor out $e^{x + t}$:
$$R(x, t; \lambda) = e^{x + t} \sum_{n=1}^{\infty} \left[\lambda \cdot \frac{1}{2}\left(e^2 - 1\right)\right]^{n-1}$$
This is a geometric series with ratio:
$$r = \lambda \cdot \frac{1}{2}\left(e^2 - 1\right)$$
So:
$$R(x, t; \lambda) = \frac{e^{x + t}}{1 - \lambda \cdot \frac{1}{2}\left(e^2 - 1\right)} \quad \text{for } |\lambda| < \frac{2}{e^2 - 1}$$
Final Answer:
The resolvent kernel is:
$$R(x, t; \lambda) = \frac{e^{x + t}}{1 - \frac{\lambda}{2}\left(e^2 - 1\right)}$$
for all values of $\lambda$ such that the series converges.
Question 13) Show that the resolvent kernel $R(x, t; \lambda)$ of
$$y(x) = f(x) + \lambda \int_a^b K(x, t) y(t) \, dt$$
satisfies the integral equation
$$R(x, t; \lambda) = K(x, t) + \lambda \int_a^b K(x, z) \, R(z, t; \lambda) \, dz$$
To Prove:
We want to show that the resolvent kernel $R(x, t; \lambda)$ satisfies:
$$R(x, t; \lambda) = K(x, t) + \lambda \int_a^b K(x, z) \, R(z, t; \lambda) \, dz$$
Step 1: Define the resolvent kernel via Neumann series
The resolvent kernel is defined as:
$$R(x, t; \lambda) = \sum_{n=1}^{\infty} \lambda^{n-1} K_n(x, t)$$
where:
 $K_1(x, t) = K(x, t)$
 $K_{n+1}(x, t) = \int_a^b K(x, z) \, K_n(z, t) \, dz$

Step 2: Write the series expansion of 𝑹(𝒙, 𝒕; 𝝀)


$$R(x, t; \lambda) = K(x, t) + \lambda K_2(x, t) + \lambda^2 K_3(x, t) + \lambda^3 K_4(x, t) + \cdots$$

Step 3: Write the integral expression from the right-hand side


Let us compute:
$$K(x, t) + \lambda \int_a^b K(x, z) \, R(z, t; \lambda) \, dz$$
Substitute the Neumann series for $R(z, t; \lambda)$:
$$= K(x, t) + \lambda \int_a^b K(x, z) \sum_{n=1}^{\infty} \lambda^{n-1} K_n(z, t) \, dz$$
Switch the summation and integration (justified since the series converges):
$$= K(x, t) + \sum_{n=1}^{\infty} \lambda^n \int_a^b K(x, z) K_n(z, t) \, dz$$
Now use the definition of iterated kernels:
$$\int_a^b K(x, z) \, K_n(z, t) \, dz = K_{n+1}(x, t)$$
So:
$$= K(x, t) + \lambda K_2(x, t) + \lambda^2 K_3(x, t) + \cdots = \sum_{n=1}^{\infty} \lambda^{n-1} K_n(x, t) = R(x, t; \lambda)$$

Conclusion:
Hence, the resolvent kernel satisfies the required integral equation:
$$R(x, t; \lambda) = K(x, t) + \lambda \int_a^b K(x, z) \, R(z, t; \lambda) \, dz$$
which completes the proof.
Ques 1: State and prove Kamke's Convergence Theorem
Statement of Kamke’s Convergence Theorem: Let {𝒇𝒏 (𝒙)} be a monotonic increasing sequence of continuous
functions defined on a closed interval [𝒂, 𝒃], and suppose that for each 𝒙 ∈ [𝒂, 𝒃], the sequence {𝒇𝒏 (𝒙)} converges to
a function 𝒇(𝒙). If the pointwise limit function 𝒇(𝒙) is also continuous on [𝒂, 𝒃], then the convergence of the sequence
{𝒇𝒏 (𝒙)} to 𝒇(𝒙) is uniform on [𝒂, 𝒃].
Explanation of Terms Used in the Theorem:
1. A sequence $\{f_n(x)\}$ is said to be monotonic increasing if for all $n$, $f_n(x) \le f_{n+1}(x)$ for all $x \in [a, b]$.
2. A sequence of functions $\{f_n(x)\}$ converges pointwise to $f(x)$ if for every $x \in [a, b]$, $\lim_{n \to \infty} f_n(x) = f(x)$.
3. The convergence is said to be uniform on $[a, b]$ if
$$\forall \varepsilon > 0, \; \exists N \in \mathbb{N} \text{ such that } \forall n \ge N, \; \sup_{x \in [a, b]} |f_n(x) - f(x)| < \varepsilon$$
That is, after a certain index $N$, the difference between $f_n(x)$ and $f(x)$ is uniformly small throughout the entire interval $[a, b]$.
Proof:
We are given the following:
 The sequence {𝒇𝒏 (𝒙)} is composed of continuous functions on the closed and bounded interval [𝒂, 𝒃], hence
each 𝒇𝒏 is uniformly continuous and bounded on [𝒂, 𝒃] due to the Heine–Cantor theorem.
 The sequence is monotonically increasing, i.e., $f_n(x) \le f_{n+1}(x)$ for all $x \in [a, b]$.
 The sequence converges pointwise to a function 𝒇(𝒙).
 The limit function 𝒇(𝒙) is continuous on [𝒂, 𝒃].
Our goal is to prove that {𝒇𝒏 (𝒙)} converges to 𝒇(𝒙) uniformly on [𝒂, 𝒃].
To proceed, define a new sequence of functions:
$$g_n(x) = f(x) - f_n(x)$$
Then each $g_n(x)$ is continuous on $[a, b]$ because both $f(x)$ and $f_n(x)$ are continuous. Also, since $\{f_n(x)\}$ is increasing and converging pointwise to $f(x)$, it follows that $\{g_n(x)\}$ is a monotonically decreasing sequence of non-negative continuous functions:
$$g_n(x) \ge g_{n+1}(x) \ge 0 \quad \text{for all } x \in [a, b]$$
and
$$\lim_{n \to \infty} g_n(x) = f(x) - \lim_{n \to \infty} f_n(x) = 0$$
So, we have a sequence $\{g_n(x)\}$ of non-negative, continuous, monotonically decreasing functions converging pointwise to the zero function.
Now we invoke Dini's Theorem, which states:
If {𝒈𝒏 (𝒙)} is a sequence of continuous functions on a compact interval [𝒂, 𝒃] such that:
 𝒈𝒏 (𝒙) converges pointwise to a continuous function 𝒈(𝒙),
 the sequence {𝒈𝒏 (𝒙)} is monotonic (increasing or decreasing), then the convergence is uniform on [𝒂, 𝒃].
Here, the interval [𝒂, 𝒃] is compact, 𝒈𝒏 (𝒙) is continuous, decreasing, and converging pointwise to 𝒈(𝒙) = 𝟎, which is
also continuous. Hence, by Dini’s Theorem, 𝒈𝒏 (𝒙) → 𝟎 uniformly on [𝒂, 𝒃].
This means that for any 𝜺 > 𝟎, there exists a natural number 𝑵 such that for all 𝒏 ≥ 𝑵 and for all 𝒙 ∈ [𝒂, 𝒃],
|𝒈𝒏 (𝒙) − 𝟎| = |𝒇(𝒙) − 𝒇𝒏 (𝒙)| < 𝜺
This is precisely the definition of uniform convergence of the sequence {𝒇𝒏 (𝒙)} to 𝒇(𝒙).
Therefore, the sequence {𝒇𝒏 (𝒙)} converges uniformly to 𝒇(𝒙) on [𝒂, 𝒃].
Hence, the Kamke’s Convergence Theorem is proved.
Ques 2: Discuss Nagumo's and Osgood's Criteria
Nagumo’s Criterion:
Nagumo’s criterion provides a condition under which the initial value problem (IVP) for a first-order differential
equation has a unique solution.
Consider the differential equation
$$\frac{dy}{dx} = f(x, y), \qquad y(x_0) = y_0$$
Nagumo's Criterion states: Let 𝒇(𝒙, 𝒚) be continuous in a region 𝑹 containing the point (𝒙𝟎 , 𝒚𝟎 ). Suppose there exists
a function 𝝓: [𝟎, 𝒂) → [𝟎, ∞) such that:
1. $\phi$ is continuous and non-negative,
2. $\int_0^{\delta} \frac{1}{\phi(s)} \, ds = \infty$ for some $\delta > 0$ (this is the divergence condition),
3. and for all $x$ near $x_0$ and all $y_1, y_2$ near $y_0$, with $y_1 \ne y_2$,
$$\left|\frac{f(x, y_1) - f(x, y_2)}{y_1 - y_2}\right| \le \phi(|y_1 - y_2|)$$
then the IVP has a unique solution in some neighborhood of 𝒙𝟎 .
Significance: Nagumo’s condition weakens the Lipschitz condition (used in Picard–Lindelöf theorem) by replacing
the constant Lipschitz bound with a function 𝝓, but still ensures uniqueness of the solution. It's useful when the
function 𝒇 is not Lipschitz but still behaves “mildly” enough for uniqueness to hold.

Osgood’s Criterion:
Osgood’s criterion also provides a uniqueness condition for the solution of the initial value problem, similar to
Nagumo’s criterion but formulated differently.
Consider the IVP:
$$\frac{dy}{dx} = f(x, y), \qquad y(x_0) = y_0$$
Osgood's Criterion states: Let 𝒇(𝒙, 𝒚) be continuous in a neighborhood of (𝒙𝟎 , 𝒚𝟎 ). Suppose there exists a non-
decreasing, continuous function 𝝓: [𝟎, 𝒂) → [𝟎, ∞) with 𝝓(𝟎) = 𝟎, and:
1. For all 𝒙 near 𝒙𝟎 and all 𝒚𝟏 , 𝒚𝟐 near 𝒚𝟎 ,
|𝒇(𝒙, 𝒚𝟏 ) − 𝒇(𝒙, 𝒚𝟐 )| ≤ 𝝓(|𝒚𝟏 − 𝒚𝟐 |)
2. The integral
$$\int_0^{\delta} \frac{1}{\phi(s)} \, ds = \infty \quad \text{for some } \delta > 0$$
Then the IVP has a unique solution.
Key Points and Comparison:
 Both Nagumo’s and Osgood’s criteria are generalizations of the Lipschitz condition, used when 𝒇(𝒙, 𝒚) may
not satisfy the strict Lipschitz condition.
 Osgood's condition gives an upper bound on the difference $|f(x, y_1) - f(x, y_2)|$, while Nagumo's gives an upper bound on the quotient of that difference over $|y_1 - y_2|$.
 The divergence of the integral $\int \frac{1}{\phi(s)} \, ds$ near zero ensures that small differences in the function's output do not accumulate rapidly enough to allow multiple solutions.
 If $\phi(s) = Ls$, a linear function, then both reduce to the standard Lipschitz condition, and the divergence integral becomes $\int \frac{1}{Ls} \, ds = \infty$, satisfying both criteria.
Ques 3: Explain Fredholm Integral Equation with Separable Kernel
A Fredholm integral equation is a type of integral equation in which the unknown function appears under the
integral sign. The general form of the Fredholm integral equation of the second kind is:
$$f(x) = \phi(x) + \lambda \int_a^b K(x, t) f(t) \, dt$$
Where:
 𝒇(𝒙) is the unknown function to be determined
 𝝓(𝒙) is a known function (called the inhomogeneous term)
 𝝀 is a parameter (often a constant)
 𝑲(𝒙, 𝒕) is a given function called the kernel
 The integration is over a finite interval [𝒂, 𝒃]

Separable Kernel:
A separable kernel (also called a degenerate kernel) is a kernel 𝑲(𝒙, 𝒕) that can be written as a finite sum of products
of functions of 𝒙 and functions of 𝒕. That is,
$$K(x, t) = \sum_{i=1}^{n} u_i(x) v_i(t)$$
where $u_i(x)$ and $v_i(t)$ are known functions (continuous or piecewise continuous), and $n$ is a finite integer.
This type of kernel simplifies the integral equation significantly, reducing the problem from solving an integral
equation to solving a system of linear equations.

Fredholm Equation with Separable Kernel:


Substituting the separable form of the kernel into the Fredholm equation of the second kind:
$$f(x) = \phi(x) + \lambda \int_a^b \left[\sum_{i=1}^{n} u_i(x) v_i(t)\right] f(t) \, dt$$
Interchanging the sum and the integral:
$$f(x) = \phi(x) + \lambda \sum_{i=1}^{n} u_i(x) \int_a^b v_i(t) f(t) \, dt$$
Define the constants:
$$c_i = \int_a^b v_i(t) f(t) \, dt \quad \text{for } i = 1, 2, \ldots, n$$
Then the equation becomes:
$$f(x) = \phi(x) + \lambda \sum_{i=1}^{n} c_i u_i(x)$$
This expression gives $f(x)$ in terms of unknown constants $c_i$. To find these constants, substitute this expression for $f(x)$ into the definition of $c_i$:
$$c_j = \int_a^b v_j(t) \left[\phi(t) + \lambda \sum_{i=1}^{n} c_i u_i(t)\right] dt$$
$$c_j = \int_a^b v_j(t) \phi(t) \, dt + \lambda \sum_{i=1}^{n} c_i \int_a^b v_j(t) u_i(t) \, dt$$
Let:
 $A_j = \int_a^b v_j(t) \phi(t) \, dt$
 $B_{ji} = \int_a^b v_j(t) u_i(t) \, dt$
Then the system becomes:
$$c_j = A_j + \lambda \sum_{i=1}^{n} B_{ji} c_i \quad \text{for } j = 1, 2, \ldots, n$$
This is a system of $n$ linear equations in $n$ unknowns $c_1, c_2, \ldots, c_n$, which can be solved using matrix methods.
Once the constants $c_i$ are found, the solution to the original integral equation is:
$$f(x) = \phi(x) + \lambda \sum_{i=1}^{n} c_i u_i(x)$$

Advantages of Separable Kernels:


 Greatly simplifies the process of solving integral equations.
 Reduces the problem to algebraic computation instead of dealing with functional calculus.
 Offers analytical insight into properties like eigenvalues and solvability.
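A minimal sketch of the reduction to a linear system, on a toy example of my own choice (not from the notes): $K(x, t) = xt$ on $[0, 1]$ with $\phi(x) = x$, so $u_1(x) = x$, $v_1(t) = t$ and the system is 1x1. Assumes sympy is available:

import sympy as sp

x, t, lam, c = sp.symbols('x t lam c')

A1 = sp.integrate(t * t, (t, 0, 1))            # A_1 = integral of v_1(t)*phi(t) = 1/3
B11 = sp.integrate(t * t, (t, 0, 1))           # B_11 = integral of v_1(t)*u_1(t) = 1/3
c1 = sp.solve(sp.Eq(c, A1 + lam*B11*c), c)[0]  # c_1 = 1/(3 - lam)

f = x + lam*c1*x                               # f(x) = phi(x) + lam * c_1 * u_1(x)
print(sp.simplify(f.subs(lam, 1)))             # expected: 3*x/2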

Ques 4: Find the third approximation of the solution of


$$\frac{dy}{dx} = 2x + y, \quad \text{where } y = 5 \text{ when } x = 4$$
We will solve this using Picard’s method of successive approximations, which is an iterative method to approximate
the solution of a first-order differential equation.
Let us define the initial value problem as:
$$\frac{dy}{dx} = f(x, y) = 2x + y, \qquad y(4) = 5$$
The general formula in Picard's method is:
$$y_{n+1}(x) = y_0 + \int_{x_0}^{x} f(t, y_n(t)) \, dt$$
with $y_0 = 5$, $x_0 = 4$.

First Approximation 𝒚𝟎 (𝒙): Take the constant function:


$$y_0(x) = 5$$
Second Approximation $y_1(x)$:
$$y_1(x) = 5 + \int_4^x \left[2t + y_0(t)\right] dt = 5 + \int_4^x (2t + 5) \, dt$$
$$= 5 + \left[t^2 + 5t\right]_4^x = 5 + (x^2 + 5x - 16 - 20) = 5 + x^2 + 5x - 36$$
$$y_1(x) = x^2 + 5x - 31$$
Third Approximation $y_2(x)$:
$$y_2(x) = 5 + \int_4^x \left[2t + y_1(t)\right] dt$$
$$= 5 + \int_4^x \left(2t + t^2 + 5t - 31\right) dt = 5 + \int_4^x \left(t^2 + 7t - 31\right) dt$$
$$= 5 + \left[\frac{t^3}{3} + \frac{7t^2}{2} - 31t\right]_4^x$$
Now compute the definite integral:
$$= 5 + \frac{x^3}{3} + \frac{7x^2}{2} - 31x - \left(\frac{64}{3} + \frac{7 \cdot 16}{2} - 124\right)$$
$$= 5 + \frac{x^3}{3} + \frac{7x^2}{2} - 31x - \frac{64}{3} - 56 + 124 = 5 + \frac{x^3}{3} + \frac{7x^2}{2} - 31x - \frac{64}{3} + 68$$
$$= 5 + \frac{x^3}{3} + \frac{7x^2}{2} - 31x + \frac{204 - 64}{3} = 5 + \frac{x^3}{3} + \frac{7x^2}{2} - 31x + \frac{140}{3}$$
So the third approximation is:
$$y_2(x) = \frac{x^3}{3} + \frac{7x^2}{2} - 31x + 5 + \frac{140}{3} = \frac{x^3}{3} + \frac{7x^2}{2} - 31x + \frac{155}{3}$$
Therefore, the third approximation is:
$$y_2(x) = \frac{x^3}{3} + \frac{7x^2}{2} - 31x + \frac{155}{3}$$
Ques 5: Solve the Volterra integral equation
$$u(x) = \cos x + \int_0^x \sin(x - t) \, u(t) \, dt$$
This is a Volterra integral equation of the second kind, with kernel 𝑲(𝒙, 𝒕) = 𝐬𝐢𝐧(𝒙 − 𝒕) and inhomogeneous term
𝐜𝐨𝐬𝒙. We solve this using the method of successive approximations (also called the iterative method or Picard
iteration).
Let us define the sequence {𝒖𝒏 (𝒙)} as follows:
 Initial approximation:
𝒖𝟎 (𝒙) = 𝐜𝐨𝐬𝒙
 Recursive formula:
$$u_{n+1}(x) = \cos x + \int_0^x \sin(x - t) \, u_n(t) \, dt$$
Step 1: First approximation 𝒖𝟏 (𝒙):
$$u_1(x) = \cos x + \int_0^x \sin(x - t) \, u_0(t) \, dt = \cos x + \int_0^x \sin(x - t) \cos t \, dt$$
Now evaluate:
$$I_1 = \int_0^x \sin(x - t) \cos t \, dt$$
Use substitution 𝒔 = 𝒙 − 𝒕 ⇒ 𝒅𝒔 = −𝒅𝒕, when 𝒕 = 𝟎 ⇒ 𝒔 = 𝒙, and 𝒕 = 𝒙 ⇒ 𝒔 = 𝟎
$$I_1 = \int_0^x \sin(x - t) \cos t \, dt = \int_0^x \sin s \, \cos(x - s) \, ds$$
Now use the identity:
$$\sin s \cos(x - s) = \frac{1}{2}\left[\sin x + \sin(2s - x)\right]$$
So:
$$I_1 = \int_0^x \sin s \, \cos(x - s) \, ds = \frac{1}{2} \int_0^x \left[\sin x + \sin(2s - x)\right] ds$$
$$= \frac{1}{2}\left[\sin x \cdot x + \int_0^x \sin(2s - x) \, ds\right]$$
Let us now evaluate:
$$\int_0^x \sin(2s - x) \, ds$$
Use substitution $u = 2s - x$, then $du = 2\,ds$; when $s = 0$, $u = -x$, and when $s = x$, $u = x$:
$$\int_0^x \sin(2s - x) \, ds = \frac{1}{2} \int_{-x}^{x} \sin u \, du = \frac{1}{2}\left[-\cos u\right]_{-x}^{x} = \frac{1}{2}\left[-\cos x + \cos(-x)\right] = 0$$
So the integral becomes:
$$I_1 = \frac{1}{2} \cdot \sin x \cdot x = \frac{x}{2} \sin x$$
Thus:
$$u_1(x) = \cos x + \frac{x}{2} \sin x$$

Step 2: Second approximation 𝒖𝟐 (𝒙):


$$u_2(x) = \cos x + \int_0^x \sin(x - t) \, u_1(t) \, dt = \cos x + \int_0^x \sin(x - t) \left[\cos t + \frac{t}{2} \sin t\right] dt$$
This integral can be split:
$$u_2(x) = \cos x + \int_0^x \sin(x - t) \cos t \, dt + \frac{1}{2} \int_0^x t \sin(x - t) \sin t \, dt$$
We already computed the first integral in Step 1 as:
$$\int_0^x \sin(x - t) \cos t \, dt = \frac{x}{2} \sin x$$
Now compute the second integral: let
$$I_2 = \frac{1}{2} \int_0^x t \sin(x - t) \sin t \, dt$$
This integral is elementary but tedious to evaluate (expand with a product-to-sum identity), so we leave the approximation expressed in terms of it:
$$u_2(x) = \cos x + \frac{x}{2} \sin x + \frac{1}{2} \int_0^x t \sin(x - t) \sin t \, dt$$

Conclusion:
 Zeroth approximation: $u_0(x) = \cos x$
 First approximation: $u_1(x) = \cos x + \frac{x}{2} \sin x$
 Second approximation:
$$u_2(x) = \cos x + \frac{x}{2} \sin x + \frac{1}{2} \int_0^x t \sin(x - t) \sin t \, dt$$
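The iteration can be carried further symbolically; a sketch with sympy (assumed available):

import sympy as sp

x, t = sp.symbols('x t')

# Iterate u_{n+1}(x) = cos x + integral_0^x sin(x - t) u_n(t) dt
u = sp.cos(x)
for n in range(3):
    u = sp.cos(x) + sp.integrate(sp.sin(x - t)*u.subs(x, t), (t, 0, x))
    print(f"u_{n+1} =", sp.simplify(u))

# One can also check directly that u(x) = 1 satisfies the equation:
# cos x + integral_0^x sin(x - t) dt = cos x + (1 - cos x) = 1.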
Ques 6: Find the resolvent kernel for $K(x, t) = e^{x + t}$, on the interval $[0, 1]$
We are given the kernel:
$$K(x, t) = e^{x + t}, \quad \text{for } 0 \le t \le x \le 1$$
We are to find the resolvent kernel $R(x, t; \lambda)$, which satisfies the integral equation:
$$R(x, t; \lambda) = K(x, t) + \lambda \int_t^x K(x, s) R(s, t; \lambda) \, ds$$
The general method to compute the resolvent kernel is to use the Neumann series expansion:
$$R(x, t; \lambda) = \sum_{n=1}^{\infty} \lambda^{n-1} K_n(x, t)$$
Where:
 $K_1(x, t) = K(x, t)$
 $K_{n+1}(x, t) = \int_t^x K(x, s) K_n(s, t) \, ds$

Step 1: First iterate


$$K_1(x, t) = e^{x + t}$$

Step 2: Second iterate


$$K_2(x, t) = \int_t^x K(x, s) K_1(s, t) \, ds = \int_t^x e^{x + s} \cdot e^{s + t} \, ds = e^{x + t} \int_t^x e^{2s} \, ds$$
$$= e^{x + t} \left[\frac{1}{2} e^{2s}\right]_t^x = e^{x + t} \cdot \frac{1}{2}\left(e^{2x} - e^{2t}\right)$$
So,
$$K_2(x, t) = \frac{1}{2} e^{x + t} \left(e^{2x} - e^{2t}\right)$$

Step 3: Third iterate


$$K_3(x, t) = \int_t^x K(x, s) K_2(s, t) \, ds = \int_t^x e^{x + s} \cdot \frac{1}{2} e^{s + t}\left(e^{2s} - e^{2t}\right) ds$$
$$= \frac{1}{2} e^{x + t} \int_t^x e^{2s}\left(e^{2s} - e^{2t}\right) ds = \frac{1}{2} e^{x + t} \left[\int_t^x e^{4s} \, ds - e^{2t} \int_t^x e^{2s} \, ds\right]$$
Evaluate both integrals:
$$\int_t^x e^{4s} \, ds = \frac{1}{4}\left(e^{4x} - e^{4t}\right), \qquad \int_t^x e^{2s} \, ds = \frac{1}{2}\left(e^{2x} - e^{2t}\right)$$
So,
$$K_3(x, t) = \frac{1}{2} e^{x + t} \left[\frac{1}{4}\left(e^{4x} - e^{4t}\right) - e^{2t} \cdot \frac{1}{2}\left(e^{2x} - e^{2t}\right)\right]$$

Now the resolvent kernel is the Neumann series:


$$R(x, t; \lambda) = K_1(x, t) + \lambda K_2(x, t) + \lambda^2 K_3(x, t) + \cdots$$
Thus:
$$R(x, t; \lambda) = e^{x + t} + \lambda \cdot \frac{1}{2} e^{x + t}\left(e^{2x} - e^{2t}\right) + \lambda^2 \cdot \left[\text{expression for } K_3(x, t)\right] + \cdots$$
So you can factor out $e^{x + t}$:
$$R(x, t; \lambda) = e^{x + t} \left[1 + \lambda \cdot \frac{1}{2}\left(e^{2x} - e^{2t}\right) + \lambda^2 \cdot (\text{expression}) + \cdots\right]$$
This series continues, and convergence is guaranteed for sufficiently small $|\lambda|$. In most practical contexts, 1–3 terms are used for approximation.

Conclusion:
The resolvent kernel for $K(x, t) = e^{x + t}$ is:
$$R(x, t; \lambda) = \sum_{n=1}^{\infty} \lambda^{n-1} K_n(x, t)$$
where each $K_n(x, t)$ is generated recursively. The first two terms give the approximate resolvent:
$$R(x, t; \lambda) \approx e^{x + t} + \lambda \cdot \frac{1}{2} e^{x + t}\left(e^{2x} - e^{2t}\right)$$
Ques 7: State and prove Gronwall’s Inequality
Statement of Gronwall’s Inequality (Integral Form): Let 𝒖(𝒙) and 𝒈(𝒙) be continuous, real-valued, non-negative
functions on the interval [𝒂, 𝒃], and let 𝝀 ≥ 𝟎 be a constant. Suppose that for all 𝒙 ∈ [𝒂, 𝒃], the function 𝒖(𝒙) satisfies
the inequality:
$$u(x) \le g(x) + \lambda \int_a^x u(t) \, dt$$
Then Gronwall's inequality asserts that:
$$u(x) \le g(x) + \lambda \int_a^x g(t) \, e^{\lambda(x - t)} \, dt$$
Moreover, if $g(x)$ is a constant function $g(x) = C$, then the inequality simplifies to:
$$u(x) \le C e^{\lambda(x - a)}$$

Proof of Gronwall’s Inequality:


We are given:
$$u(x) \le g(x) + \lambda \int_a^x u(t) \, dt$$
Our goal is to find an upper bound for $u(x)$.
Define a new function:
$$w(x) = \int_a^x u(t) \, dt \;\Rightarrow\; w'(x) = u(x)$$
𝒂
From the given inequality:
$$u(x) \le g(x) + \lambda w(x) \;\Rightarrow\; w'(x) \le g(x) + \lambda w(x)$$
Now consider the differential inequality:
$$w'(x) - \lambda w(x) \le g(x)$$
This is a linear differential inequality. Multiply both sides by the integrating factor $e^{-\lambda x}$:
$$e^{-\lambda x} w'(x) - \lambda e^{-\lambda x} w(x) \le e^{-\lambda x} g(x) \;\Rightarrow\; \frac{d}{dx}\left[e^{-\lambda x} w(x)\right] \le e^{-\lambda x} g(x)$$
Integrate both sides from $a$ to $x$:
$$\int_a^x \frac{d}{dt}\left[e^{-\lambda t} w(t)\right] dt \le \int_a^x e^{-\lambda t} g(t) \, dt \;\Rightarrow\; e^{-\lambda x} w(x) - e^{-\lambda a} w(a) \le \int_a^x e^{-\lambda t} g(t) \, dt$$
Since $w(a) = 0$, we get:
$$e^{-\lambda x} w(x) \le \int_a^x e^{-\lambda t} g(t) \, dt \;\Rightarrow\; w(x) \le e^{\lambda x} \int_a^x e^{-\lambda t} g(t) \, dt$$
Substitute this into the original inequality for $u(x)$:
$$u(x) \le g(x) + \lambda w(x) \le g(x) + \lambda e^{\lambda x} \int_a^x e^{-\lambda t} g(t) \, dt$$
Now rearrange:
$$u(x) \le g(x) + \lambda \int_a^x g(t) \, e^{\lambda(x - t)} \, dt$$
Hence proved.

Conclusion:
Gronwall’s inequality is a powerful tool in differential and integral equations. It allows us to bound a function 𝒖(𝒙)
that is controlled by an integral involving itself. It is especially important in proving uniqueness and stability results
for solutions of differential equations.
Ques 8: State and prove Picard–Lindelöf Theorem
Statement of the Picard–Lindelöf Theorem (Existence and Uniqueness Theorem):
Let the function 𝒇(𝒙, 𝒚) be defined on a rectangle 𝑹 = {(𝒙, 𝒚) ∈ ℝ𝟐 : |𝒙 − 𝒙𝟎 | ≤ 𝒂, |𝒚 − 𝒚𝟎 | ≤ 𝒃}. Suppose that:
1. 𝒇(𝒙, 𝒚) is continuous in 𝑹,
2. 𝒇(𝒙, 𝒚) satisfies a Lipschitz condition in 𝒚, i.e., there exists a constant 𝑳 > 𝟎 such that for all
(𝒙, 𝒚𝟏 ), (𝒙, 𝒚𝟐 ) ∈ 𝑹,
|𝒇(𝒙, 𝒚𝟏 ) − 𝒇(𝒙, 𝒚𝟐 )| ≤ 𝑳|𝒚𝟏 − 𝒚𝟐 |
Then, there exists a unique function 𝒚(𝒙), defined in some interval |𝒙 − 𝒙𝟎 | ≤ 𝒉 ≤ 𝒂, which satisfies the initial value
problem:
$$\frac{dy}{dx} = f(x, y), \qquad y(x_0) = y_0$$

Explanation of the Conditions:


 Continuity of 𝒇(𝒙, 𝒚) ensures that the right-hand side of the differential equation behaves well locally.
 The Lipschitz condition guarantees that 𝒇 does not change too rapidly in 𝒚, which is crucial for uniqueness.

Proof of Picard–Lindelöf Theorem (using Picard’s iteration method):


We construct successive approximations to the solution using Picard's method.
Start with the integral form of the differential equation:
$$y(x) = y_0 + \int_{x_0}^{x} f(t, y(t)) \, dt$$
Step 1: Initial Approximation
Let:
𝒚𝟎 (𝒙) = 𝒚𝟎
Step 2: Define the sequence of approximations recursively:
$$y_{n+1}(x) = y_0 + \int_{x_0}^{x} f(t, y_n(t)) \, dt, \qquad n = 0, 1, 2, \ldots$$
We now prove that the sequence {𝒚𝒏 (𝒙)} converges uniformly to a function 𝒚(𝒙) on some interval [𝒙𝟎 − 𝒉, 𝒙𝟎 + 𝒉].
Let 𝑴 = 𝐬𝐮𝐩(𝒙,𝒚)∈𝑹 |𝒇(𝒙, 𝒚)|, and assume this is finite (since 𝒇 is continuous on the compact rectangle 𝑹).
We choose 𝒉 > 𝟎 small enough so that:
$$h \le \min\left(a, \frac{b}{M}\right)$$
This ensures that the approximations remain within the rectangle 𝑹.
Now we estimate the difference between successive approximations using the Lipschitz condition:
$$|y_{n+1}(x) - y_n(x)| = \left|\int_{x_0}^{x} \left[f(t, y_n(t)) - f(t, y_{n-1}(t))\right] dt\right| \le L \left|\int_{x_0}^{x} |y_n(t) - y_{n-1}(t)| \, dt\right|$$
Define:
$$E_n = \sup_{x \in [x_0 - h, \, x_0 + h]} |y_n(x) - y_{n-1}(x)|$$
Then:
$$E_{n+1} \le L \left|\int_{x_0}^{x} E_n \, dt\right| \le L h E_n$$
Iterating this recursive inequality (a finer step-by-step estimate) gives:
$$E_{n+1} \le \frac{(Lh)^n}{n!} E_1$$
By induction and using the fact that $\sum_{n \ge 1} \frac{(Lh)^n}{n!}$ converges (as an exponential series), we conclude that the series $\sum_{n \ge 1} E_n$ converges. Therefore, $\{y_n(x)\}$ is a uniformly convergent sequence of continuous functions.
Let:
$$y(x) = \lim_{n \to \infty} y_n(x)$$
Then, passing the limit inside the integral (using the dominated convergence theorem), we find that $y(x)$ satisfies:
$$y(x) = y_0 + \int_{x_0}^{x} f(t, y(t)) \, dt$$
So 𝒚(𝒙) is a solution of the initial value problem.

Uniqueness:
Suppose there exist two solutions $y(x)$ and $\tilde{y}(x)$ satisfying the same IVP. Then:
$$|y(x) - \tilde{y}(x)| = \left|\int_{x_0}^{x} \left[f(t, y(t)) - f(t, \tilde{y}(t))\right] dt\right| \le L \left|\int_{x_0}^{x} |y(t) - \tilde{y}(t)| \, dt\right|$$
By Gronwall's inequality, we conclude:
$$|y(x) - \tilde{y}(x)| \le 0 \;\Rightarrow\; y(x) = \tilde{y}(x)$$
So the solution is unique.

Conclusion:
The Picard–Lindelöf Theorem guarantees both existence and uniqueness of the solution to an initial value problem
under two key conditions:
 Continuity of the function 𝒇(𝒙, 𝒚)
 Satisfaction of a Lipschitz condition in 𝒚
This theorem forms the foundation of the theory of ordinary differential equations and provides a method (Picard
iteration) to actually approximate the solution.
Ques 9: State and prove Peano’s Theorem
Statement of Peano’s Existence Theorem:
Let the function 𝒇(𝒙, 𝒚) be continuous in a rectangle 𝑹 = {(𝒙, 𝒚) ∈ ℝ𝟐 : |𝒙 − 𝒙𝟎 | ≤ 𝒂, |𝒚 − 𝒚𝟎 | ≤ 𝒃} containing the
point (𝒙𝟎 , 𝒚𝟎 ). Then, the initial value problem (IVP):
$$\frac{dy}{dx} = f(x, y), \qquad y(x_0) = y_0$$
has at least one solution in some interval $|x - x_0| \le h \le a$.

Key Features of Peano’s Theorem:


 Guarantees existence but not uniqueness of solutions.
 Requires continuity of 𝒇(𝒙, 𝒚), but not the Lipschitz condition.
 Contrasts with Picard–Lindelöf Theorem, which gives both existence and uniqueness but needs stronger
assumptions (Lipschitz condition).

Proof of Peano’s Theorem:


We aim to construct a solution by approximating it with piecewise linear functions.
Let us define a mesh over the interval $[x_0, x_0 + h]$, where $h > 0$ will be chosen later.
Divide the interval into $n$ equal subintervals of length $\delta = \frac{h}{n}$, and define a sequence of approximate solutions $\{y_n(x)\}$ using the Euler polygonal method (also called the Euler–Peano method):
Step 1: Construct Approximate Functions
Define:
$$y_n(x_0) = y_0$$
For $x \in [x_0 + k\delta, \, x_0 + (k + 1)\delta]$, define:
$$y_n(x) = y_n(x_0 + k\delta) + f\big(x_0 + k\delta, \, y_n(x_0 + k\delta)\big) \cdot (x - x_0 - k\delta)$$
In other words, we approximate the solution as a polygonal line with slope given by the value of 𝒇 at the start of each
subinterval.
Because $f$ is continuous and defined on a compact rectangle, it is bounded. That is, there exists $M > 0$ such that:
$$|f(x, y)| \le M \quad \text{for all } (x, y) \in R$$
Choose $h$ small enough such that:
$$h < \frac{b}{M} \;\Rightarrow\; |y_n(x) - y_0| < b$$
So the approximations stay within the bounds of the rectangle $R$, and $y_n(x)$ is well-defined and continuous.

Step 2: Use Arzelà–Ascoli Theorem


Since the approximations 𝒚𝒏 (𝒙) are equicontinuous and uniformly bounded (as slopes are bounded by 𝑴), by the
Arzelà–Ascoli theorem, there exists a uniformly convergent subsequence of {𝒚𝒏 (𝒙)} that converges to a continuous
function 𝒚(𝒙).
Let $y(x) = \lim_{k \to \infty} y_{n_k}(x)$.

Step 3: Show that the Limit Function is a Solution


We now prove that 𝒚(𝒙) satisfies the differential equation.
We know that each $y_n(x)$ satisfies:
$$y_n(x) = y_0 + \int_{x_0}^{x} f_n(t) \, dt$$
where $f_n(t) = f\big(x_0 + k\delta, \, y_n(x_0 + k\delta)\big)$ for $t \in [x_0 + k\delta, \, x_0 + (k + 1)\delta]$.
Since $f$ is continuous and $y_n(x) \to y(x)$ uniformly, the integrand converges to $f(t, y(t))$. So, passing to the limit:
$$y(x) = y_0 + \int_{x_0}^{x} f(t, y(t)) \, dt$$
Differentiating both sides gives:
$$\frac{dy}{dx} = f(x, y(x))$$
Thus, 𝒚(𝒙) satisfies the differential equation and the initial condition.
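The Euler polygon construction itself is easy to implement; a minimal sketch (the sample IVP is my own choice, with exact solution $2e^x - x - 1$):

import math

def euler_polygon(f, x0, y0, h, n):
    """Euler polygonal approximation on [x0, x0 + h] with n subintervals."""
    delta = h / n
    xs, ys = [x0], [y0]
    for k in range(n):
        # slope is f evaluated at the start of the current subinterval
        ys.append(ys[-1] + f(xs[-1], ys[-1]) * delta)
        xs.append(xs[-1] + delta)
    return xs, ys

# Sample IVP: dy/dx = x + y, y(0) = 1
for n in [10, 100, 1000]:
    xs, ys = euler_polygon(lambda x, y: x + y, 0.0, 1.0, 1.0, n)
    exact = 2*math.exp(1.0) - 1.0 - 1.0
    print(f"n = {n:5d}: y_n(1) = {ys[-1]:.6f}, exact = {exact:.6f}")

As $n$ grows, the polygons converge toward the exact solution, mirroring the convergent subsequence in the proof.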

Ques 10: Solve


$$y(x) = e^x - \frac{e}{2} + \frac{1}{2} + \frac{1}{2} \int_0^1 y(t) \, dt$$

Step 1: Observe the form


The equation is a Fredholm integral equation of the second kind:
$$y(x) = f(x) + \lambda \int_0^1 y(t) \, dt$$
where:
 $f(x) = e^x - \frac{e}{2} + \frac{1}{2}$
 $\lambda = \frac{1}{2}$
 The kernel is a constant function in $x$ and $t$: $K(x, t) = 1$, meaning the integral is independent of $x$

Step 2: Let the integral be a constant


Let:
$$A = \int_0^1 y(t) \, dt$$
Then the equation becomes:
$$y(x) = e^x - \frac{e}{2} + \frac{1}{2} + \frac{1}{2} A$$
Now integrate both sides from $x = 0$ to $1$:
$$\int_0^1 y(x) \, dx = \int_0^1 \left(e^x - \frac{e}{2} + \frac{1}{2} + \frac{1}{2} A\right) dx$$
Compute the left-hand side:
$$\int_0^1 y(x) \, dx = A$$
𝟎
Compute the right-hand side:
$$\int_0^1 e^x \, dx = e - 1, \qquad \int_0^1 dx = 1$$
So:
$$A = (e - 1) - \frac{e}{2} + \frac{1}{2} + \frac{1}{2} A$$
Now simplify:
$$A = e - 1 - \frac{e}{2} + \frac{1}{2} + \frac{1}{2} A = \frac{e}{2} - \frac{1}{2} + \frac{1}{2} A$$
Subtract $\frac{1}{2} A$ from both sides:
$$A - \frac{1}{2} A = \frac{e - 1}{2} \;\Rightarrow\; \frac{1}{2} A = \frac{e - 1}{2} \;\Rightarrow\; A = e - 1$$

Step 3: Substitute back into the equation


$$y(x) = e^x - \frac{e}{2} + \frac{1}{2} + \frac{1}{2}(e - 1)$$
Simplify:
$$y(x) = e^x - \frac{e}{2} + \frac{1}{2} + \frac{e}{2} - \frac{1}{2} = e^x$$

Final Answer:
𝒚(𝒙) = 𝒆𝒙
This function satisfies the given integral equation.
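A one-line symbolic verification; a sketch with sympy (assumed available):

import sympy as sp

x, t = sp.symbols('x t')

# Verify y(x) = e^x in y(x) = e^x - e/2 + 1/2 + (1/2) * integral_0^1 y(t) dt
rhs = sp.exp(x) - sp.E/2 + sp.Rational(1, 2) + sp.Rational(1, 2)*sp.integrate(sp.exp(t), (t, 0, 1))
print(sp.simplify(rhs - sp.exp(x)))  # expected: 0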
Ques 11: Find the resolvent kernel for
𝒌(𝒙, 𝒕) = 𝐬𝐢𝐧(𝒙 − 𝟐𝒕), 𝟎 ≤ 𝒙 ≤ 𝟐𝝅, 𝟎 ≤ 𝒕 ≤ 𝟐𝝅

Step 1: Identify the type


Since the limits for both $x$ and $t$ run over the same fixed interval ($0$ to $2\pi$), this is treated as a Fredholm integral equation.
We aim to find the resolvent kernel $R(x, t; \lambda)$ corresponding to the given kernel $k(x, t)$. We do this using the Neumann series (Liouville–Neumann method).

Step 2: Define the kernel iterates


Let 𝑲𝟏 (𝒙, 𝒕) = 𝒌(𝒙, 𝒕) = 𝐬𝐢𝐧(𝒙 − 𝟐𝒕)
Then the recursive formula for the iterated kernels is:
$$K_{n+1}(x, t) = \int_0^{2\pi} k(x, s) K_n(s, t) \, ds$$
The resolvent kernel is defined by the infinite series:
$$R(x, t; \lambda) = \sum_{n=1}^{\infty} \lambda^{n-1} K_n(x, t)$$

Step 3: Compute first few iterates


First iterate:
𝑲𝟏 (𝒙, 𝒕) = 𝐬𝐢𝐧(𝒙 − 𝟐𝒕)
Second iterate:
$$K_2(x, t) = \int_0^{2\pi} \sin(x - 2s) \cdot \sin(s - 2t) \, ds$$
Use the identity:
$$\sin A \sin B = \frac{1}{2}\left[\cos(A - B) - \cos(A + B)\right]$$
So:
$$K_2(x, t) = \int_0^{2\pi} \frac{1}{2}\left[\cos(x - 3s + 2t) - \cos(x - s - 2t)\right] ds$$
Compute the integrals term-by-term. Since cosine integrated over a full period yields zero unless the frequency is zero:
$$\int_0^{2\pi} \cos(\alpha s + \beta) \, ds = 0 \quad \text{if } \alpha \ne 0$$
Hence,
$$K_2(x, t) = 0$$
Third iterate:
$$K_3(x, t) = \int_0^{2\pi} \sin(x - 2s) \cdot 0 \, ds = 0$$
So, higher iterates vanish, and the Neumann series terminates at the first term.

Step 4: Write the resolvent kernel


Thus, the resolvent kernel is:
𝑹(𝒙, 𝒕; 𝝀) = 𝑲𝟏 (𝒙, 𝒕) = 𝐬𝐢𝐧(𝒙 − 𝟐𝒕)
That is, the resolvent kernel is just the original kernel.

Final Answer:
𝑹(𝒙, 𝒕; 𝝀) = 𝐬𝐢𝐧(𝒙 − 𝟐𝒕)
This is the resolvent kernel for 𝒌(𝒙, 𝒕) = 𝐬𝐢𝐧(𝒙 − 𝟐𝒕) over the square region 𝟎 ≤ 𝒙, 𝒕 ≤ 𝟐𝝅.
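The vanishing of $K_2$ can be confirmed symbolically; a sketch with sympy (assumed available):

import sympy as sp

x, t, s = sp.symbols('x t s')

# Second iterated kernel for k(x, t) = sin(x - 2t) over [0, 2*pi]
K2 = sp.integrate(sp.sin(x - 2*s)*sp.sin(s - 2*t), (s, 0, 2*sp.pi))
print(sp.simplify(K2))  # expected: 0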
Ques 12: State and prove Arzelà–Ascoli Theorem
Statement of Arzelà–Ascoli Theorem (Real-valued version):
Let 𝓕 be a family of real-valued functions defined on a closed and bounded interval [𝒂, 𝒃]. Suppose that:
1. Every function 𝒇 ∈ 𝓕 is continuous on [𝒂, 𝒃],
2. The family $\mathcal{F}$ is uniformly bounded, i.e.,
$$\exists M > 0 \text{ such that } |f(x)| \le M, \; \forall x \in [a, b], \; \forall f \in \mathcal{F}$$
3. The family $\mathcal{F}$ is equicontinuous, i.e.,
$$\forall \varepsilon > 0, \; \exists \delta > 0 \text{ such that } |x - y| < \delta \Rightarrow |f(x) - f(y)| < \varepsilon, \; \forall f \in \mathcal{F}$$
Then every sequence in 𝓕 has a uniformly convergent subsequence whose limit is a continuous function on [𝒂, 𝒃].

Proof (Sketch using diagonalization method):


Let 𝓕 = {𝒇𝒏 } be a sequence of continuous, uniformly bounded and equicontinuous functions on [𝒂, 𝒃]. We aim to
extract a uniformly convergent subsequence.
Step 1: Choose a dense countable subset of $[a, b]$
Let $\{x_k\}_{k=1}^{\infty}$ be a dense subset of $[a, b]$, e.g., the rational numbers in $[a, b]$.
Since the family $\{f_n\}$ is uniformly bounded, by the Bolzano–Weierstrass theorem, for each fixed $x_k$, the sequence $\{f_n(x_k)\}$ has a convergent subsequence.
 For $x_1$: pick a subsequence $\{f_n^{(1)}\}$ such that $\{f_n^{(1)}(x_1)\}$ converges.
 For $x_2$: from $\{f_n^{(1)}\}$, extract a subsequence $\{f_n^{(2)}\}$ such that $\{f_n^{(2)}(x_2)\}$ converges.
Continue this process indefinitely.
Now, define a diagonal subsequence:
$$g_n = f_n^{(n)} \quad \text{(i.e., the } n\text{-th function in the } n\text{-th subsequence)}$$
Then for each 𝒙𝒌 , the sequence {𝒈𝒏 (𝒙𝒌 )} converges as 𝒏 → ∞.

Step 2: Use equicontinuity to extend pointwise convergence to uniform convergence


Fix $\varepsilon > 0$. Since the family is equicontinuous, there exists $\delta > 0$ such that for all $f \in \mathcal{F}$,
$$|x - y| < \delta \;\Rightarrow\; |f(x) - f(y)| < \frac{\varepsilon}{3}$$
Now, choose a finite $\delta$-net: points $x_1, x_2, \ldots, x_m \in [a, b]$ such that any $x \in [a, b]$ lies within $\delta$ of some $x_j$.
Since each $\{g_n(x_j)\}$ is Cauchy (as it converges), we can choose $N$ such that for all $n, m > N$,
$$|g_n(x_j) - g_m(x_j)| < \frac{\varepsilon}{3} \quad \text{for all } j = 1, 2, \ldots, m$$
Then, for any $x \in [a, b]$, choose $x_j$ such that $|x - x_j| < \delta$. Then:
$$|g_n(x) - g_m(x)| \le |g_n(x) - g_n(x_j)| + |g_n(x_j) - g_m(x_j)| + |g_m(x_j) - g_m(x)|$$
Each term is less than $\varepsilon/3$, so the total is less than $\varepsilon$.
Thus, {𝒈𝒏 (𝒙)} is uniformly Cauchy, hence uniformly convergent.

Conclusion:
The Arzelà–Ascoli Theorem guarantees that from any equicontinuous, uniformly bounded sequence of continuous
functions on a compact interval, one can extract a uniformly convergent subsequence.
This is a fundamental result in functional analysis and is used in solving integral and differential equations by
ensuring that sequences of approximate solutions have convergent limits.
Ques 13: Apply Picard method to solve
$$\frac{dy}{dx} = y - x, \qquad y(0) = 2$$
Step 1: Express the differential equation in integral form
Using the Picard iteration method, we write the integral form of the equation:
$$y(x) = y_0 + \int_0^x f(t, y(t)) \, dt$$
Here,
 $f(x, y) = y - x$
 $y_0 = 2$
So,
$$y(x) = 2 + \int_0^x \big(y(t) - t\big) \, dt$$

Step 2: Define the sequence of Picard approximations


Let 𝒚𝟎 (𝒙) = 𝟐 (initial approximation)
We define recursively:
$$y_{n+1}(x) = 2 + \int_0^x \big(y_n(t) - t\big) \, dt$$
Step 3: Compute successive approximations
First approximation 𝒚𝟏 (𝒙):
$$y_1(x) = 2 + \int_0^x (2 - t) \, dt = 2 + \left[2t - \frac{t^2}{2}\right]_0^x = 2 + 2x - \frac{x^2}{2}$$
$$\Rightarrow\; y_1(x) = 2 + 2x - \frac{x^2}{2}$$

Second approximation 𝒚𝟐 (𝒙):


Use $y_1(t) = 2 + 2t - \frac{t^2}{2}$:
$$y_2(x) = 2 + \int_0^x \left(2 + 2t - \frac{t^2}{2} - t\right) dt = 2 + \int_0^x \left(2 + t - \frac{t^2}{2}\right) dt$$
Now integrate:
$$= 2 + \left[2t + \frac{t^2}{2} - \frac{t^3}{6}\right]_0^x = 2 + 2x + \frac{x^2}{2} - \frac{x^3}{6}$$
$$\Rightarrow\; y_2(x) = 2 + 2x + \frac{x^2}{2} - \frac{x^3}{6}$$

Third approximation 𝒚𝟑 (𝒙):


Use $y_2(t) = 2 + 2t + \frac{t^2}{2} - \frac{t^3}{6}$:
$$y_3(x) = 2 + \int_0^x \left(2 + 2t + \frac{t^2}{2} - \frac{t^3}{6} - t\right) dt = 2 + \int_0^x \left(2 + t + \frac{t^2}{2} - \frac{t^3}{6}\right) dt$$
Now integrate:
$$= 2 + \left[2t + \frac{t^2}{2} + \frac{t^3}{6} - \frac{t^4}{24}\right]_0^x = 2 + 2x + \frac{x^2}{2} + \frac{x^3}{6} - \frac{x^4}{24}$$
$$\Rightarrow\; y_3(x) = 2 + 2x + \frac{x^2}{2} + \frac{x^3}{6} - \frac{x^4}{24}$$

General pattern (Optional)


We can observe that the solution is forming a power series. In fact, the exact solution of the differential equation
$$\frac{dy}{dx} = y - x, \qquad y(0) = 2$$
is:
$$y(x) = Ce^x + x + 1$$
Applying the initial condition $y(0) = 2$:
$$2 = C + 0 + 1 \;\Rightarrow\; C = 1 \;\Rightarrow\; y(x) = e^x + x + 1$$

Conclusion:
Using Picard's method, the successive approximations are:
 $y_0(x) = 2$
 $y_1(x) = 2 + 2x - \frac{x^2}{2}$
 $y_2(x) = 2 + 2x + \frac{x^2}{2} - \frac{x^3}{6}$
 $y_3(x) = 2 + 2x + \frac{x^2}{2} + \frac{x^3}{6} - \frac{x^4}{24}$
And the exact solution is:
$$y(x) = e^x + x + 1$$
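The iterates can be compared directly with the Taylor expansion of the exact solution; a sketch with sympy (assumed available):

import sympy as sp

x, t = sp.symbols('x t')

# Picard iterates for dy/dx = y - x, y(0) = 2
y_n = sp.Integer(2)
for n in range(4):
    y_n = 2 + sp.integrate(y_n.subs(x, t) - t, (t, 0, x))

exact = sp.exp(x) + x + 1
# The iterates reproduce the Taylor expansion of the exact solution term by term
print(sp.expand(y_n))
print(sp.series(exact, x, 0, 6))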

1. Solutions of an IVP may not be unique if they exist. = False
2. A boundary value problem may have several solutions. = True
3. For any linear ODE, a linear combination of two solutions is again a solution. = True
4. $\frac{d^2 y}{dx^2} - \sin x \cdot y = 0$ is not a linear ODE. = False
5. The Wronskian of $e^x$ and $e^{2x}$ is $2e^{2x}$. = False (it is $e^{3x}$)
6. $\sin(x + y)$ is a symmetric kernel. = True
7. $x^3 = \int_0^x (x - t) y(t) \, dt$ is a Fredholm integral equation. = False
8. $K(x, t) = x^3 t^3 + xt$ is a separable kernel. = True
9. A Volterra integral equation can be converted into an ODE. = True
10. If the Wronskian of solutions is zero throughout the domain, then they are dependent. = False
11. A BVP may have solutions. = True
12. The solution space of $t^2 y'' - 2t y' + 2y = 0$ is spanned by $\{t, t^2\}$. = True
13. $x^3 = \int_0^x (x - t)^2 y(t) \, dt$ is a Volterra integral equation. = True
14. $x^2 t^2 + xt + 1$ is a separable kernel. = False
15. $\sin(x + 3t)$ is not a symmetric kernel. = True
16. $y(x) = \int_a^b K(x, t) \, dt$ is not an integral equation. = False
17. A symmetric kernel may or may not be separable. = True
18. Every homogeneous linear differential equation always has a solution. = True
19. Solutions to an IVP are unique if they exist. = True
20. If the Wronskian of solutions is zero, they are dependent. = False
21. $\tan(10x + 5t)$ is a symmetric kernel. = False
22. A kernel $k(4x, 6t)$ is symmetric if $k(4x, 6t) = k(6x, 4t)$. = False
23. A boundary value problem can't be put into an integral equation. = False
24. The eigenvalues of a symmetric kernel are real. = True
25. The solution of the Fredholm integral equation $u(x) + \int_0^1 x\left(e^{xt} - 1\right)u(t) \, dt = e^x - x$ is $u(x) = 10$. = False
26. Peano's existence theorem $\Rightarrow$ Ascoli–Arzelà theorem. = False
27. Kamke's convergence theorem holds true for a closed set. = True
28. The first approximation of the D.E. $\frac{dy}{dx} = x^3 - y^2$, $y(0) = 0$ is $\frac{x^4}{4}$. = True
29. A differentiable function satisfies a Lipschitz condition on a square. = True
30. Every integral equation is known as linear. = False
