An Initial Value Problem (IVP) is a fundamental concept in the study of differential equations. It refers to a
differential equation that is accompanied by a condition specifying the value of the unknown function at a particular
point. The purpose of an initial value problem is to determine a specific solution from among the infinitely many
solutions of a differential equation, by providing a starting point — the "initial value."
Formal Definition:
An Initial Value Problem typically takes the form:
dy/dx = f(x, y),  y(x₀) = y₀
Where:
dy/dx = f(x, y) is a first-order ordinary differential equation (ODE),
x₀ is a given value of the independent variable (usually time or position),
y₀ is the specified value of the function y(x) at x = x₀.
The goal is to find the function 𝒚(𝒙) that satisfies both the differential equation and the initial condition.
This setup is called an initial value problem because the condition is given at the initial point 𝒙𝟎 , which is often
interpreted as the "starting time" or "starting position" of a process.
Example:
Let’s consider a simple example to illustrate how an initial value problem is solved.
Given:
dy/dx = 3x², with initial condition y(0) = 2
Step 1: Solve the differential equation
We are given the derivative of 𝒚 with respect to 𝒙, so we integrate both sides to find 𝒚:
y = ∫ 3x² dx = x³ + C
Where 𝑪 is the constant of integration that represents an entire family of solutions.
Step 2: Apply the initial condition
Now, we use the given initial condition 𝒚(𝟎) = 𝟐 to determine the constant 𝑪:
y(0) = (0)³ + C = C
Since y(0) = 2, it follows that C = 2.
Step 3: Write the particular solution
Now that we know 𝑪, the particular solution to the initial value problem is:
y = x³ + 2
Conclusion:
An initial value problem is a differential equation coupled with an initial condition that allows us to find a specific
solution tailored to a real-world situation. In our example, the function y = x³ + 2 satisfies both the differential
equation dy/dx = 3x² and the initial condition y(0) = 2. This method ensures that the solution is not just general, but
unique and applicable to a specific scenario.
Understanding and solving IVPs is essential in fields like physics, biology, engineering, and economics, where
systems evolve from known starting conditions.
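As a quick numerical cross-check (an illustrative sketch, not part of the original solution), a forward-Euler integration of dy/dx = 3x² from y(0) = 2 should land on the particular solution y = x³ + 2; the helper name `euler` is hypothetical:

```python
# Sketch: integrate dy/dx = 3x^2 from y(0) = 2 with Euler's method and
# compare the result at x = 1 against the particular solution y = x^3 + 2.

def euler(f, x0, y0, x_end, steps):
    """Forward-Euler integration of dy/dx = f(x, y)."""
    h = (x_end - x0) / steps
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

approx = euler(lambda x, y: 3 * x**2, 0.0, 2.0, 1.0, 100_000)
exact = 1.0**3 + 2.0   # y(1) from the particular solution y = x^3 + 2
print(approx, exact)   # the two values agree to several decimal places
```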
Question 2 ) Explain clearly the method of successive approximation.
The method of successive approximation, also known as Picard’s Iteration Method, is a technique used to find an
approximate solution to a first-order differential equation with a given initial condition. This method is especially
useful when finding an exact solution is difficult or impossible.
Lipschitz Condition:
A function 𝒇(𝒙) is said to satisfy a Lipschitz condition on a domain 𝑫 if there exists a constant 𝑳 > 𝟎 such that for all
𝒙𝟏 , 𝒙𝟐 ∈ 𝑫:
|𝒇(𝒙𝟏 ) − 𝒇(𝒙𝟐 )| ≤ 𝑳|𝒙𝟏 − 𝒙𝟐 |
This means that the rate of change of the function is bounded by some constant 𝑳. If such an 𝑳 exists, the function is
said to be Lipschitz continuous on 𝑫.
Now let’s prove that a continuous function may not satisfy this condition using an example.
Example:
Consider the function:
𝒇(𝒙) = √𝒙
This function is continuous on the interval [0, 1], a closed and bounded interval on the real line. Let’s see
whether it satisfies the Lipschitz condition on [0, 1].
Let us check the Lipschitz condition:
|√x₁ − √x₂| ≤ L|x₁ − x₂|
Suppose x₂ < x₁, and write:
√x₁ − √x₂ = (x₁ − x₂) / (√x₁ + √x₂)
So we have:
(x₁ − x₂) / (√x₁ + √x₂) ≤ L(x₁ − x₂)  ⇒  1 / (√x₁ + √x₂) ≤ L
Now consider what happens as x₁, x₂ → 0. Then √x₁ + √x₂ → 0, and thus 1 / (√x₁ + √x₂) → ∞. That means no constant L
can be found to satisfy the inequality for all x₁, x₂ ∈ [0, 1].
Conclusion:
Although the function 𝒇(𝒙) = √𝒙 is continuous on [𝟎, 𝟏], it does not satisfy a Lipschitz condition on that interval
because the derivative becomes unbounded near 𝒙 = 𝟎. Therefore, we have shown that a continuous function may
not satisfy a Lipschitz condition on a square (or closed region).
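The blow-up of the Lipschitz ratio can also be seen numerically. The following sketch (the helper name `lipschitz_ratio` is illustrative) evaluates |√x₁ − √x₂| / |x₁ − x₂| for pairs of points sliding toward 0:

```python
# Illustration (sketch): the Lipschitz ratio for f(x) = sqrt(x) grows
# without bound as the pair of points approaches 0.
import math

def lipschitz_ratio(x1, x2):
    return abs(math.sqrt(x1) - math.sqrt(x2)) / abs(x1 - x2)

ratios = [lipschitz_ratio(2 * 10.0**-k, 10.0**-k) for k in range(1, 8)]
print(ratios)  # strictly increasing: no single constant L can bound them
```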
Question 4 ) Find the eigenvalue of
u(x) = k ∫₀¹ (12x²t⁴ + 6x³t + 2tx) u(t) dt
Solution:
We are given a linear integral equation of the form:
u(x) = k ∫₀¹ K(x, t) u(t) dt
where the kernel is:
K(x, t) = 12x²t⁴ + 6x³t + 2tx
This is a separable kernel, meaning it can be written as a finite sum of terms of the form 𝒇𝒊 (𝒙)𝒈𝒊 (𝒕):
𝑲(𝒙, 𝒕) = 𝒇𝟏 (𝒙)𝒈𝟏 (𝒕) + 𝒇𝟐 (𝒙)𝒈𝟐 (𝒕) + 𝒇𝟑 (𝒙)𝒈𝟑 (𝒕)
with:
f₁(x) = 12x², g₁(t) = t⁴
f₂(x) = 6x³, g₂(t) = t
f₃(x) = 2x, g₃(t) = t
Using this, the integral equation becomes:
u(x) = k [ f₁(x) ∫₀¹ g₁(t)u(t) dt + f₂(x) ∫₀¹ g₂(t)u(t) dt + f₃(x) ∫₀¹ g₃(t)u(t) dt ]
Define:
A = ∫₀¹ t⁴ u(t) dt
B = ∫₀¹ t u(t) dt
So:
u(x) = k[12x²A + (6x³ + 2x)B]
This suggests u(x) is a combination of x², x³, and x, so let us assume:
u(x) = a x² + b x³ + c x
Substituting this ansatz into A and B and matching the coefficients of x², x³, and x gives the system:
a = 12k (a/7 + b/8 + c/6)
b = 6k (a/4 + b/5 + c/3)
c = 2k (a/4 + b/5 + c/3)
This forms a homogeneous linear system. For non-trivial solutions (𝒂, 𝒃, 𝒄 ≠ 𝟎), the determinant of the coefficient
matrix must be zero. Solving this determinant equation yields the eigenvalues.
Final Answer:
The eigenvalues 𝒌 of the given integral equation are:
k = (−752 + 2√143581) / 21  and  k = (−752 − 2√143581) / 21
These are the only values of 𝒌 for which the equation has non-trivial solutions 𝒖(𝒙), making them the eigenvalues of
the integral operator.
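The determinant condition can be checked with a short computation. The sketch below (helper names are illustrative; exact arithmetic via `fractions`) expands det(I − kM) for the coefficient matrix M of the system above and compares its roots with the closed-form eigenvalues:

```python
# Verification sketch: build the 3x3 matrix M of the homogeneous system,
# expand det(I - k*M) exactly with Fractions, and compare the roots with
# the closed-form eigenvalues (-752 ± 2*sqrt(143581)) / 21.
import math
from fractions import Fraction as F

M = [[F(12, 7), F(12, 8), F(12, 6)],
     [F(6, 4),  F(6, 5),  F(6, 3)],
     [F(2, 4),  F(2, 5),  F(2, 3)]]

def det3(A):
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
          - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
          + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

trace = M[0][0] + M[1][1] + M[2][2]
S2 = (M[0][0] * M[1][1] - M[0][1] * M[1][0]       # sum of principal
    + M[0][0] * M[2][2] - M[0][2] * M[2][0]       # 2x2 minors
    + M[1][1] * M[2][2] - M[1][2] * M[2][1])

# det(I - k M) = 1 - trace*k + S2*k^2 - det(M)*k^3; here det(M) = 0,
# so the eigenvalue condition reduces to the quadratic S2*k^2 - trace*k + 1 = 0.
assert det3(M) == 0
disc = trace * trace - 4 * S2
roots = sorted([(float(trace) + math.sqrt(float(disc))) / (2 * float(S2)),
                (float(trace) - math.sqrt(float(disc))) / (2 * float(S2))])

claimed = sorted([(-752 + 2 * math.sqrt(143581)) / 21,
                  (-752 - 2 * math.sqrt(143581)) / 21])
print(roots, claimed)
```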
Question 5 ) Find the resolvent kernel of K(x, t) = (10 + x)(5 − t), where a = −2, b = 1.
Solution:
We are given the kernel:
𝑲(𝒙, 𝒕) = (𝟏𝟎 + 𝒙)(𝟓 − 𝒕)
This is a separable (or degenerate) kernel, because it can be written in the form:
𝑲(𝒙, 𝒕) = 𝒇(𝒙)𝒈(𝒕)
with:
𝒇(𝒙) = (𝟏𝟎 + 𝒙)
𝒈(𝒕) = (𝟓 − 𝒕)
Let the integral equation be of the form:
u(x) = φ(x) + λ ∫ₐᵇ K(x, t)u(t) dt
where φ(x) is the known free term (written φ here to avoid a clash with the kernel factor f(x) above). Then the
resolvent kernel R(x, t; λ) is defined so that:
u(x) = φ(x) + λ ∫ₐᵇ R(x, t; λ)φ(t) dt
To compute the resolvent kernel R(x, t; λ), we use the formula for a one-term separable kernel K(x, t) = f(x)g(t):
R(x, t; λ) = f(x)g(t) / (1 − λ ∫ₐᵇ f(s)g(s) ds)
Here ∫₋₂¹ (10 + s)(5 − s) ds = ∫₋₂¹ (50 − 5s − s²) ds = 150 + 15/2 − 3 = 309/2, so
R(x, t; λ) = (10 + x)(5 − t) / (1 − (309/2)λ)
Conclusion:
We have shown:
∫₀ˣ e^(x−t) (1 − t) dt = x
Hence,
𝒚(𝒙) = 𝟏 − 𝒙
is indeed a solution of the given integral equation.
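The identity can be confirmed numerically; the sketch below uses a composite-Simpson helper (`simpson`, an illustrative name) built from the standard library only:

```python
# Numerical check (sketch) of the identity  ∫₀ˣ e^(x-t) (1 - t) dt = x.
import math

def simpson(f, a, b, n=2000):          # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

checks = []
for x in (0.3, 1.0, 2.5):
    val = simpson(lambda t: math.exp(x - t) * (1 - t), 0.0, x)
    checks.append(abs(val - x))
print(checks)  # all close to zero
```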
Question 7 ) Convert y″ + sin x · y′ + eˣ y = x, with initial conditions y(0) = 1,
y′(0) = −1, into an integral equation.
Solution:
We are given a second-order linear differential equation:
y″(x) + sin x · y′(x) + eˣ y(x) = x
with initial conditions:
y(0) = 1,  y′(0) = −1
We want to convert this differential equation into an integral equation. To do that, we follow the method of
successive integration.
Pattern Observed:
We observe the recursive structure:
K₁(x, t) = K(x, t)
K₂(x, t) = C · K(x, t)
K₃(x, t) = C² · K(x, t)
...
Kₙ(x, t) = C^(n−1) · K(x, t)
Where:
C = ∫₀^π eˢ cos s ds
Evaluate Constant 𝑪
We now compute:
C = ∫₀^π eˢ cos s ds
Use integration by parts or the known standard result:
∫ eˣ cos x dx = (eˣ/2)(sin x + cos x)
So:
C = [(eˢ/2)(sin s + cos s)]₀^π = (e^π/2)(0 − 1) − (1/2)(0 + 1) = −(e^π + 1)/2
Final Answer:
The first iterated kernel is:
K₁(x, t) = eˣ cos t
The second iterated kernel is:
K₂(x, t) = −((e^π + 1)/2) · eˣ cos t
The third iterated kernel is:
K₃(x, t) = (−(e^π + 1)/2)² · eˣ cos t
In general, the n-th iterated kernel is:
Kₙ(x, t) = (−(e^π + 1)/2)^(n−1) · eˣ cos t
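As a sketch, one can confirm numerically that the second iterated kernel really is C · K(x, t) for K(x, t) = eˣ cos t on [0, π] (the `quad` helper name is illustrative):

```python
# Sketch: confirm numerically that K₂(x,t) = ∫₀^π K(x,s) K(s,t) ds equals
# C · K(x,t) with K(x,t) = e^x cos t and C = -(e^π + 1)/2.
import math

def K(x, t):
    return math.exp(x) * math.cos(t)

def quad(f, a, b, n=100000):           # midpoint rule, stdlib only
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

C = -(math.exp(math.pi) + 1) / 2
x, t = 0.7, 1.2
K2 = quad(lambda s: K(x, s) * K(s, t), 0.0, math.pi)
print(K2, C * K(x, t))   # the two agree
```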
Question 9 ) Solve and find the general solution of the following system of differential equations:
x₁′ = x₁ + x₂
x₂′ = 4x₁ + x₂
In matrix form this is x′ = Ax with A = [1 1; 4 1]. The characteristic equation det(A − λI) = (1 − λ)² − 4 = 0 gives the
eigenvalues λ₁ = −1 and λ₂ = 3.
For 𝝀𝟏 = −𝟏:
Solve:
(A + I)v₁ = 0 ⇒ ( [1 1; 4 1] + [1 0; 0 1] ) (v₁, v₂)ᵀ = [2 1; 4 2] (v₁, v₂)ᵀ = (0, 0)ᵀ
From the first row:
𝟐𝒗𝟏 + 𝒗𝟐 = 𝟎 ⇒ 𝒗𝟐 = −𝟐𝒗𝟏
So the eigenvector is:
v₁ = (1, −2)ᵀ (up to scalar multiples)
For 𝝀𝟐 = 𝟑:
Solve:
(A − 3I)v₂ = 0 ⇒ ( [1 1; 4 1] − [3 0; 0 3] ) (v₁, v₂)ᵀ = [−2 1; 4 −2] (v₁, v₂)ᵀ = (0, 0)ᵀ
From the first row:
−𝟐𝒗𝟏 + 𝒗𝟐 = 𝟎 ⇒ 𝒗𝟐 = 𝟐𝒗𝟏
So the eigenvector is:
v₂ = (1, 2)ᵀ
Final Answer :
x₁(t) = C₁e^(−t) + C₂e^(3t)
x₂(t) = −2C₁e^(−t) + 2C₂e^(3t)
Where 𝑪𝟏 and 𝑪𝟐 are arbitrary constants determined by initial conditions if given.
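The general solution can be checked by differentiating it exactly and substituting into the system; a small sketch with arbitrarily chosen sample constants:

```python
# Sketch: verify that x1 = C1 e^(-t) + C2 e^(3t), x2 = -2 C1 e^(-t) + 2 C2 e^(3t)
# satisfies x1' = x1 + x2 and x2' = 4 x1 + x2, using exact derivatives.
import math

C1, C2 = 1.3, -0.4                      # arbitrary sample constants

def x1(t): return C1 * math.exp(-t) + C2 * math.exp(3 * t)
def x2(t): return -2 * C1 * math.exp(-t) + 2 * C2 * math.exp(3 * t)
def dx1(t): return -C1 * math.exp(-t) + 3 * C2 * math.exp(3 * t)
def dx2(t): return 2 * C1 * math.exp(-t) + 6 * C2 * math.exp(3 * t)

errs = [max(abs(dx1(t) - (x1(t) + x2(t))),
            abs(dx2(t) - (4 * x1(t) + x2(t)))) for t in (0.0, 0.5, 1.0)]
print(errs)  # all (near) zero
```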
Question 10 ) Prove that
∫ₐˣ ∫ₐˣ ⋯ ∫ₐˣ y(t) (dt)ⁿ = ∫ₐˣ ((x − t)^(n−1) / (n − 1)!) y(t) dt
where the left-hand side is the n-fold iterated integral of y.
Assume the formula holds for n = k (the inductive hypothesis); the base case n = 1 is immediate. To prove it for
n = k + 1:
∫ₐˣ ⋯ ∫ₐˣ y(t) (dt)^(k+1) = ∫ₐˣ [ ∫ₐˢ ⋯ ∫ₐˢ y(t) (dt)ᵏ ] ds
Now apply the inductive hypothesis:
= ∫ₐˣ [ ∫ₐˢ ((s − t)^(k−1) / (k − 1)!) y(t) dt ] ds
We now interchange the order of integration: This is a standard technique when handling iterated integrals over a
triangular region.
Let’s write:
∫ₐˣ ∫ₐˢ f(s, t) dt ds = ∫ₐˣ ∫ₜˣ f(s, t) ds dt
Apply it:
= ∫ₐˣ [ ∫ₜˣ ((s − t)^(k−1) / (k − 1)!) ds ] y(t) dt
Change variable inside: Let 𝒖 = 𝒔 − 𝒕, so when 𝒔 = 𝒕, 𝒖 = 𝟎, and when 𝒔 = 𝒙, 𝒖 = 𝒙 − 𝒕
Then:
∫ₜˣ ((s − t)^(k−1) / (k − 1)!) ds = ∫₀^(x−t) (u^(k−1) / (k − 1)!) du = (x − t)ᵏ / k!
Therefore:
∫ₐˣ ⋯ ∫ₐˣ y(t) (dt)^(k+1) = ∫ₐˣ ((x − t)ᵏ / k!) y(t) dt
This proves the formula holds for 𝒏 = 𝒌 + 𝟏.
Conclusion:
By induction, we conclude that:
∫ₐˣ ⋯ ∫ₐˣ y(t) (dt)ⁿ = ∫ₐˣ ((x − t)^(n−1) / (n − 1)!) y(t) dt
This completes the proof.
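A numerical spot-check of the formula for the sample choices n = 3, a = 0, y(t) = t² (both closed forms below are worked out by hand under those assumptions; `simpson` is an illustrative helper name):

```python
# Sketch: check the Cauchy repeated-integration formula for n = 3, a = 0,
# y(t) = t^2. The triple iterated integral of t^2 is x^5/60, and the formula
# ∫₀ˣ ((x-t)^2 / 2!) y(t) dt should give the same value.
import math

def simpson(f, a, b, n=2000):          # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

x = 1.7
rhs = simpson(lambda t: (x - t)**2 / math.factorial(2) * t**2, 0.0, x)
lhs = x**5 / 60          # triple iterated integral of t^2 from 0, by hand
print(lhs, rhs)
```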
Question 11) Solve dy/dx = y², with initial condition y(1) = −1
Step 1: Separate the variables
We are given a separable differential equation:
dy/dx = y²
Rewriting:
(1/y²) dy = dx
Step 2: Integrate both sides
∫ y⁻² dy = ∫ dx ⇒ −1/y = x + C
Step 3: Apply the initial condition
y(1) = −1 gives −1/(−1) = 1 + C, so C = 0, and the particular solution is
y(x) = −1/x
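The particular solution y(x) = −1/x (obtained from −1/y = x + C with C = 0) can be checked against the equation with a central finite difference; an illustrative sketch:

```python
# Sketch: check that y(x) = -1/x satisfies dy/dx = y^2 and y(1) = -1,
# approximating the derivative with a central difference.
sol = lambda x: -1.0 / x

h = 1e-6
errs = []
for x in (0.5, 1.0, 3.0):
    deriv = (sol(x + h) - sol(x - h)) / (2 * h)   # central difference
    errs.append(abs(deriv - sol(x) ** 2))
print(sol(1.0), errs)   # y(1) = -1 and the residuals are tiny
```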
R(x, t; λ) = Σₙ₌₁^∞ λ^(n−1) Kₙ(x, t)
where K₁(x, t) = K(x, t), and
K_(n+1)(x, t) = ∫₀¹ K(x, s) Kₙ(s, t) ds
Step 1: First Iteration
K₁(x, t) = K(x, t) = e^(x+t)
R(x, t; λ) = Σₙ₌₁^∞ λ^(n−1) Kₙ(x, t)
where:
K₁(x, t) = K(x, t)
K_(n+1)(x, t) = ∫ₐᵇ K(x, z) Kₙ(z, t) dz
Conclusion:
Hence, the resolvent kernel satisfies the required integral equation:
R(x, t; λ) = K(x, t) + λ ∫ₐᵇ K(x, z) R(z, t; λ) dz
which completes the proof.
Ques 1: State and prove Kamke's Convergence Theorem
Statement of Kamke’s Convergence Theorem: Let {𝒇𝒏 (𝒙)} be a monotonic increasing sequence of continuous
functions defined on a closed interval [𝒂, 𝒃], and suppose that for each 𝒙 ∈ [𝒂, 𝒃], the sequence {𝒇𝒏 (𝒙)} converges to
a function 𝒇(𝒙). If the pointwise limit function 𝒇(𝒙) is also continuous on [𝒂, 𝒃], then the convergence of the sequence
{𝒇𝒏 (𝒙)} to 𝒇(𝒙) is uniform on [𝒂, 𝒃].
Explanation of Terms Used in the Theorem:
1. A sequence {fₙ(x)} is said to be monotonic increasing if for all n, fₙ(x) ≤ f_(n+1)(x) for all x ∈ [a, b].
2. A sequence of functions {fₙ(x)} converges pointwise to f(x) if for every x ∈ [a, b], limₙ→∞ fₙ(x) = f(x).
3. The convergence is said to be uniform on [a, b] if
∀ε > 0, ∃N ∈ ℕ such that ∀n ≥ N, sup_(x∈[a,b]) |fₙ(x) − f(x)| < ε
That is, after a certain index 𝑵, the difference between 𝒇𝒏 (𝒙) and 𝒇(𝒙) is uniformly small throughout the entire
interval [𝒂, 𝒃].
Proof:
We are given the following:
The sequence {𝒇𝒏 (𝒙)} is composed of continuous functions on the closed and bounded interval [𝒂, 𝒃], hence
each 𝒇𝒏 is uniformly continuous and bounded on [𝒂, 𝒃] due to the Heine–Cantor theorem.
The sequence is monotonically increasing, i.e., fₙ(x) ≤ f_(n+1)(x) for all x ∈ [a, b].
The sequence converges pointwise to a function 𝒇(𝒙).
The limit function 𝒇(𝒙) is continuous on [𝒂, 𝒃].
Our goal is to prove that {𝒇𝒏 (𝒙)} converges to 𝒇(𝒙) uniformly on [𝒂, 𝒃].
To proceed, define a new sequence of functions:
𝒈𝒏 (𝒙) = 𝒇(𝒙) − 𝒇𝒏 (𝒙)
Then each 𝒈𝒏 (𝒙) is continuous on [𝒂, 𝒃] because both 𝒇(𝒙) and 𝒇𝒏 (𝒙) are continuous. Also, since {𝒇𝒏 (𝒙)} is
increasing and converging pointwise to 𝒇(𝒙), it follows that 𝒈𝒏 (𝒙) is a monotonically decreasing sequence of non-
negative continuous functions:
gₙ(x) ≥ g_(n+1)(x) ≥ 0 for all x ∈ [a, b]
and
limₙ→∞ gₙ(x) = f(x) − limₙ→∞ fₙ(x) = 0
So, we have a sequence {𝒈𝒏 (𝒙)} of non-negative, continuous, monotonically decreasing functions converging
pointwise to the zero function.
Now we invoke Dini's Theorem, which states:
If {𝒈𝒏 (𝒙)} is a sequence of continuous functions on a compact interval [𝒂, 𝒃] such that:
𝒈𝒏 (𝒙) converges pointwise to a continuous function 𝒈(𝒙),
the sequence {𝒈𝒏 (𝒙)} is monotonic (increasing or decreasing), then the convergence is uniform on [𝒂, 𝒃].
Here, the interval [𝒂, 𝒃] is compact, 𝒈𝒏 (𝒙) is continuous, decreasing, and converging pointwise to 𝒈(𝒙) = 𝟎, which is
also continuous. Hence, by Dini’s Theorem, 𝒈𝒏 (𝒙) → 𝟎 uniformly on [𝒂, 𝒃].
This means that for any 𝜺 > 𝟎, there exists a natural number 𝑵 such that for all 𝒏 ≥ 𝑵 and for all 𝒙 ∈ [𝒂, 𝒃],
|𝒈𝒏 (𝒙) − 𝟎| = |𝒇(𝒙) − 𝒇𝒏 (𝒙)| < 𝜺
This is precisely the definition of uniform convergence of the sequence {𝒇𝒏 (𝒙)} to 𝒇(𝒙).
Therefore, the sequence {𝒇𝒏 (𝒙)} converges uniformly to 𝒇(𝒙) on [𝒂, 𝒃].
Hence, Kamke’s Convergence Theorem is proved.
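A concrete illustration (a sketch with an arbitrarily chosen sequence): fₙ(x) = (1 − 1/n) sin x on [0, π] is monotonically increasing with continuous limit sin x, and its sup-norm errors shrink like 1/n, i.e. the convergence is uniform, as the theorem predicts:

```python
# Sketch: sup-norm errors of fn(x) = (1 - 1/n) sin(x) against f(x) = sin(x)
# on [0, π], evaluated on a grid; they decrease toward 0.
import math

grid = [math.pi * i / 1000 for i in range(1001)]

def sup_error(n):
    return max(abs((1 - 1 / n) * math.sin(x) - math.sin(x)) for x in grid)

sups = [sup_error(n) for n in (1, 2, 5, 10, 100)]
print(sups)  # decreasing toward 0 (approximately 1/n on this grid)
```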
Ques 2: Discuss Nagumo's and Osgood's Criteria
Nagumo’s Criterion:
Nagumo’s criterion provides a condition under which the initial value problem (IVP) for a first-order differential
equation has a unique solution.
Consider the differential equation
dy/dx = f(x, y),  y(x₀) = y₀
Nagumo's Criterion states: Let 𝒇(𝒙, 𝒚) be continuous in a region 𝑹 containing the point (𝒙𝟎 , 𝒚𝟎 ). Suppose there exists
a function 𝝓: [𝟎, 𝒂) → [𝟎, ∞) such that:
1. 𝝓 is continuous and non-negative,
2. ∫₀^δ (1/φ(s)) ds = ∞ for some δ > 0 (this is the divergence condition),
3. and for all 𝒙 near 𝒙𝟎 and all 𝒚𝟏 , 𝒚𝟐 near 𝒚𝟎 , with 𝒚𝟏 ≠ 𝒚𝟐 ,
|f(x, y₁) − f(x, y₂)| / |y₁ − y₂| ≤ φ(|y₁ − y₂|)
then the IVP has a unique solution in some neighborhood of 𝒙𝟎 .
Significance: Nagumo’s condition weakens the Lipschitz condition (used in Picard–Lindelöf theorem) by replacing
the constant Lipschitz bound with a function 𝝓, but still ensures uniqueness of the solution. It's useful when the
function 𝒇 is not Lipschitz but still behaves “mildly” enough for uniqueness to hold.
Osgood’s Criterion:
Osgood’s criterion also provides a uniqueness condition for the solution of the initial value problem, similar to
Nagumo’s criterion but formulated differently.
Consider the IVP:
dy/dx = f(x, y),  y(x₀) = y₀
Osgood's Criterion states: Let 𝒇(𝒙, 𝒚) be continuous in a neighborhood of (𝒙𝟎 , 𝒚𝟎 ). Suppose there exists a non-
decreasing, continuous function 𝝓: [𝟎, 𝒂) → [𝟎, ∞) with 𝝓(𝟎) = 𝟎, and:
1. For all 𝒙 near 𝒙𝟎 and all 𝒚𝟏 , 𝒚𝟐 near 𝒚𝟎 ,
|𝒇(𝒙, 𝒚𝟏 ) − 𝒇(𝒙, 𝒚𝟐 )| ≤ 𝝓(|𝒚𝟏 − 𝒚𝟐 |)
2. The integral
∫₀^δ (1/φ(s)) ds = ∞ for some δ > 0
Then the IVP has a unique solution.
Key Points and Comparison:
Both Nagumo’s and Osgood’s criteria are generalizations of the Lipschitz condition, used when 𝒇(𝒙, 𝒚) may
not satisfy the strict Lipschitz condition.
Osgood's condition gives an upper bound on the difference |f(x, y₁) − f(x, y₂)|, while Nagumo's gives an
upper bound on the quotient of that difference over |y₁ − y₂|.
The divergence of the integral ∫ (1/φ(s)) ds near zero ensures that small differences in the function’s output do
not accumulate rapidly enough to allow multiple solutions.
If φ(s) = Ls, a linear function, then both reduce to the standard Lipschitz condition, and the divergence
integral becomes ∫₀^δ (1/(Ls)) ds = ∞, satisfying both criteria.
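A useful counterpoint (an illustrative sketch, not from the text): for f(y) = y^(2/3) the natural bound φ(s) = s^(2/3) makes ∫₀^δ (1/φ(s)) ds converge, so neither criterion applies, and uniqueness really does fail for y′ = y^(2/3), y(0) = 0:

```python
# Sketch: y ≡ 0 and y = (x/3)^3 both solve dy/dx = y^(2/3) with y(0) = 0,
# so the IVP has more than one solution. We check the second solution's
# residual with a central finite difference.
def f(y):
    return abs(y) ** (2.0 / 3.0)

y_a = lambda x: 0.0            # trivial solution
y_b = lambda x: (x / 3.0) ** 3 # nontrivial solution

h = 1e-6
residuals = []
for x in (0.5, 1.0, 2.0):
    d_b = (y_b(x + h) - y_b(x - h)) / (2 * h)
    residuals.append(abs(d_b - f(y_b(x))))
print(y_a(0.0), y_b(0.0), residuals)  # same initial value, both satisfy the ODE
```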
Ques 3: Explain Fredholm Integral Equation with Separable Kernel
A Fredholm integral equation is a type of integral equation in which the unknown function appears under the
integral sign. The general form of the Fredholm integral equation of the second kind is:
f(x) = φ(x) + λ ∫ₐᵇ K(x, t) f(t) dt
Where:
𝒇(𝒙) is the unknown function to be determined
𝝓(𝒙) is a known function (called the inhomogeneous term)
𝝀 is a parameter (often a constant)
𝑲(𝒙, 𝒕) is a given function called the kernel
The integration is over a finite interval [𝒂, 𝒃]
Separable Kernel:
A separable kernel (also called a degenerate kernel) is a kernel 𝑲(𝒙, 𝒕) that can be written as a finite sum of products
of functions of 𝒙 and functions of 𝒕. That is,
K(x, t) = Σᵢ₌₁ⁿ fᵢ(x) gᵢ(t)
Substituting this into the integral equation and defining cᵢ = ∫ₐᵇ gᵢ(t) f(t) dt, Aⱼ = ∫ₐᵇ gⱼ(t) φ(t) dt, and
Bⱼᵢ = ∫ₐᵇ gⱼ(t) fᵢ(t) dt reduces the problem to:
cⱼ = Aⱼ + λ Σᵢ₌₁ⁿ Bⱼᵢ cᵢ for j = 1, 2, ..., n
This is a system of n linear equations in n unknowns c₁, c₂, ..., cₙ, which can be solved using matrix methods.
Once the constants cᵢ are found, the solution to the original integral equation is:
f(x) = φ(x) + λ Σᵢ₌₁ⁿ cᵢ fᵢ(x)
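As a worked sketch (a hypothetical example, not from the text), the reduction can be carried out end-to-end for the one-term kernel K(x, t) = xt on [0, 1] with φ(x) = x, then verified by substituting back via quadrature (`quad` is an illustrative midpoint-rule helper):

```python
# Sketch: solve f(x) = x + λ ∫₀¹ x·t·f(t) dt. With c = ∫₀¹ t f(t) dt the
# equation gives c = 1/3 + λc/3, so c = (1/3)/(1 - λ/3) and f(x) = x + λcx.
lam = 0.9
c = (1.0 / 3.0) / (1.0 - lam / 3.0)
f = lambda x: x + lam * c * x

def quad(g, a, b, n=100000):           # midpoint rule, stdlib only
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

x0 = 0.37
residual = abs(f(x0) - (x0 + lam * quad(lambda t: x0 * t * f(t), 0.0, 1.0)))
print(residual)  # ≈ 0: the candidate solves the equation
```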
Conclusion:
Zeroth approximation: 𝒖𝟎 (𝒙) = 𝐜𝐨𝐬𝒙
First approximation: u₁(x) = cos x + (x/2) sin x
Second approximation:
u₂(x) = cos x + (x/2) sin x + (1/2) ∫₀ˣ t sin(x − t) sin t dt
Ques 6: Find the resolvent kernel for K(x, t) = e^(x+t), on the interval [0, 1]
We are given the kernel:
K(x, t) = e^(x+t), for 0 ≤ t ≤ x ≤ 1
We are to find the resolvent kernel 𝑹(𝒙, 𝒕; 𝝀), which satisfies the integral equation:
R(x, t; λ) = K(x, t) + λ ∫ₜˣ K(x, s) R(s, t; λ) ds
The general method to compute the resolvent kernel is to use the Neumann series expansion:
R(x, t; λ) = Σₙ₌₁^∞ λ^(n−1) Kₙ(x, t)
Where:
K₁(x, t) = K(x, t)
K_(n+1)(x, t) = ∫ₜˣ K(x, s) Kₙ(s, t) ds
Conclusion:
The resolvent kernel for K(x, t) = e^(x+t) is:
R(x, t; λ) = Σₙ₌₁^∞ λ^(n−1) Kₙ(x, t)
where each Kₙ(x, t) is generated recursively. The first two terms give the approximate resolvent:
R(x, t; λ) ≈ e^(x+t) + (λ/2) e^(x+t) (e^(2x) − e^(2t))
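The closed form of the second iterated kernel can be verified by quadrature (an illustrative sketch; midpoint rule):

```python
# Sketch: check numerically that K₂(x,t) = ∫ₜˣ e^(x+s) e^(s+t) ds equals
# (1/2) e^(x+t) (e^(2x) - e^(2t)), the second iterated kernel used above.
import math

def quad(g, a, b, n=100000):           # midpoint rule, stdlib only
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

x, t = 0.9, 0.2
K2_num = quad(lambda s: math.exp(x + s) * math.exp(s + t), t, x)
K2_closed = 0.5 * math.exp(x + t) * (math.exp(2 * x) - math.exp(2 * t))
print(K2_num, K2_closed)
```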
Ques 7: State and prove Gronwall’s Inequality
Statement of Gronwall’s Inequality (Integral Form): Let 𝒖(𝒙) and 𝒈(𝒙) be continuous, real-valued, non-negative
functions on the interval [𝒂, 𝒃], and let 𝝀 ≥ 𝟎 be a constant. Suppose that for all 𝒙 ∈ [𝒂, 𝒃], the function 𝒖(𝒙) satisfies
the inequality:
u(x) ≤ g(x) + λ ∫ₐˣ u(t) dt
Then Gronwall’s inequality asserts that:
u(x) ≤ g(x) + λ ∫ₐˣ g(t) e^(λ(x−t)) dt
Moreover, if g(x) is a constant function g(x) = C, then the inequality simplifies to:
u(x) ≤ C e^(λ(x−a))
Conclusion:
Gronwall’s inequality is a powerful tool in differential and integral equations. It allows us to bound a function 𝒖(𝒙)
that is controlled by an integral involving itself. It is especially important in proving uniqueness and stability results
for solutions of differential equations.
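As a numerical illustration (a sketch): u(x) = e^(λx) satisfies the hypothesis with g ≡ C = 1 and a = 0 with equality, and Gronwall's bound C·e^(λ(x−a)) is then attained exactly:

```python
# Sketch: u(x) = e^(λx) gives u(x) = 1 + λ ∫₀ˣ u(t) dt exactly, and the
# Gronwall bound C e^(λx) is sharp in this case. Both facts checked below.
import math

lam, C = 0.8, 1.0
u = lambda x: math.exp(lam * x)

def quad(g, a, b, n=100000):           # midpoint rule, stdlib only
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

x = 1.5
hyp_residual = abs(u(x) - (C + lam * quad(u, 0.0, x)))
bound_gap = C * math.exp(lam * x) - u(x)
print(hyp_residual, bound_gap)   # both ≈ 0: the bound is attained here
```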
Ques 8: State and prove Picard–Lindelöf Theorem
Statement of the Picard–Lindelöf Theorem (Existence and Uniqueness Theorem):
Let the function 𝒇(𝒙, 𝒚) be defined on a rectangle 𝑹 = {(𝒙, 𝒚) ∈ ℝ𝟐 : |𝒙 − 𝒙𝟎 | ≤ 𝒂, |𝒚 − 𝒚𝟎 | ≤ 𝒃}. Suppose that:
1. 𝒇(𝒙, 𝒚) is continuous in 𝑹,
2. 𝒇(𝒙, 𝒚) satisfies a Lipschitz condition in 𝒚, i.e., there exists a constant 𝑳 > 𝟎 such that for all
(𝒙, 𝒚𝟏 ), (𝒙, 𝒚𝟐 ) ∈ 𝑹,
|𝒇(𝒙, 𝒚𝟏 ) − 𝒇(𝒙, 𝒚𝟐 )| ≤ 𝑳|𝒚𝟏 − 𝒚𝟐 |
Then, there exists a unique function 𝒚(𝒙), defined in some interval |𝒙 − 𝒙𝟎 | ≤ 𝒉 ≤ 𝒂, which satisfies the initial value
problem:
dy/dx = f(x, y),  y(x₀) = y₀
Uniqueness:
Suppose there exist two solutions y₁(x) and y₂(x) satisfying the same IVP. Then:
|y₁(x) − y₂(x)| = | ∫_(x₀)ˣ [f(t, y₁(t)) − f(t, y₂(t))] dt | ≤ L ∫_(x₀)ˣ |y₁(t) − y₂(t)| dt
By Gronwall’s inequality, we conclude:
|y₁(x) − y₂(x)| ≤ 0 ⇒ y₁(x) = y₂(x)
So the solution is unique.
Conclusion:
The Picard–Lindelöf Theorem guarantees both existence and uniqueness of the solution to an initial value problem
under two key conditions:
Continuity of the function 𝒇(𝒙, 𝒚)
Satisfaction of a Lipschitz condition in 𝒚
This theorem forms the foundation of the theory of ordinary differential equations and provides a method (Picard
iteration) to actually approximate the solution.
Ques 9: State and prove Peano’s Theorem
Statement of Peano’s Existence Theorem:
Let the function 𝒇(𝒙, 𝒚) be continuous in a rectangle 𝑹 = {(𝒙, 𝒚) ∈ ℝ𝟐 : |𝒙 − 𝒙𝟎 | ≤ 𝒂, |𝒚 − 𝒚𝟎 | ≤ 𝒃} containing the
point (𝒙𝟎 , 𝒚𝟎 ). Then, the initial value problem (IVP):
dy/dx = f(x, y),  y(x₀) = y₀
has at least one solution in some interval |𝒙 − 𝒙𝟎 | ≤ 𝒉 ≤ 𝒂.
Final Answer:
𝒚(𝒙) = 𝒆𝒙
This function satisfies the given integral equation.
Ques 11: Find the resolvent kernel for
𝒌(𝒙, 𝒕) = 𝐬𝐢𝐧(𝒙 − 𝟐𝒕), 𝟎 ≤ 𝒙 ≤ 𝟐𝝅, 𝟎 ≤ 𝒕 ≤ 𝟐𝝅
Using the Neumann series R(x, t; λ) = Σₙ₌₁^∞ λ^(n−1) Kₙ(x, t), we compute the iterated kernels. Already the second
one vanishes:
K₂(x, t) = ∫₀^(2π) sin(x − 2s) sin(s − 2t) ds = 0
because, after the product-to-sum identity, both cosine terms are integrated over whole periods. Hence Kₙ = 0 for
all n ≥ 2, and the series reduces to its first term.
Final Answer:
𝑹(𝒙, 𝒕; 𝝀) = 𝐬𝐢𝐧(𝒙 − 𝟐𝒕)
This is the resolvent kernel for 𝒌(𝒙, 𝒕) = 𝐬𝐢𝐧(𝒙 − 𝟐𝒕) over the square region 𝟎 ≤ 𝒙, 𝒕 ≤ 𝟐𝝅.
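The collapse of the Neumann series can be seen numerically: ∫₀^(2π) sin(x − 2s) sin(s − 2t) ds vanishes for any x, t because the product-to-sum cosine terms are integrated over whole periods. A sketch (`quad` is an illustrative midpoint-rule helper):

```python
# Sketch: the second iterated kernel of K(x,t) = sin(x - 2t) on [0, 2π]
# is (numerically) zero, so the resolvent equals the kernel itself.
import math

def quad(g, a, b, n=100000):           # midpoint rule, stdlib only
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

vals = []
for (x, t) in ((0.4, 1.1), (2.0, 0.3)):
    K2 = quad(lambda s: math.sin(x - 2 * s) * math.sin(s - 2 * t),
              0.0, 2 * math.pi)
    vals.append(abs(K2))
print(vals)  # both ≈ 0, so Kₙ = 0 for n ≥ 2 and R(x,t;λ) = sin(x − 2t)
```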
Ques 12: State and prove Arzelà–Ascoli Theorem
Statement of Arzelà–Ascoli Theorem (Real-valued version):
Let 𝓕 be a family of real-valued functions defined on a closed and bounded interval [𝒂, 𝒃]. Suppose that:
1. Every function 𝒇 ∈ 𝓕 is continuous on [𝒂, 𝒃],
2. The family 𝓕 is uniformly bounded, i.e.,
∃𝑴 > 𝟎 such that |𝒇(𝒙)| ≤ 𝑴, ∀𝒙 ∈ [𝒂, 𝒃], ∀𝒇 ∈ 𝓕
3. The family 𝓕 is equicontinuous, i.e.,
∀𝜺 > 𝟎, ∃𝜹 > 𝟎 such that |𝒙 − 𝒚| < 𝜹 ⇒ |𝒇(𝒙) − 𝒇(𝒚)| < 𝜺, ∀𝒇 ∈ 𝓕
Then every sequence in 𝓕 has a uniformly convergent subsequence whose limit is a continuous function on [𝒂, 𝒃].
Conclusion:
The Arzelà–Ascoli Theorem guarantees that from any equicontinuous, uniformly bounded sequence of continuous
functions on a compact interval, one can extract a uniformly convergent subsequence.
This is a fundamental result in functional analysis and is used in solving integral and differential equations by
ensuring that sequences of approximate solutions have convergent limits.
Ques 13: Apply Picard method to solve
dy/dx = y − x,  y(0) = 2
Step 1: Express the differential equation in integral form
Using the Picard iteration method, we write the integral form of the equation:
y(x) = y₀ + ∫₀ˣ f(t, y(t)) dt
Here,
𝒇(𝒙, 𝒚) = 𝒚 − 𝒙
𝒚𝟎 = 𝟐
So,
y(x) = 2 + ∫₀ˣ (y(t) − t) dt
Conclusion:
Using Picard's method, the successive approximations are:
y₀(x) = 2
y₁(x) = 2 + 2x − x²/2
y₂(x) = 2 + 2x + x²/2 − x³/6
y₃(x) = 2 + 2x + x²/2 + x³/6 − x⁴/24
And the exact solution is:
y(x) = eˣ + x + 1
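The iteration can also be carried out numerically on a grid (an illustrative sketch; `picard_step` is a hypothetical helper using the trapezoid rule) and compared with both y₃ and the exact solution:

```python
# Sketch: run Picard iteration for dy/dx = y - x, y(0) = 2 on a grid over
# [0, 1] and compare the third iterate with its polynomial form and with
# the exact solution e^x + x + 1.
import math

N = 2000
xs = [i / N for i in range(N + 1)]          # grid on [0, 1]

def picard_step(y_vals):
    """y_{k+1}(x) = 2 + ∫₀ˣ (y_k(t) - t) dt via the trapezoid rule."""
    out = [2.0]
    for i in range(1, N + 1):
        f0 = y_vals[i - 1] - xs[i - 1]
        f1 = y_vals[i] - xs[i]
        out.append(out[-1] + (f0 + f1) / (2 * N))
    return out

y = [2.0] * (N + 1)                          # y₀(x) = 2
for _ in range(3):                           # three Picard steps → y₃
    y = picard_step(y)

y3_poly = lambda x: 2 + 2 * x + x**2 / 2 + x**3 / 6 - x**4 / 24
exact = lambda x: math.exp(x) + x + 1
print(y[N], y3_poly(1.0), exact(1.0))
```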