
First Order Differential Equations

Variable Separation:

$y' = h(y)\,r(t) \;\to\; \int \frac{dy}{h(y)} = \int r(t)\,dt \;\to\; S(y) = R(t) + C$

First Order Standard Linear Form:


$y' + P(t)\,y = Q(t)$  (1)

Solve By Homogeneous Equation Method


Solve the homogeneous equation first:

$y' + P(t)y = Q(t) \;\to\; y_h' + P(t)y_h = 0$ (homogeneous eq.)

$\frac{dy_h}{y_h} = -P(t)\,dt \;\to\; y_h = e^{-\int P(t)\,dt}$

Then look for a solution of the form $y = u(t)\,y_h \;\to\; y' = u'(t)\,y_h + u(t)\,y_h'$:

$u'(t)\,y_h + u(t)\,y_h' + P(t)\,u(t)\,y_h = Q(t)$

$u'(t)\,y_h + u(t)\underbrace{(y_h' + P(t)\,y_h)}_{\text{Homogeneous Eq}\,=\,0} = Q(t) \;\to\; u'(t)\,y_h = Q(t)$

$u(t) = \int \frac{Q(t)}{y_h}\,dt \;\to\; y(t) = y_h \int \frac{Q(t)}{y_h}\,dt \;\wedge\; y_h = e^{-\int P(t)\,dt}$

Integrating Factor

$\rho(t) = e^{\int P(t)\,dt} = \frac{1}{y_h} \;\to\; y(t) = \frac{1}{\rho(t)}\int Q(t)\,\rho(t)\,dt = \underbrace{\frac{R(t)}{\rho(t)}}_{\text{Long term / Steady term}} + \underbrace{\frac{C}{\rho(t)}}_{\text{Transient}}$  (2)
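The integrating-factor formula above can be checked numerically. This is a minimal sketch on the concrete equation $y' + 2y = 3$ (so $P = 2$, $Q = 3$, values chosen only for illustration); the formula gives $y(t) = \frac{3}{2} + Ce^{-2t}$, and $y(0) = 0$ fixes $C = -\frac{3}{2}$.

```python
import math

# Integrating-factor solution of y' + 2y = 3 (P = 2, Q = 3):
# rho(t) = e^{2t}, y(t) = (1/rho) * integral of Q*rho = 3/2 + C*e^{-2t}.
# The initial condition y(0) = 0 (chosen for this example) gives C = -3/2.
def y(t):
    return 1.5 - 1.5 * math.exp(-2 * t)

def residual(t, h=1e-6):
    """y'(t) + 2*y(t) - 3, with y' taken by a central difference."""
    dy = (y(t + h) - y(t - h)) / (2 * h)
    return dy + 2 * y(t) - 3

# The residual should be ~0 at every t if the formula is right.
print(max(abs(residual(t)) for t in (0.0, 0.5, 1.0, 2.0)))
```

The residual being numerically zero everywhere confirms the candidate really solves the ODE.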

Linear Combination

If you have a set of functions $F(t): f_1(t), f_2(t), \dots, f_n(t)$, then the linear combinations of these
functions are going to have the form $c_1 f_1(t) + c_2 f_2(t) + \dots + c_n f_n(t)$.

Example: $2\cos(t) + 3\sin(t)$ is a linear combination of $\cos(t)$ and $\sin(t)$.

$\frac{9}{5}t + \frac{3}{5}$ is a linear combination of $t$ and $1$.

Superposition Principle

Multiplication:

$\underbrace{p_n(t)\,y^{(n)} + \dots + p_0(t)\,y = q(t)}_{\text{Solution}\,=\,y(t)} \;\to\; \underbrace{p_n(t)\,y^{(n)} + \dots + p_0(t)\,y = a\,q(t)}_{\text{Solution}\,=\,a\,y(t)}$

Summation:

$p_n(t)\,y_1^{(n)} + \dots + p_0(t)\,y_1 = q_1(t) \;\to\; y_1(t)$

$p_n(t)\,y_2^{(n)} + \dots + p_0(t)\,y_2 = q_2(t) \;\to\; y_2(t)$

$\to\; p_n(t)\,y_3^{(n)} + \dots + p_0(t)\,y_3 = q_1(t) + q_2(t) \;\to\; y_3(t) = y_1(t) + y_2(t)$

Complex Numbers

Complex numbers are expressions of the form $a + bi$, where $a$ and $b$ are real numbers, and $i$ is a new symbol.
Just as real numbers can be plotted on a line, complex numbers can be plotted in a plane: plot $a + bi$
at the point $(a, b)$.

A complex number of the form $0 + bi$ is called purely imaginary.
A complex number of the form $a + 0i$ is called real.

$i^2 = -1, \quad i = j$ (sometimes) $\;\wedge\; a + bi \in \mathbb{C}$

$\operatorname{Re}(a + bi) = a, \quad \operatorname{Im}(a + bi) = b, \quad \underbrace{\overline{a + bi} = a - bi}_{\text{Complex Conjugate}}$

[Figure: the complex plane with real and imaginary axes; e.g. $-1$, $-i$ and $2 - i$ plotted as points.]
Addition: $[a + bi] + [c + di] = (a + c) + (b + d)i$

Multiplication: $[a + bi][c + di] = ac + adi + bci - bd = (ac - bd) + (ad + bc)i$

Division: $\frac{a + bi}{c + di} = \frac{[a + bi][c - di]}{[c + di][c - di]} = \frac{(a + bi)(c - di)}{c^2 + d^2}$

Modulus: $|a + bi| = \sqrt{a^2 + b^2}$
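The four rules above are exactly what Python's built-in `complex` type implements, so they can be sanity-checked directly (the specific numbers $2+3i$ and $1-i$ are just illustrations):

```python
# Python's built-in complex type follows the rules above.
z = 2 + 3j
w = 1 - 1j

add = z + w                                  # (a+c) + (b+d)i
mul = z * w                                  # (ac-bd) + (ad+bc)i
# Division: multiply top and bottom by the conjugate of w,
# so the denominator becomes |w|^2 = c^2 + d^2.
div = (z * w.conjugate()) / (w.real**2 + w.imag**2)
mod = abs(z)                                 # sqrt(a^2 + b^2)
print(add, mul, div, mod)
```

The hand-rolled division agrees with the language's `z / w` up to rounding.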

Identities:

$\operatorname{Re} z = \frac{z + \bar{z}}{2}, \quad \operatorname{Im} z = \frac{z - \bar{z}}{2i}, \quad \bar{\bar{z}} = z, \quad z\bar{z} = |z|^2$

$\operatorname{Re}(cz) = c\operatorname{Re}(z), \quad \operatorname{Im}(cz) = c\operatorname{Im}(z)$ (for real $c$).

Polar coordinates:
Given $z = a + bi$, this can be written as $z = r(\cos(\theta) + i\sin(\theta)) \;\to\; a = r\cos(\theta),\; b = r\sin(\theta)$,

or instead as $z = r\operatorname{cis}(\theta) = re^{i\theta}$.

Backwards Euler's:

$\cos(a) = \frac{e^{ia} + e^{-ia}}{2}, \qquad \sin(a) = \frac{e^{ia} - e^{-ia}}{2i}$
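A quick numerical sketch of the inverse Euler formulas and the polar form, using `cmath` (the angle $0.7$ and the point $1+i$ are arbitrary choices):

```python
import cmath
import math

# Inverse Euler formulas at an arbitrary angle.
a = 0.7
cos_a = (cmath.exp(1j * a) + cmath.exp(-1j * a)) / 2
sin_a = (cmath.exp(1j * a) - cmath.exp(-1j * a)) / 2j

# Polar form: r, theta with z = r * e^{i*theta}; for 1+i we expect
# r = sqrt(2) and theta = pi/4.
r, theta = cmath.polar(1 + 1j)
print(cos_a.real, sin_a.real, r, theta)
```

Both formulas land exactly on `math.cos(a)` and `math.sin(a)` (up to rounding), and the imaginary parts vanish as they should.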

Second Order DE

Second order differential equation with constant coefficients:

We start by finding two general solutions $x_1 \wedge x_2 \;\to\; x = C_1 x_1 + C_2 x_2$;
these solutions must be independent: $x_1 \neq c\,x_2$.

$\ddot{y} + A\dot{y} + By = 0 \;\to\; y = e^{rt} \;\to\; r^2 e^{rt} + Are^{rt} + Be^{rt} = 0$

$P(r) = $ Characteristic Polynomial $= r^2 + \underbrace{A}_{2p}\,r + \underbrace{B}_{\omega_n^2} = 0$

Roots: $r = \frac{-A \pm \sqrt{A^2 - 4B}}{2} = -p \pm \sqrt{p^2 - \omega_n^2}$

Real and different: $r_1 \neq r_2 \;\wedge\; r_1, r_2 \in \mathbb{R} \;\to\; y = c_1 e^{r_1 t} + c_2 e^{r_2 t}$

Some graph solutions

[Figure: sample solution curves, various linear combinations of $e^{-2x}$ and $e^{-x}$, e.g. $-\frac{1}{3}e^{-2x} + \frac{7}{3}e^{-x}$ and $\frac{2}{3}e^{-2x} - \frac{2}{3}e^{-x}$.]

Complex Roots: $r_1 \neq r_2 \;\to\; r = a \pm bi \;\to\; y = e^{(a \pm bi)t} = e^{at}(c_1\cos(bt) + c_2\sin(bt))$

Roots: $r = \underbrace{-\frac{A}{2}}_{\text{Real part }a} \pm\; i\underbrace{\frac{\sqrt{4B - A^2}}{2}}_{\text{Imaginary part }b}$; these roots give us 2 values for $y$: a Re and an Im part.

$r_1 = a + bi \;\to\; y_1 = e^{at}\cos(bt) = \operatorname{Re}(y) \;\wedge\; y_2 = e^{at}\sin(bt) = \operatorname{Im}(y)$. So the general solution is: $y = c_1 y_1 + c_2 y_2$

[Figure: graph of $e^{at}\cos(bt)$ for $a < 0$; graph of $e^{at}\sin(bt)$ for $a < 0$.]

Damped solutions to $m\ddot{x} + b\dot{x} + kx = 0$:

$p = \frac{b}{2m}, \quad \omega_d = \frac{\sqrt{4mk - b^2}}{2m} = \sqrt{\omega_n^2 - p^2}\;\cdots\; \omega_n = \sqrt{\frac{k}{m}}$

∙ Undamped: $b = 0$

The general solution is a simple harmonic oscillator:

$x = c_1 e^{-i\omega_n t} + c_2 e^{i\omega_n t} = A\cos(\omega_n t + \phi)$

∙ Underdamped: $b^2 < 4mk$
The roots are complex, and the general solution is:
$x = Ae^{-pt}\cos(\omega_d t - \phi)$ for some $A$ and $\phi$
Pseudo-period: $T = \frac{2\pi}{\omega_d}$

[Figure: underdamped oscillation bounded by the envelopes $Ae^{-pt}$ and $-Ae^{-pt}$.]
∙ Overdamped: $b^2 > 4mk$
All roots are real and negative, so the solution tends to 0.

∙ Critically damped: $b^2 = 4mk$
The discriminant is 0, so there is only one (negative) root. Because of that, you have to add another
solution, leaving the general solution as:
$y = e^{rt}(c_1 + c_2 t) = e^{-pt}(c_1 + c_2 t)$
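The four damping regimes above are decided entirely by the sign of the discriminant $b^2 - 4mk$, which is easy to sketch in code (the $m, b, k$ values below are arbitrary illustrations):

```python
def classify_damping(m, b, k):
    """Classify m*x'' + b*x' + k*x = 0 by the sign of b^2 - 4mk."""
    disc = b * b - 4 * m * k
    if b == 0:
        return "undamped"            # pure oscillation at omega_n
    if disc < 0:
        return "underdamped"         # decaying oscillation at omega_d
    if disc == 0:
        return "critically damped"   # repeated root, (c1 + c2*t)e^{-pt}
    return "overdamped"              # two real negative roots

print(classify_damping(1, 0, 4),     # b = 0
      classify_damping(1, 2, 5),     # b^2 - 4mk = -16
      classify_damping(1, 4, 4),     # b^2 - 4mk = 0
      classify_damping(1, 5, 4))     # b^2 - 4mk = 9
```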

Sinusoidal Functions

∙ $a\cos(\theta) + b\sin(\theta) = C\cos(\theta - \phi)$

Sinusoidal general form:

$a\cos(\theta) + b\sin(\theta) = \operatorname{Re}\big((a - bi)e^{i\theta}\big) = \sqrt{a^2 + b^2}\,\operatorname{Re}\big(e^{-i\phi}e^{i\theta}\big) = \sqrt{a^2 + b^2}\cos(\theta - \phi), \quad \tan(\phi) = \frac{b}{a}$

Easy to use and graph: $A\cos(\omega t - \phi)$

There are characteristic parts of this to graph:

$T = \frac{2\pi}{\omega}, \qquad t_0 = \frac{\phi}{\omega}, \qquad \nu = \frac{1}{T}$

High Order differential Equations with constant terms

The span is the linear algebra term for the phrase "all linear combinations."
$\operatorname{Span}(f_1, f_2, \dots, f_n) = \{c_1 f_1 + c_2 f_2 + \dots + c_n f_n\}$, where $(f_1, f_2, \dots, f_n)$ are the "basis".

Dependence: Vectors $v_1, v_2, \dots, v_n$ are linearly dependent if there exist numbers $c_1, c_2, \dots, c_n$, not all
zero, such that $c_1 v_1 + c_2 v_2 + c_3 v_3 + \dots + c_n v_n = 0$.

The meaning of linear dependence is that at least one of the functions on the list is redundant (and can be
expressed as a linear combination of the others.)

Example:
The vectors $(1, 0, 0), (0, 1, 0), (0, 0, 1)$ form a basis for $\mathbb{R}^3$ (a real vector space). This is because any
vector $(x, y, z)$ can be written as a (real) linear combination of these 3 basis vectors, and these 3 basis
vectors are linearly independent: $c_1(1, 0, 0) + c_2(0, 1, 0) + c_3(0, 0, 1) = 0$ only if $c_1, c_2, c_3$ all equal
zero.

Wronskian: To see if $f_1$ and $f_2$ are linearly dependent on each other, the Wronskian must give 0 ($W = 0$);
if they are not linearly dependent on each other, the Wronskian must be nonzero ($W \neq 0$).

$W(f_1, f_2) = \begin{vmatrix} f_1 & f_2 \\ f_1' & f_2' \end{vmatrix} = f_1 f_2' - f_2 f_1'$
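The $2\times 2$ Wronskian is a one-line computation; here is a sketch that evaluates it for an independent pair ($\cos t$, $\sin t$, where $W \equiv 1$) and a dependent pair ($t$ and $3t$, where $W \equiv 0$). Both test pairs are illustrative choices:

```python
import math

def wronskian(f, fp, g, gp, t):
    """W(f, g)(t) = f(t)*g'(t) - g(t)*f'(t); fp and gp are the derivatives."""
    return f(t) * gp(t) - g(t) * fp(t)

# cos and sin are independent: W = cos^2 + sin^2 = 1 everywhere.
W1 = wronskian(math.cos, lambda t: -math.sin(t), math.sin, math.cos, 0.3)

# t and 3t are dependent: W = t*3 - 3t*1 = 0.
W2 = wronskian(lambda t: t, lambda t: 1.0, lambda t: 3 * t, lambda t: 3.0, 0.3)
print(W1, W2)
```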

Dimension theorem. The dimension of the space of solutions to an nth order homogeneous ODE with
constant coefficients is $n$.

Generalized solution to nth order homogeneous ODE:

Given:

$a_n y^{(n)} + \dots + a_1\dot{y} + a_0 y = 0 \xrightarrow{\text{substitute } y = e^{rt}}$

$P$: the characteristic polynomial $= a_n r^n + \dots + a_1 r + a_0$

then you can factorize the characteristic polynomial as:

$P(r) = a_n(r - r_1)(r - r_2)\cdots(r - r_n)$

where $r_i$ are the characteristic roots of the polynomial (maybe complex).

If all are different: $\to\; e^{r_1 t}, e^{r_2 t}, e^{r_3 t}, \dots, e^{r_n t}$ is a basis.

If $r_1, r_2, \dots, r_n$ are not distinct, then $e^{r_1 t}, e^{r_2 t}, e^{r_3 t}, \dots, e^{r_n t}$ cannot be a basis, since some of
these functions are redundant (definitely not linearly independent)! If a particular root $r_i$ is repeated $m$ times,
then:

Replace $\overbrace{e^{r_i t} + e^{r_i t} + e^{r_i t} + \dots + e^{r_i t}}^{m\text{ copies}}$ by $e^{r_i t} + te^{r_i t} + t^2 e^{r_i t} + \dots + t^{m-1}e^{r_i t}$

Complex Roots

Complex basis vs real-valued basis. Let $y(t)$ be a complex-valued function of a real-valued variable $t$. If $y$
and $\bar{y}$ are part of a basis of solutions to a homogeneous linear system of ODEs with real coefficients, then a
real-valued basis is obtained by replacing $y, \bar{y}$ by $\operatorname{Re}(y), \operatorname{Im}(y)$.

Existence and uniqueness theorem for a linear ODE.

Let $p_{n-1}(t), \dots, p_0(t), q(t)$ be continuous functions on an open interval $I$. Let $a \in I$ and let
$b_0, b_1, \dots, b_{n-1}$ be given numbers. Then there exists a unique solution on $I$ to the nth order linear ODE

$y^{(n)} + p_{n-1}(t)y^{(n-1)} + \dots + p_1(t)\dot{y} + p_0(t)y = q(t)$

satisfying the initial conditions

$y(a) = b_0, \quad \dot{y}(a) = b_1, \quad \dots, \quad y^{(n-1)}(a) = b_{n-1}.$

As before, existence means that there is at least one solution; uniqueness means that there is only one
solution. Caution: the number $a$ at which the initial conditions are evaluated is the same in all of them.

Operator Notation

A derivative can be represented as: $f'(x) = Df(x)$, $f''(x) = D^2 f(x)$, $f'''(x) = D^3 f(x)$; this
notation takes the name of "Operator Notation".
The linear differential equation can take the form of:

$\ddot{y} + A\dot{y} + Cy = 0 \;\to\; D^2 y + ADy + Cy = 0 \;\to\; \underbrace{(D^2 + AD + C)}_{L\,=\,\text{Linear Operator}}y = 0$

Example: $De^{4t} = 4e^{4t}, \quad Dt^3 = 3t^2$

$D$ is linear: $D(x + y) = Dx + Dy \;\wedge\; D(c_1 y) = c_1 Dy$

The linear operator $L$ is a black box which, given a function $u(x)$, returns a function $v(x)$.

$D$ is linear (yes), so it must satisfy the linear properties:

(1) $L(u_1(x) + u_2(x)) = L(u_1(x)) + L(u_2(x))$
(2) $L(c_1 u(x)) = c_1 L(u(x))$, with $c_1 = $ constant $\;\wedge\; u(x) = $ function

(LTI) = Linear Time Invariant System

$f(t) \;\to\;$ [Any LTI System] $\;\to\; x(t)$
$f(t - t_0) \;\to\;$ [Any LTI System] $\;\to\; x(t - t_0)$

Solve Inhomogeneous equations:

To understand the general solution to an inhomogeneous linear ODE

$p_n(t)y^{(n)} + p_{n-1}(t)y^{(n-1)} + \dots + p_0(t)y = q(t)$

do the following:
∙ List all solutions to the associated homogeneous equation

$p_n(t)y^{(n)} + p_{n-1}(t)y^{(n-1)} + \dots + p_0(t)y = 0$

i.e., write down the general homogeneous solution $y_h$.

∙ Find (in some way) any one particular solution $y_p$ to the inhomogeneous ODE.
∙ Add $y_p$ to all the solutions of the homogeneous ODE to get all the solutions to the inhomogeneous ODE.

General Solution: $y = \underbrace{y_p}_{\text{Particular solution}} + \underbrace{y_h}_{\text{General homogeneous solution}}$

Substitution rule: $P(D)e^{rt} = P(r)e^{rt}$

Find Particular Solutions (Exponential Response Formula = ERF):

Question: For any polynomial $P$ and number $\alpha$, what is a particular solution to
$P(D)y = e^{\alpha t}$?

We have:

$P(D)e^{\alpha t} = P(\alpha)e^{\alpha t} \;\to\; e^{\alpha t} = P(D)\frac{e^{\alpha t}}{P(\alpha)}$

So the particular solution is: $y_P = \frac{e^{\alpha t}}{P(\alpha)}$, and superposition holds for this solving:

$P(D)y = c_n e^{\alpha_n t} + c_{n-1}e^{\alpha_{n-1}t} + \dots + c_0 e^{\alpha_0 t} \;\to\; y = y_n + y_{n-1} + \dots + y_0$

where $y_k$ is the particular solution of: $P(D)y_k = c_k e^{\alpha_k t}$

The existence and uniqueness theorem says that

$P(D)y = e^{\alpha t}$

should have a solution even if $P(\alpha) = 0$ (when ERF does not apply). Let's start with the case where
$P$ is a polynomial and $P(\alpha) = 0$, but $P'(\alpha) \neq 0$:

ERF: Suppose that for some number $\alpha$, $P(\alpha) = 0$ but $P'(\alpha) \neq 0$. Then $y_P = \frac{te^{\alpha t}}{P'(\alpha)}$;
more generally, if $\alpha$ is a root of $P$ of multiplicity $n$,

$y_P = \frac{t^n e^{\alpha t}}{P^{(n)}(\alpha)}$ is a particular solution to $P(D)y = e^{\alpha t}$
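A numerical sketch of the plain ERF on a concrete case (the polynomial $P(D) = D^2 + 3D + 2$ and $\alpha = 1$ are illustrative choices): $P(1) = 6$, so the formula predicts $y_P = e^t/6$, and plugging it back into the ODE should return $e^t$.

```python
import math

# ERF check: P(D) = D^2 + 3D + 2, input e^t (alpha = 1), P(1) = 6.
def yp(t):
    return math.exp(t) / 6

def lhs(t, h=1e-5):
    """y'' + 3y' + 2y evaluated numerically at t via finite differences."""
    ypp = (yp(t + h) - 2 * yp(t) + yp(t - h)) / h**2
    ypr = (yp(t + h) - yp(t - h)) / (2 * h)
    return ypp + 3 * ypr + 2 * yp(t)

# Should match the forcing term e^t.
print(lhs(0.5), math.exp(0.5))
```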

Complex Replacement:
A method to find a particular solution to an inhomogeneous ODE:

$P(D)x = k(a\cos(\omega t) + b\sin(\omega t)) = \operatorname{Re}\big(k(a - bi)e^{i\omega t}\big) \;\to\; P(D)x_C = k(a - bi)e^{i\omega t}$

Find $x_C = x_P + ix_Z$, then take $x_P$ to get the particular solution. We can do this by ERF: $x_C = \frac{k(a - bi)e^{i\omega t}}{P(i\omega)}$

Complex Gain: $G(\alpha)$

When we are solving an inhomogeneous ODE with a system input and response:

$\underbrace{P(D)x}_{\text{Response}} = \underbrace{Q(D)y}_{\text{Input}} \;\wedge\; y_C = ke^{\alpha t}$

We know that the general solution is $x_C = \frac{Q(\alpha)}{P(\alpha)}\,ke^{\alpha t}$, and we call $G(\alpha) = \frac{Q(\alpha)}{P(\alpha)}$

$G(\alpha) = \frac{\text{complexified system response}}{\text{complexified system input}}$.

Transient and Steady State solutions to stable ODE's:

We can say that, in general, a transient solution of a linear ODE of any order is the part of the solution which
needs constants (found from initial values) to be fully determined, and which tends to zero as time grows; the
steady state solution doesn't need any constant found from initial values.

Inhomogeneous linear ODE:

$p_n(t)y^{(n)} + p_{n-1}(t)y^{(n-1)} + \dots + p_0(t)y = q(t) \;\to\; y(t) = \underbrace{c_1 y_1 + c_2 y_2 + \dots + c_n y_n}_{\text{Transient solution}} + \underbrace{y_P}_{\text{Steady term}}$

Nonlinear ODE

First we meet 1st order ODEs:

$y'(x) = f(x, y)$

Graphic Methods:

The function $f(x, y)$ can be represented in the $x, y$ plane: at the point $(x, y)$ we put a line with slope
$f(x, y)$, building in that way a slope field, and the solutions to the ODE are going to be the curves tangential
to the slope lines. (Red, green and blue in the figure.)

To draw an isocline (curve whose points all have the same slope), we set $f(x, y) = c$, and we get a function
$y(x)$ along which every slope mark equals $c$.

Autonomous Equation: $\frac{dy}{dt} = f(y)$ (no $t$ on the right hand side)

Phase lines:
1. Find the critical points: $\dot{y} = f(y) \to f(y_c) = 0$; find these $y_c$. Every solution which starts at $y_c$ is a
constant solution which does not change with time.

2. Determine the intervals where $f(y) < 0$ and $f(y) > 0$; we plot a graph of $y \times f(y)$.

[Figure: graph of $f(y)$ with critical points $c_1, c_2, c_3, c_4$ marking where $f(y) > 0$ and $f(y) < 0$.]

The phase line of a first order autonomous DE $\dot{y} = f(y)$ is a plot of the $y$-axis with all critical points and
with an arrow in each interval between the critical points indicating whether solutions increase or decrease:

$-\infty\;$ Positive $\;c_1\;$ Negative $\;c_2\;$ Positive $\;c_3\;$ Negative $\;c_4\;$ Positive $\;+\infty$

Stable solutions are those where a point starting nearby gets closer and closer, and an unstable solution pushes
away all nearby points. Example: $c_1, c_3 = $ Stable $\;\wedge\; c_2, c_4 = $ Unstable

And there's another one, semistable:

on one side I have attraction and on the other side I have repulsion.
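The stability test above (sign of $f$ on each side of a critical point) can be sketched mechanically. The logistic-style $f(y) = y(1 - y)$ below is an illustrative choice, not from the notes:

```python
def classify_critical_points(f, ys, eps=1e-6):
    """For y' = f(y): a critical point y_c is stable if f > 0 just to its
    left and f < 0 just to its right, unstable if the reverse, and
    semistable if the sign is the same on both sides."""
    out = {}
    for yc in ys:
        left, right = f(yc - eps) > 0, f(yc + eps) > 0
        if left and not right:
            out[yc] = "stable"
        elif not left and right:
            out[yc] = "unstable"
        else:
            out[yc] = "semistable"
    return out

# f(y) = y(1 - y) has critical points 0 (unstable) and 1 (stable).
result = classify_critical_points(lambda y: y * (1 - y), [0.0, 1.0])
print(result)
```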

Numerical Solutions to nonlinear ODE

Euler's method:
Given an ODE $y'(x) = f(x, y)$ with initial values $y(x_0) = y_0$, we can build an approximation by following
the trace of the slope field.

Given an initial pair of points $(x_0, y_0)$, we know the slope at that point by evaluating
that point in the function $f(x, y)$; after that we walk a step $h$, and we
put another point $(x_1, y_1)$, and so on...

$x_{n+1} = x_n + h, \qquad y_{n+1} = y_n + \underbrace{f(x_n, y_n)}_{m_n}\cdot h$

If the ODE solution is convex the Euler solution is too low, and if the ODE solution is concave the
Euler solution is too high.
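The update rule above in a minimal sketch, applied to $y' = y$, $y(0) = 1$ (an illustrative test equation whose exact solution $e^x$ is convex, so Euler should undershoot):

```python
def euler(f, x0, y0, h, n):
    """n steps of Euler's method for y' = f(x, y), starting at (x0, y0)."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        y0 = y0 + h * f(x0, y0)   # follow the slope at the current point
        x0 = x0 + h
        xs.append(x0)
        ys.append(y0)
    return xs, ys

# 10 steps of size 0.1 toward x = 1; each step multiplies y by 1.1,
# so the result is 1.1**10 ~ 2.594, below the exact value e ~ 2.718.
xs, ys = euler(lambda x, y: y, 0.0, 1.0, 0.1, 10)
print(ys[-1])
```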

Second-order Runge–Kutta method (RK2)

Here is how one step of this method goes:
1. Starting from $(t_n, y_n)$, look ahead to see where one step of Euler's method would land, but do not go there!
Call this temporary point $(t_{n+1}, \tilde{y}_{n+1})$, where $\tilde{y}_{n+1} = y_n + hf(t_n, y_n)$.

2. Find the midpoint between the starting point and the temporary point: $\left(\frac{t_n + t_{n+1}}{2}, \frac{y_n + \tilde{y}_{n+1}}{2}\right)$

3. Use the slope at this midpoint to find $y_{n+1}$:

$y_{n+1} = y_n + hf\left(\frac{t_n + t_{n+1}}{2}, \frac{y_n + \tilde{y}_{n+1}}{2}\right), \qquad t_{n+1} = t_n + h, \qquad \tilde{y}_{n+1} = y_n + hf(t_n, y_n)$
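The three steps above in code, again on the illustrative test equation $y' = y$, $y(0) = 1$; with the same step size, RK2 lands much closer to $e$ than plain Euler:

```python
import math

def rk2(f, t0, y0, h, n):
    """Midpoint (RK2) method for y' = f(t, y)."""
    for _ in range(n):
        y_temp = y0 + h * f(t0, y0)        # 1. temporary Euler point
        t_mid = t0 + h / 2                 # 2. midpoint between start
        y_mid = (y0 + y_temp) / 2          #    and temporary point
        y0 = y0 + h * f(t_mid, y_mid)      # 3. step with the midpoint slope
        t0 = t0 + h
    return y0

approx = rk2(lambda t, y: y, 0.0, 1.0, 0.1, 10)
print(approx, math.e)
```

For $y' = y$ each step multiplies $y$ by $1 + h + h^2/2 = 1.105$, so the error at $t = 1$ is only a few thousandths.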

Heun's method
1. Get the coordinates of the next Euler point $(t_{n+1}, \tilde{y}_{n+1})$.
2. Find the slope at Euler's point, $f_{n+1}$.
3. Sketch the point $(t_{n+1}, y_{n+1})$ from the first point $(t_n, y_n)$ with the average of the slopes $f_n$ and $f_{n+1}$:

$t_{n+1} = t_n + h, \qquad y_{n+1} = y_n + h\,\frac{f_n + f_{n+1}}{2}$

$f_n = f(t_n, y_n), \qquad f_{n+1} = f(t_{n+1},\, y_n + hf(t_n, y_n))$
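Heun's predictor-corrector step in code, on the same illustrative test equation $y' = y$, $y(0) = 1$ (for linear $f$ it happens to take exactly the same per-step factor as RK2):

```python
import math

def heun(f, t0, y0, h, n):
    """Heun's method: average the slope at the current point with the
    slope at the Euler-predicted next point."""
    for _ in range(n):
        fn = f(t0, y0)                     # slope now
        y_pred = y0 + h * fn               # Euler predictor
        fn1 = f(t0 + h, y_pred)            # slope at the predicted point
        y0 = y0 + (h / 2) * (fn + fn1)     # corrector with average slope
        t0 = t0 + h
    return y0

approx = heun(lambda t, y: y, 0.0, 1.0, 0.1, 10)
print(approx, math.e)
```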

Matrices and vectors

We can express a system of differential equations as a product of matrices and vectors, for example:

$\begin{matrix}\dot{h}_1 = -ah_1 + ah_2\\ \dot{h}_2 = ah_1 - ah_2\end{matrix} \;\to\; \begin{pmatrix}\dot{h}_1\\ \dot{h}_2\end{pmatrix} = \underbrace{\begin{pmatrix}-a & a\\ a & -a\end{pmatrix}}_{\text{Coefficient Matrix}}\begin{pmatrix}h_1\\ h_2\end{pmatrix}$

Example, coddled egg

By Newton's cooling law:

$\frac{dT_1}{dt} = a(T_2 - T_1)$

$\frac{dT_2}{dt} = a(T_1 - T_2) + b(T_e - T_2)$

$\overbrace{\begin{pmatrix}\dot{T}_1\\ \dot{T}_2\end{pmatrix}}^{\dot{\mathbf{x}}} = \overbrace{\begin{pmatrix}-a & a\\ a & -a-b\end{pmatrix}}^{A}\overbrace{\begin{pmatrix}T_1\\ T_2\end{pmatrix}}^{\mathbf{x}} + \overbrace{\begin{pmatrix}0\\ bT_e\end{pmatrix}}^{\mathbf{b}}$

$\mathbf{x} = $ Vector of dependent variables, $A = $ Coefficient Matrix, $\mathbf{b} = $ Constant vector

With this, we can rewrite the system of ODEs as:

$\frac{d\mathbf{x}}{dt} = A\mathbf{x} + \mathbf{b} \;\to\; \dot{\mathbf{x}} = A\mathbf{x} + \mathbf{b}$

General form of a system of differential equations:

$\begin{matrix}\dot{x} = a(t)\cdot x + b(t)\cdot y + r_1(t)\\ \dot{y} = c(t)\cdot x + d(t)\cdot y + r_2(t)\end{matrix}$; if $r_1(t) \wedge r_2(t) = 0 \to$ it is a homogeneous ODE system

$\frac{d\mathbf{x}}{dt} = A\mathbf{x} + \mathbf{r}$

The companion system:

If we have a 2nd order differential equation, we can convert this ODE to a system of 1st order ODEs
by taking a change of variable. For example, the RLC circuit is a 2nd order ODE:

$\ddot{I} + \frac{R}{L}\dot{I} + \frac{1}{LC}I = \frac{\dot{V}(t)}{L}$

$\begin{matrix}\dot{I} = y\\ \dot{y} = -\frac{1}{LC}I - \frac{R}{L}y + \frac{\dot{V}(t)}{L}\end{matrix} \;\to\; \begin{pmatrix}\dot{I}\\ \dot{y}\end{pmatrix} = \begin{pmatrix}0 & 1\\ -1/LC & -R/L\end{pmatrix}\begin{pmatrix}I\\ y\end{pmatrix} + \begin{pmatrix}0\\ \dot{V}(t)/L\end{pmatrix}$

And basically you can diminish an nth order ODE to a 1st order system by taking:

$x_1 = x, \quad x_2 = \dot{x}_1, \quad x_3 = \dot{x}_2, \quad \cdots \quad x_n = \dot{x}_{n-1}$

and you are going to get a system of $n$ equations.
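A sketch of the companion trick in code: the concrete equation $\ddot{x} + 3\dot{x} + 2x = 0$ (coefficients chosen only as an example) becomes $\dot{x} = y$, $\dot{y} = -2x - 3y$, which we can then march forward with Euler steps and compare against the exact solution $x(t) = 2e^{-t} - e^{-2t}$ for $x(0) = 1$, $\dot{x}(0) = 0$:

```python
import math

def companion_step(x, y, h):
    """One Euler step for x'' + 3x' + 2x = 0 rewritten as the
    companion system x' = y, y' = -2x - 3y (example coefficients)."""
    return x + h * y, y + h * (-2 * x - 3 * y)

x, y = 1.0, 0.0            # x(0) = 1, x'(0) = 0
h, n = 0.001, 1000         # integrate up to t = 1
for _ in range(n):
    x, y = companion_step(x, y, h)

exact = 2 * math.exp(-1) - math.exp(-2)   # x(t) = 2e^{-t} - e^{-2t} at t = 1
print(x, exact)
```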

Converting from systems to higher order equations:

Basically, if you have a system like:

$\begin{matrix}\dot{x} = f(x, y, t)\\ \dot{y} = g(x, y, t)\end{matrix}$

we can get a higher order ODE by solving for $y$ from the 1st equation, or for $x$ from the 2nd equation, and
substituting into the other equation.

Eigenvalues and eigenvectors

An eigenvalue $\lambda$ of $A$ is a scalar such that $A\mathbf{v} = \lambda\mathbf{v}$ for some nonzero vector.

An eigenvector $\mathbf{v}$ of $A$ associated with an eigenvalue $\lambda$ is a nonzero vector $\mathbf{v}$ such that $A\mathbf{v} = \lambda\mathbf{v}$.

We also say an eigenvector "corresponding to," or "belonging to" an eigenvalue.

Warning: The zero vector $\mathbf{v} = (0, 0)$ is never an eigenvector. However, $\lambda = 0$ can be an eigenvalue.

Example 5.2: Let $A = \begin{pmatrix}1 & 2\\ 1 & 0\end{pmatrix}$ and let $\mathbf{v} = \begin{pmatrix}2\\ 1\end{pmatrix}$. Is $\mathbf{v}$ an eigenvector of $A$?
Solution: The calculation

$A\mathbf{v} = \begin{pmatrix}1 & 2\\ 1 & 0\end{pmatrix}\begin{pmatrix}2\\ 1\end{pmatrix} = \begin{pmatrix}4\\ 2\end{pmatrix} = 2\begin{pmatrix}2\\ 1\end{pmatrix}, \quad \lambda = 2$

shows that $\mathbf{v}$ is an eigenvector, and that the associated eigenvalue is $\lambda = 2$.

Eigenvectors associated with different eigenvalues are linearly independent.
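The eigenvector check is just one matrix-vector product; here it is for the worked example, with the matrix and vector as reconstructed above ($A = [[1, 2], [1, 0]]$, $\mathbf{v} = (2, 1)$):

```python
def matvec(A, v):
    """Multiply a 2x2 matrix (list of rows) by a 2-vector."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

A = [[1, 2], [1, 0]]   # example matrix, as reconstructed here
v = [2, 1]
Av = matvec(A, v)
print(Av)              # [4, 2], i.e. 2 * v, so lambda = 2
```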

A faster method to get the eigenvectors is to see it as a dot product. If we have a system:

$\mathbf{x}' = A\mathbf{x} \;\to\; A = \begin{pmatrix}u_{11} & u_{12}\\ u_{21} & u_{22}\end{pmatrix}$, with eigenvalues $\lambda$,

we know that for each eigenvalue it must be true that $(u_{11} - \lambda,\; u_{12})\cdot\mathbf{v} = 0$.

This is a dot product equal to zero, so $\mathbf{v} \perp (u_{11} - \lambda,\; u_{12})$,

and that means: $\mathbf{v} = \begin{pmatrix}-u_{12}\\ u_{11} - \lambda\end{pmatrix}$

Solve ODE systems

To find a solution for a system ODE $\dot{\mathbf{x}} = A\mathbf{x}$, we must assume that the basis solution is $\mathbf{x} = \mathbf{v}e^{\lambda t}$,
where $\mathbf{v} = \begin{pmatrix}a_1\\ a_2\end{pmatrix}$. Plugging $\mathbf{v}e^{\lambda t}$ into the system we get:

$\lambda\mathbf{v}e^{\lambda t} = A\mathbf{v}e^{\lambda t} \;\to\; \lambda\mathbf{v} = A\mathbf{v} \;\to\; \lambda\mathbf{v} - A\mathbf{v} = 0 \;\to\; (A - \lambda I)\mathbf{v} = 0, \quad I = \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}$

This leads us to $(\lambda I - A)\mathbf{v} = 0$, and we don't want $\mathbf{v} = 0$, so by algebra we require:

∙ $P(\lambda) = \det(\lambda I - A) = 0$; from here we obtain $N$ values of $\lambda$ ($N = $ order of the system).

∙ And now we need to find the values of the vector solution $\mathbf{v}$. We do this with every value $\lambda_i$, and
we solve the equations of:

$\lambda\mathbf{v} = A\mathbf{v}$

We can see that the system of 2 equations is indeed 2 linearly dependent equations, so we only get a ratio
between $a_1$ and $a_2$, to which we attach a constant term variable.

So we get that the general solution is:

$\mathbf{x} = c_1\mathbf{v}_1 e^{\lambda_1 t} + c_2\mathbf{v}_2 e^{\lambda_2 t}$
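The whole $2\times 2$ recipe (characteristic polynomial, then the perpendicular-vector trick for the eigenvectors) fits in a few lines; the matrix $[[1, 2], [1, 0]]$ below is an illustrative choice, restricted to real eigenvalues and $b \neq 0$:

```python
import math

def eigen_2x2(a, b, c, d):
    """Eigenvalues/eigenvectors of [[a, b], [c, d]] via the characteristic
    equation lambda^2 - tr*lambda + det = 0 (real roots, b != 0 assumed)."""
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    l1 = (tr + math.sqrt(disc)) / 2
    l2 = (tr - math.sqrt(disc)) / 2
    # Each eigenvector is perpendicular to the row (a - lambda, b),
    # so (-b, a - lambda) works.
    vecs = [(-b, a - l) for l in (l1, l2)]
    return (l1, l2), vecs

# x' = [[1, 2], [1, 0]] x: tr = 1, det = -2, so the lambdas are 2 and -1.
(l1, l2), (v1, v2) = eigen_2x2(1, 2, 1, 0)
print(l1, l2, v1, v2)
```

The general solution of that example system is then $\mathbf{x} = c_1 v_1 e^{2t} + c_2 v_2 e^{-t}$.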

More general and fast form to solve $2\times 2$ systems:

$\begin{pmatrix}\dot{x}\\ \dot{y}\end{pmatrix} = \begin{pmatrix}a & b\\ c & d\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix} \;\to\; \mathbf{x} = e^{\lambda t}\begin{pmatrix}a_1\\ a_2\end{pmatrix}$

This leads us to: $\lambda^2 - (a + d)\lambda + ad - bc = 0$

This is the characteristic equation: $\lambda^2 - \operatorname{tr}(A)\lambda + \det(A) = 0$

$\operatorname{tr}(A) = a_{11} + a_{22} + a_{33} + \dots + a_{nn} = $ Trace function

A big parenthesis before continuing:

Rotation matrix by $\theta$ degrees: $\begin{pmatrix}\cos(\theta) & -\sin(\theta)\\ \sin(\theta) & \cos(\theta)\end{pmatrix}$ $\qquad$ Reflection matrix about a line at $\theta$ degrees: $\begin{pmatrix}\cos(2\theta) & \sin(2\theta)\\ \sin(2\theta) & -\cos(2\theta)\end{pmatrix}$

Geometry of the graphs (Phase Portraits)

The solution to a $2\times 2$ system like $\dot{\mathbf{x}} = A\mathbf{x}$ can be sketched as a parametric function $(x(t), y(t))$.

In the figure above the red lines represent a parametric function where $\lambda_1 \wedge \lambda_2 \in \mathbb{R}$; these eigenvalues are
going to have their own eigenvectors, which form a basis. Depending on the eigenvector pair, one is going to be
dominant over the other: for example, the pink line corresponds to an eigenvector that dominates as
$t \to \infty$, and the green line corresponds to an eigenvector that dominates as $t \to -\infty$. And all the solutions
must be parallel to the pink line at big values of $t$ and must get closer to the green line as $t$ gets smaller.
The $(x, y)$-plane on which we sketch the trajectories of solutions is known as the phase plane, and a phase
portrait means a phase plane with all solutions.

We can classify the phase portraits by their eigenvalues:

Sink node (or nodal sink or attracting node): eigenvalues are both negative, $\lambda_1, \lambda_2 < 0$.
Source node (or nodal source or repelling node): eigenvalues are both positive, $\lambda_1, \lambda_2 > 0$.
Saddle: eigenvalues have opposite signs, $\lambda_1 > 0, \lambda_2 < 0$.

Complex EigenValues And EigenVectors

When the characteristic equation gives a complex eigenvalue, we solve everything as usual.

Suppose that the eigenvalues $\lambda, \bar{\lambda}$ are complex. Let $\lambda = a + bi$, let $\boldsymbol{\alpha}$ be the
eigenvector of $\lambda$ and $\bar{\boldsymbol{\alpha}}$ be the eigenvector of $\bar{\lambda}$. The general real solution is:

$C_1\operatorname{Re}(\boldsymbol{\alpha}e^{\lambda t}) + C_2\operatorname{Im}(\boldsymbol{\alpha}e^{\lambda t})$

If the eigenvalues are purely imaginary, the phase portrait is a center and the trajectories are ellipses; otherwise it can be a spiral sink or a spiral source:

Spiral source (or repelling spiral): eigenvalues are complex and have positive real part, $\lambda = s + i\omega,\; s > 0$.
Spiral sink (or attracting spiral): eigenvalues are complex and have negative real part, $\lambda = s + i\omega,\; s < 0$.

If we have a general $2\times 2$ system of the form $\dot{\mathbf{x}} = A\mathbf{x}$, making a $(\operatorname{tr}(A), \det(A))$-plane we can separate
the solutions by type depending on the regions. We separate the regions by the $\operatorname{tr}(A)$-axis, the $\det(A)$-axis
and the curve $\det(A) = \operatorname{tr}(A)^2/4$.

Comb case:

Comb: eigenvalues are real and distinct, with exactly one being zero.
General solution: $c_1\boldsymbol{\alpha}_1 + c_2\boldsymbol{\alpha}_2 e^{\lambda_2 t}$

Degenerate cases:
Suppose that there is a repeated real eigenvalue, $\lambda_1 = \lambda, \lambda_2 = \lambda$. The space of eigenvectors of $\lambda$ could be
either 1-dimensional or 2-dimensional.

Repeated nonzero eigenvalue with 1-dimensional space of eigenvectors, $\lambda \neq 0$ and $A \neq \lambda I$:
There is just one eigenline.
Every trajectory not contained in the eigenline becomes tangent to it at $(0, 0)$, and becomes
parallel to it when far away from $(0, 0)$. Such trajectories are
repelled from $(0, 0)$ if $\lambda > 0$,
attracted to $(0, 0)$ if $\lambda < 0$.

This is a borderline case between a node and a spiral.
General solution: $c_1\boldsymbol{\alpha}e^{\lambda t} + c_2(t\boldsymbol{\alpha} + \mathbf{w})e^{\lambda t}$, where $\mathbf{w}$ satisfies $(A - \lambda I)\mathbf{w} = \boldsymbol{\alpha}$.

Repeated nonzero eigenvalue, 2-dimensional space of eigenvectors, $\lambda \neq 0$ and $A = \lambda I$:
Every vector is an eigenvector, so every trajectory is a ray. General solution: $\mathbf{x} = \boldsymbol{\alpha}e^{\lambda t}$.

1. Repeated zero eigenvalue with 1-dimensional space of eigenvectors, $\lambda = 0$ and $A \neq 0$: There is only one
eigenline. Points on the eigenline are stationary. All other trajectories are lines parallel to the eigenline.
General solution: $\mathbf{x} = c_1\boldsymbol{\alpha} + c_2(t\boldsymbol{\alpha} + \mathbf{w})$, where $\boldsymbol{\alpha}$ is an eigenvector of 0 and the constant vector $\mathbf{w}$
satisfies $(A - \lambda I)\mathbf{w} = \boldsymbol{\alpha}$.

2. Repeated zero eigenvalue with 2-dimensional space of eigenvectors, $\lambda = 0$ and $A = 0$: Every solution is
a stationary point. General solution: $\mathbf{x} = \boldsymbol{\alpha}$

Full picture of trace determinant plane

If the phase portrait type is robust in the sense that small perturbations in the entries of $A$ do not change the
type of the phase portrait, then the system is called structurally stable.

2×2 Nonlinear Systems:

The general form of a $2\times 2$ non-linear system is:

$\begin{matrix}\dot{x} = f(x, y)\\ \dot{y} = h(x, y)\end{matrix}$

When $f(x, y) \wedge h(x, y)$ depend in a non-linear form on $x \wedge y$, and especially if they don't depend on
the time, it is an autonomous system. The phase plane of this system is made by a parametric equation, and
the equation system tells us the velocity at any point $(x, y)$.

∙ The first step in solving a first order autonomous system is to find all the critical points of the system. A
critical point of an autonomous system

$\begin{matrix}\dot{x} = f(x, y)\\ \dot{y} = h(x, y)\end{matrix}$

is a point $(x, y)$ at which the velocity vector $(\dot{x}, \dot{y})$ is $\mathbf{0}$. It corresponds to a constant solution to the system.
Geometrically, a critical point is a trajectory that stays at the same point on the phase plane. For this reason, a
critical point is also called a stationary point or a fixed point.

To find the critical points, we need to solve the following system of 2 algebraic non-linear equations
simultaneously:

$f(x, y) = 0$
$h(x, y) = 0$

If a problem you are trying to solve is too difficult because it involves a non-linear function $f(x, y)$, use the
best linear approximation near the most relevant point $(a, b)$: that approximation is

$f(a, b) + \frac{\partial f}{\partial x}(a, b)\,(x - a) + \frac{\partial f}{\partial y}(a, b)\,(y - b)$

since this linear polynomial has the same value and same partial derivatives at $(a, b)$ as $f(x, y)$.

∙ The second step in sketching the solutions to a $2\times 2$ autonomous non-linear system is to linearize the
system at each critical point and sketch the trajectories near it.

$\begin{pmatrix}f(x, y)\\ h(x, y)\end{pmatrix} \approx \begin{pmatrix}f(a, b)\\ h(a, b)\end{pmatrix} + J(a, b)\cdot\begin{pmatrix}x - a\\ y - b\end{pmatrix}$

$J(a, b) = \begin{pmatrix}f_x(a, b) & f_y(a, b)\\ h_x(a, b) & h_y(a, b)\end{pmatrix} = $ Jacobian Matrix
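The Jacobian can be approximated numerically by central differences, which is handy when the partial derivatives are tedious to do by hand. The pendulum-like system $\dot{x} = y$, $\dot{y} = -\sin(x)$ below is an illustration, not from the notes; at the critical point $(0, 0)$ its linearization has $\operatorname{tr} = 0$ and $\det = 1$, a center:

```python
import math

def jacobian(f, h, x, y, eps=1e-6):
    """Numerical Jacobian [[fx, fy], [hx, hy]] at (x, y)."""
    fx = (f(x + eps, y) - f(x - eps, y)) / (2 * eps)
    fy = (f(x, y + eps) - f(x, y - eps)) / (2 * eps)
    hx = (h(x + eps, y) - h(x - eps, y)) / (2 * eps)
    hy = (h(x, y + eps) - h(x, y - eps)) / (2 * eps)
    return [[fx, fy], [hx, hy]]

J = jacobian(lambda x, y: y, lambda x, y: -math.sin(x), 0.0, 0.0)
tr = J[0][0] + J[1][1]
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
print(J, tr, det)
```

The trace-determinant classification from the linear theory then applies to `J` at each critical point.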

Stability of a critical point: The stability of a critical point is determined by the stability of the linearization
of the system at that point. A critical point is called stable (unstable) if the linearized system at that point is
stable (unstable).
