2.2.2 Problem 2

Internal problem ID [18191]
Book: Elementary Differential Equations. By R.L.E. Schwarzenberger. Chapman and Hall. London. First Edition (1969)
Section: Chapter 4. Autonomous systems. Exercises at page 69
Problem number: 2
Date solved: Thursday, December 19, 2024 at 06:17:49 PM
CAS classification: system_of_ODEs
\begin{align*} x^{\prime }&=x\\ y^{\prime }&=x+2 y \end{align*}
Solution using Matrix exponential method
In this method, we assume the matrix exponential \(e^{A t}\) has already been found. There are
different methods to determine it, but they are not shown here. This is a system of linear
ODEs given as
\begin{align*} \vec {x}'(t) &= A\, \vec {x}(t) \end{align*}
Or
\begin{align*} \left [\begin {array}{c} x^{\prime } \\ y^{\prime } \end {array}\right ] &= \left [\begin {array}{cc} 1 & 0 \\ 1 & 2 \end {array}\right ]\, \left [\begin {array}{c} x \\ y \end {array}\right ] \end{align*}
For the above matrix \(A\) , the matrix exponential can be found to be
\begin{align*} e^{A t} &= \left [\begin {array}{cc} {\mathrm e}^{t} & 0 \\ {\mathrm e}^{2 t}-{\mathrm e}^{t} & {\mathrm e}^{2 t} \end {array}\right ] \end{align*}
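As a check, the matrix exponential above can be reproduced with a computer algebra system. A minimal sketch using Python's sympy (not part of the original solution):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 0],
               [1, 2]])

# Matrix exponential e^{A t}; sympy diagonalizes A internally
expAt = sp.simplify((A * t).exp())

# The result stated in the text
expected = sp.Matrix([[sp.exp(t), 0],
                      [sp.exp(2*t) - sp.exp(t), sp.exp(2*t)]])
```

The difference `expAt - expected` simplifies to the zero matrix, confirming the stated exponential.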
Therefore the homogeneous solution is
\begin{align*} \vec {x}_h(t) &= e^{A t} \vec {c} \\ &= \left [\begin {array}{cc} {\mathrm e}^{t} & 0 \\ {\mathrm e}^{2 t}-{\mathrm e}^{t} & {\mathrm e}^{2 t} \end {array}\right ] \left [\begin {array}{c} c_{1} \\ c_{2} \end {array}\right ] \\ &= \left [\begin {array}{c} {\mathrm e}^{t} c_{1} \\ \left ({\mathrm e}^{2 t}-{\mathrm e}^{t}\right ) c_{1}+{\mathrm e}^{2 t} c_{2} \end {array}\right ]\\ &= \left [\begin {array}{c} {\mathrm e}^{t} c_{1} \\ \left (c_{1}+c_{2}\right ) {\mathrm e}^{2 t}-{\mathrm e}^{t} c_{1} \end {array}\right ] \end{align*}
Since there is no forcing function, the final solution is \(\vec {x}_h(t)\) above.
Solution using explicit Eigenvalue and Eigenvector method
This is a system of linear ODEs given as
\begin{align*} \vec {x}'(t) &= A\, \vec {x}(t) \end{align*}
Or
\begin{align*} \left [\begin {array}{c} x^{\prime } \\ y^{\prime } \end {array}\right ] &= \left [\begin {array}{cc} 1 & 0 \\ 1 & 2 \end {array}\right ]\, \left [\begin {array}{c} x \\ y \end {array}\right ] \end{align*}
The first step is to find the homogeneous solution. We start by finding the eigenvalues of \(A\) . This
is done by solving the following equation for the eigenvalues \(\lambda \)
\begin{align*} \operatorname {det} \left ( A- \lambda I \right ) &= 0 \end{align*}
Expanding gives
\begin{align*} \operatorname {det} \left (\left [\begin {array}{cc} 1 & 0 \\ 1 & 2 \end {array}\right ]-\lambda \left [\begin {array}{cc} 1 & 0 \\ 0 & 1 \end {array}\right ]\right ) &= 0 \end{align*}
Therefore
\begin{align*} \operatorname {det} \left (\left [\begin {array}{cc} 1-\lambda & 0 \\ 1 & 2-\lambda \end {array}\right ]\right ) &= 0 \end{align*}
Since the matrix \(A\) is triangular, the determinant is the product of the entries
along the diagonal. Therefore the above becomes
\begin{align*} (1-\lambda )(2-\lambda )&=0 \end{align*}
The roots of the above are the eigenvalues.
\begin{align*} \lambda _1 &= 1\\ \lambda _2 &= 2 \end{align*}
This table summarises the above result
\[ \begin {array}{ccc} \text {eigenvalue} & \text {algebraic multiplicity} & \text {type of eigenvalue} \\ 1 & 1 & \text {real eigenvalue} \\ 2 & 1 & \text {real eigenvalue} \end {array} \]
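The eigenvalues can be confirmed independently; a minimal sympy sketch (not part of the original solution):

```python
import sympy as sp

A = sp.Matrix([[1, 0],
               [1, 2]])

# eigenvals() returns {eigenvalue: algebraic multiplicity}
evals = A.eigenvals()
```

For this triangular matrix the result is simply the diagonal entries, each with multiplicity one.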
Now the eigenvector associated with each eigenvalue is found.
Considering the eigenvalue \(\lambda _{1} = 1\)
We need to solve \(A \vec {v} = \lambda \vec {v}\) or \((A-\lambda I) \vec {v} = \vec {0}\) which becomes
\begin{align*} \left (\left [\begin {array}{cc} 1 & 0 \\ 1 & 2 \end {array}\right ] - \left (1\right ) \left [\begin {array}{cc} 1 & 0 \\ 0 & 1 \end {array}\right ]\right ) \left [\begin {array}{c} v_{1} \\ v_{2} \end {array}\right ]&=\left [\begin {array}{c} 0 \\ 0 \end {array}\right ]\\ \left [\begin {array}{cc} 0 & 0 \\ 1 & 1 \end {array}\right ] \left [\begin {array}{c} v_{1} \\ v_{2} \end {array}\right ]&=\left [\begin {array}{c} 0 \\ 0 \end {array}\right ] \end{align*}
Now forward elimination is applied to solve for the eigenvector \(\vec {v}\) . The augmented matrix is
\[ \left [\begin {array}{@{}cc!{\ifdefined \HCode |\else \color {red}\vline width 0.6pt\fi }c@{}} 0&0&0\\ 1&1&0 \end {array} \right ] \]
Since the current pivot \(A(1,1)\) is zero, the pivot row is swapped with a row that has a
non-zero pivot. Swapping row \(1\) and row \(2\) gives
\[ \left [\begin {array}{@{}cc!{\ifdefined \HCode |\else \color {red}\vline width 0.6pt\fi }c@{}} 1&1&0\\ 0&0&0 \end {array} \right ] \]
Therefore the system in Echelon form is
\[ \left [\begin {array}{cc} 1 & 1 \\ 0 & 0 \end {array}\right ] \left [\begin {array}{c} v_{1} \\ v_{2} \end {array}\right ] = \left [\begin {array}{c} 0 \\ 0 \end {array}\right ] \]
The free variables are \(\{v_{2}\}\) and the leading variables are \(\{v_{1}\}\) . Let \(v_{2} = t\) . Now we start back substitution.
Solving the above equation for the leading variables in terms of the free variables gives
\(\{v_{1} = -t\}\)
Hence the solution is
\[ \left [\begin {array}{c} v_{1} \\ t \end {array}\right ] = \left [\begin {array}{c} -t \\ t \end {array}\right ] \]
Since there is one free variable, we have found one eigenvector
associated with this eigenvalue. The above can be written as
\[ \left [\begin {array}{c} v_{1} \\ t \end {array}\right ] = t \left [\begin {array}{c} -1 \\ 1 \end {array}\right ] \]
Letting \(t = 1\) , the eigenvector becomes
\[ \left [\begin {array}{c} v_{1} \\ t \end {array}\right ] = \left [\begin {array}{c} -1 \\ 1 \end {array}\right ] \]
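The elimination steps above amount to computing the null space of \(A - \lambda I\) ; a minimal sympy sketch for \(\lambda _{1} = 1\) (not part of the original solution):

```python
import sympy as sp

A = sp.Matrix([[1, 0],
               [1, 2]])

# The eigenspace for lambda = 1 is the null space of (A - 1*I)
M = A - 1 * sp.eye(2)
basis = M.nullspace()  # list of basis vectors for the null space
```

The null space is one-dimensional, spanned by the eigenvector \([-1, 1]^{T}\) found above.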
Considering the eigenvalue \(\lambda _{2} = 2\)
We need to solve \(A \vec {v} = \lambda \vec {v}\) or \((A-\lambda I) \vec {v} = \vec {0}\) which becomes
\begin{align*} \left (\left [\begin {array}{cc} 1 & 0 \\ 1 & 2 \end {array}\right ] - \left (2\right ) \left [\begin {array}{cc} 1 & 0 \\ 0 & 1 \end {array}\right ]\right ) \left [\begin {array}{c} v_{1} \\ v_{2} \end {array}\right ]&=\left [\begin {array}{c} 0 \\ 0 \end {array}\right ]\\ \left [\begin {array}{cc} -1 & 0 \\ 1 & 0 \end {array}\right ] \left [\begin {array}{c} v_{1} \\ v_{2} \end {array}\right ]&=\left [\begin {array}{c} 0 \\ 0 \end {array}\right ] \end{align*}
Now forward elimination is applied to solve for the eigenvector \(\vec {v}\) . The augmented matrix is
\[ \left [\begin {array}{@{}cc!{\ifdefined \HCode |\else \color {red}\vline width 0.6pt\fi }c@{}} -1&0&0\\ 1&0&0 \end {array} \right ] \]
\begin{align*} R_{2} = R_{2}+R_{1} &\Longrightarrow \hspace {5pt}\left [\begin {array}{@{}cc!{\ifdefined \HCode |\else \color {red}\vline width 0.6pt\fi }c@{}} -1&0&0\\ 0&0&0 \end {array} \right ] \end{align*}
Therefore the system in Echelon form is
\[ \left [\begin {array}{cc} -1 & 0 \\ 0 & 0 \end {array}\right ] \left [\begin {array}{c} v_{1} \\ v_{2} \end {array}\right ] = \left [\begin {array}{c} 0 \\ 0 \end {array}\right ] \]
The free variables are \(\{v_{2}\}\) and the leading variables are
\(\{v_{1}\}\) . Let \(v_{2} = t\) . Now we start back substitution. Solving the above equation for the leading variables
in terms of the free variables gives \(\{v_{1} = 0\}\)
Hence the solution is
\[ \left [\begin {array}{c} v_{1} \\ t \end {array}\right ] = \left [\begin {array}{c} 0 \\ t \end {array}\right ] \]
Since there is one free variable, we have found one eigenvector
associated with this eigenvalue. The above can be written as
\[ \left [\begin {array}{c} v_{1} \\ t \end {array}\right ] = t \left [\begin {array}{c} 0 \\ 1 \end {array}\right ] \]
Letting \(t = 1\) , the eigenvector becomes
\[ \left [\begin {array}{c} v_{1} \\ t \end {array}\right ] = \left [\begin {array}{c} 0 \\ 1 \end {array}\right ] \]
The following table gives a summary of this result. For each eigenvalue it shows the algebraic
multiplicity \(m\) , the geometric multiplicity \(k\) , and the associated eigenvectors. If \(m>k\) the
eigenvalue is defective, meaning the number of linearly independent eigenvectors associated
with it (the geometric multiplicity \(k\) ) is less than the algebraic multiplicity \(m\) , and an
additional \(m-k\) generalized eigenvectors must be determined for this eigenvalue.
\[ \begin {array}{ccccc} \text {eigenvalue} & \text {algebraic } m & \text {geometric } k & \text {defective?} & \text {eigenvectors} \\ 1 & 1 & 1 & \text {No} & \left [\begin {array}{c} -1 \\ 1 \end {array}\right ] \\ 2 & 1 & 1 & \text {No} & \left [\begin {array}{c} 0 \\ 1 \end {array}\right ] \end {array} \]
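Both eigenpairs can be verified by checking \(A \vec {v} = \lambda \vec {v}\) directly; a minimal sympy sketch (not part of the original solution):

```python
import sympy as sp

A = sp.Matrix([[1, 0],
               [1, 2]])

# Each pair (lambda, v) from the table should satisfy A v = lambda v
pairs = [(1, sp.Matrix([-1, 1])),
         (2, sp.Matrix([0, 1]))]
checks = [sp.simplify(A * v - lam * v) == sp.zeros(2, 1) for lam, v in pairs]
```

Both residuals vanish, so each vector is indeed an eigenvector of \(A\).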
Now that the eigenvalues and associated eigenvectors are known, we go over each
eigenvalue and generate the solution basis. The only complication to handle is a
defective eigenvalue. Since the eigenvalue \(1\) is real and distinct, the corresponding
eigenvector solution is
\begin{align*} \vec {x}_{1}(t) &= \vec {v}_{1} e^{t}\\ &= \left [\begin {array}{c} -1 \\ 1 \end {array}\right ] e^{t} \end{align*}
Since the eigenvalue \(2\) is real and distinct, the corresponding eigenvector solution is
\begin{align*} \vec {x}_{2}(t) &= \vec {v}_{2} e^{2 t}\\ &= \left [\begin {array}{c} 0 \\ 1 \end {array}\right ] e^{2 t} \end{align*}
Therefore the final solution is
\begin{align*} \vec {x}_h(t) &= c_{1} \vec {x}_{1}(t) + c_{2} \vec {x}_{2}(t) \end{align*}
Which is written as
\begin{align*} \left [\begin {array}{c} x \\ y \end {array}\right ] &= c_{1} \left [\begin {array}{c} -{\mathrm e}^{t} \\ {\mathrm e}^{t} \end {array}\right ] + c_{2} \left [\begin {array}{c} 0 \\ {\mathrm e}^{2 t} \end {array}\right ] \end{align*}
Which becomes
\begin{align*} \left [\begin {array}{c} x \\ y \end {array}\right ] = \left [\begin {array}{c} -c_1 \,{\mathrm e}^{t} \\ c_1 \,{\mathrm e}^{t}+c_2 \,{\mathrm e}^{2 t} \end {array}\right ] \end{align*}
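The final solution can be substituted back into the original system as a check; a minimal sympy sketch (not part of the original solution):

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')

# General solution found above
x = -c1 * sp.exp(t)
y = c1 * sp.exp(t) + c2 * sp.exp(2*t)

# Residuals of x' = x and y' = x + 2y should vanish identically
res_x = sp.simplify(sp.diff(x, t) - x)
res_y = sp.simplify(sp.diff(y, t) - (x + 2*y))
```

Both residuals simplify to zero for all \(t\), \(c_1\), \(c_2\), confirming the general solution.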
Figure 2.47: Phase plot
Maple step by step solution
\[ \begin {array}{lll} & {} & \textrm {Let's solve}\hspace {3pt} \\ {} & {} & \left [x^{\prime }=x, y^{\prime }=x+2 y\right ] \\ \bullet & {} & \textrm {Define vector}\hspace {3pt} \\ {} & {} & {\moverset {\rightarrow }{x}}\left (t \right )=\left [\begin {array}{c} x \\ y \end {array}\right ] \\ \bullet & {} & \textrm {Convert system into a vector equation}\hspace {3pt} \\ {} & {} & {\moverset {\rightarrow }{x}}^{\prime }\left (t \right )=\left [\begin {array}{cc} 1 & 0 \\ 1 & 2 \end {array}\right ]\cdot {\moverset {\rightarrow }{x}}\left (t \right )+\left [\begin {array}{c} 0 \\ 0 \end {array}\right ] \\ \bullet & {} & \textrm {System to solve}\hspace {3pt} \\ {} & {} & {\moverset {\rightarrow }{x}}^{\prime }\left (t \right )=\left [\begin {array}{cc} 1 & 0 \\ 1 & 2 \end {array}\right ]\cdot {\moverset {\rightarrow }{x}}\left (t \right ) \\ \bullet & {} & \textrm {Define the coefficient matrix}\hspace {3pt} \\ {} & {} & A =\left [\begin {array}{cc} 1 & 0 \\ 1 & 2 \end {array}\right ] \\ \bullet & {} & \textrm {Rewrite the system as}\hspace {3pt} \\ {} & {} & {\moverset {\rightarrow }{x}}^{\prime }\left (t \right )=A \cdot {\moverset {\rightarrow }{x}}\left (t \right ) \\ \bullet & {} & \textrm {To solve the system, find the eigenvalues and eigenvectors of}\hspace {3pt} A \\ \bullet & {} & \textrm {Eigenpairs of}\hspace {3pt} A \\ {} & {} & \left [\left [1, \left [\begin {array}{c} -1 \\ 1 \end {array}\right ]\right ], \left [2, \left [\begin {array}{c} 0 \\ 1 \end {array}\right ]\right ]\right ] \\ \bullet & {} & \textrm {Consider eigenpair}\hspace {3pt} \\ {} & {} & \left [1, \left [\begin {array}{c} -1 \\ 1 \end {array}\right ]\right ] \\ \bullet & {} & \textrm {Solution to homogeneous system from eigenpair}\hspace {3pt} \\ {} & {} & {\moverset {\rightarrow }{x}}_{1}={\mathrm e}^{t}\cdot \left [\begin {array}{c} -1 \\ 1 \end {array}\right ] \\ \bullet & {} & \textrm {Consider eigenpair}\hspace {3pt} \\ {} & {} & \left [2, \left [\begin {array}{c} 0 \\ 1 \end 
{array}\right ]\right ] \\ \bullet & {} & \textrm {Solution to homogeneous system from eigenpair}\hspace {3pt} \\ {} & {} & {\moverset {\rightarrow }{x}}_{2}={\mathrm e}^{2 t}\cdot \left [\begin {array}{c} 0 \\ 1 \end {array}\right ] \\ \bullet & {} & \textrm {General solution to the system of ODEs}\hspace {3pt} \\ {} & {} & {\moverset {\rightarrow }{x}}=\mathit {C1} {\moverset {\rightarrow }{x}}_{1}+\mathit {C2} {\moverset {\rightarrow }{x}}_{2} \\ \bullet & {} & \textrm {Substitute solutions into the general solution}\hspace {3pt} \\ {} & {} & {\moverset {\rightarrow }{x}}=\mathit {C1} \,{\mathrm e}^{t}\cdot \left [\begin {array}{c} -1 \\ 1 \end {array}\right ]+\mathit {C2} \,{\mathrm e}^{2 t}\cdot \left [\begin {array}{c} 0 \\ 1 \end {array}\right ] \\ \bullet & {} & \textrm {Substitute in vector of dependent variables}\hspace {3pt} \\ {} & {} & \left [\begin {array}{c} x \\ y \end {array}\right ]=\left [\begin {array}{c} -\mathit {C1} \,{\mathrm e}^{t} \\ \mathit {C1} \,{\mathrm e}^{t}+\mathit {C2} \,{\mathrm e}^{2 t} \end {array}\right ] \\ \bullet & {} & \textrm {Solution to the system of ODEs}\hspace {3pt} \\ {} & {} & \left \{x=-\mathit {C1} \,{\mathrm e}^{t}, y=\mathit {C1} \,{\mathrm e}^{t}+\mathit {C2} \,{\mathrm e}^{2 t}\right \} \end {array} \]
Maple dsolve solution
Solving time: 0.081 (sec)
Leaf size: 23
dsolve([diff(x(t), t) = x(t), diff(y(t), t) = x(t)+2*y(t)],
       {op([x(t), y(t)])})
\begin{align*}
x &= {\mathrm e}^{t} c_2 \\
y &= -{\mathrm e}^{t} c_2 +c_1 \,{\mathrm e}^{2 t} \\
\end{align*}
Mathematica DSolve solution
Solving time: 0.009 (sec)
Leaf size: 33
DSolve[{{D[x[t], t] == x[t], D[y[t], t] == x[t]+2*y[t]}, {}},
       {x[t], y[t]}, t, IncludeSingularSolutions -> True]
\begin{align*}
x(t)\to c_1 e^t \\
y(t)\to e^t \left (c_1 \left (e^t-1\right )+c_2 e^t\right ) \\
\end{align*}