Markov Chains

In a previous page, we studied the movement of population between the city and the suburbs. Indeed, if I and S are the initial populations of the inner city and the suburban area, and if we assume that every year 40% of the inner-city population moves to the suburbs while 30% of the suburban population moves to the inner city, then after one year the populations are given by

\begin{displaymath}\left(\begin{array}{c}
0.6 I + 0.3 S\\
0.4 I + 0.7 S\\
\end{array}\right) = \left(\begin{array}{cc}
0.6&0.3\\
0.4&0.7\\
\end{array}\right) \left(\begin{array}{c}
I\\
S\\
\end{array}\right).\end{displaymath}
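To make the update concrete, here is a minimal NumPy sketch of the one-year computation above; the initial populations (100,000 in the city and 50,000 in the suburbs) are made-up numbers used only for illustration.

import numpy as np

# Column j of the matrix lists where the population currently in state j ends up:
# state 1 = inner city, state 2 = suburbs.
P = np.array([[0.6, 0.3],
              [0.4, 0.7]])

x0 = np.array([100_000, 50_000])  # hypothetical initial populations (I, S)
x1 = P @ x0                       # populations after one year
print(x1)                         # [75000. 75000.]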

The matrix

\begin{displaymath}P = \left(\begin{array}{cc}
0.6&0.3\\
0.4&0.7\\
\end{array}\right) \end{displaymath}

is very special. Indeed, the entries of each of its column vectors are positive and their sum is 1. Such vectors are called probability vectors. A matrix for which all the column vectors are probability vectors is called a transition or stochastic matrix. Andrei Markov, a Russian mathematician, was the first to study these matrices. At the beginning of the twentieth century he developed the fundamentals of Markov chain theory.
A Markov chain is a process that consists of a finite number of states and some known probabilities $p_{ij}$, where $p_{ij}$ is the probability of moving from state j to state i. In the example above, we have two states: living in the city and living in the suburbs, and $p_{ij}$ represents the probability of moving from state j to state i in one year. We may have more than two states. Consider, for example, political affiliation: Democrat, Republican, and Independent. Here $p_{ij}$ represents the probability of a son belonging to party i if his father belonged to party j.
Of particular interest is a probability vector p such that $A \mbox{\bf p} = \mbox{\bf p}$, that is, an eigenvector of A associated to the eigenvalue 1. Such a vector is called a steady state vector. In the example above, the steady state vectors are given by the system

\begin{displaymath}\left(\begin{array}{cc}
0.6&0.3\\
0.4&0.7\\
\end{array}\right)\cdot X = X\;\;\mbox{or, equivalently,}\;\;\left(\begin{array}{cc}
-0.4&0.3\\
0.4&-0.3\\
\end{array}\right)\cdot X = {\cal O}.\end{displaymath}

This system reduces to the single equation $-0.4\,x + 0.3\,y = 0$. It is easy to see that, if we set $x = 0.3 \alpha$, then $y = 0.4 \alpha$, and therefore

\begin{displaymath}X = \left(\begin{array}{c}
x\\
y\\
\end{array}\right) = \alpha \left(\begin{array}{c}
0.3\\
0.4\\
\end{array}\right).\end{displaymath}

So the vector $\mbox{\bf p}_1 = \displaystyle \left(\begin{array}{c}
0.3\\
0.4\\
\end{array}\right)$ is a steady state vector of the matrix above. Hence, if the populations of the city and the suburbs are in the proportions given by $\mbox{\bf p}_1$, then after one year the proportions remain the same (though individual people may move between the city and the suburbs).
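As a quick numerical check (a sketch for illustration, not part of the original computation), one can ask NumPy for the eigenvector of the matrix associated to the eigenvalue 1 and rescale it into a probability vector; the result is proportional to $\mbox{\bf p}_1$ above.

import numpy as np

P = np.array([[0.6, 0.3],
              [0.4, 0.7]])

w, V = np.linalg.eig(P)
k = int(np.argmin(np.abs(w - 1)))  # index of the eigenvalue closest to 1
p = V[:, k].real
p = p / p.sum()                    # rescale so the entries sum to 1
print(p)                           # [0.42857143 0.57142857], i.e. (3/7, 4/7)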

Let us discuss another example of population dynamics.

Example: Age Distribution of Trees in a Forest
In this simple model, trees in a forest are assumed to fall into four age groups: b(k) denotes the number of baby trees in the forest (age group 0-15 years) at a given time period k; similarly, y(k), m(k), and o(k) denote the numbers of young trees (ages 16-30), middle-aged trees (ages 31-45), and old trees (older than 45 years), respectively. The length of one time period is 15 years.
How does the age distribution change from one time period to the next? The model makes the following three assumptions:

1. During each time period, a certain fraction of the trees in every age group is lost (dies).
2. Lost trees are replaced by newly planted baby trees.
3. Trees that survive a time period move up into the next age group; surviving old trees remain old trees.

Note that, under these assumptions, the total tree population does not change over time.

We obtain the following difference equations:

\begin{displaymath}\begin{array}{rcll}
b(k+1) & = & d_b\cdot b(k)+d_y \cdot y(k) +d_m\cdot m(k) + d_o\cdot o(k) & \quad (1)\\
y(k+1) & = & (1-d_b)\cdot b(k) & \quad (2)\\
m(k+1) & = & (1-d_y)\cdot y(k) & \quad (3)\\
o(k+1) & = & (1-d_m)\cdot m(k) + (1-d_o)\cdot o(k) & \quad (4)\\
\end{array}\end{displaymath}

Here $0 < d_b, d_y, d_m, d_o < 1$ denote the loss rates of the four age groups, expressed as the fraction of each group lost per time period.
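A minimal sketch of one time step, written directly from equations (1)-(4); the function name step and its argument order are choices made here for illustration, not part of the model.

def step(b, y, m, o, db, dy, dm, do):
    """One 15-year time step of the forest model, following equations (1)-(4)."""
    return (db * b + dy * y + dm * m + do * o,   # (1) lost trees are replaced by baby trees
            (1 - db) * b,                        # (2) surviving babies become young
            (1 - dy) * y,                        # (3) surviving young become middle-aged
            (1 - dm) * m + (1 - do) * o)         # (4) surviving middle-aged and old trees stay old

print(step(50_000, 0, 0, 0, 0.1, 0.2, 0.3, 0.4))  # (5000.0, 45000.0, 0.0, 0.0)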

Let

\begin{displaymath}{\bf x}(k)=\left(\begin{array}{c}b(k)\\ y(k)\\ m(k)\\ o(k)\end{array}\right)\end{displaymath}

be the ``age distribution vector". Consider the matrix

\begin{displaymath}A = \left(\begin{array}{cccc}
d_b&d_y&d_m&d_o\\
1-d_b&0&0&0\\
0&1-d_y&0&0\\
0&0&1-d_m&1-d_o\\
\end{array}\right).\end{displaymath}

Then we have

\begin{displaymath}{\bf x}(k+1)=A\cdot {\bf x}(k).\end{displaymath}

Note that the matrix A is a stochastic matrix!
If $d_b=0.1$, $d_y=0.2$, $d_m=0.3$, and $d_o=0.4$, then

\begin{displaymath}A = \left(\begin{array}{cccc}
0.1&0.2&0.3&0.4\\
0.9&0&0&0\\
0&0.8&0&0\\
0&0&0.7&0.6\\
\end{array}\right).\end{displaymath}
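As a quick sanity check (a sketch, not from the original text), the column sums of this matrix can be verified numerically:

import numpy as np

A = np.array([[0.1, 0.2, 0.3, 0.4],
              [0.9, 0.0, 0.0, 0.0],
              [0.0, 0.8, 0.0, 0.0],
              [0.0, 0.0, 0.7, 0.6]])

print(A.sum(axis=0))   # [1. 1. 1. 1.] -- every column sums to 1, so A is stochastic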

After some easy calculations, we find the steady state vector for the age distribution in the forest:

\begin{displaymath}\mbox{\bf p} = \left(\begin{array}{c}
1/3.88\\
0.9/3.88\\
0.72/3.88\\
1.26/3.88\\
\end{array}\right) = \frac{1}{3.88}\left(\begin{array}{c}
1\\
0.9\\
0.72\\
1.26\\
\end{array}\right) .\end{displaymath}
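These calculations amount to finding an eigenvector of A for the eigenvalue 1 and rescaling it so its entries sum to 1. Here is a minimal NumPy sketch (for illustration) that reproduces the vector above.

import numpy as np

A = np.array([[0.1, 0.2, 0.3, 0.4],
              [0.9, 0.0, 0.0, 0.0],
              [0.0, 0.8, 0.0, 0.0],
              [0.0, 0.0, 0.7, 0.6]])

w, V = np.linalg.eig(A)
p = V[:, np.argmin(np.abs(w - 1))].real  # eigenvector for the eigenvalue closest to 1
p = p / p.sum()                          # rescale into a probability vector
print(p)                                 # approx [0.2577 0.2320 0.1856 0.3247]
print(np.array([1, 0.9, 0.72, 1.26]) / 3.88)  # the same values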

Assume a total tree population of 50,000 trees. Suppose the forest is newly planted, i.e.

\begin{displaymath}{\bf x}(0)=\left(\begin{array}{c}50,000\\ 0\\ 0\\ 0\end{array}\right)\end{displaymath}

After 15 years, the age distribution in the forest is given by

\begin{displaymath}{\bf x}(1) = \left(\begin{array}{cccc}
0.1&0.2&0.3&0.4\\
0.9&0&0&0\\
0&0.8&0&0\\
0&0&0.7&0.6\\
\end{array}\right)\left(\begin{array}{c}
50,000\\
0\\
0\\
0\\
\end{array}\right) = 50,000\left(\begin{array}{c}
0.1\\
0.9\\
0\\
0\\
\end{array}\right).\end{displaymath}

After 30 years, we have

\begin{displaymath}{\bf x}(2) = \left(\begin{array}{cccc}
0.1&0.2&0.3&0.4\\
0.9&0&0&0\\
0&0.8&0&0\\
0&0&0.7&0.6\\
\end{array}\right)\, 50,000\left(\begin{array}{c}
0.1\\
0.9\\
0\\
0\\
\end{array}\right) = 50,000\left(\begin{array}{c}
0.19\\
0.09\\
0.72\\
0\\
\end{array}\right)\end{displaymath}

and after 45 years

\begin{displaymath}{\bf x}(3) = \left(\begin{array}{cccc}
0.1&0.2&0.3&0.4\\
0.9&0&0&0\\
0&0.8&0&0\\
0&0&0.7&0.6\\
\end{array}\right)\, 50,000\left(\begin{array}{c}
0.19\\
0.09\\
0.72\\
0\\
\end{array}\right) = 50,000\left(\begin{array}{c}
0.253\\
0.171\\
0.072\\
0.504\\
\end{array}\right).\end{displaymath}

After 15n years, where $n=1,2,\cdots$, the age distribution in the forest is given by

\begin{displaymath}{\bf x}(n) = \left(\begin{array}{cccc}
0.1&0.2&0.3&0.4\\
0.9&0&0&0\\
0&0.8&0&0\\
0&0&0.7&0.6\\
\end{array}\right)^n\, 50,000\left(\begin{array}{c}
1\\
0\\
0\\
0\\
\end{array}\right).\end{displaymath}

So the problem is to find the nth power of the matrix A. We have seen that the diagonalization technique may be helpful in solving this problem. Another problem concerns the long-term behavior of the sequence ${\bf x}(n)$ as n gets large.
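Before turning to a smaller example, here is a short NumPy sketch (an illustration, not part of the original text) that reproduces the iterates above and also computes ${\bf x}(n) = A^n\,{\bf x}(0)$ directly; for moderately large n the result is close to $50{,}000\,\mbox{\bf p}$.

import numpy as np

A = np.array([[0.1, 0.2, 0.3, 0.4],
              [0.9, 0.0, 0.0, 0.0],
              [0.0, 0.8, 0.0, 0.0],
              [0.0, 0.0, 0.7, 0.6]])

x = np.array([50_000.0, 0.0, 0.0, 0.0])  # x(0): a newly planted forest
for k in range(1, 4):
    x = A @ x
    print(k, x / 50_000)   # fractions: (0.1,0.9,0,0), (0.19,0.09,0.72,0), (0.253,0.171,0.072,0.504)

# Or compute x(n) = A^n x(0) in one shot; for large n it approaches 50,000 p.
n = 20
print(np.linalg.matrix_power(A, n) @ np.array([50_000.0, 0.0, 0.0, 0.0]))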

The calculations in the example above become tedious. Let us illustrate the problem on a smaller matrix.

Example. Consider the stochastic matrix

\begin{displaymath}A = \left(\begin{array}{cc}
0.8&0.2\\
0.2&0.8\\
\end{array}\right).\end{displaymath}

Note this is a symmetric matrix. The characteristic polynomial of A is

\begin{displaymath}p(\lambda) = (0.8 - \lambda)^2-0.2^2 = (1-\lambda)(0.6 - \lambda)\end{displaymath}

An eigenvector associated to 1 is

\begin{displaymath}\left(\begin{array}{c}
1\\
1\\
\end{array}\right)\end{displaymath}

and an eigenvector associated to 0.6 is

\begin{displaymath}\left(\begin{array}{c}
1\\
-1\\
\end{array}\right).\end{displaymath}

If we set

\begin{displaymath}P = \left(\begin{array}{rr}
1&1\\
1&-1\\
\end{array}\right),\end{displaymath}

then we have

\begin{displaymath}P^{-1}AP = D = \left(\begin{array}{cc}
1&0\\
0&0.6\\
\end{array}\right).\end{displaymath}

So, we have

\begin{displaymath}A^n = P D^nP^{-1} = P\left(\begin{array}{cc}
1&0\\
0&(0.6)^n\\
\end{array}\right)P^{-1}.\end{displaymath}

When n gets large, the matrices $A^n$ get closer to the matrix

\begin{displaymath}P\left(\begin{array}{cc}
1&0\\
0&0\\
\end{array}\right)P^{-1}.\end{displaymath}
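A short NumPy sketch (for illustration) confirms the diagonalization and shows how quickly $A^n$ approaches this limit, since $(0.6)^n \to 0$:

import numpy as np

A = np.array([[0.8, 0.2],
              [0.2, 0.8]])
P = np.array([[1.0,  1.0],
              [1.0, -1.0]])
Pinv = np.linalg.inv(P)

print(np.allclose(A, P @ np.diag([1.0, 0.6]) @ Pinv))  # True: A = P D P^(-1)

n = 30
An = P @ np.diag([1.0, 0.6 ** n]) @ Pinv        # A^n via the diagonalization
limit = P @ np.diag([1.0, 0.0]) @ Pinv          # the limiting matrix
print(An)      # already very close to the limit (0.6^30 is about 2e-7)
print(limit)   # [[0.5 0.5]
               #  [0.5 0.5]]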

So the sequence of vectors defined by

\begin{displaymath}X(n+1) = A X(n),\;\;\mbox{given $X(0)$}\end{displaymath}

will get closer to

\begin{displaymath}X(\infty) = P\left(\begin{array}{cc}
1&0\\
0&0\\
\end{array}\right)P^{-1}X(0)\end{displaymath}

when n gets large. If $X(0) = \displaystyle \left(\begin{array}{c}
a\\
b\\
\end{array}\right)$, then we have

\begin{displaymath}X(\infty) = \left(\begin{array}{rr}
1&1\\
1&-1\\
\end{array}\right)\left(\begin{array}{cc}
1&0\\
0&0\\
\end{array}\right)\left(\begin{array}{rr}
1&1\\
1&-1\\
\end{array}\right)^{-1}\left(\begin{array}{c}
a\\
b\\
\end{array}\right) = \frac{a+b}{2}\left(\begin{array}{c}
1\\
1\\
\end{array}\right).\end{displaymath}

Note that the vector $X(\infty)$ is proportional to the unique steady state vector of A

\begin{displaymath}{\bf p} = \frac{1}{2}\left(\begin{array}{c}
1\\
1\\
\end{array}\right).\end{displaymath}

This is not surprising. In fact, a similar general result holds for any regular stochastic matrix, that is, any stochastic matrix for which some power has all positive entries.

Author: M.A. Khamsi
