Hi, in this set of lectures we're going to talk about Markov models. Now Markov

models are really simple: they consist of just two parts. The first thing is there's

a set of states, so those are states that a person's psyche could be in. They could

be the state of a particular government or an economy. And then there's going to be

transition probabilities and the transition probabilities are going to tell

us the probability of moving from one state to another. So remember all the

stuff we learned about probabilities: how they sum to one, and that sort of thing.
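For instance, the rule that the probabilities of moving out of any one state have to sum to one can be checked mechanically. Here's a minimal Python sketch; the matrix values are made up for illustration, not taken from the lecture:

```python
# A hypothetical 2-state transition matrix: row i holds the
# probabilities of moving from state i to each of the states.
T = [[0.7, 0.3],   # from state 0: stay with 0.7, move with 0.3
     [0.4, 0.6]]   # from state 1: move with 0.4, stay with 0.6

# Each row is a probability distribution, so it must sum to one.
for row in T:
    assert abs(sum(row) - 1.0) < 1e-9
print("all rows sum to one")
```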

All those rules are gonna apply here. But the probabilities are gonna tell us how

likely it is to move from, say, one state to another. Let me give a couple examples.

So first, let's suppose we have some students. And those students could be in

either one of two states. They could be alert, or they could be bored. Now,

there's gonna be some probability, P, that they move from alert to bored. And maybe

some probability Q that they move from bored to alert. And over time, students

are gonna be moving back and forth from the alert state to the bored state. And

the Markov process will give us a framework with which to understand how

those dynamics take place. Now, alert and bored students maybe don't seem that important. So let's do

something more relevant. Let's talk about countries being free or not free. So

those are the two states. A country can be free. A country can be not free. Now what

we can do is, we can use data to estimate: what's the probability that a country moves

from free to not free, and what's the probability that a country moves from not

free to free? That'll also be a Markov process. So if we look

historically at the number of free states and not-free states and then create a

third category called "partly free", which is the red line, we can see that there are these

different trends, right? The free states seem to be increasing. The not-free

states tend to be decreasing. What we can do is use a Markov process to figure out

where this process is going to end up. Is it going to end up with all free states?

Or are we going to end up with maybe some moderate number of free states, with the

process still churning? That's where the model's going to help us. Now remember we

talked about the different sorts of things that processes can do? They can go to

equilibria, they can cycle, they can be completely random, or they can be

complex. What we're going to find is that, as long as just a few assumptions hold,

Markov processes are gonna be right here: they're gonna go to

equilibrium. And so there's a theorem called the Markov convergence theorem.
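As a quick numerical preview of what that theorem delivers, here's a sketch in Python. The transition probabilities are made up for illustration: we start the same two-state chain from two opposite starting distributions and watch both settle on the same equilibrium.

```python
# Hypothetical two-state transition matrix (each row sums to one).
T = [[0.9, 0.1],
     [0.3, 0.7]]

def step(dist):
    """One period: new distribution = old distribution times T."""
    return [sum(dist[i] * T[i][j] for i in range(2)) for j in range(2)]

a = [1.0, 0.0]   # start everyone in state 0
b = [0.0, 1.0]   # start everyone in state 1
for _ in range(100):
    a, b = step(a), step(b)

print(a, b)  # both end up at the same equilibrium, [0.75, 0.25]
```

The starting point washes out entirely; that's the convergence the theorem promises.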

This Markov convergence theorem tells us: as long as a couple really mild

assumptions hold, namely: we have a finite number of states, and those

probabilities stay fixed. And then, one other thing: you can get from any

state to any other state. Then what we'll get is that the system goes to

an equilibrium. So this is a really powerful thing and has all sorts of implications

that we're gonna flesh out as we look more deeply into the model. Now to do this, to

understand Markov processes, we're going to have to introduce a little bit more notation,

another technique, another tool from the study of models. And these are called

matrices. So matrices are really just a little grid, maybe two by two or

three by three, where you put in numbers like point four, point five, point

six, point five. And those will be the transition probabilities. So what we're

gonna learn how to do is multiply by matrices in order

to understand these Markov processes, and in particular to understand how the Markov

convergence theorem works. We'll use these matrices to explain why these

systems go to equilibria. Now the reason we do Markov processes is twofold. One is,

they're really sort of a useful way to think about how the world works and

we get this really powerful result, the Markov convergence theorem that says these

systems are going to go to these equilibria. Any Markov process goes

to an equilibrium. The second reason we're going to do them is what we talked about

in the previous lecture: this idea of exaptation. The Markov model is

incredibly fertile. Once we have the Markov idea in our head, once we

understand what a Markov process is, we can apply it in a whole bunch of different

settings. In fact, one of my colleagues, if we give him almost anything, will say "that's

a Markov process". And there's a sense in which a lot of things are Markov processes,

and it's often really, really useful to think of things in the context of Markov

processes. It's also true that once you have this idea of transition probabilities and

matrices, we can use those in a lot of settings as well. Okay, so let's get

started. I'm going to start out with a very, very simple Markov process, then

what we're going to do is we're gonna look at a slightly more complicated one and

then see how the Markov convergence theorem works. Once we've got all that in

play, then we'll go back and talk about exaptation, where we can apply it in other

settings. Okay, thanks.
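As a coda, the two-by-two grid of numbers mentioned above (point four, point five, point six, point five) can be exercised in a few lines of code. The lecture doesn't specify how the four entries are laid out, so the arrangement below, chosen so the probabilities out of each state sum to one, is an assumption:

```python
# One way to arrange the lecture's numbers 0.4, 0.5, 0.6, 0.5 so that
# the probabilities out of each state sum to one (an assumed layout).
T = [[0.4, 0.6],   # from state A: stay with 0.4, move to B with 0.6
     [0.5, 0.5]]   # from state B: move to A with 0.5, stay with 0.5

def step(dist):
    """Multiply a row distribution over states by the matrix T."""
    return [sum(dist[i] * T[i][j] for i in range(2)) for j in range(2)]

dist = [1.0, 0.0]    # start entirely in state A
dist = step(dist)
print(dist)          # after one period: [0.4, 0.6], the first row of T

# Iterating the multiplication carries the distribution to the
# chain's equilibrium, just as the convergence theorem says.
for _ in range(50):
    dist = step(dist)
print(dist)
```

This is exactly the matrix bookkeeping the upcoming lectures work through by hand.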