So now we have seen where LTI systems come from. We've seen nonlinear models that turn into very well-behaved, pretty LTI systems, and we've seen nonlinear models that don't necessarily produce useful LTI models. But a lot of systems do produce useful LTI models, and they are really our most systematic way of designing controllers, so they are extremely useful. Even though there aren't that many bananas in the universe, a lot of things act like bananas. So what we're going to do now is start by understanding how these systems behave. In this lecture I'm actually going to find the solutions to these systems, and once we have those solutions we can start talking about how they behave. And we're going to start by simply ignoring the input and ignoring the output. So let's say that I have x-dot = Ax, and at time t0, the time when we wake up, we start somewhere. This is the physical part of the system, not the thing that we bought actuators for, not the thing that we bought sensors for. It's just x-dot = Ax. Let's see what happens in that case: how does the system behave, or drift if you will, when you're not messing with it? So we need to solve x-dot = Ax. Let's start with a scalar version of this, where x is just a number. I'm going to write the scalar version as x-dot = ax, with a lowercase a, and I start somewhere, x(t0) = x0. Well, you may have seen in a differential equations course that the solution to this differential equation is given by x(t) = e^{a(t - t0)} x0. So here the professor shows up and says, "Oh, this is the solution to this differential equation." Now, you clearly are critical-thinking people who don't just accept anything the professor says, so what you want to do now is make sure that this is indeed correct.
So how do you ensure that what someone feeds you as the solution to a differential equation is actually correct? How do we know? Well, the first thing you have to do is make sure that the initial condition is right, meaning that my solution actually respects this initial condition. So I'm simply going to plug in t0 and see what I get. If I do that, I get x(t0) = e^{a(t0 - t0)} x0. Well, t0 - t0 is zero, so I get e^0 x0, and e^0 is always equal to one: the exponential evaluated at zero is one. So x(t0) = x0, which means that the initial condition is correct, and we're done with that part. Now, clearly, we need to deal with the dynamics: we need to make sure that the dynamics is indeed correct. So now I'm going to take the time derivative of my proposed solution, d/dt of this thing, and see what I get. Well, for the time derivative of an exponential, all we do is pull out the coefficient. So we pull out an a in front, and that's all we do; this is why exponentials are so wonderful. So the time derivative of x with respect to t is a times what we have here, and this thing here, that's x. So the time derivative of my proposed x is equal to a times x, which is where we started. What we now know is that the dynamics is correct as well. And if the initial condition is right and the dynamics is right, then we know, thanks to the existence and uniqueness of solutions to differential equations, that this is indeed the right solution. Now here is the kicker: for higher-order systems, where x is in R^n, we get the same solution. We have x(t) = e^{A(t - t0)} x0, and x-dot = Ax is the same equation; the only thing I did differently was write capital A instead of lowercase a.
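The same two checks (initial condition, then dynamics) can be run numerically. Here is a minimal sanity check in Python, assuming numpy and scipy are available; the matrix A, the initial state x0, and the test time are made-up illustration values, not anything from the lecture.

```python
# Numerically check that x(t) = e^{A (t - t0)} x0 solves x-dot = A x.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # example system matrix (assumption)
x0 = np.array([1.0, 0.0])      # example initial state (assumption)
t0 = 0.0

def x(t):
    """Proposed solution x(t) = expm(A (t - t0)) x0."""
    return expm(A * (t - t0)) @ x0

# Check 1, initial condition: expm(0) is the identity, so x(t0) == x0.
assert np.allclose(x(t0), x0)

# Check 2, dynamics: a finite-difference derivative should match A x(t).
t, h = 0.7, 1e-6
dxdt = (x(t + h) - x(t - h)) / (2 * h)
assert np.allclose(dxdt, A @ x(t), atol=1e-5)
```

If either assertion fails, the proposed solution is wrong; here both pass, matching the verification done by hand above.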
And the thing to keep in mind here is that this is what's called a matrix exponential, instead of a scalar exponential, which looks just a little scary. But we're not scared of matrix exponentials. In fact, what we do is look up the definition of an exponential. For scalars, e^{at} is simply this sum: e^{at} = sum over k from 0 to infinity of (at)^k / k!. That is the definition of what the exponential is. Well, this involves just multiplications, and we can write multiplications for matrices as well. So the definition of a matrix exponential is just this sum with A in place of a. Now, it turns out that it's actually not that important for us to be able to compute matrix exponentials by hand. However, we need to know where they come from, and they come from this sum. And the reason why this is useful is that it allows us to compute the derivative of a matrix exponential. So let's take the time derivative of this whole sum. The first thing I'm going to do is pull out the k = 0 term. That term is A^0 t^0 divided by 0 factorial, which is just one (the identity, in the matrix case), and the time derivative of a constant is a big fat zero. So I pull out the first term, and then I take the derivative of the remaining terms with respect to t; all I get is an extra k in each term. Now I can rewrite things: I can pull out an A and write everything in terms of k - 1 instead of k. But I'm summing from one to infinity, so if I shift my index down by one, I'm summing from zero to infinity again, and I get back the same sum. So what this means is that the time derivative of e^{At} is simply A times e^{At}. The matrix exponential behaves just like the scalar exponential.
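To make the series definition concrete, here is a short check, assuming numpy and scipy; the matrix and the number of terms are arbitrary illustration choices. Truncating the sum sum_{k>=0} (At)^k / k! at a modest number of terms already agrees with scipy's built-in matrix exponential.

```python
# The matrix exponential is defined by the series sum_{k>=0} (A t)^k / k!.
import numpy as np
from math import factorial
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # example matrix (assumption)
t = 0.5

series = np.zeros_like(A)
for k in range(20):            # 20 terms is plenty for this small A*t
    series = series + np.linalg.matrix_power(A * t, k) / factorial(k)

# The truncated series matches scipy's expm to numerical precision.
assert np.allclose(series, expm(A * t))
```

As the lecture notes, we rarely compute this sum ourselves; its value is that it tells us where the derivative formula d/dt e^{At} = A e^{At} comes from.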
That's all I wanted to show with this slide: even though this looks a little awkward, with these sums of matrix powers, all it means is that we can take derivatives of matrix exponentials and trust that they behave just like in the scalar case. In fact, e^{A(t - t0)} is such a fundamental object in linear systems theory that it has been given its own name: it's known as the state transition matrix. Sometimes I'm actually going to write it as Phi(t, t0), and what we should then remember, and probably I will remind you of it, is that this is simply the matrix exponential. That's all it means, but it will show up quite a bit. Okay. If x-dot = Ax, that means that x(t) = e^{A(t - t0)} x(t0), or in general I can write it in this form: x(t) = Phi(t, t0) x(t0), where the state transition matrix is just a fancier name for the matrix exponential. And it turns out that it doesn't matter if it's t0 or not; for whatever time tau, we just multiply what x was at that time tau by the state transition matrix. So x(t) = Phi(t, tau) x(tau) is simply code for x(t) = e^{A(t - tau)} x(tau). So the point is that we know what the solution to this equation actually is. And the way you would show that this is the solution is you would use the following two properties, and I encourage you to go home and do this. The first is the thing we just established: the time derivative of Phi(t, t0) is A times Phi(t, t0). The other is that Phi(t, t) is the identity. For Phi(t, t), I just plug in t instead of t0, so I get e^0; in the scalar case that's one, in the matrix case that's the identity matrix. So that's the only difference when you go to matrices. Fine, so now we actually have a control system: x-dot = Ax + Bu. What happens? Well, again, the professor goes, "Here's my claim; this is what I claim the solution is": x(t) = Phi(t, t0) x(t0) plus the integral from t0 to t of Phi(t, tau) B u(tau) d tau. This looks like a mouthful, doesn't it?
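The two properties of the state transition matrix that the lecture asks you to verify at home can also be checked numerically. This is a sketch assuming numpy and scipy, with an arbitrary example matrix and arbitrary times.

```python
# Check the two defining properties of Phi(t, t0) = expm(A (t - t0)):
#   (1) Phi(t, t) is the identity matrix
#   (2) d/dt Phi(t, t0) = A Phi(t, t0)
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # example matrix (assumption)

def Phi(t, t0):
    return expm(A * (t - t0))

t0, t = 1.0, 2.3               # arbitrary times (assumption)

# Property (1): plugging in t = t0 gives e^0, the identity.
assert np.allclose(Phi(t0, t0), np.eye(2))

# Property (2): finite-difference derivative matches A Phi(t, t0).
h = 1e-6
dPhi = (Phi(t + h, t0) - Phi(t - h, t0)) / (2 * h)
assert np.allclose(dPhi, A @ Phi(t, t0), atol=1e-5)
```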
It doesn't look pleasant at all. The first part is the thing we had when we had no B matrix at all, and then we have this second term here. If you want to be picky, that term is what's called a convolution, but we don't have to call it a convolution. All we need to know is that this is what we claim the solution is. But how do we actually verify that this is correct? Well, we do exactly what we did before: we check the initial condition and the dynamics. So let's plug in t0 and see if we get the right thing. Then we get Phi(t0, t0) x(t0), and the integral runs from t0 to t0. Okay, let's see what this is. Well, Phi(t, t) is equal to the identity matrix no matter what t is, so the first term is just x(t0). Now, an integral from t0 to t0 is clearly zero, because I'm taking the integral over a single point. So what I have is that x(t0) is equal to what it should be, x(t0), and we're going to declare success on the initial condition. Now we need to deal with the dynamics, and that's harder. First of all, we use the fact that if I take the derivative of the first term, I get an A out, so the first component is no big deal. But then we have this awkward object: we have to take the derivative of an integral with respect to t, when t shows up both in the upper limit and inside the integrand. And that's not a trivial thing. In fact, what you need to do is use something known as Leibniz's rule. It tells us that if I have a general function f(t, tau) inside the integral and I take the derivative of the integral with respect to t, the first contribution comes from the upper limit: instead of tau I plug in t, get rid of the integral, and obtain f(t, t). The other piece is that I pull d/dt inside the integral and take the derivative of the integrand with respect to t.
So this is technically what we have to do to compute this. Let's do that. For the f(t, t) part, we pull out the integrand and evaluate it at tau equal to t. Then I get Phi(t, t) B u(t), which is simply B u(t), because Phi(t, t) is the identity matrix. And then I get the time derivative of the integrand with respect to t; well, I know that's just an A that I pull out in front, since the time derivative of Phi(t, tau) is A Phi(t, tau). This is a little bit of a mouthful, I realize. So take a deep breath and redo this computation, just so that you believe it. But when you've done this, you can actually pull out the A and find that the time derivative of my proposed x is A times this whole thing, plus B times u. And this whole thing is equal to the proposed solution itself, so instead of writing this rather awkward big expression, I'm just going to write x(t). In other words, dx/dt = Ax + Bu, which is where we started. So we can declare success also on the dynamics. To summarize, after all these pushups (and I realize that today's lecture was a little thorny in terms of all the integrals and derivatives, in fact much thornier than anything we've seen before), the reason I needed to do it was not because I think you need to be world champions at applying Leibniz's rule. I just want to be able to say the following: if I have x-dot = Ax + Bu and y = Cx, then I can write y(t) as C times x(t), where we computed the solution. So we actually know that the output is given by this expression. All these pushups just ended up with us being able to write explicitly what the solution is. Now, we're not going to be particularly interested in actually computing this by hand, but we need to know it to move forward.
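The claimed solution with an input can also be checked end to end: evaluate the closed-form expression x(t) = Phi(t, t0) x(t0) + integral from t0 to t of Phi(t, tau) B u(tau) d tau, and compare it against a direct numerical integration of x-dot = Ax + Bu. This is a sketch assuming numpy and scipy; the system matrices, the input signal u(t) = sin(t), and the time window are all made-up illustration values.

```python
# Verify the convolution-form solution against a numerical ODE solve.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec, solve_ivp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])           # example A (assumption)
B = np.array([[0.0], [1.0]])           # example B (assumption)
x0 = np.array([1.0, 0.0])
t0, tf = 0.0, 2.0
u = lambda t: np.array([np.sin(t)])    # example input signal (assumption)

def x_closed_form(t):
    free = expm(A * (t - t0)) @ x0                      # Phi(t, t0) x0
    integrand = lambda tau: expm(A * (t - tau)) @ B @ u(tau)
    forced, _ = quad_vec(integrand, t0, t)              # convolution term
    return free + forced

# Integrate x-dot = A x + B u directly for comparison.
sol = solve_ivp(lambda t, x: A @ x + B @ u(t), (t0, tf), x0,
                rtol=1e-9, atol=1e-9)

assert np.allclose(x_closed_form(tf), sol.y[:, -1], atol=1e-6)
```

The agreement between the two is exactly what the Leibniz-rule computation above guarantees: the closed-form expression and the differential equation describe the same trajectory.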
So at the end of this application of Leibniz's rule, what we ended up with was an expression for the output, or for the state if you want to get rid of the C matrix, of this general LTI system. And Phi here, the thing to remember, known as the state transition matrix, is simply given by the matrix exponential: Phi(t, t0) = e^{A(t - t0)}. What we're going to do now in the next lecture is see how this actually translates into being able to say things about how the system behaves, and in particular we're going to look at stability.