0:07

Now we're coming to the last topic in control theory.

I don't have any explicit homeworks on this.

This took actually quite a bunch of algebra, where was this?

Either my postdoc year or my last years as a PhD student,

we were just publishing some work and

one of the reviewers self-identified, which they don't typically do.

And he said it was really nice work and everything else but

you don't cite my work.

Which happens.

But he was right, we hadn't seen it.

There's always just pockets of community beside each other and

then sometimes you just haven't seen that group, and didn't read that journal,

and didn't know it existed.

And this was a paper I just honestly didn't know existed.

It's a very interesting paper.

And so far we started out with, these are the dynamics.

I have to have a way to come up with the control to guarantee stability.

So we use Lyapunov methods, and the Lyapunov method kind of dictates

to make V dot this form, this is what the control must be, right?

Sometimes we can tweak some things and still guarantee negative and so forth.

But it's pretty much dictated by that.

They took a very different approach.

Their approach is going to be:

let's have perfectly linear closed-loop dynamics.

Not something that's almost linear, like what we're getting with MRPs.

We want perfect linear closed-loop dynamics.

1:19

How do we come up with a control to achieve that?

And that's what I'm going to show you here.

It's kind of fun, elegant, it brings all the stuff together.

Has a lot of kinematic properties that we've seen back from chapter three that

are going to come back again.

And some amazing math that simplifies to levels I didn't think it would.

[LAUGH] So you do the math, this is really pages and pages.

We think the parameter addition property proofs are bad, this gets pretty bad.

But it's fun, it's nice.

So linear closed-loop dynamics, that means their epsilon here is actually

the quaternion vector part, that's beta one, two, and three, right?

If epsilon goes to zero, your attitude error went to zero.

Maybe you went the short, maybe you went the long way, but

your attitude errors went to zero, right?

So, and if you want to deal with unwinding, you can switch.

We know how to handle that, but if you start out with these, you can specify,

look, I want something that looks just like this.

It's perfectly linear in terms of my attitude errors.

2:12

And you can add now things like integral feedback pretty readily and

you can do all kinds of classic linear control formulations you can throw in

here quite readily.

And this is the closed-loop response.

So these gains, this will be perfect.

It'll tell you precisely what the settling time is, what's the natural frequency,

what's the damped frequency?

With my MRP one, I was close within a few percent, but this will tell you for large

motions precisely because it is a linear system once you've feedback compensated.
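As a sketch of what that buys you (assuming scalar gains P and K for simplicity, which is my simplification, not necessarily the slide's exact form), the desired closed-loop error dynamics are a standard second-order linear system, so the textbook frequency and damping formulas apply exactly:

```latex
\ddot{\boldsymbol{\varepsilon}} + P\,\dot{\boldsymbol{\varepsilon}} + K\,\boldsymbol{\varepsilon} = \mathbf{0},
\qquad
\omega_n = \sqrt{K}, \qquad \zeta = \frac{P}{2\sqrt{K}}
```

Critical damping is then just P = 2 sqrt(K), with no inertia appearing anywhere.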

One of the nice aspects of their control is when we had our control, and we put

everything in the closed-loop dynamics from the kinetic side, we have this.

This is all linear but

we know that sigma differential equations are slightly nonlinear.

But there's also inertia in here.

So for the closed-loop response:

if I pick certain Ps and Ks and then make my inertia ten times larger,

I'm going to have a different response, right?

Here in their form, they don't want any system parameters appearing.

So there's no inertia happening up here.

Which is nice because then you get this behavior automatically.

I want it critically damped,

you pick the right P's and K's and it will be critically damped.

And you can throw in the inertias but

you don't have to change the gains, it will be critically damped.

So just subtleties, conveniences.

One doesn't exclude the other.

So this is what we'd like to have.

This is what we have had.

And we know there's a nonlinear relationship between this and this.

So these aren't exactly linear yet.

How do we get this other form?

3:37

And so really what you have to do is here, if you wanted to substitute everything

with del omega, we know sigma dot was one over four B.

I think there's a one over four missing here.

But times the del omega, if you had del omega dot,

you have to differentiate those B's, you've seen that matrix, lots of terms,

you have to take its time derivative, chain rule, that's more terms, all right?

And substitute, and that's definitely going to be messy.

4:01

All right?

It's going to be messy, but the question is, does that mess lead somewhere elegant?

And I think it's one of the reasons why I love astrodynamics so much.

These other fields, structural mechanics, fluids,

really complicated, complex nonlinear systems.

Strongly coupled, nonlinear, and all that stuff.

4:18

If you look at complicated systems, it just gets even more complicated.

You add an extra thing, it gets even more complicated.

In astrodynamics, in translation and in attitude.

It's amazing how often with the right formulation, the right kinematics,

we end up with incredibly elegant,

simple solutions that come out of this stuff which, to me, is very exciting.

So occasionally we get a little carrot,

it's not just being beaten over the head with "prove all the parameter properties."

But in the end, it was a beautiful simple one.

This was another case, so I want to show you, outline for you, how this math and

logic works.

You won't be following all the details of the substeps.

But the logic of how we have to implement it, and why these quaternion properties,

and also vice versa similar MRP properties can be exploited,

becomes very apparent in this formulation.

So, that's why I've chosen to kind of end the control discussion with this one.

So our desired thing here is: I'm going to follow their paper,

which did it in terms of quaternions.

So their epsilon is beta one, two, and three,

the vectorial part of the quaternion.

And they want perfectly linear closed-loop dynamics.

And we have this dynamical system.

So the question is what control u do we have to create

such that the error response perfectly satisfies this?

And this becomes basically an inverse dynamics approach.

We start out with a desired closed-loop dynamics and

then we have to back substitute everything and solve all the algebra back for u.

So right now here there's no u in sight, so we have to get there.

All right?

So this is a step.

How do we make this work?

Well this is the definition of epsilon, you can see the classic definition.

If you go back and look in our quaternion notes, there were some other formulations,

this is the T matrix, it's a three by three.

The quaternion beta dots had this four by three matrix times omega so

that's that lower three by three part.

Basically that's where that T matrix is.

It's a three by three matrix defined here and you can see it's beta naught

times an identity matrix, and the other part is actually nothing

but a skew-symmetric matrix, that's nothing but epsilon tilde.
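As a small numerical sketch of those definitions (function names are mine, not the slide's): T is beta naught times the identity plus the epsilon tilde matrix, and the vector-part rate is half of T times omega.

```python
import numpy as np

def skew(v):
    """Tilde (skew-symmetric) matrix, so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def T_matrix(beta):
    """T = beta0 * I + eps_tilde, where eps is the quaternion vector part."""
    beta0, eps = beta[0], beta[1:]
    return beta0 * np.eye(3) + skew(eps)

def eps_dot(beta, omega):
    """Vector-part kinematics: eps_dot = 1/2 * T(beta) @ omega."""
    return 0.5 * T_matrix(beta) @ omega
```

For the identity quaternion, T reduces to the identity matrix and the vector-part rate is simply omega over two.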

So, different forms.

You can look it up.

Nothing too fancy yet.

6:22

Now, we want to start to simplify all this stuff, and

we want to be able to substitute it back.

We had epsilon in the equation, epsilon dot, and epsilon double dot.

Epsilon, we have.

What is epsilon dot?

In the end, everything has to come out something in terms of our control.

So we have epsilon dot, good, we take its derivative, with chain rule,

that would be a T dot and there'll be an omega dot.

Go okay, that's moving along.

How do we find T dot?

Well, T was defined as beta naught times identity plus this epsilon tilde so

T times omega is this term.

If you just differentiate T, you get a dot here.

And you will also have a tilde dot, but

epsilon tilde omega is the same thing as minus omega tilde epsilon, so

if I just differentiate the epsilon part, you can put that dot outside.

And you get it.

Beta naught dot has this differential equation,

again you can go look at your quaternion math slides, you can quickly validate that.

And go okay, we've got that term, so we can plug it in.

The epsilon dot, we've already got defined up here, so that's good.

So we can plug that in.

And what we're really looking for is the epsilon double dot.

So now we've plugged all these in here, we can take this term and plug it in there.

As you can see, lots of algebra.

Even when you know how to find the answer.

So in the end, we get, after substituting all that stuff,

this is our epsilon double dot expression.

7:45

So we're going to look at that second term that's right there, that omega tilde

T omega, and explore that a little bit.

T was defined as beta naught identity plus epsilon tilde so

I put that in front. But you can see here, several things are going to happen.

You've got omega tilde times identity, that's omega tilde, times omega.

That term is going to be exactly zero.

So we can factor it out, and this is going to vanish in a moment.

The other one is just going to be omega tilde,

epsilon tilde and omega, so that one remains.

And I reverse the order, so that vector cross product,

epsilon tilde omega is minus omega tilde epsilon.

So now I have that form, and you go, okay, that would go here but

it doesn't really help yet.

So omega tilde times omega tilde, that's really a double cross product rule.

And we've actually used this earlier in class and some homeworks.

We had I think e hat, e tilde squared, it's one of those proofs we had to use.

You can prove this then with cross product rule.

This has to be equivalent to this.

So if this gets applied here, this becomes this expression in the end.
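That double cross product rule, omega tilde times omega tilde equals omega omega transpose minus omega transpose omega times identity, is easy to spot-check numerically:

```python
import numpy as np

def skew(v):
    """Tilde (skew-symmetric) matrix, so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Double cross product rule: [w~][w~] = w w^T - (w^T w) I
w = np.array([0.3, -1.2, 0.7])
lhs = skew(w) @ skew(w)
rhs = np.outer(w, w) - np.dot(w, w) * np.eye(3)
```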

8:56

And you can actually carry omega transpose epsilon as a scalar so

I can shift it in front.

And this is just omega squared, identity times epsilon is just epsilon.

And now you plug this in here, you can see this term and this term perfectly cancel,

which is very nice, because we had enough terms to carry along.

So after all of this math, in the end,

epsilon double dot just has this expression.

Which is encouragingly simple [LAUGH] at this stage, at least.
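For reference, reconstructing that simplified expression from the kinematic identities above (the slide's exact grouping may differ slightly), the surviving terms are:

```latex
\ddot{\boldsymbol{\varepsilon}}
= -\tfrac{1}{4}\left(\boldsymbol{\omega}^{T}\boldsymbol{\omega}\right)\boldsymbol{\varepsilon}
+ \tfrac{1}{2}[T]\,\dot{\boldsymbol{\omega}}
```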

So now we have epsilon double dot in terms of this stuff.

We have epsilon dot already that we had from the differential kinematic

equation from quaternions.

You plug all this stuff in and in the end you get this expression.

10:03

It does break down, at 180 degrees.

But otherwise, this is always full rank so we can do this.

But already here, we get a little bit of an inkling that while we're using

quaternions, which are always non-singular, you can describe any attitude.

There are some odd things happening at the 180 degree argument.

And this will reflect itself in the control.

But that basically means this bracketed term must be 0.

So let's see, this T inverse we can actually simplify.

You can show that T inverse is the same thing as T transposed plus this

outer product.
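That inverse identity follows from the unit constraint (beta naught squared plus epsilon transpose epsilon equals one) and from T epsilon equaling beta naught epsilon, and it's easy to check numerically:

```python
import numpy as np

def skew(v):
    """Tilde (skew-symmetric) matrix, so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Unit quaternion beta = (beta0, eps); T = beta0 * I + eps_tilde
beta = np.array([0.8, 0.2, -0.3, 0.4])
beta = beta / np.linalg.norm(beta)
beta0, eps = beta[0], beta[1:]
T = beta0 * np.eye(3) + skew(eps)

# Claimed inverse: T^{-1} = T^T + (eps eps^T) / beta0
T_inv = T.T + np.outer(eps, eps) / beta0
```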

And the T itself, we can plug in.

So you can put all those terms in there, more algebra.

And quite a bit of algebra actually, but

this can be crunched down to this form in the end.

11:01

And it's -P omega, that looks just like our classic proportional feedback on rates.

There is some 2 times K instead of just K, times epsilon, so

that's kind of our proportional feedback on attitude.

Here we have omega squared times epsilon, definitely a non-linear term.

But it's essentially this one little non-linear term that was needed to turn

this other system into a perfectly linearized closed loop dynamics.

So that was a bunch of algebra, you get down to this form.
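Written out, my reconstruction of that crunched-down form (scalar gains P and K assumed, so this is a sketch rather than the slide's exact notation) is:

```latex
\dot{\boldsymbol{\omega}}
= -P\,\boldsymbol{\omega}
- \left(2K - \tfrac{1}{2}\,\boldsymbol{\omega}^{T}\boldsymbol{\omega}\right)
\frac{\boldsymbol{\varepsilon}}{\beta_{0}}
```

The -P omega piece is the rate feedback, the 2K piece is the attitude feedback, and the omega squared piece is that one little nonlinear term.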

So how do you implement this?

Well, this is still the system we're controlling.

And again, I'm showing a single rigid body.

Next lectures we're going to talk about CMGs and reaction wheels and so forth.

It's just extra gyroscopics, it's still the same.

What this dictates is this is the angular acceleration you must

achieve on your dynamical system.

So you just plug it in here, you put this omega dot in here and then back solve for

your control.

And if it's reaction wheels or other things,

it's just a different form of the control you solve for.

So this scales and you can lift it up from this problem to other

classic actuation methods very readily.

So now if you solve for the control, like before, we feedback compensated for

the omega tilde I omega, as with the one that we derived.

We had a minus a gain times omega, so that's here, but

here that gain gets scaled by inertia.

That kind of makes it work, that's fine.

And, because you can compensate, whatever the inertia is you can adjust, but

this also comes out of the fact that our closed-loop dynamics didn't depend on

inertia.

We knew at some point, if your craft is ten times heavier than the first one,

12:32

your control has to be stronger to get the same critical response.

That's why it's the gains times inertia, this gives you that automatic scaling.
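Putting the pieces together, here is a minimal sketch of the implementation strategy (my reconstruction, with scalar gains assumed): solve for the required angular acceleration, then back-solve the rigid body dynamics for the torque, which is where the inertia scaling of the gains comes from.

```python
import numpy as np

def skew(v):
    """Tilde (skew-symmetric) matrix, so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def linearizing_control(beta, omega, I, P, K):
    """Sketch of the feedback-linearizing attitude control (scalar gains).

    beta  : unit error quaternion (beta0, eps1, eps2, eps3)
    omega : angular velocity error
    I     : inertia tensor
    Returns the torque u back-solved from I @ omega_dot = -[w~] I w + u.
    """
    beta0, eps = beta[0], beta[1:]
    # Required acceleration so that eps_ddot + P eps_dot + K eps = 0
    omega_dot = -P * omega - (2.0 * K - 0.5 * (omega @ omega)) * eps / beta0
    # Back-solve the dynamics: inertia automatically scales the feedback
    u = skew(omega) @ I @ omega + I @ omega_dot
    return u, omega_dot
```

Plugging this torque back into the dynamics should reproduce the demanded acceleration and hence the perfectly linear closed-loop error response.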

But then the last part we've got a gain times the attitude error and

that's the non-linear part that happens still.

So that's actually pretty neat.

It's not that different structurally than our earlier derived control.

Fundamentally, the only new term is this one, this coupling with the attitude.

That was it.

But do you see any issues with this control?

>> Need to know the inertia, so it's not robust.

Right, so, well, you can't say it's not robust, but you have to

know the inertia, so you'd have to investigate robustness, how robust is it?

And I've tried it, actually, with inertia errors of 40%, 50%, 60%,

still works pretty well, so it's not necessarily sensitive to that, but

you do need to know inertia, absolutely.

>> The fact that the inertia is in the control doesn't mean that it's not robust.

>> Right. >> Okay.

>> It just means you have to do extra checks, because maybe it is.

Maybe the control performance is impacted by it, or maybe it isn't.

13:38

Well, you only need to know inertia.

Yeah, just because it depends on inertia, doesn't mean it's very sensitive to that.

Sensitive would mean, hey, a small epsilon off and I get three times the error or

something.

If you're off by 5, 10%, it actually converges just fine.

It may take two seconds longer,

three seconds longer, some small differences like that.

But what's the other issue?

There's one big glaring pink elephant in this slide.

Nobody's raised their hand yet with this control.

14:38

>> Beta naught?

>> Right, so what is the vectorial part of quaternion divided by the scalar part?

What do we call such coordinates?

>> CRP. >> Yes, it is a CRP.

All right, we took beta i over beta naught, that was the definition.

This is just written in vectorial form.

So even though we derived everything in terms of quaternion thinking, yeah,

it's going to be non-singular, wonderful,

global, but the control, it turns out, is feeding back CRPs.

Which means this beautiful, perfectly linear stabilizing control,

which actually turned out to be quite robust, cannot handle this.

15:37

Very similar, this is very similar.

But again, the inertias appear somewhere, and

this has to do with how we define this.

So there's not much difference to this, so that's good.

But in essence, despite using quaternions in deriving it,

we end up with essentially a slightly modified CRP feedback control.

With some nice closed-loop properties,

at least from a performance prediction perspective.

There's nothing inherently better with linear versus nonlinear but

you have more tools for that.

If you wanted to add integral terms, it's really, we've already computed this,

we've computed this.

This just becomes an extra term you kind of carry along in all that algebra.

And this stuff stays all the same but this extra term,

once you do these T inverses and plug it in and do stuff.

You end up with this extra term that you have to compute.

So you can pretty readily now include robustness to unmodeled

torques by using an integral feedback term on the closed-loop dynamics.

But still you can't do this.

It would've blown up several times right there, at 180.

But whatever you do with this method the strategy is always you solve for

angular acceleration.

And then plug it into dynamical system and back solve for the control.

This is the control that will make this act this way, except for

180 which is a problem.

If you're doing tracking, it's the same mathematics again, the kinematics

are the same.

This epsilon is B relative to R now, instead of B relative to N.

And you need omega b relative to r which we often call del omega.

All right, it is omega b relative to n minus omega r relative to n.

So when you solve for del omega dot,

you've actually found omega dot minus omega r dot.

And to plug into the equations you need omega dot not omega r dot.

So all you have to do is put this over to the right hand side which is what

we have here.

And now, with that little simple modification, I can do a tracking problem

with perfectly linear closed-loop dynamics.
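In symbols, that tracking modification is just moving the reference acceleration over to the right-hand side: the inverse dynamics solve gives the del omega rate, and what gets plugged into the equations of motion is

```latex
\delta\boldsymbol{\omega} = \boldsymbol{\omega}_{B/N} - \boldsymbol{\omega}_{R/N}
\quad\Rightarrow\quad
\dot{\boldsymbol{\omega}}_{B/N} = \delta\dot{\boldsymbol{\omega}} + \dot{\boldsymbol{\omega}}_{R/N}
```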

But I still can't track with errors exceeding 180 degrees, right.

But there's nice simple extensions that you can do with this stuff.

So this is why I love MRP's, right.

17:47

Its inverse was very analytical.

It was just a transposed plus an extra term.

So it had some nice easy analytical inverses.

If you take a three-by-three general matrix and go to Mathematica and ask for

an inverse it takes half a page already.

And you can imagine the algebra you have to grind through and try to reduce and

simplify.

It will put all the parameter addition properties to shame.

So with MRPs, that B matrix was not orthogonal, but it was almost orthogonal.

Let's see if we can use that.
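That near-orthogonality is the property that B times B transpose equals (1 + sigma transpose sigma) squared times identity, which is what makes the inverse analytical. A quick numerical check, using the standard B(sigma) definition:

```python
import numpy as np

def skew(v):
    """Tilde (skew-symmetric) matrix, so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def B_mrp(sigma):
    """MRP kinematics matrix: sigma_dot = 1/4 * B(sigma) @ omega."""
    s2 = np.dot(sigma, sigma)
    return (1.0 - s2) * np.eye(3) + 2.0 * skew(sigma) + 2.0 * np.outer(sigma, sigma)
```

So the inverse is just B transpose divided by (1 + sigma squared) squared, no symbolic matrix inversion needed.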

So that was the property we used for the quaternions and

we want to use MRPs now, which are, okay, a single MRP can't handle this but

I can switch at 180, right?

So I'm hoping I get a control in terms of MRPs because it handles 180 and

then I simply switch the descriptions.
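The switching itself is the standard MRP shadow-set map, a minimal sketch:

```python
import numpy as np

def mrp_shadow_switch(sigma):
    """Switch to the shadow MRP set when |sigma| > 1 (past 180 degrees),
    keeping the attitude description bounded: sigma_s = -sigma / (sigma . sigma)."""
    s2 = np.dot(sigma, sigma)
    return -sigma / s2 if s2 > 1.0 else sigma
```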

And this inverse is like this.

Now, I'm not going through the same, it's basically the same type of derivation.

But different kinematics thrown in, and

your inverse dynamics, everything, you're solving it the same way.

This is what we're looking for, perfectly linear MRP differential

18:47

closed-loop dynamics, essentially the differential equations.

And much, much, much algebra later.

This is the omega rate you have to have.

It's still the same minus p omega.

I get a gain times sigma, but you can see that gain now gets modified by this term.

I still have an omega squared times the attitude measure and there's an omega,

omega transpose that comes in.

So not quite as elegantly simple as the quaternion control, okay.
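Structurally, the same inverse dynamics recipe applies (a sketch with scalar gains, not the slide's exact grouping): demand perfectly linear MRP error dynamics, substitute the MRP kinematics, and use the analytical inverse.

```latex
\ddot{\boldsymbol{\sigma}} + P\dot{\boldsymbol{\sigma}} + K\boldsymbol{\sigma} = \mathbf{0},
\qquad
\dot{\boldsymbol{\sigma}} = \tfrac{1}{4}[B(\boldsymbol{\sigma})]\,\boldsymbol{\omega}
\;\;\Rightarrow\;\;
\dot{\boldsymbol{\omega}}
= -P\,\boldsymbol{\omega}
- 4K\,[B]^{-1}\boldsymbol{\sigma}
- [B]^{-1}[\dot{B}]\,\boldsymbol{\omega},
\qquad
[B]^{-1} = \frac{[B]^{T}}{\left(1+\boldsymbol{\sigma}^{T}\boldsymbol{\sigma}\right)^{2}}
```

The B dot term is where the omega squared and omega omega transpose pieces come from.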

19:14

But the best part is this thing is now feeding back on MRP's.

There's nothing dividing by zero ever unless I have sigma go

to infinity which of course I wouldn't.

Because at some point I can reset my state errors and say, okay,

I know I'm guaranteed stable up to this point.

Now flipped upside down, I'm simply resetting my control problem.

And even then with a switched linear system you can guarantee global stability.

So I can switch my MRP's and run this control.

And this gives me very predictable closed-loop performance in that sense for

extremely large motions.

And it's also very robust.

I've tried it with pretty large errors, with inertia errors and tumble rates.

Because this omega dot you put back in the equation, which gives it an inertia

tensor times that gain.

20:00

So this is kind of the last slide on this, where I actually applied it.

You can see a simulation with some inertias and I've got some attitude

errors, I don't tumble too crazy here, actually I don't flip.

But this gives me, if you would superimpose this on top of the linear

closed-loop dynamics I gave with these initial conditions, it's a one-to-one match.

So you found the control that perfectly gives you the linear system.

And it's a little bit more complicated than the quaternions.

But compared to the quaternion one, which ended up being a CRP feedback,

these can handle tumbling, tumbling, tumbling bodies.

And back in my postdoc days, I ran lots of simulations like this.

And I was quite impressed with the overall performance.

So even dealing with saturation, uncertainties, and

inertia, all of that stuff, it

gave very nice-looking performance in a closed-loop sense as well.

And you can handle tumbling whereas the original formulation couldn't.

But they really helped identify that this inverse dynamics approach,

if you use the right coordinates, actually has some very elegant stuff.