0:06

So the TRIAD method was a nice way to go: here's one observation, a unit direction vector, and here's another observation, another unit direction vector. And yes, Tibo, they don't have to be orthogonal, sorry, but they can't be aligned in any way; they have to give you a unique plane, then you can do this stuff.
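That two-vector construction is the TRIAD idea. Here is a minimal sketch of it; the function and variable names are mine, not from the lecture:

```python
import numpy as np

def triad(v1_b, v2_b, v1_n, v2_n):
    """Estimate the attitude DCM [BN] from two unit direction vectors
    measured in the body frame (v*_b) and known in the inertial frame (v*_n)."""
    def triad_frame(a, b):
        # Build a right-handed intermediate frame from a pair of vectors.
        t1 = a / np.linalg.norm(a)
        t2 = np.cross(a, b)
        t2 /= np.linalg.norm(t2)      # fails if a and b are aligned (no unique plane)
        t3 = np.cross(t1, t2)
        return np.column_stack([t1, t2, t3])

    BT = triad_frame(v1_b, v2_b)      # triad frame as seen from the body frame
    NT = triad_frame(v1_n, v2_n)      # triad frame as seen from the inertial frame
    return BT @ NT.T                  # [BN] = [BT][TN]

# Example: true attitude is a 30 degree rotation about the third axis.
th = np.radians(30)
BN_true = np.array([[ np.cos(th), np.sin(th), 0.0],
                    [-np.sin(th), np.cos(th), 0.0],
                    [0.0, 0.0, 1.0]])
v1_n = np.array([1.0, 0.0, 0.0])
v2_n = np.array([0.0, 0.0, 1.0])
BN_est = triad(BN_true @ v1_n, BN_true @ v2_n, v1_n, v2_n)
print(np.allclose(BN_est, BN_true))   # True for noise-free measurements
```

With noisy measurements you would typically feed the more trusted sensor in as the first vector, since TRIAD preserves its direction exactly.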

Now we want to look at: what if we want to use all of the information? TRIAD was fast, it was easy, it works, it has flown. But what if we want to use everything? There's information that we left on the table.

In particular, there are all kinds of systems these days flying with multi-head star tracker systems. You don't have just one star tracker, you have multiple star trackers, because as you're rotating, as Kenneth Spencer was highlighting earlier, sometimes your sensor is staring at the Earth.

Â 0:58

No star tracker has x-ray vision to look through the Earth and see where the stars are behind it. So it's blocked; it's a fantastic, expensive sensor giving you absolutely zero right now. So you want to make sure you have a second sensor.

But in normal operations, hopefully, you're flying in a way where both star trackers are useful, and now you can blend that information. Or even beyond this: with magnetic fields, with horizon sensors, there are all kinds of ways we can get headings.

Even with relative motion, if you think about it, there are fiducials; computer vision uses these tricks too. Fiducials are visual markers that you're tracking. If I'm looking at my image plane and I'm tracking this point, this point, and this point, each one of those points is a unit direction vector in that camera frame. How do I now reconstruct the full attitude, right? The same algorithms work for this kind of stuff.

So this is a fundamental capability, and more and more you're finding these algorithms built right into the sensor solutions, which is kind of nice. But if you have to write them yourself, this is what we're going to do.

So how do we deal with multiple observations? I now have N observations, and this is an interesting thing. Assume we have N observations with N bigger than 1, because we know N equal to 1 is not enough; I need at least two, and two already makes the problem overdetermined. But you may have three, you may have five, especially with visual camera sensors. If you're using a star tracker, this mathematics is running inside it: you're tracking five, six, ten stars, and that's ten headings you suddenly have to reconstruct from. So N is fundamentally a large number, or at least it can be. How do we do this?

Grace Wahba is a mathematician who did this back many decades ago. She posed this at a conference and said: for this estimation problem, fundamentally it always breaks down to the following. We have N observations measured in the body frame, and I know what those observations are in the inertial frame. Again, this assumes I know my location and my environment; otherwise this doesn't work.

Â 3:29

We're looking at the residual. If this were the matrix that maps the inertial vectors perfectly into the body frame, then v in the B frame minus [BN] times v in the N frame is going to give you 0. Zero squared is 0, times a weight is still 0, summed up is still 0. So if you had a perfect measurement, no noise, no corruption, no ignorance, and you map it all together, this cost function gives me 0.
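As a quick sketch of that zero-residual property in code (all names here are made up for illustration):

```python
import numpy as np

def wahba_cost(BN, v_b_list, v_n_list, weights):
    """Wahba's cost: J = 1/2 * sum_k w_k * |v_b_k - [BN] v_n_k|^2."""
    return 0.5 * sum(w * np.linalg.norm(vb - BN @ vn) ** 2
                     for w, vb, vn in zip(weights, v_b_list, v_n_list))

# Perfect, noise-free measurements: the body-frame vectors are exactly
# the mapped inertial vectors, so every residual is zero.
BN_true = np.eye(3)                  # pretend the true attitude is identity
v_n = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
v_b = [BN_true @ v for v in v_n]     # perfect measurements
print(wahba_cost(BN_true, v_b, v_n, [1.0, 1.0]))  # 0.0
```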

You had a question?

>> Why scale it by a half? Why would that make a difference?

>> Laziness, right? This is what we're after, laziness in this class.

Because if I don't use the half, you will see in a moment: if you have a function you want to optimize, say y of x, and you want to find the minimum of that function, you take its derivative. And if you have something squared, taking the derivative gives you two times that, so there's a factor of two right out front. And I'm really lazy, so I put a half in front of it.
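In symbols, for a single weighted squared term, the chain rule gives

```latex
\frac{d}{dx}\left[ \frac{1}{2}\, w\, f(x)^2 \right] = w\, f(x)\, f'(x),
```

so the half cancels the factor of two before it ever shows up.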

Honestly, if you look at the papers, there's flowery mathematical language about why this makes sense, but the bottom line is we're lazy; that's really what it's about.

So let's talk about these terms. If the measurements were perfect we would get zero, but we know there are measurement errors. These five observations are not all going to map perfectly from the known quantities to the measured quantities; some things will be off, they're not completely consistent. So this is going to be nonzero.

So we take the errors, [COUGH] and norm them. Each error is a vector; we take its L2 norm and square it, which is basically the dot product of the error with itself, right? And we sum them all up. That's a least-squares error measure, just like in least-squares estimation. That'll quickly come in here.

The next thing we can introduce is weights. That's really important, because we know, for example, that a sun sensor is way better than a magnetic field sensor. Now, what does "way better" mean? This is where you control it with the weights. Do I trust the sun sensor five times more than the magnetic field sensor? Then you just have to make sure the weight of the sun sensor is five times bigger than the weight of the magnetic field sensor.

The actual values of the weights don't matter. You could make the magnetic field sensor 100 and the sun sensor 500, or you could make one of them one five-thousandth and the other one one-thousandth. The absolute values don't matter; it's the relative values of the weights.

So again, laziness: the easiest weight to pick is what? One, exactly. We just pick one, and then everything is relative to one. So if something's better, I'd say: good, that one gets a weight of one, and this one's twice as good or twice as bad, so it gets 0.5 or 2, depending on which one you're scaling against. If they're all equally good, and in the homeworks that's kind of what you do, you just set all the weights to one.

Since absolute values don't matter, this is what allows us to throw a one-half in front of this and avoid a factor of two afterwards. If you're optimizing a cost function and you multiply that cost function by some positive scalar, you don't change the location of the optimum. So it's just a convenience.
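A small numerical sanity check of that claim: sweep a one-parameter family of candidate attitudes (rotations about the third axis) and confirm that scaling the cost by a positive constant leaves the minimizer unchanged. The names and values here are made up for illustration:

```python
import numpy as np

def rot3(th):
    """DCM for a rotation about the third axis by angle th (radians)."""
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def cost(th, scale):
    """Scaled one-observation Wahba cost for the candidate attitude rot3(th)."""
    v_n = np.array([1.0, 0.0, 0.0])
    v_b = rot3(0.7) @ v_n             # "measured" vector; true angle is 0.7 rad
    return scale * 0.5 * np.linalg.norm(v_b - rot3(th) @ v_n) ** 2

angles = np.linspace(0.0, 2.0 * np.pi, 1001)
i1 = np.argmin([cost(th, 1.0) for th in angles])    # original cost
i2 = np.argmin([cost(th, 100.0) for th in angles])  # same cost, scaled by 100
print(i1 == i2, np.isclose(angles[i1], 0.7, atol=0.01))  # True True
```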

But now, this is Wahba's problem as she posed it. And if you Google Wahba's problem, it's amazing: to this day, decades and decades later, people are still solving Wahba's problem in novel and interesting ways. I will show you some classic methods.

There's Davenport's q-method, which was very nice, kind of groundbreaking in what it does, and it enabled other methods. There's the QUEST method, which has flown and is probably the most popular thing flying right now; it was done by Malcolm Shuster, and there are lots of add-ons that were built around QUEST. And then there's something more recent called the OLAE method, which is a different kind of thing; it also does this kind of optimization, but in a different way. Those are the three we're going to focus on, today and in the coming lectures.

But this is it, the least-squares fit. If this mathematics looks weird: everybody has seen this kind of least-squares problem. We've all taken measurements in a lab somewhere where the result is supposed to be a straight line, but you never get a straight line from measurements. Then you have to fit a line to it somehow in a least-squares sense. That's what we're doing with the attitude measurements.

But we have to come up with a way to fit this three-by-three DCM that is embedded in this cost function, right? And that's kind of tricky. So how do we do this? That's what lots of papers look at. I'm going to just start here for a few minutes and then we'll continue this later, but I wanted to at least set up the idea. So embedded in here is this DCM, and tracking the attitude in DCM form here, to project inertial quantities into body frame quantities, makes perfect sense.

Â 8:00

But I have nine coordinates; I would have to estimate nine coordinates to do this. And it's not just estimating nine coordinates, it's estimating nine coordinates subject to how many constraints? Six, thank you. See? Okay, so I'll never forget that anyway.

>> I know. [LAUGH]

>> I'll let it go. But that's what makes it really tedious, right? We have nine coordinates, and it immediately becomes a highly constrained optimization process because of the DCM.
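Counting those constraints explicitly: the orthogonality condition

```latex
[BN]^T [BN] = I_{3\times 3}
```

is a symmetric matrix equation, so it carries six independent scalar constraints (three unit-length conditions on the diagonal, three orthogonality conditions off the diagonal), leaving only 9 - 6 = 3 degrees of freedom among the nine DCM entries.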

So is there an easier way? This work was done many decades ago, before all these modern computing techniques, when Davenport's q-method was published. It's a really beautiful solution, but it has some challenges that you will see.

Instead of going directly for the DCM, we can write the DCM in terms of an infinity of different attitude coordinates. Let's see if there are other coordinates that make our life easier. In Davenport's q-method, he was dealing with quaternions; q is a common name for quaternions. But q is also a common name for the CRPs, so we just have to pay attention to what we're discussing here.

Here, these are quaternions, and I've rewritten the formulas in terms of the betas that we use. But sometimes you'll find formulas in terms of qs, and this could be q0, q1, q2, q3, while some papers use q1, q2, q3 and then q4. The q4 and the q0 are the same thing, the scalar part; it just kind of flips things around a little. So depending on the books and papers, that's something you have to pay attention to. I rewrote this to be consistent with how we've been doing it in this class.
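For reference, with beta_0 as the scalar part (the convention used in this class), the DCM in terms of the Euler parameters takes the standard form:

```latex
[BN] =
\begin{bmatrix}
\beta_0^2+\beta_1^2-\beta_2^2-\beta_3^2 & 2(\beta_1\beta_2+\beta_0\beta_3) & 2(\beta_1\beta_3-\beta_0\beta_2)\\
2(\beta_1\beta_2-\beta_0\beta_3) & \beta_0^2-\beta_1^2+\beta_2^2-\beta_3^2 & 2(\beta_2\beta_3+\beta_0\beta_1)\\
2(\beta_1\beta_3+\beta_0\beta_2) & 2(\beta_2\beta_3-\beta_0\beta_1) & \beta_0^2-\beta_1^2-\beta_2^2+\beta_3^2
\end{bmatrix}
```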

Â 9:34

So if we do this, we have a cost function, and we can start to expand it. We need the norm of this, and the norm of a vector a squared is the same thing as the vector a dotted with itself; in matrix form, a transpose times a gives you the norm squared. That's essentially what we're doing. We don't just have a, we have v in the B frame minus [BN] times v in the N frame, so it's this term times itself, and now we just start multiplying it out. You get a v B transpose with v B. Then there's a v N transpose, the whole thing flips the order, times this, which gives you [BN] transpose [BN]; that is simply identity, and that leaves you with this term. And then there are the cross terms, a v B transpose with this. Here are two terms that have to be rewritten into the same form. Remember, this is a scalar.

Â 10:35

You will get this form and one form that's flipped. But a whole series of matrix products that gives you a scalar answer can always be transposed, because the transpose of a scalar is the same scalar, right? So you can transpose every element and reverse the order, and that's how you end up getting this second term too. I'll let you do that on your own, but it's there.

Now if you look at this, it becomes very simple. One unit vector dotted with itself is just going to be 1, and the other unit vector dotted with itself is also going to be 1. 1 plus 1 is 2, so there's already a factor of 2 here, and there's a half out front. Wow, I love this, right? All those twos vanish, and we just end up with this form. This is just the classic first step of the rewrite: we still have Wahba's problem, but we've rewritten it in a form that will be a little easier to optimize.
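Written out in symbols (with hats marking unit vectors, my shorthand for the quantities on the slide), the cancellation goes:

```latex
J([BN]) = \frac{1}{2}\sum_{k=1}^{N} w_k \left\| {}^{B}\hat{v}_k - [BN]\,{}^{N}\hat{v}_k \right\|^2
        = \frac{1}{2}\sum_{k=1}^{N} w_k \left( \underbrace{{}^{B}\hat{v}_k^{T}\,{}^{B}\hat{v}_k}_{=1}
          - 2\,{}^{B}\hat{v}_k^{T}[BN]\,{}^{N}\hat{v}_k
          + \underbrace{{}^{N}\hat{v}_k^{T}\,{}^{N}\hat{v}_k}_{=1} \right)
        = \sum_{k=1}^{N} w_k \left( 1 - {}^{B}\hat{v}_k^{T}[BN]\,{}^{N}\hat{v}_k \right)
```

so minimizing J is equivalent to maximizing the sum of the weighted terms ${}^{B}\hat{v}_k^{T}[BN]\,{}^{N}\hat{v}_k$.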

And that's where we'll pick up next time. So we'll start from here, okay? Good, I'll see you on Tuesday. Have a good weekend.
