So today let's start out quickly with a review of what we did last time, which was attitude estimation. So Matt, you look a little bit more awake today. Let's see if we can get something on Davenport's Q-method. What can you tell me about that procedure? >> [INAUDIBLE] >> All the harassment paid off, there we go. >> [INAUDIBLE] >> You're asking it as a question? Are you asking me? >> Yeah, I'm asking. >> What is it? >> I know you're minimizing, you're trying to minimize the cost function. >> Yes. >> I can't remember if it's exactly the same as Wahba's problem. >> Nick, since you were kind of sleepy last time, too. >> Yeah, so it's some crazy formulation based on the combination of K's. It's like, I forget exactly how the equation- >> But you're definitely on the right track. It was Wahba's problem. Right, remember estimation theory, you've got some observation. You may have multiple ones. An observation is always a unit direction vector, right? There's a lot of information just in those little scribbles. Now what you want to find is the estimated body attitude that takes the same observation in a known frame, which means you know your environment. You know where you are. So this is what the magnetic field should be doing at this location, right? Times this matrix, it should be equal to this. And to turn it into a cost function that we're trying to minimize, you take the one minus the other. Now this is still a vector. I want a scalar cost function. So we basically take the norm squared of that 3 by 1 matrix, pretty easy. So that's this transposed with itself. You sum over all of them, and this is your cost function in terms of the DCM. This was Wahba's problem. There was one extra term, I left space in here. What else did we put in here, Evan? >> Usually a weight. >> Weights, right. I think I may have used k in the notes, but it doesn't matter. Okay, weights, what can you tell me about the weights? 
>> So they're mostly up to you to choose, they can really- >> Mm-hm, and what's the critical part about them? >> The critical part is that they have to be between 0 and 1. >> No, they can be 10 or 1,000 or 1 million. >> The sum of them [INAUDIBLE] >> You're thinking of a different class. That's estimation theory, where the sum of the probabilities has to add up to one over all the possible cases, yeah. Not the case here. Andre, help him out, weights, what comes to mind? >> [INAUDIBLE] the ratios. >> The ratios, right. So if the sensors are equally good, you can make them 10 and 10. We typically use 1 and 1 just because 1 is such a simple number to have, right? It could be 0.1 and 0.1. As long as they're equal, the math works out. You get the same answer, and you can try this quickly when you do these little tasks and try to solve some of these problems. You can put in weights of 10, weights of 1. You should get back exactly the same answer with Davenport's Q-method or anything that solves Wahba's problem. Yes, Matt? >> So that's because it's [INAUDIBLE] of all this, so you can divide by the biggest one and just do it all on one side? >> Essentially, yeah, but also the weighting just balances out so that it doesn't shift the answer. You're looking for the extreme point of this cost function, typically the minimum in this case, right, and this weight is just going to scale it all. I could take this cost function and multiply it times 50, and it's just going to scale things up. The extremums will happen at exactly the same place. That's the way to think of this, right? That's why, whatever weight you come up with, I could take this cost function and multiply it times any positive scalar at least, and I'm not changing where the minimums will occur. I'm just stretching it out for some reason, that's all. So good, this was Wahba's problem. And yes, Matt, you're right, Davenport's Q-method solves this. Now let's see, Bryan. How does Davenport's Q-method solve this? 
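To make the cost function being discussed concrete, here is a minimal sketch of Wahba's problem in NumPy. The function name and the 1/2 factor are one common convention, not something fixed in the lecture; this is an illustration only:

```python
import numpy as np

def wahba_cost(C, b_vecs, n_vecs, weights):
    """Wahba's problem cost: J = 1/2 * sum_i w_i * ||b_i - C @ n_i||^2.

    b_vecs:  unit direction vectors as observed in the body frame
    n_vecs:  the same directions as known in the reference (inertial) frame
    C:       candidate DCM mapping the reference frame into the body frame
    weights: one positive weight per observation (scale is arbitrary)
    """
    return 0.5 * sum(w * np.linalg.norm(b - C @ n)**2
                     for w, b, n in zip(weights, b_vecs, n_vecs))
```

As discussed above, multiplying all weights by the same positive scalar just stretches the cost; the minimizing attitude stays put.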
Just give me a quick highlight. >> Changes it into an eigenvalue problem. >> True, but do we solve it in terms of the DCM? This cost function is written in terms of the DCM right now. >> No. >> What do we use? What attitude coordinates? >> Euler parameters. >> Euler parameters, right. So q, that comes from quaternion notation, at least that's where it is. q within our class is also sometimes used for CRPs, in fact. In QUEST, you will see CRPs appearing, so just be careful with the notation there. So yeah, so Davenport maps it over, changes this cost function, rearranges it into a nice quadratic form in terms of the quaternion. And there was this 4 by 4 K matrix, as Bryan already said, okay. So with this K matrix, the betas end up being eigenvectors of that. That's where the extremums happen, so we did a constrained optimization. Instead of minimizing this, we were able to rewrite it. There was a separate function, g, that we had to maximize. I'll just refer to your notes on that, right? Which of these, if you have a 4 by 4, we have 4 eigenvectors, 4 eigenvalues. Which one of these four is the optimal answer, Marion? >> The maximum one. >> The maximum one, because we had to maximize this g. This g in the end, you plug it in and it just ended up being lambda. There's a few steps that we had to do there, right? So that's really nice, out of an infinity of possible attitudes, we narrow down to four. And then, because we have to maximize g, we come up with, no, it's just the one that's the biggest. That's the key. And now we can do that. Good, that's Davenport's method. It's a very elegant method, but what was the big challenge with this one? Why don't we typically fly this one, Nathan? >> Because you don't want to solve an eigenvalue problem. >> Exactly, that's at the heart of this one. So we don't want to solve eigenvalues, so therefore QUEST, right? Robert, tell me something about QUEST, without reading too much. Robert, look up. 
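The steps just described can be sketched in a few lines of NumPy, assuming the standard attitude profile matrix B = sum of w_i b_i n_i^T and the usual 4 by 4 K matrix layout from the notes (the function name is mine):

```python
import numpy as np

def davenport_q(b_vecs, n_vecs, weights):
    """Davenport's q-method: optimal quaternion from vector observations."""
    # Attitude profile matrix B = sum_i w_i * b_i n_i^T
    B = sum(w * np.outer(b, n) for w, b, n in zip(weights, b_vecs, n_vecs))
    S = B + B.T
    sigma = np.trace(B)
    Z = np.array([B[1, 2] - B[2, 1],
                  B[2, 0] - B[0, 2],
                  B[0, 1] - B[1, 0]])
    # Assemble the 4x4 K matrix: [[sigma, Z^T], [Z, S - sigma*I]]
    K = np.zeros((4, 4))
    K[0, 0] = sigma
    K[0, 1:] = Z
    K[1:, 0] = Z
    K[1:, 1:] = S - sigma * np.eye(3)
    # The optimal quaternion is the eigenvector of the LARGEST eigenvalue
    vals, vecs = np.linalg.eigh(K)
    return vals[-1], vecs[:, -1]   # (lambda_max, beta)
```

For perfect measurements the maximum eigenvalue comes out equal to the sum of the weights, which is exactly the observation QUEST exploits next.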
More learning happens if you give me what's stuck. >> Less computing power. >> True, that's definitely the direction of this, right? That's why we went after QUEST. So less computing means somehow we have to avoid the eigenvalue, eigenvector evaluation of this kind of a thing. Anybody remember what was the key insight with the QUEST algorithm? >> The root solving method. >> We made it into a root solving method, true. How did we get there? Matt, what was the insight that gets us to the root solving method? >> The sum of the weights is close to that largest eigenvalue. >> Yes, so if you look at the cost function J, we can rewrite it as the sum of the weights, I think, minus this g. And the g, we know, is going to be lambda optimal. So you can write lambda optimal is equal to the sum of the weights minus J. And this J is typically almost zero, hopefully. It's small, right, because hopefully you don't have sensors that are 60 degrees off, but just a fraction. So you should get reasonably close, but we want to get as good as we can do, right? So with that insight, and we saw numerical examples, you could kind of, to first order, say, well, this is just it. Now this would give us not the true answer. The sum of the weights is not equal to the optimal eigenvalue, but it's close. So now we have to solve an iterative problem, really. To find the eigenvalues of a 4 by 4 matrix, you take the matrix minus s times the 4 by 4 identity, and then you take the determinant. So this gives you a 4th order polynomial, right? And we now have to do a root solve on a 4th order polynomial. We have a darn good guess on where that root is. And you saw, it locks in very, very quickly. Good, once we have the eigenvalue, anybody remember, we now were able to solve this K beta equal to lambda beta problem for the attitude. But we didn't do it in terms of quaternions. What coordinates did we have to jump to, Casey, do you remember? >> I'll guess CRPs, but I don't remember. >> Yes, it was CRPs. 
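That root solve can be sketched as a Newton iteration on the characteristic polynomial f(s) = det(K - s*I), seeded at the sum of the weights. This is a bare-bones illustration, not flight code: the derivative is taken numerically just to keep the sketch short, where a real implementation would use the expanded 4th order polynomial and its analytic derivative:

```python
import numpy as np

def quest_lambda(K, weights, tol=1e-12, max_iter=20):
    """Newton-Raphson for the largest eigenvalue of Davenport's K matrix.

    Seeded with s0 = sum of the weights, which is already very close to
    lambda_max when the measurement errors are small.
    """
    f = lambda s: np.linalg.det(K - s * np.eye(4))
    s = float(sum(weights))
    for _ in range(max_iter):
        h = 1e-6  # central-difference step for f'(s)
        fprime = (f(s + h) - f(s - h)) / (2 * h)
        step = f(s) / fprime
        s -= step
        if abs(step) < tol:
            break
    return s
```

Because the initial guess is nearly the root, the iteration locks in within a step or two, and no general 4 by 4 eigenvalue solver is needed.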
We ended up dividing those betas by beta naught. The last three entries divided by beta naught give you, with basically 3D math, a unique inverse, and you come up with CRPs, which is good. But fundamentally CRPs can go singular if the attitude is 180 degrees. And the way you avoid that is you don't just have one body frame. You have a second body frame, typically just twisted 90 degrees about some axis. So if one of them is 180 degrees singular, the other one is fine. And then you just use 90 degree additions and subtractions to always reconstruct, in a non-singular way, what the attitude is. And then you can map back to the quaternions again. So there's ways around that, but man, it's very, very fast. Good, this also solves Wahba's problem. What about OLAE, the optimal linear attitude estimator? Does this one solve Wahba's problem? No, it's a different formulation. So in fact, this one uses the Cayley transform to rewrite it. And the key thing here is you can rewrite the estimation problem as a perfectly linear estimation problem. You still need two observations at least, but I can do n, I can add weights, I have all the other features. But it's rewritten as a different optimization, basically. But also, we're estimating here with the Cayley transform, we're getting q tildes, which are the tilde-matrix version of the CRPs. So we again get CRPs. But using the same trick as with the QUEST method, you could use sequential rotations to have two alternate body frames, and one of them is always going to be non-singular. And you reconstruct a non-singular measure in the end, like a DCM, MRP, quaternion, like that. So that was, in short, OLAE.
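The linearity OLAE exploits can be shown in a short sketch. With the Cayley transform, b - n = [(b+n) tilde] q for each observation pair, so the CRP vector q falls out of a purely linear weighted least-squares problem. The function names are mine, and noise-free unit vectors are assumed just for illustration:

```python
import numpy as np

def skew(v):
    # Tilde (cross-product) matrix: skew(v) @ x == np.cross(v, x)
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def olae(b_vecs, n_vecs, weights):
    """Optimal linear attitude estimator: CRPs from vector observations."""
    # Stack d_i = b_i - n_i and S_i = [(b_i + n_i) tilde], then solve the
    # weighted linear least-squares problem S q = d for the CRP vector q.
    S = np.vstack([skew(b + n) for b, n in zip(b_vecs, n_vecs)])
    d = np.concatenate([b - n for b, n in zip(b_vecs, n_vecs)])
    Wsqrt = np.diag(np.sqrt(np.repeat(np.asarray(weights, float), 3)))
    q, *_ = np.linalg.lstsq(Wsqrt @ S, Wsqrt @ d, rcond=None)
    return q  # classical Rodrigues parameters
```

Note there is no eigenvalue problem and no iteration here at all, which is the whole appeal; the 180 degree CRP singularity is handled with the same sequential-rotation trick as in QUEST.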