We've said several times now that life is much easier if we can construct an orthonormal basis vector set, but we haven't talked about how to do it. So in this video we'll do that, starting from the assumption that we already have some linearly independent vectors that span the space we're interested in. So let's say I have a whole group of vectors v1, v2, all the way up to vn, and there are enough of them that they span the space. Let's sketch them out: a v1 here, a v2 over here, a v3 down there somewhere, and let's assume they're linearly independent. If you want to check linear independence, you can write the vectors down as the columns of a matrix and check that its determinant isn't zero; if they were linearly dependent, the determinant would be zero. But they aren't orthogonal to each other or of unit length, and my life would be easier if I could construct some orthonormal basis from them. There's a process for doing that, called the Gram-Schmidt process, which is what we're going to look at now. Take, arbitrarily, the first vector in my set, v1. In this first step v1 gets to survive unscathed, so we just normalize it: my eventual first basis vector e1 is going to be equal to v1 divided by its length, so e1 is just a normalized version of v1. I can now think of v2 as being composed of two things: a component that's in the direction of e1, plus a component that's perpendicular to e1. The component in the direction of e1 I can find by taking the vector projection of v2 onto e1, which starts from the dot product v2 · e1.
And if I want to get that actually as a vector, I multiply by e1; strictly I should also divide by the length of e1, but e1 is of unit length, so forget it. If I take that projection off of v2, I'm left with this perpendicular guy, which let's call u2. So rearranging, u2 = v2 − (v2 · e1) e1. And if I normalize u2, dividing it by its length, I'll have a unit vector which is orthogonal to e1. So once I've moved it over, that's e2: another unit-length vector, at 90 degrees to e1. That's the first part of constructing an orthonormal basis. Now, because the vectors are linearly independent, my third vector v3 isn't a linear combination of v1 and v2, so v3 isn't in the plane defined by v1 and v2, which means it's not in the plane of e1 and e2 either. I can project v3 down onto the plane of e1 and e2, and that projection will be some vector in the plane composed of e1s and e2s. So I can write down u3 = v3 − (v3 · e1) e1 − (v3 · e2) e2: I'm subtracting the component of v3 that's made up of e1s and the component that's made up of e2s, and all that's left is this perpendicular guy u3, which is perpendicular to the plane. This is some funny 3D space, so the diagram gets quite messy. Then if I normalize u3, dividing by its length, I'll have a unit vector e3 which is normal to the plane, normal to the other two. So now I've got an orthonormal basis e1, e2, e3, and I can keep on going through all the vn's until I've got enough orthonormal basis vectors to complete the set and span the space that I originally had.
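The steps above can be sketched in code. Here's a minimal NumPy version: for each vector in turn, subtract off its components along the basis vectors built so far, then normalize what's left. (The function name, the tolerance parameter, and the example vectors are my own choices for illustration; the transcript itself doesn't give code.)

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-10):
    """Turn linearly independent vectors into an orthonormal basis.

    For each v, subtract its component along every basis vector e
    found so far (u = u - (u . e) e), then normalize the remainder.
    A remainder of (near-)zero length means the input vectors were
    not actually linearly independent.
    """
    basis = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for e in basis:
            u = u - np.dot(u, e) * e   # remove the component along e
        norm = np.linalg.norm(u)
        if norm < tol:
            raise ValueError("vectors are not linearly independent")
        basis.append(u / norm)
    return basis

# Example: three non-orthogonal, non-unit vectors in R^3
vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
es = gram_schmidt(vs)
```

One small note: subtracting from the running remainder `u` rather than from the original `v` (so-called modified Gram-Schmidt) is algebraically the same as the formula in the lecture but behaves better numerically.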
But I've gone from a bunch of awkward, non-orthogonal, non-unit vectors to a bunch of nice orthogonal unit vectors: an orthonormal basis set. So that's how I construct an orthonormal basis, and it makes my life easy because my transformation matrices are then nice: I can use transposes in place of inverses, and I can use dot product projections for the transformations, all those things that make my life very much nicer whenever I'm doing transformations, or rotations, or whatever it is I want to do with my vectors. This is a really nice process. What we'll do next is apply this to an example, see how it rolls, and then use it to do a transformation.
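To see why an orthonormal basis makes transformations so pleasant, here's a small sketch. If we stack orthonormal basis vectors as the columns of a matrix Q, then Q transposed times Q is the identity, so the transpose really is the inverse, and changing basis reduces to dot-product projections. (The particular Q below, a 45-degree rotation of the standard basis about the z-axis, is just an illustrative example I've chosen; any orthonormal set would do.)

```python
import numpy as np

# An example orthonormal basis stacked as the columns of Q:
# the standard basis rotated 45 degrees about the z-axis.
c = 1.0 / np.sqrt(2.0)
Q = np.array([[  c,  -c, 0.0],
              [  c,   c, 0.0],
              [0.0, 0.0, 1.0]])

# With orthonormal columns, Q^T Q is the identity,
# so the transpose acts as the inverse.
identity_check = Q.T @ Q

# Changing basis is then just dot-product projections:
r = np.array([3.0, 4.0, 5.0])
r_in_Q = Q.T @ r          # coordinates of r in the Q basis
back = Q @ r_in_Q         # transforming back recovers r
```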