And lastly we had, this is very important, the eigenvalues

were invariant to a change of basis.

So you can pick any basis you want to represent A in.

And your eigenvalues will always be the same.

Because, remember, when we represent A in a new basis, what we're doing

is finding the transformation of A that preserves its action.

And its action can be defined by what it does to the eigenvectors.

Namely, multiplying them by their eigenvalues.

So since the action of A is the same regardless of what basis it's in,

so will the eigenvalues be.

So hopefully they're not too scary yet.

If you want to check if something is an eigenvector, that's pretty easy.

So for example, let's say A = (2 3; 1 1).

And we want to check if the vector v = (1 1) is

an eigenvector of A. Well, what do we say?

So we can write

out Av = (2 3; 1 1)(1 1), and that's equal to (5 2).

And (5 2) is clearly not a multiple of (1 1).

So v is not an eigenvector.
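As a quick sketch (in Python with numpy, not part of the lecture), the check above might look like:

```python
import numpy as np

# The matrix and candidate vector from the example above.
A = np.array([[2.0, 3.0],
              [1.0, 1.0]])
v = np.array([1.0, 1.0])

Av = A @ v  # gives (5 2)

# v is an eigenvector exactly when Av is parallel to v, i.e. the
# 2x2 determinant of the pair [v, Av] is (numerically) zero.
is_eigenvector = np.isclose(v[0] * Av[1] - v[1] * Av[0], 0.0)
print(Av, is_eigenvector)  # [5. 2.] False
```

Since (5 2) is not parallel to (1 1), the determinant test fails and v is rejected, matching the hand calculation.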

I'm not going to go through all of the details about how to

calculate eigenvectors.

These videos are more to explain the intuition behind these concepts.

But the starting point is to write out your eigenvector equation.

So A times e1 = lambda1 e1.

This can be rearranged such that (A minus lambda1 times

the identity) times e1 = 0.

And by solving this equation for lambda 1 and

e1 you get your eigenvector and your eigenvalue.

And in fact what you'll find is that this equation has several solutions.

Specifically, it will have as many eigenvalues as the dimensionality of

your vectors, counted with multiplicity.

So if A is a 2x2 matrix and

it operates on two-dimensional vectors, you will have two eigenvalues, and typically two eigenvectors.
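In practice you rarely solve the eigenvector equation by hand; a minimal sketch using numpy's `linalg.eig` (an assumption of mine, not something the lecture uses) on the earlier 2x2 example:

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [1.0, 1.0]])

# Solve A e = lambda e: eig returns the eigenvalues and the
# eigenvectors (stored as the columns of the second output).
eigvals, eigvecs = np.linalg.eig(A)

# Check that each pair really satisfies the eigenvector equation.
for lam, e in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ e, lam * e)

print(eigvals)  # two eigenvalues, one per dimension
```

For this A the characteristic polynomial is lambda^2 - 3 lambda - 1, so the two eigenvalues are (3 ± sqrt(13)) / 2.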

So lastly, here are a few examples of when eigenvectors are useful.

One is that they can help you decouple a set of differential equations.

Which is what they're used for in lecture two this week, when we're

finding the eigenvalues of the recurrent connection matrix.
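A hedged sketch of why the eigenbasis decouples a linear system (this numerical illustration is mine, not from the lecture): in the basis of eigenvectors V, the matrix A becomes diagonal, so dx/dt = A x turns into independent one-dimensional equations dy_i/dt = lambda_i y_i with y = V^-1 x.

```python
import numpy as np

A = np.array([[2.0, 3.0],
              [1.0, 1.0]])

eigvals, V = np.linalg.eig(A)

# Changing to the eigenvector basis diagonalizes A, so each
# component of y = V^-1 x evolves on its own.
D = np.linalg.inv(V) @ A @ V
print(np.round(D, 6))  # off-diagonal entries are (numerically) zero
```

The diagonal entries of D are exactly the eigenvalues, which is why the eigenvalues of the recurrent connection matrix tell you how each decoupled mode grows or decays.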

It can also be used to decorrelate a random vector.

And so this is what we did when we were doing PCA.

In PCA what we had was a bunch

of random vectors drawn from some distribution.
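To make the decorrelation point concrete, here is a minimal sketch (my own illustration, with an assumed correlated Gaussian distribution): projecting the random vectors onto the eigenvectors of their covariance matrix makes the projected components uncorrelated.

```python
import numpy as np

rng = np.random.default_rng(0)

# A bunch of correlated 2-D random vectors drawn from some distribution.
cov = np.array([[3.0, 1.2],
                [1.2, 1.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=5000)

# Eigenvectors of the sample covariance matrix (the principal components).
C = np.cov(X.T)
eigvals, V = np.linalg.eigh(C)  # eigh, since C is symmetric

# Projecting onto the eigenvectors decorrelates the components.
Y = X @ V
C_new = np.cov(Y.T)
print(np.round(C_new, 3))  # off-diagonal (correlation) terms are ~0
```

The new covariance matrix is diagonal, with the eigenvalues on the diagonal giving the variance captured by each principal component.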