We still have one last case left. Let's consider the following situation: Case 3, complex eigenvalues. Again, we are considering the system X' = AX, where A is an n by n real constant matrix. Even though A is a real constant matrix, A may have complex eigenvalues, because eigenvalues are solutions of the characteristic equation of the matrix A, and the characteristic equation of A is a polynomial equation of degree n. Even though that polynomial has real coefficients, its roots may be complex. That means A may have complex eigenvalues. Now, we assume that A has a complex eigenvalue, say Lambda_1 = Alpha + i*Beta, where Beta is positive. Because the roots of a real polynomial equation must occur in complex conjugate pairs, if Lambda_1 is an eigenvalue, then its conjugate, Lambda_1 bar = Alpha - i*Beta, is also an eigenvalue of A. Moreover, if the vector K_1 is an eigenvector for the complex eigenvalue Lambda_1 = Alpha + i*Beta, then that eigenvector is also complex, and its conjugate K_1 bar is an eigenvector for the eigenvalue Lambda_1 bar = Alpha - i*Beta. In this case, then, we have two linearly independent solutions: X_1 = K_1 e^(Lambda_1 t) and its conjugate X_1 bar = K_1 bar e^(Lambda_1 bar t). They are two linearly independent solutions of the original problem, X' = AX. But notice the following thing. The exponents Lambda_1 and Lambda_1 bar are complex numbers, and the corresponding eigenvectors K_1 and K_1 bar are complex constant vectors, so they are not just any two linearly independent solutions. Let me say one more thing precisely: they are two linearly independent complex-valued solutions of the problem. But in our original problem, the coefficient matrix A is real.
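As a quick illustration of this conjugate-pair fact (not part of the lecture; the rotation-like 2 by 2 matrix below is my own example), here is a minimal pure-Python sketch: for a real 2 by 2 matrix, the characteristic equation is a real quadratic, so solving it with the quadratic formula either gives two real roots or a complex conjugate pair.

```python
import cmath

# Characteristic polynomial of a real 2x2 matrix [[a, b], [c, d]]:
# lambda^2 - (a + d)*lambda + (a*d - b*c) = 0.
def eigenvalues_2x2(a, b, c, d):
    trace, det = a + d, a * d - b * c
    disc = cmath.sqrt(trace**2 - 4 * det)   # may be purely imaginary
    return (trace + disc) / 2, (trace - disc) / 2

# A real matrix whose characteristic roots are complex:
lam1, lam2 = eigenvalues_2x2(0.0, -1.0, 1.0, 0.0)
print(lam1, lam2)                   # 1j -1j
print(lam1 == lam2.conjugate())     # True: the roots form a conjugate pair
```

The point is only that real coefficients force the roots to come in conjugate pairs, exactly as the lecture states for the degree-n case.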
A is a real n by n matrix, and the unknown vector X is a real vector. We prefer to have real-valued solutions instead of complex-valued solutions. So let's look at the following: (X_1 + X_1 bar)/2, where both X_1 and X_1 bar are solutions of this homogeneous system. What is it? It is the real part of X_1. The left-hand side is a linear combination of X_1 and X_1 bar, and both are solutions of the homogeneous problem, so by the superposition principle, this is also a solution. Similarly, (X_1 - X_1 bar)/(2i) is equal to the imaginary part of X_1, and again, by the superposition principle, this is also a solution. Furthermore, because I am taking the real part of a complex-valued vector in the first case and the imaginary part of that complex vector in the second, both of them are real-valued vectors. In other words, these two are real-valued solutions of the given problem, and they are linearly independent, so we have two linearly independent real-valued solutions. That's what we actually want: from a real problem, a system of equations with real coefficients, we try to get real-valued solutions, and they are given in this two-fold way, the real part of X_1 and the imaginary part of X_1. How do we actually find the real and imaginary parts of the given complex-valued solution? It's easy to pick out the real and imaginary parts once we write out the complex-valued eigenvector: K_1 = B_1 + i*C_1, where B_1 is the real part of K_1 and C_1 is the imaginary part of K_1. Expressing the eigenvector in that way, e^(Lambda_1 t) = e^((Alpha + i*Beta)t) = e^(Alpha t) e^(i*Beta t), and e^(i*Beta t) = cos(Beta t) + i sin(Beta t) by Euler's identity, so X_1 = (B_1 + i*C_1) e^(Alpha t) (cos(Beta t) + i sin(Beta t)).
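The two combinations above can be checked numerically. This is a small sketch of my own (the sample value of t and the exponent are arbitrary choices, not from the lecture): for any complex number, and hence componentwise for any complex vector, (z + z bar)/2 recovers the real part and (z - z bar)/(2i) recovers the imaginary part.

```python
import cmath

# Sample complex value: e^(Lambda_1 * t) with Lambda_1 = -2 + 3i at t = 0.5
z = cmath.exp(complex(-2, 3) * 0.5)

# The two superpositions from the lecture, applied componentwise:
real_part = (z + z.conjugate()) / 2      # should equal Re z
imag_part = (z - z.conjugate()) / (2j)   # should equal Im z

print(real_part.imag == 0.0)                    # True: purely real
print(abs(real_part.real - z.real) < 1e-12)     # True
print(abs(imag_part.real - z.imag) < 1e-12)     # True
```

Since each combination of the two conjugate solutions is itself a solution by superposition, this confirms that the real and imaginary parts of X_1 are real-valued solutions.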
Just looking at this form, it is easy to pick out the real part and the imaginary part of X_1. For the real part, e^(Alpha t) is a common factor, and we get the vector B_1 times cos(Beta t); another real term comes from i times i, which equals -1, giving -C_1 sin(Beta t). So the real part is Re X_1 = e^(Alpha t) (B_1 cos(Beta t) - C_1 sin(Beta t)). Then, plus i times the imaginary part: Im X_1 = e^(Alpha t) (B_1 sin(Beta t) + C_1 cos(Beta t)). I simply picked out the real part and the imaginary part of this complex vector. Re X_1 is one solution, and Im X_1 is another solution, linearly independent from the first. Let me explain this situation with a concrete example. Consider the following initial value problem: X' = AX, where A has rows (2, 5, 1), (-5, -6, 4), (0, 0, 2), with initial condition X(0) = (10, -8, 0). The eigenvalues of the coefficient matrix A are Lambda_1 = -2 + 3i, Lambda_2 = Lambda_1 bar = -2 - 3i, and Lambda_3 = 2. We have three distinct eigenvalues, of which the first two are complex. Corresponding eigenvectors: first, K_1 = (5, -4, 0) + i(0, 3, 0). The second eigenvector is just the conjugate of K_1. The last one is K_3 = (28, -5, 25). I recommend you confirm these computations. In other words, we have three linearly independent solutions. Let's pick out the real solutions. First, X_1 = Re(K_1 e^(Lambda_1 t)) = e^(-2t) [ (5, -4, 0) cos(3t) - (0, 3, 0) sin(3t) ].
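Following the lecture's recommendation to confirm the computations, here is a small pure-Python check of the eigenpairs in the example (the helper `matvec` is my own; the matrix and vectors are exactly those from the example): it verifies A*K = Lambda*K for K_1 with Lambda_1 = -2 + 3i, and for K_3 with Lambda_3 = 2.

```python
# Coefficient matrix of the example initial value problem
A = [[2, 5, 1],
     [-5, -6, 4],
     [0, 0, 2]]

def matvec(M, v):
    """Matrix-vector product; works with complex entries too."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

lam1 = complex(-2, 3)                    # Lambda_1 = -2 + 3i
k1 = [complex(5, 0), complex(-4, 3), 0]  # K_1 = (5, -4, 0) + i(0, 3, 0)
k3 = [28, -5, 25]                        # K_3, eigenvector for Lambda_3 = 2

print(matvec(A, k1) == [lam1 * x for x in k1])   # True
print(matvec(A, k3) == [2 * x for x in k3])      # True
```

Both equalities are exact here because every entry is a small integer (or Gaussian integer), so no floating-point tolerance is needed.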
That's one solution, and the second solution is X_2 = Im(K_1 e^(Lambda_1 t)) = e^(-2t) [ (5, -4, 0) sin(3t) + (0, 3, 0) cos(3t) ]. Finally, corresponding to Lambda_3 = 2, we have a third solution, X_3 = (28, -5, 25) e^(2t). So we have three linearly independent solutions. Taking a linear combination of them, we get the general solution X = C_1 X_1 + C_2 X_2 + C_3 X_3, that is, X = C_1 e^(-2t) (5 cos 3t, -4 cos 3t - 3 sin 3t, 0) + C_2 e^(-2t) (5 sin 3t, -4 sin 3t + 3 cos 3t, 0) + C_3 (28, -5, 25) e^(2t). This is the general solution. Finally, using the initial condition, determine C_1, C_2, and C_3: plug t = 0 into this expression, set it equal to (10, -8, 0), and solve for C_1, C_2, C_3. Skipping the details of the computation, I claim that this initial condition gives C_1 = 2 and C_2 = C_3 = 0. So in the general solution, C_2 = 0 means we don't need the second term, and C_3 = 0 means we don't need the third one either. C_1 = 2, so two times X_1, that is X = 2 e^(-2t) [ (5, -4, 0) cos 3t - (0, 3, 0) sin 3t ], is the solution of the given initial value problem.
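To back up the claim that C_1 = 2, C_2 = C_3 = 0 (the details were skipped in the lecture), the final answer can be checked directly. The sketch below is my own verification (the helper name `X` and the sample point t = 0.7 are arbitrary choices): it evaluates X(t) = 2*X_1(t) = e^(-2t) (10 cos 3t, -8 cos 3t - 6 sin 3t, 0), confirms the initial condition, and checks X' = AX numerically with a central difference.

```python
import math

# Claimed solution of the IVP: X(t) = 2 * X_1(t)
def X(t):
    e = math.exp(-2 * t)
    return [10 * e * math.cos(3 * t),
            e * (-8 * math.cos(3 * t) - 6 * math.sin(3 * t)),
            0.0]

A = [[2, 5, 1], [-5, -6, 4], [0, 0, 2]]

# Initial condition X(0) = (10, -8, 0):
print(X(0.0))    # [10.0, -8.0, 0.0]

# Check X'(t) = A X(t) at a sample point via a central difference:
t, h = 0.7, 1e-6
deriv = [(a - b) / (2 * h) for a, b in zip(X(t + h), X(t - h))]
Ax = [sum(A[i][j] * X(t)[j] for j in range(3)) for i in range(3)]
print(all(abs(d - v) < 1e-4 for d, v in zip(deriv, Ax)))   # True
```

The finite-difference check is only a numerical sanity test, but together with the exact initial condition it confirms the solution of the initial value problem.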