We can define the concepts of a global minimum of a function f(x) of many variables, and also of a local minimum, in the same fashion: we take the definitions stated just recently and simply reverse the direction of the inequalities.

Now, when we consider optimization problems, we have to separate two kinds: unconstrained optimization versus constrained optimization. What's the difference? In an unconstrained optimization problem, we deal with a function which we are trying to either maximize or minimize on some set, usually its domain: for x belonging to U, where U is a set, either open or sometimes closed, in R^n. We start our discussion of how these problems are solved from the unconstrained case, which is considered simpler than constrained optimization.

In a constrained optimization problem, once again we have a function, which we call the objective function of the problem, to be maximized or minimized subject to some constraint or constraints. These constraints take the form of either equalities or inequalities, or of mixed constraints combining both types, equality constraints and inequality constraints. For instance, we may start by considering the case where the constraints take the form of equations, that is, equalities.

So, we start with unconstrained problems. We need to state and formalize both necessary and sufficient conditions for a point to be a point of maximum or minimum, and we will be considering the search for local extrema. An extremum is either a maximum or a minimum of a function. What about global extrema? We will consider them, but that will be done later. From now on we consider only the search for local extrema, and we will be using calculus, just as in the single-variable case. So, let us suppose that our function f(x) is defined on some set U, which, as earlier, we take to be open.
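As a small illustration of the unconstrained case, here is a minimal sketch of minimizing a function over all of R^2 with no constraints. The objective f below and the plain gradient-descent loop are my own illustrative choices, not something from the lecture; the true minimizer of f is the point (1, -2).

```python
def f(x1, x2):
    # Illustrative objective: a paraboloid with minimum at (1, -2).
    return (x1 - 1.0) ** 2 + (x2 + 2.0) ** 2

def grad_f(x1, x2):
    # Partial derivatives of f with respect to x1 and x2.
    return 2.0 * (x1 - 1.0), 2.0 * (x2 + 2.0)

def gradient_descent(x1, x2, step=0.1, iters=200):
    # Repeatedly step against the gradient; no constraint set is involved,
    # which is exactly what "unconstrained" means here.
    for _ in range(iters):
        g1, g2 = grad_f(x1, x2)
        x1, x2 = x1 - step * g1, x2 - step * g2
    return x1, x2

x1_min, x2_min = gradient_descent(5.0, 5.0)
```

Starting from (5, 5), the iterates converge to (1, -2), where both partial derivatives vanish.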
But this time the function is continuously differentiable, so it belongs to the class C^1 of functions, and I am ready to state a necessary condition for a point to be a local maximum or a local minimum, in the form of a theorem. Let x_* be a point of local maximum or local minimum of f in the domain U. Then the first-order partial derivatives of the function, taken at this point, equal zero: ∂f/∂x_i (x_*) = 0 for all i from 1 up to n.

This theorem can be easily proven, so let's prove it. I'll provide the proof in the case n = 2, the two-dimensional case. Let me draw a graph. Here we have a function whose domain U is a set in the Cartesian plane. I first decided to draw a rectangle, but I can replace this rectangle with any open set whatsoever. Suppose x_* is a point of local maximum, to make it concrete. Then, if we fix x_2 at the value x_2* and change x_1 within some neighborhood of x_1*, we obtain a function of a single variable only: the function f(x_1, x_2*), with x_2* fixed. According to the theorem from single-variable calculus, its first-order derivative with respect to x_1 must turn to zero at x_1*, and this is exactly the condition that the partial derivative of f with respect to x_1 vanishes at x_*. The same is true if we instead fix x_1 at the value x_1* and change x_2 only, so the two slices form a kind of cross through x_*, and the same holds for the derivative with respect to x_2. That completes the proof of the theorem.
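The slicing argument in the proof can be checked numerically. Assuming the illustrative function f(x1, x2) = -(x1^2 + x2^2), which has a local (in fact global) maximum at x_* = (0, 0), each partial derivative is just the ordinary derivative of a one-variable slice, and both should vanish at the maximum. The central-difference helper below is my own sketch, not part of the lecture.

```python
def f(x1, x2):
    # Illustrative function with a local maximum at (0, 0).
    return -(x1 ** 2 + x2 ** 2)

def partial(f, x1, x2, i, h=1e-6):
    # Central-difference approximation of the i-th partial derivative:
    # each slice fixes one variable and differentiates in the other,
    # mirroring the proof's reduction to single-variable calculus.
    if i == 1:
        return (f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h)
    return (f(x1, x2 + h) - f(x1, x2 - h)) / (2 * h)

d1 = partial(f, 0.0, 0.0, 1)  # slice with x2 fixed at x2* = 0
d2 = partial(f, 0.0, 0.0, 2)  # slice with x1 fixed at x1* = 0
```

Both d1 and d2 come out (numerically) zero, as the first-order necessary condition requires.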