Hello. This is our third session on applications of derivatives: the gradient and the general equations of nature. We have seen that the gradient, written with the del operator, carries the derivative of a single-variable function over to functions of several variables; it is a vector operator, a multivariate generalization of d/dx. There are many benefits to this. One of our key goals is to examine, at least in outline, how the equations of nature are expressed in terms of differential equations and partial derivatives. Solving those equations is a separate expertise, and in later classes you will see much more of it; but it is useful to meet the equations in advance, and many of you already work with such equations in other courses. I also see a psychological benefit: of course the process of differentiation, of taking partial derivatives, is not an end in itself; it is one of the most important tools for writing down the equations of nature, and therefore of great importance as a means as well. Now let us look at these operations. The gradient was defined as follows. Start from the components: del is the vector operator with components ∂/∂x and ∂/∂y. Written with unit vectors, it is i times ∂/∂x plus j times ∂/∂y; the two expressions are equivalent. When this operator acts on a numerical function f, a vector with components ∂f/∂x and ∂f/∂y is produced. Here we are working with two variables and Cartesian coordinates. Since the gradient is a vector operator, we can imagine applying vector operations to it. The two important vector operations are the inner product and the vector product. If we take the inner product of del with a vector u with components (u, v), we proceed component by component: the first component times the first component gives us ∂u/∂x, and the second component times the second component gives us ∂v/∂y.
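This inner product can be checked symbolically in a few lines; the sketch below uses Python's sympy, and the field (u, v) = (x²y, y·sin x) is an illustrative choice of my own, not one from the lecture.

```python
# Divergence in two variables: the inner product of del with a field (u, v).
import sympy as sp

x, y = sp.symbols('x y')

# An arbitrary illustrative field, not taken from the lecture.
u = x**2 * y
v = y * sp.sin(x)

# del . (u, v) = du/dx + dv/dy
div = sp.diff(u, x) + sp.diff(v, y)
print(div)  # 2*x*y + sin(x)
```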
So the inner product of del with a vector gives a significant quantity as a result: the divergence. This operation pays off in nature; many equations that we do not know yet, but will see a little later, are defined with it. The other operation is the curl; its name refers to rotation. In nature there are rotation events everywhere: the magnetic loops we see on the surface of the sun, the vortices we see on earth, or the swirl that forms when you turn on a tap. The operator that best expresses them is what we call the curl operator, and we can say this even without yet knowing exactly what it does. Now, since del is a vector, let us carry out the vector operations with it. We have taken the inner product; there is also the vector product. We compute it as we always compute a vector product. We write i, j, k in the first row. In the second row we write the components of the first vector, ∂/∂x and ∂/∂y; we know these in the case of two variables, and in a short time we will generalize to three. In the last row we write the second vector, whose components u and v are functions of x and y. To evaluate this, we calculate the determinant. For the i component, striking out the first row and first column: ∂/∂y acting on zero, minus zero times v, gives zero. For the j component, likewise, ∂/∂x acting on zero, minus zero times u, gives zero. But for the k component, striking out the first row and the third column, the remaining two-by-two determinant gives ∂/∂x acting on v, minus ∂/∂y acting on u. In short notation: the partial derivative of v with respect to x minus the partial derivative of u with respect to y. As you can see, in two variables the curl has a single component, in the k direction. There is one more such quantity: the Laplacian.
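The single k-component of the two-variable curl can be checked the same way; the field (u, v) = (−y, x), a rigid rotation about the origin, is again my own illustrative choice.

```python
# k-component of the 2D curl: dv/dx - du/dy.
import sympy as sp

x, y = sp.symbols('x y')

# A rigid rotation about the origin (illustrative choice).
u = -y
v = x

curl_k = sp.diff(v, x) - sp.diff(u, y)
print(curl_k)  # 2: a rigid rotation has constant curl
```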
For the Laplacian, instead of taking the inner product of del with an arbitrary vector, as in the divergence, we take it with a vector that is itself a gradient. The components of that gradient are of course ∂f/∂x and ∂f/∂y. So u = ∂f/∂x; taking one more derivative with respect to x gives ∂²f/∂x². And v = ∂f/∂y; taking one more derivative with respect to y gives ∂²f/∂y². In symbols: f's gradient is ∇f, and we take the inner product of ∇ with ∇f. As you can see, the nabla (or del) symbol appears twice, so we write it as ∇², as if squared. This is only a token, because the square of a vector is not defined; but we understand it as a symbol. It is called the Laplacian: the second derivative of the function with respect to x plus the second derivative with respect to y. All of this started from very simple logic: we have a vector operator, so what vector operations can we do with it? Here is what can be done. We can also form triple products of vectors. Recall, for example, that the inner product of a vector with the vector product of itself and another vector is zero: a · (a × b) = 0. Why? Because a × b has length equal to the product of the two lengths times the sine of the angle between them, and it is perpendicular to the plane of a and b; taking the inner product of a with a vector perpendicular to it gives zero. Another interpretation would be the following: we have seen that the triple product of three vectors gives the volume they span; but when a vector appears twice in the triple product, no volume is formed, because a volume must be formed from three different directions, so here the volume is zero. We can understand it in this way as well. There is also the triple vector product, which you will see a little later.
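The Laplacian as del dotted with the gradient can also be computed in a few lines; the test function e^x · sin y is an illustrative choice of mine (it happens to come out harmonic).

```python
# Laplacian as the inner product of del with the gradient of f.
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x) * sp.sin(y)  # illustrative test function

fx, fy = sp.diff(f, x), sp.diff(f, y)   # gradient components
lap = sp.diff(fx, x) + sp.diff(fy, y)   # del . grad f
print(sp.simplify(lap))  # 0: this particular f is harmonic
```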
With the gradient vector operator, the divergence and the curl acquire characteristic features, and these identities are very easy to prove. First consider the curl of a gradient: it is a vector product of del with the vector ∇f, whose components are fx and fy. Let us form the vector product. It can be expanded in two ways, but the easiest is the following: we write i, j, k in the first row; the components of the first vector, ∂/∂x and ∂/∂y, in the second row; and fx and fy in the third row. Since we are working in two variables for now, the remaining entries are zero; a little later we will generalize to three variables. Calculating the determinant by opening along the first row: the i component is zero, since each product in it contains a zero factor, and the same holds for the j component. When we come to the k component, we get the partial derivative with respect to x of the partial derivative with respect to y, that is, a mixed derivative, minus the partial derivative with respect to y of the partial derivative with respect to x: again a mixed second-order derivative, taken in the other order. But we know that for a function with continuous second derivatives, the order of differentiation does not matter: fxy and fyx are equal, so the k component is zero as well. So the curl of a gradient is the zero vector. This resembles the vector identity a × a = 0: the gradient operator appears twice here, just as a appears twice there. Second, take the divergence of the curl of any vector. The curl in two dimensions, calculated on the previous page, is k times (∂v/∂x − ∂u/∂y): it has only a k component.
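The mixed-derivative argument can be replayed symbolically for a generic smooth f; sympy treats the mixed partials as equal, exactly as assumed in the lecture.

```python
# Curl of a gradient vanishes because mixed partials are equal.
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x, y)  # a generic smooth function

fx, fy = sp.diff(f, x), sp.diff(f, y)
curl_k = sp.diff(fy, x) - sp.diff(fx, y)  # k-component of curl(grad f)
print(curl_k)  # 0
```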
To take its divergence, we write del and take the inner product with this vector k(∂v/∂x − ∂u/∂y). Here too everything becomes zero, but for a different reason: we multiply i by k, and since they are perpendicular to each other the inner product is zero — the inner product of two vectors is the product of their lengths times the cosine of the angle between them, and i and k are orthogonal, so the cosine of ninety degrees is zero. Similarly, the inner product of j with k is zero. So the divergence of the curl of any vector is zero. When we come to the third identity, it is something frequently encountered, in particular in the theory of electromagnetism. We saw the triple vector product; if you have forgotten it, you can look at the summary at the beginning: a × (b × c) is a vector in the direction of b multiplied by the number a · c, minus a vector in the direction of c multiplied by the number a · b. I do not want to go too far into the reason this is so, but see what happens if b is the same as a: there is a simplification, a × (a × c) = a(a · c) − c(a · a), where a · a is the square of the length of a, a feature of the length. Now think of a as the gradient vector operator: let us take the curl of a curl. We have calculated the curl; it has only a k component. So again we write i, j, k in the first row, the components of the first vector, ∂/∂x and ∂/∂y, in the second row, and in the third row only the k component, ∂v/∂x − ∂u/∂y, because the other components are zero. Working this out and rearranging a bit, the identity emerges: the curl of the curl equals the gradient of the divergence minus the Laplacian. These are very useful operations, for example in the equations of electromagnetism and in fluid mechanics. There are further such features: for example, you can multiply a numerical function by a gradient — this of course gives a vector — and then take its divergence.
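The curl-of-a-curl identity can be verified for generic planar fields; the sketch below checks with sympy that the in-plane components of curl(curl u) agree with grad(div u) minus the Laplacian applied componentwise.

```python
# Verify curl(curl u) = grad(div u) - Laplacian(u) in two variables.
import sympy as sp

x, y = sp.symbols('x y')
u = sp.Function('u')(x, y)
v = sp.Function('v')(x, y)

w = sp.diff(v, x) - sp.diff(u, y)       # curl(u, v) = w k
lhs = [sp.diff(w, y), -sp.diff(w, x)]   # in-plane part of curl(w k)

div_u = sp.diff(u, x) + sp.diff(v, y)
lap = lambda g: sp.diff(g, x, 2) + sp.diff(g, y, 2)
rhs = [sp.diff(div_u, x) - lap(u), sp.diff(div_u, y) - lap(v)]

diffs = [sp.simplify(a - b) for a, b in zip(lhs, rhs)]
print(diffs)  # [0, 0]
```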
It is likewise possible to multiply a numerical function by a gradient and take the curl. All of this works because these vector operations come from del being a vector: a vector multiplied by a number gives a vector again, so it is meaningful to take its divergence — the inner product of two vectors — or its curl — the vector product of two vectors. I have left these two identities as homework for you to show; they are important identities, but very simple features to derive. Now, to consolidate a little, let us solve some problems. Suppose the function f(x, y) = x³ − 3xy² is given, and these three things are asked: calculate the gradient of f, calculate its Laplacian, and calculate the curl of the gradient. Let us see what this means. For the gradient, take the derivative of f with respect to x, take the derivative of f with respect to y, and combine the two components into a vector. See, here we find a vector: fx is the first component, fy the second. For the Laplacian, I need to take one more derivative with respect to x of the x partial derivative, and one more derivative with respect to y of the y partial derivative, and collect them altogether. The function here was not chosen at random: it is selected so that, as we will see, the result is zero — though that is not something you need explained here; what is important is carrying out the process. See: taking one more x derivative of 3x² − 3y² gives 6x; taking one more y derivative of −6xy gives −6x; their sum turns out to be zero. For the curl, again, we know the procedure we need to perform: we write i, j, k in the first row, the del components in the following row, and in the last, third row the components of the gradient we have just calculated — or, considering the gradient as a vector with components fx and fy, we bring this component and that component and put them in their places.
When we take account of these — something always encountered with two-dimensional equations in vector form — the i and j components will be zero: ∂/∂y times zero minus zero times one component, and ∂/∂x times zero minus zero times the other. But when we take the k component: see, the partial derivative of −6xy with respect to x gives −6y, and here comes, with a minus sign, the partial derivative of 3x² − 3y² with respect to y, which is also −6y; the difference turns out to be zero. We could already have known this without calculation, because of a previous identity: the curl of a gradient is zero. This calculation confirms the first identity; it could not have found anything else. I also want to give an assignment. In the previous example we started with a numerical function; here a vector u is given, another vector v is given, and a further vector w is shown. We want you to calculate their divergence and curl. For the divergence, see, it is the derivative of the first component with respect to x plus the derivative of the second component with respect to y; the curl you can compute just as in the worked example. Carry out the same operations with the second, third, and fourth vectors; the answers are given here, so that you can show that your calculations produce these answers. Now, I have said since the beginning that the equations of nature are expressed in partial derivatives. Let us start with the bivariate case, because it will generalize to three variables: in nature there is not only the x dimension; there are the x and y dimensions, there is also a z dimension, and the time dimension. Now, if x denotes the position variable and t the time variable, the equation known as the wave equation is ∂²u/∂x² − (1/c²) ∂²u/∂t² = 0, where the coefficient c shows the speed of the waves; if you do not like carrying a physical quantity, you can set it to one. As you can see, there is a second derivative with respect to x and a second derivative with respect to the second variable, and in between there is a minus sign.
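The worked example can be checked end to end; assuming, as the numbers in the lecture indicate, the function f(x, y) = x³ − 3xy²:

```python
# Worked example: gradient, Laplacian, and curl of the gradient.
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 - 3*x*y**2

fx, fy = sp.diff(f, x), sp.diff(f, y)
print(fx, fy)   # 3*x**2 - 3*y**2 -6*x*y

lap = sp.diff(f, x, 2) + sp.diff(f, y, 2)
print(lap)      # 0: 6x + (-6x), the function is harmonic

curl_k = sp.diff(fy, x) - sp.diff(fx, y)
print(curl_k)   # 0: -6y - (-6y), the curl of a gradient
```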
Immediately following this, see the second equation: again there is a physical coefficient, alpha, and a second derivative with respect to x; but the other term is the first derivative with respect to t, so we have α ∂²u/∂x² = ∂u/∂t. This constitutes a second type. In the third place, see, the coordinates x and y are used: a second derivative with respect to x and a second derivative with respect to y. It looks like the first equation, but in between there is a plus sign: ∂²u/∂x² + ∂²u/∂y² = 0. These sign changes change the nature of the equations completely. The first allows the formation of wave motion, as its solutions show. The second we call the diffusion, leakage, or heat equation: put a spoon into your tea and you will see that the heat spreads along it, but not like a wave; or pour water onto soil and after a while it infiltrates and goes in; or throw paint into the middle of a bucket and after a while it goes and spreads — a spreading movement, but not of wave type. This damped motion, the spreading of heat, is what this equation shows, as is the propagation of mass. And in some cases nothing propagates at all and the effects stay local; those are described by the Laplace equation, shown here third. As for the names, they are fitted by an analogy. See: x²/c² − t², set equal to a constant, you would say, is the equation of a hyperbola — the important thing is that the second variable enters squared with a minus sign. Set x² equal to t, and the equation of a parabola comes out: here x is squared, but t is not, because there is only the first derivative. In the Laplace case there are again x² and y², but with a plus sign between them: set equal to a constant, that is the pattern of an ellipse or a circle, the circle being a special case of the ellipse. These names are used here because of the resemblance; of course the equations themselves are not a hyperbola or a parabola. It is a simulation, a renaming by analogy: the first is called hyperbolic type, the second parabolic type, the third elliptic type.
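To see the different characters concretely, one can verify candidate solutions symbolically; the choices sin(x − ct) for the wave equation and e^(−αt)·sin x for the heat equation are my own illustrative ones, not from the lecture.

```python
# Check sample solutions of the wave and heat equations.
import sympy as sp

x, t = sp.symbols('x t')
c, alpha = sp.symbols('c alpha', positive=True)

u_wave = sp.sin(x - c*t)  # a right-moving wave
wave_res = sp.diff(u_wave, x, 2) - sp.diff(u_wave, t, 2) / c**2
print(sp.simplify(wave_res))  # 0

u_heat = sp.exp(-alpha*t) * sp.sin(x)  # a decaying heat profile
heat_res = alpha * sp.diff(u_heat, x, 2) - sp.diff(u_heat, t)
print(sp.simplify(heat_res))  # 0
```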
These are the three basic equations of classical physics. They have extensions, but when you get down to the essence, there are these three equations. Well outside classical physics, in quantum physics, for atomic-scale events, there is one more. There are still physical coefficients here; if you do not like them, it does not matter, you can ignore them. There is a second derivative with respect to x and a first derivative with respect to t, so it looks like the parabolic equation; but here the imaginary number also appears, and that makes this a complex equation. And there is a term here with no derivative in it at all. As you can see, these equations always contain partial derivatives, and an important goal of learning partial derivatives is to be able to handle them. I am showing these equations in advance because you will encounter them in many places; how their solutions work is a separate subject, but at least seeing them may, I hope, give you a psychological view of events and also some enthusiasm. Generalizing to three variables is exceptionally simple. Where we had x and y, we also put z in between: the gradient consists of the partial derivatives of a numerical function with respect to x, y, and z. For the divergence, the vector previously had components u and v; now it will also have a component w. Previously there were ∂u/∂x and ∂v/∂y; now ∂w/∂z joins, because in the inner product the first components go with the first, the second with the second, and the third with the third — multiply them and collect, and this work gives the divergence. The curl is a very similar situation: again we write i, j, k in the first row, and the components of the first vector, ∂/∂x, ∂/∂y, ∂/∂z, in the next row — before there were only the x and y terms. And the third row, previously only u and v, is now also equipped with w. If we expand this — a very simple expansion — these terms are obtained, as you can see.
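The three-variable divergence and curl follow exactly this recipe; the concrete field (u, v, w) = (yz, xz², xy) below is an arbitrary illustration of my own.

```python
# Divergence and curl in three variables.
import sympy as sp

x, y, z = sp.symbols('x y z')

# An arbitrary illustrative field.
u, v, w = y*z, x*z**2, x*y

div = sp.diff(u, x) + sp.diff(v, y) + sp.diff(w, z)
curl = (sp.diff(w, y) - sp.diff(v, z),
        sp.diff(u, z) - sp.diff(w, x),
        sp.diff(v, x) - sp.diff(u, y))
print(div)   # 0: no component depends on its own variable
print(curl)  # (x - 2*x*z, 0, z**2 - z)
```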
The Laplacian is the same situation: before there were ∂²f/∂x² and ∂²f/∂y²; now the z term joins in the same way. Or, so that we do not take this merely on faith: say u is the gradient of f and look at its divergence. Take the derivative with respect to x of the first component, the derivative with respect to y of the second, the derivative with respect to z of the third, and collect them — a generalization that is completely consistent. So far we have not mentioned — sorry, we did mention it with vectors — that there is also a dyadic product: two vectors juxtaposed, with no dot as in the inner product and no multiplication sign as in the vector product. This product too is useful in many places, and it occurs naturally: in fluid mechanics, in gas dynamics, in the dynamics of rigid bodies. Here, the dyadic multiplication of the gradient with any vector u gives us a matrix. Again, suppose the vector is itself the gradient of a numerical function f; then the dyadic product of the two vectors is a matrix which, as you can see, contains all possible second partial derivatives of f — and obviously not settled at random, but in a definite order. These arise as basic concepts mainly in continuum mechanics: the theory of elasticity of solids, fluid mechanics, gas dynamics. Objects like the stress tensor, which are inherent in those subjects, involve exactly these definitions; they do not come out of fantasy but arise naturally, and I find great benefit in keeping that in view. We have seen before that the divergence of a curl is zero, and that the curl of a gradient is zero; in three dimensions both still hold, though the curl of a gradient now has three components. If you carry out the operations you will see it. The same goes for the third identity: in this way the third identity again shows itself, with three components available.
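The dyadic product of del with the gradient of f — the matrix of all second partial derivatives — can be written out for a generic f; the symmetry of the result reflects the equality of mixed partials mentioned above.

```python
# Dyadic product of del with grad f: the matrix of second partials of f.
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)  # a generic smooth function
coords = (x, y, z)

grad_f = [sp.diff(f, c) for c in coords]

# Entry (i, j) is the i-th partial derivative of the j-th gradient component.
H = sp.Matrix(3, 3, lambda i, j: sp.diff(grad_f[j], coords[i]))
print(H.is_symmetric())  # True, since mixed partials are equal
```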
With two components the calculations are easier to see while learning; but almost always, in every case, the generalization to three dimensions is possible. I ask you to do the three-dimensional proofs on paper — I say this because we have already done them with two variables; here I am expecting the same operations from you in three dimensions. The third of these is a little difficult, but the first two are easy. Generalizing the equations to three variables: where before there was only x, this time x, y, z appear, and the equations are written with the Laplacian. This is the wave equation, but now in three dimensions. If there were only x and y — for example, waves on water, or cases like the vibrations of a thin membrane — the bivariate Laplacian would appear; but again it is the Laplacian, and again the time derivative. So in generalizing the basic equations — the wave equation, the heat equation, and the Laplace equation — from two dimensions to three dimensions, the Laplacian naturally involves itself. In quantum mechanics, in the Schrödinger equation, the second derivative with respect to x is replaced with the Laplacian, with the physical quantities put in their places. From this I hope you will catch some enthusiasm and excitement: these vector operators show the basic equations of nature in a convenient way. Those who first brought these equations did not know the gradient and the Laplacian, which were new; they had equations filling very long pages, and what they had to prove was extraordinarily shortened, and the operations facilitated, thanks to this notation. Now I want to stop here, but I want to introduce the next topic a bit. We have seen the gradient in two variables: for a numerical function, taking the partial derivatives with respect to x and y as components, we obtain the gradient vector. But Cartesian coordinates x, y with unit vectors i, j are not the only type of coordinates. The simplest alternative is circular (polar) coordinates. The following question will be asked: if the problem we care about has a circular geometry, so that circular coordinates would be more appropriate, then what does the gradient become in that geometry?
We will see this in our next lesson.