Welcome to calculus. I'm Professor Ghrist. We're about to begin Lecture 25 on the definite integral. In this lesson, we'll turn our attention from the indefinite integral, a class of functions, to the definite integral, a numerical quantity. No doubt you've seen definite integrals before. But do you remember how they're defined and what they really mean? This is one of those concepts that takes a few readings to really sink in. In this lesson, we'll give you a fresh look at the definite integral.

This lesson is all about adding larger and larger numbers of smaller and smaller local amounts into some global sum. That's not too unusual a thing to do, so let's do so in the context of a simple, classical example: compute the sum, as i goes from 1 to n, of i; that is, 1 + 2 + 3, all the way up to n. Now, we could think about this a bit more globally, or geometrically, by representing each i as a column of i squares, each with side length 1. The net sum then looks something like a triangle with base n and height n, but discretized into these squares. What is the sum, represented as an area? Well, the area of the triangle would be 1/2 n times n. But this ignores a few small leftover triangles, each with area 1/2. How many are there? Of course, there are n such leftover triangles. That yields a net area of 1/2 n times (n + 1).

Now, if we think about what it would take to add up 1 + 2 + 3, all the way up to n, these are local additions, local computations. On the other hand, this general formula, as a function of n, is something more global. And that's really the key intuition behind what we're about to build: the definite integral. The definite integral is a generalization of this kind of reasoning to more difficult or nonlinear sums. The definition of the definite integral is a little bit involved, so stick with me, and review again as necessary. We write the integral of f(x) dx, as x goes from a to b, as a certain limit. But what is that limit?
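As a quick sanity check of that formula, here is a minimal Python sketch (the function names are my own, not part of the lecture), comparing the local additions to the global formula:

```python
def triangular_sum(n):
    """Sum 1 + 2 + ... + n by local additions, one term at a time."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def triangular_formula(n):
    """The global formula: (1/2) * n * (n + 1)."""
    return n * (n + 1) // 2

# The local sum and the global formula agree for every n.
assert all(triangular_sum(n) == triangular_formula(n) for n in range(200))
```

The loop is the local viewpoint; the closed form is the global one, and the two agree for every n.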
How do we set it up? Well, first we restrict to the interval from a to b, and then we build a partition. That is, we split this interval up into sub-intervals P1, P2, all the way up to Pn, that fill up the domain from left to right. Each sub-interval has a width associated to it, called delta x sub i. Within each partition element, we choose a point x sub i that lies within P sub i. This is called a sampling. It doesn't matter which point you choose; just pick one, one per sub-interval. Then we define the Riemann sum to be the sum, as i goes from 1 to n, of f evaluated at the sampling point, times the width delta x sub i of the partition element. This Riemann sum is often visualized in terms of columns, or rectangles, sitting on top of the partition.

With this in mind, the definite integral is defined to be a limit of Riemann sums, and it's an unusual sort of limit: we're taking the limit as the partitions get smaller and smaller, as the widths of the partition elements go to 0. You can see that as those widths get smaller and smaller, the dependence on the sampling seems to matter less and less, and indeed, that intuition holds true.

Now, there's a little bit of notation that goes into this. First of all, you should notice that the integral sign is really a form of the English letter S, in the same way that the summation sign is a form of the Greek letter sigma. Both connote a sum, so a definite integral is really a sum, and all of the notation associated with it matches the corresponding notation in the Riemann sum, where dx is something like the limit of delta x as delta x goes to 0. The second thing to note is the limits of integration. One often writes the integral from a to b of f(x) dx; I prefer to write the integral as x goes from a to b of f(x) dx. This tells you exactly which variable you're talking about in terms of the limits.
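The definition above can be made concrete in code. Here is a sketch of a Riemann sum, assuming nothing beyond the definition itself (the names riemann_sum, partition, and samples are my own):

```python
def riemann_sum(f, partition, samples):
    """Sum f(x_i) * delta_x_i over the partition elements.

    partition: endpoints a = t_0 < t_1 < ... < t_n = b
    samples:   one sampling point x_i in each sub-interval [t_{i-1}, t_i]
    """
    return sum(f(x) * (right - left)
               for left, right, x in zip(partition, partition[1:], samples))

# Example: f(x) = x on [0, 1], uniform partition, right-hand sampling.
n = 1000
pts = [i / n for i in range(n + 1)]
approx = riemann_sum(lambda x: x, pts, pts[1:])
# As n grows, approx tends to 1/2.
```

Note that samples may be any choice of points, one per sub-interval; as the widths shrink, that choice matters less and less.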
I'm not always going to use that notation, but I will sometimes, and I suggest you do likewise. Sometimes we'll be sloppy and just write the integral from a to b. Lastly, the variable with which you do the integration is not so important: the integral of f(x) dx, as x goes from a to b, is the same as the integral of f(t) dt, as t goes from a to b. One could use other symbols still. What matters is the value of the integral, not the name of the variable with which you integrate. Sometimes we'll just write the integral of f from a to b, if it's clear which variable we mean.

Well, let's do an example. Compute the definite integral of x dx, as x goes from 0 to 1. From our definition, this is a limit of Riemann sums over partitions, as the partition elements go to 0 in width. Since f(x) is equal to x, that Riemann sum is just the sum of x sub i times the width delta x sub i. Let's choose a particularly nice partition, one which is uniform. That means the widths are constant. Explicitly, we're going to set P sub i to be the sub-interval from (i-1)/n to i/n. This partition depends on n, and so we'll have a sequence of partitions. Now we need to choose a sampling point, one x sub i in each P sub i. For simplicity, let's just choose the right-hand endpoint, i/n. Then the width delta x sub i is a constant, 1/n, because we have a uniform partition.

Therefore, the definite integral can be expressed as a limit as n goes to infinity, since that is how the widths go to 0. What does the Riemann sum look like? It looks like the sum, as i goes from 1 to n, of x sub i, that is i/n, times the width, 1/n. What is this limit going to look like? Well, we're summing over i, and n is a constant, so we can factor a 1/n squared out of the sum, and we're left with the sum, as i goes from 1 to n, of i. And now comes the hard part. Fortunately, we've seen that sum before. What's the sum, as i goes from 1 to n, of i? That's 1/2 n times (n+1).
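Assembling the steps so far into one chain, in LaTeX notation:

```latex
\int_{x=0}^{1} x \, dx
  = \lim_{n \to \infty} \sum_{i=1}^{n} \frac{i}{n} \cdot \frac{1}{n}
  = \lim_{n \to \infty} \frac{1}{n^2} \sum_{i=1}^{n} i
  = \lim_{n \to \infty} \frac{1}{n^2} \cdot \frac{n(n+1)}{2}
  = \lim_{n \to \infty} \frac{n+1}{2n}
  = \frac{1}{2}
```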
And now we see that, dividing by n squared, the leading-order term in this Riemann sum is 1/2. Everything else is of higher order in 1/n, and hence goes to 0 as n goes to infinity. The answer to this definite integral is 1/2, as it must be. Do notice that the difficult part of this computation was that sum of i, as i goes from 1 to n.

Note also that the definite integral satisfies certain properties, for example, linearity. If you have the integral of the sum of two functions, f and g, then it's really the sum of the integrals. Otherwise said, if you add your two integrands together and then integrate, you get the same thing as if you integrate the pieces and then add them together. This is true at the level of an individual Riemann sum element, and so it's true in the limit. Likewise, if you multiply an integrand f by a scalar c, then the integral is equal to that constant c times the integral of f. Again, otherwise said, you can multiply by a constant and then integrate, or integrate and then multiply by the constant; it doesn't matter, you get to the same place, whichever path you take. Again, the reason this is true is that it's true at the level of Riemann sums, and hence in the limit.

Another important property is that of additivity, which states that if you take the integral of f from a to b and add to it the integral of f from b to c, because those limits match up, you get the integral of f from a to c. This certainly makes sense at the level of a Riemann sum: you can concatenate those intervals together. We're going to think of it in terms of adding paths together, a perspective that makes sense in the context of orientation. That is, the integral of f from a to b is minus the integral of f from b to a. Now, why does this happen? Well, think of it in the following terms: if we were to move the integral from b to a over to the left-hand side of the equation, we would get that the integral from a to b plus the integral from b to a equals 0.
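These properties can be sanity-checked numerically. Here is a sketch, assuming a uniform right-endpoint Riemann sum as the approximation (the helper name integral is my own):

```python
def integral(f, a, b, n=100_000):
    """Approximate the definite integral of f on [a, b]
    with a uniform right-endpoint Riemann sum."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(1, n + 1))

f = lambda x: x * x
g = lambda x: 3.0 * x
c = 5.0

# Linearity: integrate the sum, or sum the integrals.
lhs = integral(lambda x: f(x) + g(x), 0.0, 1.0)
rhs = integral(f, 0.0, 1.0) + integral(g, 0.0, 1.0)

# Scalar multiples: multiply then integrate, or integrate then multiply.
scaled = integral(lambda x: c * f(x), 0.0, 1.0)

# Additivity: [0, 1] and [1, 2] concatenate to [0, 2].
pieces = integral(f, 0.0, 1.0) + integral(f, 1.0, 2.0)
whole = integral(f, 0.0, 2.0)
```

Both identities hold exactly at the level of each Riemann sum, so the approximations agree up to floating-point rounding (and, for additivity, the discretization error of the two partitions).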
Why would that have to be true? Well, from additivity, the limits match up and give us the integral from a to a, which clearly must be 0. That's one way to make sense of this orientation property. Another way to think about it is that we are adding directed paths together, and when you have the same path from a to b with the orientation reversed, it's as if the paths cancel, and you wind up getting the integral over a point, which is 0.

The last property we'll discuss is that of dominance. It states that if f is a non-negative function, then the integral of f over an interval is also non-negative. From that follows a slightly less obvious result: namely, if you have a function g which is bigger than f, then g - f is non-negative, which means that the integral of g - f is non-negative, which by linearity means that the integral of g minus the integral of f is non-negative. That is, if g is bigger than f, then the integral of g is bigger than the integral of f.

So much for the good news. The bad news is, we can hardly compute anything with this definition. There are two definite integrals we can compute. We can compute the integral of a constant: by, let's say, choosing a uniform partition and then taking the appropriate limit, you can see that you get the constant times the width of the interval. The other integral we can do is the one we've done already, the integral of x dx. If we do that over a general interval from a to b, then I'll leave it to you to set up the uniform partition, reduce it to a limit, and get the answer, which is, as it must be, 1/2 (b squared - a squared). That's about it. There's a little bit more that we can do. For example, if we tried to integrate sine of x or cosine of x, not over an arbitrary interval but over a symmetric interval from negative L to L, then there are a few things we would observe.
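Both of these computable integrals can be checked against their closed forms; a minimal sketch, again assuming a uniform right-endpoint sum (the helper name is mine):

```python
def right_riemann(f, a, b, n=100_000):
    """Uniform right-endpoint Riemann sum for the integral of f on [a, b]."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(1, n + 1))

a, b, c = 2.0, 5.0, 7.0

# A constant integrand: the limit is c * (b - a).
const_val = right_riemann(lambda x: c, a, b)

# f(x) = x over [a, b]: the limit is (1/2) * (b**2 - a**2).
linear_val = right_riemann(lambda x: x, a, b)
```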
For sine, there's a symmetry about the origin, which implies that every time you have a partition element on the right with, say, a positive value, you get a corresponding partition element on the left with the opposite value. These two will cancel and give you an integral of 0, because sine of -x is -sine of x. For cosine, we can't quite do the same thing, but we have a symmetry about the y-axis, which means that every time you have a partition element on the right, it is balanced by a symmetric partition element on the left with the same value of cosine. Therefore, we get a doubling: because cosine of -x = cosine of x, we can reduce this integral to one from 0 to L and double it.

This simple example has a more general pattern. We say that sine is an odd function and cosine is an even function. An odd function is one that has this symmetry about the origin, a function for which f(-x) is -f(x). For such a function, the definite integral over a symmetric domain from -L to L is always 0. Likewise, for an even function, when f(-x) is f(x), the integral from -L to L is twice the integral from 0 to L. Another way to think about odd and even functions is that the odd ones have an odd Taylor series and the even ones have an even Taylor series, all about 0.

Now, in general, you're going to have to be careful. Definite and indefinite integrals are not the same type of object, even though they have similar notation. A definite integral is a number, a limit of sums. The indefinite integral is an anti-derivative, a class of functions. We'll soon see what they have in common. So, what do you think of the definite integral? It's not so easy to compute, is it? The definition of the definite integral is like the definition of the derivative: it's crucial, it's complex, and it's quickly forgotten. Don't forget this definition. It's important.
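The cancellation and the doubling can both be seen numerically; a sketch using midpoint sampling, which respects the symmetry of the interval (the helper name is mine):

```python
import math

def midpoint_sum(f, a, b, n=10_000):
    """Riemann sum sampling the midpoint of each sub-interval
    of a uniform partition of [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) * dx for i in range(n))

L = 2.0

# Odd integrand: contributions at x and -x cancel, so the total is 0.
odd_val = midpoint_sum(math.sin, -L, L)

# Even integrand: contributions at x and -x match, so the
# symmetric integral is twice the integral from 0 to L.
even_whole = midpoint_sum(math.cos, -L, L)
even_half = midpoint_sum(math.cos, 0.0, L)
```

For sine, the paired sample points annihilate each other term by term; for cosine, they double up, exactly as the symmetry argument predicts.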
Fortunately, we won't have to use the definition to do computations because of what we'll learn in our next lesson, the fundamental theorem of integral calculus.