0:00

Welcome to calculus. I'm Professor Ghrist.

We're about to begin Lecture 6 on Expansion Points.

We've seen that the Taylor series provides a good approximation to a

function near zero. What happens if we wish to focus our

attention at some other point? In this lesson, we'll consider changing

the expansion point. This will lead us to a broader definition

and interpretation of Taylor series. As we have seen, Taylor expansion gives

an excellent way to approximate a function for inputs near zero.

However, in many applications, zero is not the most interesting input that you

can think of. There are many examples of this, for instance in finance

and economics, where we care about nonzero values.

What would be nice is a way to Taylor-expand a function that works well

for inputs that are not necessarily close to zero.

There certainly is. This is an important definition.

The Taylor series of f at x equals a is the sum, as k goes from 0 to infinity, of the

kth derivative of f, evaluated at a, divided by k factorial times quantity x

minus a to the k. This is not a polynomial in x, but rather

a polynomial in the quantity x minus a, where the coefficient in front of each

monomial term is the kth derivative of f evaluated at a, and then divided by k

factorial. A little bit of terminology is in order.
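For reference, here is the series just defined, written in symbols (my own transcription of the lecture's formula):

$$f(x) \;\simeq\; \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}\,(x-a)^{k}$$

Substituting h = x − a turns this into a series for f(a + h) in powers of h.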

The constant term is called the zeroth order term.

Next, comes the first order term, the second order term, the third order term,

and so on. These correspond to the degree of the

quantity x minus a. Suppose we were to change things and perform the

change of variables, defining h to be the quantity x minus a.

Then the above series would become a polynomial series in h.

However, it would not be giving you f of h, but rather f of x, where x is a plus h.

This series in h is telling you that if you want to know values of f that are

close to a, then you can substitute in a small value for h.

And if you're looking for an approximation, you can ignore some of the

higher order terms. Let's look at an example.

Compute the Taylor series of log of x. Now, we know that we cannot do that about

x equals 0. So let us do it about a more natural

value. Let's say about x equals 1.

To compute this Taylor expansion, we're going to need to start taking some

derivatives. If we looked at the function log of x,

the zeroth order term is obtained by evaluating at x equals 1, and that gives

us simply 0. The derivative of log of x, you may

recall, is one over x. Evaluating that at x equals 1 gives us a

coefficient 1. Now, taking the derivative of one over x,

gives us minus x to the negative 2. We evaluate that and continue

differentiating. As we go, evaluating each term at x

equals 1. After computing sufficiently many

derivatives, you start to see a pattern. It requires a little bit of thinking, and

a particularly formal way of thinking called induction.

But, with a bit of effort, one can conclude that the kth derivative of log

of x is negative 1 to the k plus 1 times k minus 1 factorial times x to the

negative k. Evaluating that at x equals 1 gives us

simply the coefficient, negative one to the k plus 1 times k minus 1 factorial.

Now, to get the full Taylor series, we need to divide this coefficient by k

factorial. In so doing, we see a not-too-unfamiliar

series for log of x: x minus 1, minus x minus

1 squared over 2, plus x minus 1 cubed over 3, et cetera.
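In symbols, the series just described is (again, my own transcription):

$$\log x \;=\; \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k}\,(x-1)^{k} \;=\; (x-1) \;-\; \frac{(x-1)^2}{2} \;+\; \frac{(x-1)^3}{3} \;-\; \cdots$$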

The coefficient in front of the degree k term is negative 1 to the k plus 1, over

k. And our summation goes from 1 to

infinity. Indeed, if we let h be x minus 1, then we

obtain the very familiar series, log of 1 plus h equals sum k goes from 1 to

infinity, negative 1 to the k plus 1, h to the k over k.

We've seen that before with an x instead of an h, but it works the same.
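As a sanity check, here is a short computation (a sketch in Python; the function name is mine) comparing partial sums of this series against the built-in logarithm:

```python
import math

def log1p_taylor(h, n):
    """Partial sum through degree n of log(1 + h) = sum_{k>=1} (-1)**(k+1) * h**k / k."""
    return sum((-1) ** (k + 1) * h ** k / k for k in range(1, n + 1))

# For a small h, the partial sums close in on math.log(1 + h).
h = 0.1
for n in (1, 2, 4, 10):
    print(n, log1p_taylor(h, n), math.log(1 + h))
```

Each extra term shrinks the error, as the alternating signs suggest.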

Do keep in mind that Taylor series are not guaranteed to converge everywhere.

Indeed, if we look at the terms for log of x and take a finite Taylor polynomial,

then the higher- and higher-degree truncations only provide a reasonable approximation

to log of x within the domain of convergence.

We know from our last lesson that that domain is going to be for values of x

between 0 and 2. Outside of that, these terms are

providing worse and worse approximations of log of x.
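This behavior is easy to see numerically (a sketch; the names are mine): inside the domain of convergence the truncation errors shrink as the degree grows, while outside they blow up.

```python
import math

def log_taylor_about_1(x, n):
    """Degree-n Taylor polynomial of log(x) expanded about x = 1."""
    return sum((-1) ** (k + 1) * (x - 1) ** k / k for k in range(1, n + 1))

for n in (5, 10, 20):
    inside = abs(log_taylor_about_1(1.5, n) - math.log(1.5))   # 0 < x < 2: error shrinks
    outside = abs(log_taylor_about_1(3.0, n) - math.log(3.0))  # x > 2: error grows
    print(n, inside, outside)
```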

What do we do if we want to approximate log of x outside of this domain?

Well, you need to do a Taylor expansion about a different point, someplace close

to where you want to approximate. It's easy to get a bit confused with all

of the different notation associated with Taylor series.

So, let's review. One way to think about the Taylor

expansion about x equals a is to write f of x as a series in the quantity x minus

a. Another way to do it is to write it as a

series in the quantity h, where h is equal to x minus a.

In which case, you're coming up with an approximation for f of a plus h.

One way to think about this is that the more derivatives of f you know at a point

a, the better an approximation you get at x or at a plus h.

This is a perspective that you're going to want to keep with you for the

remainder of this course. The principle is that successive

polynomial truncations of the Taylor series approximate increasingly well.

We've seen that before. In this lesson, the point is that where

you do the Taylor expansion matters. If you are expanding way over here and

trying to get information about way over there, then you're going to need a lot of

derivatives to do that. On the other hand, if you approximate

about the correct expansion point, you might not need so many derivatives in

order to get the job done. Let's look at an explicit example.

What would it take to estimate the square root of 10?

Well, we would have to look at the Taylor series of the function square root of x.

If we expand that about a point, x equals a, then I'll leave it to you to check that the

first few derivatives work out to a Taylor series of root a plus one over 2

root a times quantity x minus a minus 1 over 8, times the square root of a cubed,

times quantity x minus a squared plus some higher-order terms in x minus a.
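In symbols, that expansion is (my transcription; H.O.T. denotes the higher-order terms):

$$\sqrt{x} \;=\; \sqrt{a} \;+\; \frac{1}{2\sqrt{a}}\,(x-a) \;-\; \frac{1}{8\,(\sqrt{a})^{3}}\,(x-a)^{2} \;+\; \text{H.O.T.}$$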

Well, to compute the square root of 10, what are we going to do?

Let's say I expand about 1 because 1 is a simple value.

I know the square root of 1, that is simply 1.

That makes the coefficients of the Taylor expansion easy to compute.

On the other hand, when I'm actually trying to estimate the square root of 10,

based on this, then I get 1 plus one half times 9 minus one eighth times 9 squared

or 81, plus higher order terms. How good of an approximation is that?

Well, that gives me a value of negative 4.625.

Now, I know this is not the square root we're looking for.
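A quick numerical check of these second-order estimates (a sketch; the helper name is mine), including the expansion about a = 9 considered next:

```python
import math

def sqrt_taylor2(x, a):
    """Second-order Taylor polynomial of sqrt(x) about a:
    sqrt(a) + (x - a)/(2*sqrt(a)) - (x - a)**2 / (8 * a**1.5)."""
    h = x - a
    return math.sqrt(a) + h / (2 * math.sqrt(a)) - h ** 2 / (8 * a ** 1.5)

print(sqrt_taylor2(10, 1))   # 1 + 9/2 - 81/8 = -4.625: nowhere near sqrt(10)
print(sqrt_taylor2(10, 9))   # 3 + 1/6 - 1/216: very close to sqrt(10)
print(math.sqrt(10))
```

The expansion point close to the input wins by a wide margin.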

That is a bad approximation. So, what if we were to compute an

expansion about x equals 9? Then, this is something for which I also

know the square root. The square root of 9 is 3.

The coefficients are easy to work with. And if I plug in a value of x equals 10

into this Taylor series, I get an approximation of 3 plus one sixth minus 1

over 216. This gives an approximation of 3.1620

some other stuff. The true answer agrees with this up to

the first four digits. The expansion point, in this case,

certainly matters. Now, one thing to be cautious of is that

if you're computing a Taylor series of a composition, you must expand about the

correct values. If you have a function, f composed with

g, and you want to expand it about some input x,

then you must expand g about x. But you must expand f, not about x, but

about g of x. And it is that term in particular that

causes problems. In an explicit example, we'll be able to

see how this works. Compute the Taylor series of e to the

cosine of x about x equals zero. Well, e to the cosine of x is a composition.

What do you do first? First, you take the cosine of x, then you

exponentiate it. If we are to compute the Taylor series

about x equals 0, then we must expand cosine about 0, but not e to the x.

Cosine of x about 0 is very simple. This one, we know.

However, for the second term, the exponential, we must expand that about an

input of 1 because that is what gets fed into it.

Cosine of 0 is 1. Well, the Taylor series of e to the u

about u equals 1 is easy to compute. There's nothing difficult there.

However, what we must then do is substitute into it this series for cosine

of x. That is, u equals 1 minus x squared over 2

plus x to the 4th over 4 factorial, plus higher-order terms.

That is a little bit too much algebra to fit on this slide, and so I'll

leave you to do that. Though I might point out a little bit of

help here, that this can be rewritten as e times e to the cosine of x minus 1.

You'll find that to be a little bit easier to compute the Taylor series of.
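Carrying out that algebra through fourth order (my own computation; treat the coefficients as an exercise to verify) gives e times quantity 1 minus x squared over 2 plus x to the 4th over 6. A quick numerical check:

```python
import math

def exp_cos_taylor4(x):
    """Degree-4 Taylor polynomial of e**cos(x) about x = 0,
    obtained by composing the series: e * (1 - x**2/2 + x**4/6)."""
    return math.e * (1 - x ** 2 / 2 + x ** 4 / 6)

# Near 0, the truncation tracks the true function closely.
for x in (0.1, 0.3):
    print(x, exp_cos_taylor4(x), math.exp(math.cos(x)))
```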