0:04

Okay, next we'll take a look at the basic techniques that we use for

manipulating asymptotic expansions to derive accurate and

concise estimates of the quantities that we're trying to study.

So our goal is to develop an expansion on a standard scale for

any expression that might arise in the analysis.

And these are just examples of the kinds of expressions that might show up.

Some of them are artificial, but some of them actually turn up in our analysis.

So we saw 2N choose N, and we studied the Catalan numbers.

And the problem with some of these, like 2N choose N,

is it might be hard to compute that.

0:46

So imagine a 17th century mathematician

trying to compute that for n=1,000, that's a lot of multiplications.

Or, imagine a computer programmer today given the job of computing N factorial.

It's simple to define, but if you use the basic definition for N = a million,

a million factorial is a pretty big number, so you have to deal with it in some way.

On the other hand, using asymptotic expansions,

we're going to get all of these in a kind of coherent form

where we can be confident that we can work with them.
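To get a feel for why the basic definition is unworkable, here's a quick sketch (mine, not from the lecture) that uses Python's `math.lgamma` to find how many decimal digits 1,000,000! has without ever computing the number itself:

```python
import math

# Sketch (not from the lecture): how big is 1,000,000! ?
# lgamma(N + 1) = ln(N!), so we can get the digit count
# without ever forming the huge integer.
N = 1_000_000
log10_factorial = math.lgamma(N + 1) / math.log(10)
digits = math.floor(log10_factorial) + 1
print(digits)
```

The answer is over five million digits, which is exactly why we want an asymptotic handle on quantities like this rather than exact values.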

So there's a lot of different techniques that we use, and each one of them is relatively

simple, but it's still worthwhile calling out examples of each.

So we'll look at each one of these: simplification,

substitution, factoring, multiplication, division, and composition.

1:43

They seem really simple to call out, but

on the other hand, when faced with dealing with one of these functions,

you really want to think in terms of: there is an easy way to do this,

one of these basic ways, you just have to figure out which one it is.

And there's also the exp-log trick, which is an important

technique, and I'll show that in a minute.

So again, the whole idea is that a lot of times we have

not just these quantities, but maybe they combine in some way.

So like, what's the value of 1 over 4 to the N, times 2N choose N?

That's an expression that arises that's of importance in the analysis of algorithms.

And if you ask a programmer to try to compute that for N equals 10 to the 6th,

it's going to be a challenge.

And we'll actually see that it's relatively simple to express this in terms

of standard functions via asymptotics.
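As a concrete sketch (not part of the lecture), the standard first-order asymptotic for this quantity is 1 over the square root of pi N, and we can check it by evaluating the exact expression through log-gamma so no huge intermediate numbers appear:

```python
import math

# Sketch: evaluate (1/4^N) * C(2N, N) via log-gamma, avoiding
# the enormous integers, and compare with the standard
# first-order asymptotic 1/sqrt(pi*N).
N = 10**6
log_val = math.lgamma(2*N + 1) - 2*math.lgamma(N + 1) - N*math.log(4)
exact = math.exp(log_val)
approx = 1.0 / math.sqrt(math.pi * N)
print(exact, approx)
```

For N = 10 to the 6th the two values agree to many decimal places, which is the kind of coherent form we're after.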

And we can do that for all of these functions, and for all of the functions that arise.

2:50

So, let's take a look at the various techniques.

So the first thing is, that an asymptotic series is only as good as its O-term.

So that means there's no point in carrying around smaller terms,

if you have something that's encompassed by the O, just drop it.

It's only going to make the calculation more complicated and

it's not helpful in any way.

So we don't write log N + gamma + O of 1, where gamma is a constant,

because the O of 1 says the error is bounded by some constant, and

that constant might as well absorb gamma or whatever.

It's better to write down this expression as log N + O of 1;

it's more compact.

So, the other idea we already talked about was substitution,

where you can just change variables in a known expansion.

So, the Taylor series for log of 1 + x is x - x

squared over 2 + x cubed over 3, and so forth.

And if we just plug in a value for x, say we plug in 1 over n,

then we get the expansion; so that's one technique that we already used.
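As a minimal numeric sketch (mine, not from the lecture) of the substitution, here's log(1 + 1/n) compared against one and two terms of the substituted Taylor series:

```python
import math

# Sketch: substitute x = 1/n into log(1 + x) = x - x^2/2 + x^3/3 - ...
# and check how fast the error shrinks as we keep more terms.
n = 1000.0
exact = math.log(1 + 1/n)
one_term = 1/n                # error O(1/n^2)
two_terms = 1/n - 1/(2*n*n)   # error O(1/n^3)
print(exact - one_term, exact - two_terms)
```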

Factoring, a very common thing to do is to estimate what the leading term is,

then factor that out and expand the rest.

So, look at this function, 1/(N squared + N), and we think to ourselves,

what's this going to be close to when N is large, like N is a million?

It's going to be close to 1/N squared, so

we might as well factor out the 1 over N squared.

And then in this case, what we're left with is 1/(1 + 1/N), and

that's a geometric series, and we have an asymptotic expansion for the geometric series.

So expand it: since it's 1/(1 + 1/N),

it's 1 - 1/N + O(1/N squared).

And we could take that to more terms if we wanted to, but

that's an asymptotic expansion of that quantity.

If we use that expansion, it says that for

N = 1 million that value is going to be very close to 1/N squared.

The relative error that we would make

would be about 1 in 1,000,000 in that case, which is small enough for our purposes.
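A quick numeric sketch of the factoring example (variable names are mine, not the lecture's):

```python
# Sketch: 1/(n^2 + n) = (1/n^2) * 1/(1 + 1/n); factor out the leading
# term, then expand the rest as a geometric series.
n = 1_000_000
exact = 1 / (n*n + n)
leading = 1 / (n*n)            # one term: relative error about 1/n
refined = (1 - 1/n) / (n*n)    # two terms: relative error about 1/n^2
rel_err_leading = abs(exact - leading) / exact
rel_err_refined = abs(exact - refined) / exact
print(rel_err_leading, rel_err_refined)
```

Each extra term of the geometric series buys roughly another factor of 1/n in relative error, just as the expansion predicts.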

5:33

Multiplication, so multiplication is just algebra but

also employing simplification when we have smaller terms and we throw them out.

So let's look at the square of the harmonic numbers; so

we did an estimate on the harmonic numbers.

We talked about the asymptotic series for the harmonic numbers,

which is given by approximating the sum, and we'll talk more about that kind of asymptotics later.

So the harmonic number is approximated by log N + gamma + big O

of 1 over N, where gamma is Euler's constant, 0.57721....

So, to compute the square of the harmonic numbers, we have to square that.

So that's two factors with three terms each,

so we're going to wind up with nine terms when we multiply it all out.

So ln N times each one of those:

ln N squared + gamma ln N + big O (ln N over N).

Inside big O, we usually don't specify the base of the logarithm since it

only differs by a constant.

So it doesn't matter that it's a natural log, so

we just write log; this means the base is left unspecified.

6:51

Then the next row is gamma times each one: gamma log N + gamma squared + O of 1/N.

And the last row is O(1 over N) times all of them: O(1 over N) times

log N is O(log N over N), gamma times O(1 over N) is O(1 over N), and

O(1 over N) times O(1 over N) is O(1 over N squared).

So that gives us nine terms, but

we can throw a lot of them out, because, first of all,

among the big O terms you just keep the largest one.

So, big O of 1 over N squared is much smaller than log N over N, so

it can be subsumed in that.

Same with O of 1 over N; so all the big O terms, and

there's five of them, get subsumed in that O of log N over N.

Then there's the gamma squared, and the gamma log N appears twice, so

that's an asymptotic series in the standard scale for

the square of the harmonic numbers: ln N squared + 2 gamma ln N + gamma squared + O(log N over N).

Now, just taking a look at this series,

it's important to note that there's a difference between

the first couple of terms and the big O term that we kept.

So for N a million,

log N is going to be some small number, so

log N squared and 2 gamma log N are not that far apart.

So it seems like we're going to need both of those, and the gamma squared is a constant;

since log N is small, it's only a slight improvement in precision to carry

those terms, but we probably want them.

But when we get to a term divided by N,

that's when we have a huge improvement in precision.

Now, we can't always tell ahead of time how far we have to go before we get this

big improvement, but in real problems it usually happens after three or

four terms, and that's what happened here.

If you look at the tables of these quantities, for N equals

1,000, 10,000, and 100,000, you can see

(HN squared is the quantity that we're trying to estimate)

that if you just try to estimate it with log N squared, you'll

be off by a fair amount, around 10%, even for 100,000.

If you add the 2 gamma log N, you come much closer, and if you add the gamma squared,

we're actually accurate to three decimal places, because the next term,

the one that we're missing, is going to be

bounded by a constant times log N over N, and for

100,000 that's going to be out in the fourth or

fifth decimal place, assuming the constant's small,

which we normally assume in these asymptotic series.

And we can check, as in this case, that that's an accurate approximation.

So usually we'll try to carry it out long enough

until we get this big improvement in precision.
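Here's a sketch along the lines of that table (mine, with gamma hard-coded to double precision), comparing HN squared against the successive partial sums of the expansion:

```python
import math

# Sketch: H_N^2 versus successive terms of
# (ln N)^2 + 2*gamma*ln N + gamma^2 + O(log N / N).
gamma = 0.5772156649015329            # Euler's constant
N = 100_000
H = sum(1.0/k for k in range(1, N + 1))
target = H * H
t1 = math.log(N)**2                   # leading term only
t2 = t1 + 2*gamma*math.log(N)         # add the 2*gamma*ln N term
t3 = t2 + gamma*gamma                 # add the gamma^2 term
print(target - t1, target - t2, target - t3)
```

The errors shrink with each term, and the jump to three-decimal accuracy comes once all three leading terms are in, just as the table shows.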

10:27

Division: we'll factor out the leading term of both the numerator and the denominator,

and then we'll use the geometric series to expand the denominator, and

that will give us a multiplication problem; so let's look at how that works.

So H of N is log N + gamma + O of 1 over N;

that's the approximation that we've been using for H of N.

For natural log of N + 1, factor out a log N and

it becomes natural log of N + natural log of (1 + 1/N),

and natural log of (1 + 1/N) is big O of 1 over N, just from Taylor.

So, now we have the ratio of two asymptotic series,

and what we'll do is factor out the log N as before.

So divide the numerator and denominator by log N, and

the numerator becomes 1 + gamma over log N + O(1 over N log N),

and the denominator becomes 1 + O(1 over N log N).

We could make that O(1 over N), which is a little bit bigger, and

again that's just to simplify the formulas; if we did not get

a sufficiently accurate estimate, we could go back and try to correct that.

And now 1 over (1 + O of 1 over N), if you expand it as a geometric series,

just becomes 1 + O of 1 over N.

And again, if there are more terms, you could carry more terms out there.

12:14

So now we have a multiplication problem, and if we multiply that out,

that's a proof that the asymptotic expansion of HN over log

of (N + 1), to within O(1 over N), is 1 + gamma over log N + O of 1 over N.

And again, there's a big improvement in precision when we get

to the 1 over N term.

So this is going to be a very accurate estimate of

that more complicated quantity as N increases.
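A quick sketch (not from the lecture) checking the two-term estimate for HN over log(N + 1):

```python
import math

# Sketch: H_N / ln(N + 1) versus the two-term estimate 1 + gamma/ln N.
gamma = 0.5772156649015329
N = 1_000_000
H = sum(1.0/k for k in range(1, N + 1))
ratio = H / math.log(N + 1)
estimate = 1 + gamma / math.log(N)
print(ratio, estimate)
```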

Composition, so this is similar, so

we're just going to substitute an expansion and figure out what to do.

So if you had to compute e to the H of n,

you plug in the expansion for the H of n and see what you get.

What you get is e to the (log N + gamma + big O of 1 over N): e to the log N,

that's N; e to the gamma is e to the gamma, that's a constant; and

then what's left is e to the big O of 1 over N.

13:26

And then that one you expand with Taylor, although it's a little

more work to actually prove that, but you can from now on use that lemma:

e to the O of 1 over N is 1 + O of 1 over N, and

you can expand it a couple of terms if you need to, and

that gives the simplification that e to the HN is N e to the gamma

times (1 + O of 1 over N), or N e to the gamma plus O of 1.

And again, N e to the gamma grows with N, while O of 1 is a constant, so

there is a big change in precision there, so

that's going to be a very accurate approximation for large N.

14:31

And again, you can see, for N = a million, this is accurate to within 1

in absolute terms, which is a fine, small relative error.
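A sketch (mine) verifying that e to the HN differs from N e to the gamma by a bounded amount:

```python
import math

# Sketch: e^(H_N) = N*e^gamma + O(1); the absolute error stays bounded
# even though both quantities grow with N.
gamma = 0.5772156649015329
N = 1_000_000
H = sum(1.0/k for k in range(1, N + 1))
exact = math.exp(H)
approx = N * math.exp(gamma)
print(exact - approx)
```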

Okay, exp log,

so this is a way to bring us into position where we can apply

composition in more complicated scenarios.

And all we'll do is write f(x) as

e to the log of f(x), then we can expand the log of f(x), and

we can expand e to that series, as we did before.

And here's a fine example, (1 - 1/N) to the N, and when you see a thing like that,

you have to think, how would you compute that for a large value of N?

Ask a programmer to compute that for N = a million;

maybe it's not necessarily so easy to do.

It's going to at least take time proportional to N, and

there might be precision problems or

as with factorial there might be problems in needing to carry too many digits.

15:40

Well, actually, the way people compute things like that is to use the exp-log trick.

So that is, we write that as e to the log of (1 - 1/N) to the Nth.

So, in the log of (1 - 1/N) to the Nth,

we can bring the N down: it's N times the log of (1 - 1/N).

And so now, if we expand log of (1 - 1/N), again

using the Taylor series, we have -1 over N, plus O of 1 over N squared.

And then do the algebra: that's e to the (-1 + O(1/N)) and

again, e to the O(1/N) = 1 + O(1/N), so that's 1 over e times (1 + O(1/N)).

It says that that function,

(1 - 1/N) to the N, is going to be very close to 1 over e.

And again, that's the big improvement in precision we always look for and again,

we can check that for N = a million, that's accurate to six decimal places.

16:43

Because the big O says that the error is going to be out there around the sixth

decimal place.
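Here's the exp-log trick in code (a sketch, not from the lecture); `math.log1p`, which computes log(1 + x) accurately for small x, is a natural fit:

```python
import math

# Sketch of the exp-log trick: (1 - 1/N)^N = exp(N * log(1 - 1/N)).
# log1p(-1/N) computes log(1 - 1/N) without cancellation error.
N = 1_000_000
value = math.exp(N * math.log1p(-1.0/N))
print(value, 1/math.e)
```

This runs in constant time, versus the N multiplications of the naive approach, and agrees with 1/e to about six decimal places for N = a million.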

So again, when we're trying to understand what the value of the quantity is,

if we know that it's 1 over e, which is this constant,

0.367879..., that's very precise and concise information, as opposed

to that exact expression, where it may be hard to know what it is.

And when we start combining these kinds of formulas, which we do all the time,

we're going to want to work with the precise and concise expressions.

17:24

So that's the exp log technique and we actually use that one quite a bit.

So thinking about these various techniques, here's just a couple

more examples that you might want to try doing,

to make sure that you understand the kinds of techniques that we're using.

So: expand log of (N - 2) over N to within O of 1 over N squared; and for HN squared,

what's the constant times the 1 over N term, because we only got it to O(log N over N)?

And that second one is just to illustrate that if

desired we can go to more asymptotic accuracy.

Maybe we need to do that because we've got a function where we're multiplying

that thing by N squared, and so we need more accuracy, just for example.

18:30

And so it's -2 over N plus big O of 1 over N squared, so

that's the solution to the first one. And the second one is just

carrying out the asymptotic approximation for HN to one more term, which is

the 1 over 2N term, and then that gives the coefficient of log N over N as 1.

And again, these types of things,

as with any kind of mathematics, require a little bit of practice.

But the techniques are so simple, it's not difficult

to learn how to do these kinds of expansions.

So that's a quick introduction to manipulating asymptotic expansions.