
In this video, we're going to discover together the famous Euclid's Algorithm.

Which is, once again,

let me remind you,

an efficient algorithm for computing the greatest common divisor of two integers.

It is based on the following,

simple but crucial Euclid's Lemma.

An integer d divides both integers a and b

if and only if d divides both a minus b and b.

Let's prove it. So assume that d divides a and b.

This actually means that a can be represented as d times p,

and b can be represented as d times q,

where p and q are integers.

But this also means that a minus b is equal to d times (p minus q), right?

So, a minus b is divisible by d,

just because p minus q is an integer.

a minus b can also be represented as d times some integer,

which in this case is equal to p minus q, right?

For the other direction,

assume that d divides a minus b and b.

This essentially means that a minus b can be represented as d times p,

and b can be represented as d times q.

But in this case,

so what we need to prove in this case,

is that a is also divisible by d,

but a is equal to a minus b plus b, right?

Which means that a is equal to d times (p plus q).

So, a is also of the form d times some integer, which means nothing other

than that a is divisible by d again, okay?

So, this is a simple enough lemma and still,

let me also show you some visual explanation of this lemma.

So consider two numbers,

for example, 10 and 6.

So, they are both divisible by 2.

This actually means that each of them can be

represented as some number of blocks of length 2.

Also, the difference between 10 and 6,

which is shown here on the slide,

is this remaining part of 10, okay?

So, if you align these two numbers

by their left ends,

then what is left of 10 is exactly

the difference between 10 and 6.

So, now if you show these blocks of length 2,

then it is visually clear that,

since 10 is composed of some number of blocks of length 2,

and 6 is composed of some number of blocks of length 2,

the difference is also composed of some number of blocks of length 2, okay?
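To make this concrete, here is a small sketch (in Python; the lecture does not fix a language) that checks Euclid's lemma for a = 10 and b = 6 against every candidate divisor d up to 10:

```python
a, b = 10, 6

# Euclid's lemma: d divides both a and b
# if and only if d divides both a - b and b.
for d in range(1, 11):
    divides_a_and_b = (a % d == 0) and (b % d == 0)
    divides_diff_and_b = ((a - b) % d == 0) and (b % d == 0)
    assert divides_a_and_b == divides_diff_and_b

print("Euclid's lemma holds for a = 10, b = 6")
```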

So, this already gives us the possibility to implement the following algorithm.

Let's do the following.

So, first of all,

as usual we first check that a and b satisfy the properties of our convention.

So, both of them are non-negative and not both of them are equal to zero, okay?

Then, we'll repeat the following procedure.

If a is at least b,

we subtract b from a.

On the other hand, if a is smaller than b,

we subtract a from b, okay?

And we do it while both of them are greater than zero.

So, what will happen at the end of

this while loop is that one of these two numbers will become equal to zero.

So at this point we will just return the maximum of these numbers.

So recall that if we have two numbers and one of them is equal to zero,

and the other one is greater than zero,

then the greatest common divisor of these two numbers

is equal to that non-zero number, okay?
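The procedure just described might look as follows in Python (a sketch; the function name `gcd_by_subtraction` is mine, and the slides may differ in details):

```python
def gcd_by_subtraction(a, b):
    # Our convention: a and b are non-negative and not both zero.
    assert a >= 0 and b >= 0 and (a > 0 or b > 0)
    # While both numbers are positive, subtract the smaller from the larger.
    while a > 0 and b > 0:
        if a >= b:
            a = a - b
        else:
            b = b - a
    # One of the numbers is now zero; the other one is the gcd.
    return max(a, b)

print(gcd_by_subtraction(10, 6))  # 2
```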

So, this is the algorithm, and you can run it,

and you will notice that for small numbers it is fast enough.

And moreover, if you have two numbers,

for example with 10 digits,

it can work quite efficiently even on such long numbers.

Still, unfortunately, there are some numbers,

some pairs of numbers for which this algorithm is not efficient enough.

For example, if you consider a huge number like this together with a small number like 7,

then the resulting algorithm is going to perform too many operations.

And it is not so difficult to understand the reason for such bad behavior.

So, what will happen with these two numbers is the following.

So, the algorithm inside the while loop will start to subtract 7 from this huge number.

So, it will do it again and again and again and again.

So, it will need to subtract 7 billions of times

before this number becomes smaller than 7, right?

But let's try to understand what will be

the remaining number when we stop subtracting 7 from it.

So it turns out that what is left is

exactly the remainder of this number when divided by 7, okay?

So, this is just by definition of the remainder.

So this is what is left when we subtract

sufficiently many copies of 7 from this number, okay?

And this suggests that we change, or adjust, our algorithm as follows.

So, instead of just subtracting a smaller number from the larger one,

let's just take the remainder.

So, instead of subtracting,

we will compute the remainder of a when divided by b,

in the case that a is at least b.

In the opposite case, when a is smaller than b,

we will replace b by the remainder of b when divided by a.

So in the end,

one of these two numbers will become equal to zero and

we will return the maximum of a and b again.
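With that change, the adjusted algorithm might look like this in Python (again a sketch; `gcd_euclid` is my name for it):

```python
def gcd_euclid(a, b):
    # Our convention: a and b are non-negative and not both zero.
    assert a >= 0 and b >= 0 and (a > 0 or b > 0)
    while a > 0 and b > 0:
        if a >= b:
            a = a % b  # replace a by its remainder modulo b
        else:
            b = b % a  # replace b by its remainder modulo a
    # One of the numbers is now zero; the other one is the gcd.
    return max(a, b)

# Even with a huge first argument, this finishes in a couple of iterations.
print(gcd_euclid(10**12 + 7, 7))
```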

So this is essentially Euclid's algorithm.

So as you see,

this is a pretty simple algorithm.

And what is also fantastic about this algorithm is that it is already efficient.

So, it can already process numbers with 100 digits easily.

Even on your laptop, it will return the result in the blink of an eye.

So, to be more precise, if for example,

if your number is 100 digits long then

the number of iterations of the while loop will be no more than 700.

So, it is very easy for modern computers to perform this many

operations, because they perform roughly a billion basic operations per second, okay?

Each iteration of our while loop is a division which is not

a very simple operation but still it can be performed quite quickly, okay?

Now, to analyze why this algorithm is so efficient,

let's just do the following.

Let's just add some debug printing to our algorithm.

So, the only difference with the previous algorithm is that,

we added these two lines to the code.

So at the beginning of each while iteration,

we print the two current numbers, a and b.

And in the end, we also print the resulting two numbers and here,

we also print the result.
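A sketch of such an instrumented version in Python (the exact numbers on the slide are not visible in the transcript, so the pair below is my own example):

```python
def gcd_debug(a, b):
    while a > 0 and b > 0:
        print(a, b)  # the current pair at the start of each iteration
        if a >= b:
            a = a % b
        else:
            b = b % a
    print(a, b)      # the final pair; one of the numbers is zero
    result = max(a, b)
    print(result)    # the answer
    return result

gcd_debug(3918848, 1653264)
```

Running it, you can watch the printed pairs shrink rapidly from iteration to iteration.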

So if you run this code for these numbers shown at the bottom of the slide,

then what you will see on your screen is the following.

So, to compute the greatest common divisor of these numbers,

the algorithm will proceed to compute the greatest common divisor of the next pair of numbers in the output.

So, this is essentially the remainder of

the first number when divided by the second number, okay?

And, then it will keep repeating the same procedure,

but what is important in this output is that you can

notice that our numbers are getting shorter and shorter and shorter and shorter.

And, this is essentially what happens with any pair of numbers.

So, what we are going to prove is that at each iteration of the while loop,

if you have two numbers, A and B,

and assume for simplicity that A is at least B, okay, then,

after this while iteration,

A is going to drop by at least a factor of two, okay?

So, at each iteration,

the larger number drops by at least a factor of two.

But, to be even more formal and precise, let me state it as a lemma.

If we have two numbers,

A and B such that they are both greater than zero and A is greater than B,

then if you take the remainder of A modulo B, then what you

get is a number which is strictly less than A divided by two.

Okay? Now, let's prove this.

So, for this consider two situations,

in the first case B is at most half of A.

So, this is our first case.

Then what we know,

is that A modulo B is the remainder of A when divided by B.

Just by the definition of the remainder,

the remainder must be smaller than B itself.

So, we know that A mod B is smaller than B and

just by our assumption it is at most A divided by two.

Or pictorially, you can look at this situation as follows,

so this is A, okay?

This is half of A.

And we know that B is at most half of A,

and we know that the remainder is even smaller.

Then, what we know is that A mod B is somewhere here.

It is definitely smaller than A half, okay?

Now, in the remaining case when B is greater than A divided by

two, then what we actually know is that when you divide A by B,

you will actually get a quotient of one and some remainder.

Why just one?

Well, just because if you multiply this inequality by two,

for example, then what you get is two B is greater than A.

Okay? So, two copies of B is already greater than A,

okay, B plus B is greater than A.

This means that when you divide A by B,

you get one plus some remainder.

And, what is this remainder in this case?

Well, this is just A minus B.

So, the picture is the following, this is A,

this is half of A and we know that B is greater than half of A.

This actually means that B lies somewhere here,

and this in turn means that A mod B which is equal to

A minus B is smaller than half of A, okay?

So, in both situations A mod B is smaller than half of A.

And this is extremely good.
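The lemma can also be sanity-checked exhaustively for small numbers, for example with this short Python sketch:

```python
# Exhaustively check the lemma: for all pairs with 0 < b <= a,
# the remainder a mod b is strictly less than a / 2.
for a in range(1, 301):
    for b in range(1, a + 1):
        assert a % b < a / 2

print("a mod b < a / 2 holds for every pair with 0 < b <= a <= 300")
```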

So, this means that at each iteration,

if you consider the binary representations of our numbers,

the length of the binary representation of

at least one of these numbers decreases by at least one.

Or to put it otherwise, at least one of our numbers drops by at least a factor of two.

This, in particular, means that if you started with two numbers, A and B,

probably one of them was much larger than the other one, but in any case,

the number of iterations of the while loop is at most the binary logarithm of

A plus binary logarithm of B.

This is just because at some iterations you

divide A by at least a factor of two, and at some iterations you divide B by at least a factor of two.

So, this depends on whether A is greater than B,

or B is greater than A.

But in any case, you cannot divide A by two,

for more than binary logarithm of A iterations.

Okay? I mean, because after so many iterations A is going to

become equal to zero, which will stop our while loop, and the same applies to B.

So, the total number of iterations is at

most binary logarithm of A plus binary logarithm of B,

which is great because even if for example,

A consists of 5,000 decimal digits,

this means actually that A is at most 10 to

the power of 5,000 and the binary logarithm of

A is at most something like 17,000 which is nothing for modern laptops.

So, even if A consists of thousands of digits,

it is perfectly fine for modern computers to find the greatest common divisor

of two such numbers by Euclid's algorithm, so it is very efficient.
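To see this bound in action, here is a small Python sketch (the numbers are my own examples) that counts the while-loop iterations and compares them with log2(A) + log2(B):

```python
import math

def euclid_iterations(a, b):
    # Count how many times the while loop of Euclid's algorithm runs.
    count = 0
    while a > 0 and b > 0:
        if a >= b:
            a = a % b
        else:
            b = b % a
        count += 1
    return count

a = 10**100 + 7   # a 101-digit number
b = 10**99 + 3
bound = math.log2(a) + math.log2(b)
print(euclid_iterations(a, b), "iterations; the bound is about", round(bound))
```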

So, as a final remark,

let me also show you how to make this beautiful algorithm,

how to make the code of this beautiful algorithm even more compact.

So, essentially it can be written just as one line of code.

So, this is our common check that A and

B satisfy our properties and actually here is a really small change.

Here, we assume that the first number

is at least the second number.

So, actually we would

like this algorithm to be called with two numbers such that the first one is

at least the second one. Then we

check that B is non-negative, and then we check that they are not

both equal to zero.

Then what we do is the following, so,

if B is not equal to zero, and we know at this point that A is at least B,

then we replace A with the remainder of A when divided by B.

So, what we get is some number which is smaller than B.

And for this reason we actually swap the parameters here:

we make a recursive call for B and A modulo B.

So, at this point, B is greater than the remainder of A when divided by B.

So, this is the reason why B here is the first parameter of

the recursive call, and we do this if B is not equal to zero.

On the other hand,

if B is equal to zero then we just return A immediately.

So, this gives the following nice and compact implementation of the Euclid's algorithm.
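Put together, the compact recursive version might look like this in Python (a sketch matching the description above):

```python
def gcd(a, b):
    # We expect to be called with a >= b >= 0, and not both zero.
    assert a >= b >= 0 and a > 0
    # If b is zero, a is the answer; otherwise recurse on (b, a mod b).
    # The parameters are swapped because b is greater than a mod b,
    # so the first argument stays the larger one.
    return a if b == 0 else gcd(b, a % b)

print(gcd(3918848, 1653264))  # 61232
```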