So let's now continue our study of using thunks to delay computations we might not need, and see when that's a good idea and when it's a less good idea. Thunks let you skip an expensive computation if you're not going to need it. That's great in the first code skeleton you see here: if I have some function that takes in a thunk, and I have an if that ends up taking the true-branch, the branch that doesn't use the thunk, this is much better than passing in the result of some expensive computation. If I didn't use a thunk here and I had to pass in the result of the expensive computation, I would have done a lot of unnecessary work. So that's fine, and in those situations, using a thunk is straightforward.

But what if you're more in the situation like here at the bottom, where I have a bunch of separate conditionals? I don't know how many of them are going to evaluate to true and how many to false. But when they do need the result, if they're false as in this code, they all need the same result. So should I precompute the result of the expensive computation for all of them? That would be wasteful if none of them needs it, but if multiple of them need it, this situation with the thunk is actually worse, because I'm going to re-evaluate the thunk every time one of these ifs ends up being false. So that's the trade-off; we're going to end up getting the best of both worlds in a few minutes.

But first, I want to show you an actual example. It's a silly example, but it's actual code instead of something where I've left out all the interesting parts. Just to make this interesting, this first function is ridiculous: it adds its arguments, but I've put in enough extra code that it takes a long time to evaluate. So if I do something like slow-add 3 4, you see it actually takes a second before it produces 7. That way we can actually see the difference in the example I'm about to show you. Okay?

Then I have this function that does multiplication, but in sort of a strange way. It takes in a number x, and it takes in a thunk y that, when you call it, returns a number. Here is how it then multiplies x and the result of calling y. If x is 0, it returns 0 without ever executing the thunk, because that's how multiplication works; we don't care what the thunk is. If x is 1, then we evaluate the thunk and that's our answer. Otherwise, we evaluate the thunk and add the result to a recursive call to my-mult with x minus 1 and, not the result of calling the thunk, but the thunk itself, because that is what my-mult expects for its second argument. So if you look at this, it's going to end up calling y-thunk once for every value of x on the way down: if x is seven, it's going to call the thunk seven times.

So indeed, even though something like slow-add of 3 and 4 is very slow, if I call my-mult with 0 and a thunk around slow-add 3 4, that's very fast. Want to see it again? Very fast. But if I were instead multiplying by 1, well, this is sort of unavoidable: I need to add three and four, but that's okay; what else am I going to do? But if I called it with 2, it's actually going to take twice as long, because it's calling that thunk twice, and I don't even want to sit here while I do something like multiply by 20. Okay? So thunking was great in the zero case, and it was fine in the one case, where we did have to add three and four.
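For reference, here is a sketch of the two functions just walked through. The slide code itself isn't shown in this transcript, so the body of slow-add, in particular the size of its delay loop, is an assumption; my-mult follows the description above directly.

```racket
#lang racket

;; Deliberately slow addition: the helper loop exists only to burn time
;; so the timing differences below are visible. The loop bound is an
;; arbitrary guess; tune it to your machine.
(define (slow-add x y)
  (letrec ([waste (lambda (n) (if (= n 0) 0 (waste (- n 1))))])
    (waste 50000000)
    (+ x y)))

;; Multiplication where the second argument is a thunk that produces a
;; number. The thunk gets called once at every step from x down to 1.
(define (my-mult x y-thunk)
  (cond [(= x 0) 0]
        [(= x 1) (y-thunk)]
        [#t (+ (y-thunk) (my-mult (- x 1) y-thunk))]))

(my-mult 0 (lambda () (slow-add 3 4))) ; fast: the thunk is never called
(my-mult 1 (lambda () (slow-add 3 4))) ; one unavoidable slow-add
(my-mult 2 (lambda () (slow-add 3 4))) ; twice as long: the thunk runs twice
```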
But it was terrible for anything greater than one, because it was actually a net loss compared to the version of multiplication that just took in an x and a y, where the caller would have had to precompute the two numbers and therefore would have already done the slow-add. In fact, let me show you that version as well. I can actually do it with my-mult. Suppose I passed in zero, but then for the second argument, I precomputed slow-add, put that in a let, and then in my lambda, just looked up x. Alright? So all I'm doing in my second argument to my-mult, which is evaluated right away, is binding x to the result of slow-add of 3 and 4, and then creating a thunk that, when you call it, will look up x. So this is going to call slow-add 3 and 4 once, before you ever call my-mult. Now, if you call it with 0, it does take a little bit, because we did call slow-add. But 2 doesn't take any longer, and in fact, 20 doesn't take any longer; they all take about the same amount of time. Okay? So that's all fine, but I gave up zero being fast. It's not fast anymore.

So that's sort of our motivation. What if we could get the best of both worlds? What if we could take some computation that always returns the same result, has no side effects, and doesn't care when it's executed, and we did not compute it until we needed it? But then, once we did need it, we remembered the answer, so in the future we didn't have to compute it again; we would just return that remembered result immediately. Well, this has a name: it's called lazy evaluation, and languages where most constructs work this way, where in fact all function arguments work this way, are called lazy languages, and Haskell is the most well-known, successful example today.

What I'm going to do in the next segment, in Racket, is show you how we could just code this up. Now, Racket does not work this way for function arguments. Function arguments are all evaluated at the call site; we do not have lazy evaluation in Racket, just like we didn't in ML. But we will be able to code it up ourselves. Racket does have some built-in support that is very slightly different syntactically in what it's doing, but I'd rather show you the implementation, so we will use our own, and then we will come back and revisit this multiplication example. In our implementation, all we're going to need are thunks and mutable pairs, built with mcons, two things we've seen in previous segments. So we're going to be able to put together some previous ideas and get the best of both worlds.
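Two sketches to make this concrete. First, the precomputing call just described, assuming the slow-add and my-mult definitions above:

```racket
;; Precompute slow-add once; the thunk just looks up the saved result.
;; Every multiplier now pays for exactly one slow-add, including zero.
(my-mult 0 (let ([x (slow-add 3 4)])
             (lambda () x)))
```

And second, a minimal sketch of how thunks and mutable pairs can be combined to remember a result, in the spirit of what the next segment builds; the names my-delay and my-force are placeholders, not Racket built-ins:

```racket
;; A "promise" is a mutable pair: the mcar says whether the thunk has
;; run yet; the mcdr holds either the thunk or its remembered result.
(define (my-delay th)
  (mcons #f th))

;; First force: run the thunk and overwrite the pair with the answer.
;; Every later force just returns the remembered answer.
(define (my-force p)
  (if (mcar p)
      (mcdr p)
      (begin (set-mcar! p #t)
             (set-mcdr! p ((mcdr p)))
             (mcdr p))))

;; Best of both worlds: slow-add runs at most once, and not at all
;; when we multiply by zero.
(define p (my-delay (lambda () (slow-add 3 4))))
(my-mult 0  (lambda () (my-force p))) ; fast: slow-add never runs
(my-mult 20 (lambda () (my-force p))) ; slow-add runs exactly once, not 20 times
```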