All right, congrats on making it this far. Let's put everything you just learned about StyleGAN together. I'll go over all the major components you just looked at and show how they fit together. All of these components are fairly important to StyleGAN. The authors ran ablation studies on several of them to understand how useful each one is, by taking it out and seeing how the model does without it, and they found that every component is more or less necessary. Of course, there are other ways of replacing some of these components and still getting the same effect, which you've definitely seen across all of these weeks of learning about GANs. But let's put StyleGAN together, and then you'll get to implement all of these components.

First, you learned about progressive growing, which grows the generated output over time, from smaller outputs to larger outputs. That composes the basic blocks of the generator.

Then you have the noise mapping network, which takes your Z, sampled from a normal distribution as usual for every one of its values, and puts it through a multilayer perceptron: eight fully connected layers, each followed by an activation. This produces the intermediate noise vector W, which is then injected into every block of your generator, rather than being fed in only once at the beginning, which is how it's typically done.

Then you learned about AdaIN, or adaptive instance normalization, which takes your W and applies styles at various points in your network, in each of those blocks. Earlier blocks control the coarser styles, using the scaling and shifting statistics from W, while later blocks use those statistics to inform the finer details.

You also learned about style mixing, which samples different Zs to get different Ws, and then injects those different Ws at different points in your network. So you can have W1 in the first half and W2 in the second half, and your generated output will be a mix of the two images that would have been generated by just W1 or just W2.

And finally, you learned about stochastic noise, which informs small detail variations in the output: a wisp of hair, the placement of that wisp of hair, how curly your hair is and what types of curls those are. That noise gets injected into the network at various points to inform coarser or finer variation, depending on which block you inject it into, and it has a learned scaling parameter that controls how much the noise matters at each layer.

And that's a wrap, congrats on getting all the way here. These are the major components of StyleGAN: progressive growing, the noise mapping network, adaptive instance normalization, style mixing, and stochastic noise.
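To make these components a bit more concrete, here are a few minimal PyTorch sketches. These are not the course's reference implementation; the class names, dimensions, and defaults are illustrative assumptions. First, the core move in progressive growing: when a new, higher-resolution block is added, its output is blended with an upsampled copy of the previous resolution's output, and the blending weight alpha ramps from 0 to 1 as training continues.

```python
import torch
import torch.nn.functional as F

def progressive_fade_in(alpha, small_image, large_block_output):
    """Blend an upsampled low-resolution image with the new high-resolution
    block's output. alpha ramps from 0 to 1 as the new block is faded in."""
    upsampled = F.interpolate(small_image, scale_factor=2, mode="nearest")
    return alpha * large_block_output + (1.0 - alpha) * upsampled
```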
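Next, a sketch of the noise mapping network: a multilayer perceptron that turns Z into W. The 512-dimensional vectors and LeakyReLU activations here are assumptions on my part; treat the exact sizes as placeholders.

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """Maps a sampled noise vector z to the intermediate noise vector w
    using eight fully connected layers, each followed by an activation."""
    def __init__(self, z_dim=512, w_dim=512, n_layers=8):
        super().__init__()
        dims = [z_dim] + [w_dim] * n_layers
        layers = []
        for i in range(n_layers):
            layers += [nn.Linear(dims[i], dims[i + 1]), nn.LeakyReLU(0.2)]
        self.mapping = nn.Sequential(*layers)

    def forward(self, z):
        return self.mapping(z)
```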
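Here's a sketch of AdaIN. The feature map is instance-normalized channel by channel, and two learned linear layers (my own naming, not the paper's) map w to the per-channel scale and shift that apply the style.

```python
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    """Adaptive instance normalization: normalize each channel of the
    feature map, then scale and shift it using statistics computed from w."""
    def __init__(self, channels, w_dim=512):
        super().__init__()
        self.instance_norm = nn.InstanceNorm2d(channels)
        # Learned mappings from w to a per-channel scale and shift.
        self.style_scale = nn.Linear(w_dim, channels)
        self.style_shift = nn.Linear(w_dim, channels)

    def forward(self, x, w):
        normalized = self.instance_norm(x)
        scale = self.style_scale(w)[:, :, None, None]
        shift = self.style_shift(w)[:, :, None, None]
        return scale * normalized + shift
```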
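Style mixing then amounts to choosing which w feeds which block. One way to express that, assuming a generator that takes a list with one w per block, is a simple helper like this (the function and its signature are hypothetical):

```python
def mix_styles(w1, w2, n_blocks, crossover):
    """Use w1 for the earlier (coarser) blocks and w2 for the later (finer)
    blocks; crossover is the block index where the switch happens."""
    return [w1 if i < crossover else w2 for i in range(n_blocks)]
```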
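Finally, a sketch of stochastic noise injection: fresh per-pixel noise is added to the feature map, scaled by a learned per-channel weight that decides how much the noise matters at that layer. Initializing the weight to zero is a common choice, not something the lecture specifies.

```python
import torch
import torch.nn as nn

class NoiseInjection(nn.Module):
    """Adds random per-pixel noise, scaled by a learned per-channel weight,
    so the model learns how much noise to use at this layer."""
    def __init__(self, channels):
        super().__init__()
        self.weight = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        noise = torch.randn(x.shape[0], 1, x.shape[2], x.shape[3],
                            device=x.device)
        return x + self.weight * noise
```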