Let's dive into a little more detail on this critical area of testing. The first one we have is the unit or small test. These are almost always created by the individual developer as they're writing their code. What we want from these tests is that they're very isolated, meaning each one really is just testing that one specific bit of code does what it's supposed to do, and that they run extremely quickly. The reason for that is we want a developer to be able to push a button, run a whole mess of these, and then know exactly either, a, everything's okay or, b, it's not, and here is exactly where and why it's not okay. That's how we get the rewards from investing in these small tests.

As we move on, this medium or integration test might actually be referred to by a couple of different terms: integration test, component test, and there are even a few others. Our aim here is, if we have function A, we're looking at how it interacts with function B, and then, in a separate test, how it interacts with function C. As we expand the boundary of what the software does, we want to make sure that as we cross those boundaries everything is working properly, because those boundaries are a frequent source of bugs.

And then we have the larger system tests. You'll notice there are even more terms here. Part of that is that there is more variation in how these tests run and what they do, and that's why I think this idea of small to medium to large is really helpful: it gives us a nice clear view of which test we're actually talking about, so that if one person uses one term and another person uses a different term, we know they're both basically talking about a large test. You may want to refer back to these terms so you're familiar with them when they come up; they all refer to these large tests. Now, in this area of system tests, large tests, tests that usually happen at the end of the pipeline, you may have a couple of other types of tests as well: performance tests, which test how the system performs under a lot of load or a lot of inputs and make sure it's scalable; and, generally speaking, this is also where a lot of security testing will happen, making sure in an automated fashion that the things we've done to keep the system secure are in place and working.

Now, what exactly does the small, medium, large thing mean? Well, I borrowed this from a post that a few folks at Google did on test sizes, and I like it because it is so structured and so consistent. The key thing really is that as we go from small to medium to large, we're getting tests that run slower, and we're also getting tests that provide less information back to the developer, or at least less immediately actionable information and less isolation. If a small test, a unit test, fails, the developer has a much better idea of exactly where the failing code is and probably, by virtue of its isolation, more of an intuition about what might be wrong. What really improves that intuition, that actionability, is that these tests run extremely fast. So the developer feels like, hey, as I'm going along changing stuff, I can push a button and know if everything is okay in a few seconds, or at most a few minutes. Rather than having to get up and get a cup of coffee or move on to something else, I can push a button and know whether I'm okay.
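To make that concrete, here is a minimal sketch of what a small test can look like, written in Python with pytest conventions. The function and the test scenarios are hypothetical, not taken from the course material; the point is simply that each test checks one isolated behavior with no external dependencies and runs in well under a second.

```python
import pytest


# A hypothetical pure function we want to cover with small tests.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# Small/unit tests: no network, no database, no file system.
# Each one checks a single behavior and runs in microseconds,
# so a developer can run a whole mess of them in seconds.
def test_apply_discount_basic():
    assert apply_discount(100.0, 25.0) == 75.0


def test_apply_discount_zero_percent_is_a_no_op():
    assert apply_discount(100.0, 0.0) == 100.0


def test_apply_discount_rejects_invalid_percent():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150.0)
```

Because nothing here touches the outside world, a failure points directly at `apply_discount` itself, which is exactly the kind of immediately actionable feedback described above.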
That's, I think, how you'll find most developers get the most rewards from investing in these unit tests, these small tests. Then, as we move outward, we do in many cases need to test the boundaries between functions and the system as a whole.

You'll notice that for these small tests there are a lot of noes; in fact, they are all noes. That's really what I mean when I say isolation. A small test should have no network access, no database or file system access, and no access to any kind of external system; it should be very isolated. If function A does in fact talk to function B, then we're going to take that interaction and fake it through what you'll often hear referred to as a mock or a stub. We're going to write separate code, and this is part of the required investment in unit tests, that simulates the interaction so that we're really just testing A while controlling for B. That's how we get this speed and this isolation with these unit tests; there's a sketch of this just below. And you can see down here, they even publish a guideline on the longest an individual test should take. In a perfect world you're well under these limits, of course, because, again, you're actually doing the job of developing; you don't want to sit around and wait for the results, you want to be able to push a button and get feedback immediately.

When I say test stack, this is what I was talking about: you'll usually have a whole bunch of these unit tests, relatively fewer integration tests, and relatively fewer still system tests. You'll often hear about a test pyramid; I guess technically this is more of a ziggurat, because there are steps rather than a straight line. The reason for this shape is that you'll have a lot of individual functions that come together at key integration points, and then you'll really want just a very small number of these system tests, ones that simulate the key pathways, the critical things that absolutely cannot be broken when you push to production. The reason why is the speed and the isolation we talked about. Also, these larger tests are very expensive to keep working and maintain, mostly because simulating these interactions has a lot of inherent variation that can come into play. If you have a whole bunch of them, you may get a lot of false positives: a lot of things that show up as not working in the test but are in fact either irrelevant or not really broken at all, it's just the test itself that broke. And that is the opposite of what you want. You would never want to catch something up at this level that you could've caught down here, and you would never want something to bubble up to this level that you could've, for instance, caught with an integration test.

That matters as you go through your retrospectives on whether you're investing in the right places in your test coverage. If something bad happened in production and there was a retrospective on a bug or a production issue, why did that happen? We really dig into that, not to blame people, but to understand what the lowest-level test is that we possibly could have used to make sure that didn't happen, and to improve our focus on where we're doing our testing. So, that is an overview of the test stack.
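Here is a minimal sketch of that mock-or-stub idea, again in Python, using the standard library's unittest.mock. The functions, the service URL, and the dependency-injection style are hypothetical illustrations of "function A" and "function B", not a specific approach prescribed by the course.

```python
from unittest.mock import Mock

import requests  # only used by the real dependency, never by the test


# "Function B": talks to an external service, so it's off-limits in a small test.
def fetch_exchange_rate(currency: str) -> float:
    response = requests.get(f"https://api.example.com/rates/{currency}")
    response.raise_for_status()
    return response.json()["rate"]


# "Function A": the code under test. Taking the dependency as a parameter
# lets a test swap in a fake while production code uses the real thing.
def convert_to_usd(amount: float, currency: str,
                   rate_fetcher=fetch_exchange_rate) -> float:
    return round(amount * rate_fetcher(currency), 2)


# The small test controls for function B with a mock: no network access,
# a fixed return value, and a check that A called B the way we expect.
def test_convert_to_usd_applies_the_rate():
    fake_fetcher = Mock(return_value=0.5)
    assert convert_to_usd(10.0, "XYZ", rate_fetcher=fake_fetcher) == 5.0
    fake_fetcher.assert_called_once_with("XYZ")
```

The separate code we write, the `Mock` setup and the assertions about how it was called, is the investment mentioned above; in exchange, the test stays fast, isolated, and pinpoints failures in function A rather than in the network or the external service.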
What are these different types of tests, and why is each one important in its particular role and context?