In the application lesson, the random intermittent failure turned out to be a capacity issue. The services simply couldn't keep up with demand, and you were able to redesign the solution to add a storage service component alongside the web server component and the thumbnail-processing app server component. This leads to another problem in the log aggregation system: the log aggregator now needs to account for another source of log entries, and the log aggregation servers are running out of disk space. Take a closer look at this problem. Your challenge is to modify the log aggregation design to avoid or overcome this issue. Watch the lesson that describes the problem, then come up with your own solution. When you're ready, continue the lesson to see a sample solution. Remember that the sample solution is not the best possible solution; it's just an example, and your design might be different. So now it's your turn. Here's our design challenge. The logging structure is getting a little more complicated now, because look what's happening. Now that we've added a third component, the data storage service, it's generating its own logs. So our business logic now has to handle three different kinds of log types. In our case, the application can still function the same way: we still have to ingest the data, append the different log series together, transform them, and output them, and this happens on a daily basis. So here's the business logic. We now have to correlate two pairs of logs, because we've decided we want to be able to match the web server logs with the application logs, and of course the storage logs with the web logs as well. That way we can tell, for a given user session, whether there was an error in the application or an error in storage. And in this case, we should be able to get not conflicting, but dual confirming reports.
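The daily ingest, append, transform, and output flow described above can be sketched in a few lines of Python. This is a minimal illustration, not the course's actual script: the line format (`"<session_id> <component> <message>"`) and the session-based grouping are assumptions made for the example.

```python
from collections import defaultdict

def aggregate_daily(web_lines, app_lines, storage_lines):
    """Append the three daily log series, then group entries by session ID.

    Each line is assumed to look like "<session_id> <component> <message>".
    Grouping by session is what lets us match a web request with its
    application and storage activity when troubleshooting.
    """
    # Append step: combine the three log series for the day.
    combined = list(web_lines) + list(app_lines) + list(storage_lines)

    # Transform step: index every entry by its session ID.
    by_session = defaultdict(list)
    for line in combined:
        session_id, _, rest = line.partition(" ")
        by_session[session_id].append(rest)

    # Output step: one aggregated record per user session.
    return dict(by_session)
```

With this grouping, a single session shows its web, app, and storage entries side by side, which is exactly the "dual confirming reports" idea: a storage failure and its matching app-log entry land in the same record.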
If the application server failed, we should see the equivalent entries in the data storage logs. Or, if there's a failure in data storage, it should produce an entry in the app logs, though maybe not in both at the same time. So we're tracking these separately. We're now generating a lot more log files, but this will be very helpful for troubleshooting. Now, in this case, our logging server is outgrowing its disk. We had a simple virtual machine, and it was outgrowing its persistent disks. So what storage service should we take advantage of for this output? You might be thinking... I'll pause for a second while you're thinking. All right, that's enough time. You might be thinking: why not GCS? It certainly worked for the storage service for the images, so why not use it for the logging server? Well, that would be a perfectly good idea. However, what is unique about these log files? Let's take a look. The logs that are inputs into the aggregation logging server have outgrown the capacity of the server's disk. Now, remember, there are multiple designs we could take advantage of here, but go back and notice the format of these logs. These logs are, in a way, indexable. And that points toward some kind of relational or index-based data storage, as opposed to plain objects, because it's actually the contents inside these objects that matter most. So, in this case, one of the solutions we came up with was to store this data in Cloud Bigtable. Why? What are the characteristics? We want very fast data ingest, which will be great for future scale as these logs continue to grow. It offers very low latency, and it's indexable. It's a key-value data store, so we can transform our data and create a session key that combines all of this data into one log entry.
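In Bigtable, the fast ingest and low latency mentioned above depend heavily on row-key design. Here's a hedged sketch of the "session key" idea: the exact key format (`session#reversed-timestamp`) is an assumption for illustration, not the course's design, though reversed timestamps are a common Bigtable pattern for newest-first ordering.

```python
from datetime import datetime

def make_row_key(session_id: str, ts: datetime) -> bytes:
    """Build a Bigtable row key that clusters all of a session's log
    entries together, ordered newest-first within the session.

    Subtracting the millisecond timestamp from a fixed maximum
    ("reversed timestamp") makes recent entries sort first, and
    prefixing with the session ID keeps a session's web, app, and
    storage entries adjacent in the table.
    """
    max_ts_ms = 10**13  # larger than any realistic epoch-milliseconds value
    reversed_ts = max_ts_ms - int(ts.timestamp() * 1000)
    return f"{session_id}#{reversed_ts:013d}".encode("utf-8")
```

Because Bigtable stores rows in lexicographic key order, a prefix scan on `session_id#` retrieves one session's combined log entries in a single, low-latency read.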
Now, the logging server still exists, but it handles the ingest and the transformation and then outputs its storage to Bigtable. The other difference is that since we're writing to Bigtable rather than local storage, we have to use the Cloud Bigtable APIs. This introduces a little complexity. We could have used other things, and I've heard suggestions like, what about Dataflow? Well, that would be a complete rewrite of the entire process. Our engineer didn't have much time, so he simply took his existing Python script and included the Bigtable APIs for the write portion of it. Now, maybe later we can do something fancier, but this is only one solution that we came up with. All right, with that, that concludes this module. Thank you so much for watching.
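The write portion of such a Python script might look roughly like this. It's a sketch, not the course's actual code: it assumes the `google-cloud-bigtable` client library (whose `direct_row`, `set_cell`, and `commit` calls are real), and the `"logs"` column family and one-column-per-field layout are assumptions made for the example.

```python
def write_entry(table, row_key: bytes, entry: dict) -> None:
    """Write one transformed log entry to Bigtable instead of appending
    to a local file.

    `table` is expected to behave like a google-cloud-bigtable Table:
    direct_row() returns a row object supporting set_cell() and commit().
    Every field of the entry becomes a cell in the "logs" column family.
    """
    row = table.direct_row(row_key)
    for column, value in entry.items():
        row.set_cell("logs", column.encode("utf-8"), str(value).encode("utf-8"))
    row.commit()
```

In the real script, `table` would come from the client library (`bigtable.Client(...).instance(...).table(...)`); keeping the write behind one small function is what let the existing aggregation logic stay untouched while only the output path changed.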