Welcome to module one. We will be talking about an Anthos overview. In this module, we will start with a hybrid cloud overview: generally, what is the hybrid cloud, and how is it different from multi-cloud, for instance? Then we will talk about modern solutions for that architectural design, and then we will talk about the Anthos technology stack.

So, generally speaking, a hybrid cloud overview. Just to set the context, hybrid cloud is the idea of creating a bridge or a connection between your on-premises environment and the cloud environment, as opposed to multi-cloud, which is having multiple public cloud environments, let's say Amazon, Azure, and Google, and creating connections between those different public clouds. The main topic of our discussion, of our training, is hybrid cloud: the connection between your cloud environment and your on-premises environment.

Generally speaking, companies today really want to modernize their on-premises applications at their own pace, and they would like to find a way to do that inside their own premises rather than feel the pressure to move everything to the cloud, which can sometimes be risky. They want a more granular, step-by-step path toward modernizing their applications, getting some of the benefits of running in the cloud while keeping some of the workloads on-premises. That flexibility provides companies with a lot of business value, rather than just trying to put everything in the cloud as quickly as possible. So that is, generally speaking, a trend that we see now in the industry. And in a hybrid environment, when you have an extended environment across different places, across the cloud and your on-premises, you all of a sudden have a lot of things that you have to think about.

The first one is that you want to write once and deploy in any cloud, or anywhere else for that matter. As a developer, my dream is to be able to focus on the business logic, on the code that I want to run, and then deploy it anywhere, really. Maybe I want to deploy that workload on-premises and then move it to the cloud when I'm ready to. Maybe I would like to have this workload running on-premises, and during the holiday season I would like to extend it to the cloud, and with a single click I would be able to do that and have it expand there, so I don't have to buy more servers just for Christmas, for instance.

Next is accelerating developer velocity. The idea here is that developers are very scarce talent, they need to be very productive, and we would like to make sure they have the tools and the simplicity they need so they don't have to worry about the lower levels of the stack. So we need a technology stack that allows them to increase their productivity and, as a result, their velocity, because today you don't really differentiate with technology anymore; you differentiate with innovation, with the velocity of your developers. Therefore, we would like to make sure there are no obstacles for the developers to do their job and really iterate on the business logic, on what is core for your business.

Consistency across environments is another thing you have to think about when you extend your cloud environment to your on-premises. You want to make sure that your environments are in sync, or as similar as possible, so that there are fewer failure points and fewer variables to worry about.
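To make the "write once, deploy anywhere" and consistency ideas a bit more concrete, here is a minimal sketch, assuming two Kubernetes clusters (the model Anthos builds on) that are already registered as kubeconfig contexts. The context names "onprem" and "cloud", the application name, and the image are hypothetical, and the Python kubernetes client is used purely for illustration; this is not the course's own example.

```python
# Minimal sketch of "write once, deploy anywhere": one Deployment definition
# applied unchanged to an on-premises cluster and to a cloud cluster.
# The kubeconfig context names ("onprem", "cloud") are hypothetical.
from kubernetes import client, config


def build_deployment(replicas: int) -> client.V1Deployment:
    """One workload definition, reused as-is in every environment."""
    container = client.V1Container(
        name="shop-frontend",                         # hypothetical app name
        image="gcr.io/my-project/shop-frontend:1.0",  # hypothetical image
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    return client.V1Deployment(
        metadata=client.V1ObjectMeta(name="shop-frontend"),
        spec=client.V1DeploymentSpec(
            replicas=replicas,
            selector=client.V1LabelSelector(match_labels={"app": "shop-frontend"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "shop-frontend"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )


# Steady state on-premises, plus a holiday-season burst in the cloud:
# the spec is identical, only the target context and replica count differ.
for context, replicas in [("onprem", 3), ("cloud", 10)]:
    api_client = config.new_client_from_config(context=context)
    apps = client.AppsV1Api(api_client)
    apps.create_namespaced_deployment(namespace="default",
                                      body=build_deployment(replicas))
```

The point of the sketch is the shape of the workflow, not the specific client library: the workload is written once, and only the target environment and the replica count change, which is also what keeps the two environments looking as similar as possible.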
If I'm running the same environment across the cloud and my on-premises, I can worry less about compatibility or any other issues that can come up.

Then there is interoperability with legacy workloads. Maybe you have a mainframe on-premises. Maybe you have some workloads running in some bespoke way that you created on-premises, and you want that extension from the cloud into your on-premises to work nicely with those things, because at the end of the day you are trying to enhance your on-premises presence, the workloads that are already running there. So this is another thing we have to think about when we do this hybrid connectivity.

You also want increased observability and SLOs. The idea here is that you want some formal way to monitor both of these environments, ideally in one tool, so that you can see continuity between the two environments: if there is a problem here and maybe a problem there, where did it start, where did it end, and so on. So we want increased visibility (there is a small illustrative sketch of this at the end of this section).

We also want to decouple critical components and increase workload mobility, so we can move things across different places if we need to, for disaster recovery and high availability. Maybe we want to have some of our workloads at the edge of our network, at the edge of the cloud, so that they are as close as possible to our users, while maintaining some independence and keeping some things on-premises for data sovereignty or other technical requirements. And of course, we want to avoid vendor lock-in, because we don't want to be tied to anything; we want to maintain our flexibility. This is really important, and this is why we use open source as much as possible.
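As a purely illustrative sketch of the SLO idea mentioned above, the same availability target can be evaluated the same way for both environments, so a problem on either side shows up in one consistent view. The environment names, request counts, and target are made up, and nothing here is a specific Anthos or monitoring-tool API; in practice these counts would come from a single monitoring tool that sees both environments.

```python
# Illustrative only: one SLO definition applied uniformly to both environments.
# The numbers are made up; a real setup would pull them from a monitoring tool
# that observes both the cloud and the on-premises clusters.
SLO_TARGET = 0.999  # 99.9% of requests should succeed over the window

windows = {
    "on-prem": {"total": 1_200_000, "failed": 900},
    "cloud":   {"total": 3_400_000, "failed": 4_200},
}

for env, w in windows.items():
    availability = 1 - w["failed"] / w["total"]
    error_budget = 1 - SLO_TARGET                       # allowed failure fraction
    budget_used = (w["failed"] / w["total"]) / error_budget
    print(f"{env}: availability={availability:.5f}, "
          f"error budget used={budget_used:.0%}")
```

With these made-up numbers, the on-premises side has used about 75% of its error budget while the cloud side is over budget, which is exactly the kind of cross-environment comparison a single observability view makes possible.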