In step 4, we use a network to distribute the server software and the administration while preserving separation of duty. In step 3, everything was on a single computer. Step 4 moves the web service to a dedicated host. This makes it easier to scale performance to match the user load: if the load increases, we add more resources.

Our service now takes advantage of cloud technology. In this case the enterprise owns the hardware, installs it on its own site, and deploys the appropriate software for cloud-oriented scalability. We call this a private cloud, as opposed to a public cloud. This choice indicates our cloud deployment model. In a private cloud deployment, we have computer hardware that's exclusively assigned to our enterprise. The hardware resides on our own site, so it's a private on-site deployment. An enterprise can also host a private cloud service at a service provider; that's an outsourced off-site deployment. This is like renting a parking space in a garage that might also service your car. Most cloud services share their resources among multiple customers. That is more like renting space in a large office building: the owner manages the infrastructure, including some of the security. We talk about a public cloud in step 6. Here in step 4, we're looking at a private on-site deployment. Even though we can host cloud services at this step, cloud features don't really affect the cybersecurity, not yet.

Modern web server software rarely runs as a single application. A typical site uses a separate database server for the website's data. Web and database server software are large, independent software packages, and sites often run them on separate network hosts.

This diagram illustrates the network structure we want, rather than what we will actually build. The host with the web service has two separate network connections, because we want to keep our Internet traffic separate from our local network traffic.
The administrators don't want to share local network services with Internet users. A typical site shares a single Internet connection between its servers and the other computers in the enterprise. This is how we end up with our local network traffic combined with the incoming website traffic. A trust boundary might block physical attacks, but the network connection opens us up to logical attacks. The remote administration service becomes part of our attack surface, and the administrators' client computers add to the attack surface as well.

Network gateways generally provide a firewall to restrict traffic flow. The firewall uses traffic filtering to control access between different parts of the network. The previous module looked at access permissions in terms of reading and writing. Those terms are fine for RAM or drive storage, but they're less effective for describing network access. Let's say an administrator sends a message to the web server. The administrator might call it a write operation, but the web server sees it as a read operation, since it receives the data. Traffic-filtering permissions end up describing whether one network can exchange messages with another network. In our application, the filter also restricts the types of traffic: the LAN can exchange administrative traffic with the web server, and the public Internet can exchange web traffic with the web server, but the Internet can't exchange traffic with the LAN.

These traffic restrictions rebuild the trust boundaries around our networks. The web server still resides within our enterprise trust boundary, but it's outside the local network's trust boundary. No matter how carefully we manage our laptops, desktops, and mobile devices, there are Internet threats we want to avoid. The web server's safe zone is often called the DMZ; the acronym was lifted from the politics of warfare. Servers and other computers on the DMZ have special restrictions, and the firewall protects them by blocking most network traffic.
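As a concrete illustration, the filtering policy just described can be modeled as a small default-deny rule table. This is a toy sketch in Python, not a real firewall configuration; the zone names and the rule format are assumptions made for the example.

```python
# A minimal, hypothetical model of the traffic-filtering policy described
# above. Zone names ("LAN", "WEB_SERVER", "INTERNET") and the rule format
# are illustrative assumptions, not a real firewall configuration.

# Each rule covers a message *exchange* between two zones, not a one-way
# read or write: (source zone, destination zone, traffic type, permit).
RULES = [
    ("LAN",      "WEB_SERVER", "admin", True),   # LAN may administer the server
    ("INTERNET", "WEB_SERVER", "web",   True),   # Internet may fetch web pages
    ("INTERNET", "LAN",        "any",   False),  # Internet may not reach the LAN
]

def allowed(src, dst, traffic):
    """Return True if the policy permits this exchange; default-deny."""
    for rule_src, rule_dst, rule_traffic, permit in RULES:
        if rule_src == src and rule_dst == dst and rule_traffic in (traffic, "any"):
            return permit
    return False  # anything not explicitly allowed is blocked

print(allowed("LAN", "WEB_SERVER", "admin"))       # -> True:  admin traffic from the LAN
print(allowed("INTERNET", "WEB_SERVER", "web"))    # -> True:  public web traffic
print(allowed("INTERNET", "WEB_SERVER", "admin"))  # -> False: admin from the Internet
print(allowed("INTERNET", "LAN", "web"))           # -> False: Internet to LAN
```

The key design choice the lecture describes is visible in the last line of `allowed`: anything the rules don't explicitly permit is blocked, which is what rebuilds the trust boundary around the LAN.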
The firewall allows Internet traffic to visit the web server. It allows administrative traffic, but that traffic must originate from the enterprise LAN and not the Internet. Firewalls are not foolproof. Traffic filtering is always a compromise between traffic enforcement and network performance. Firewalls do the best they can, but many attacks can bypass the filters.

We expect employees to always act in the enterprise's best interest, but that's not always realistic. We must acknowledge the insider threat. Security measures often focus on villains, people who do intentional harm, but insider threats aren't all fueled by malice. In this application, operator errors may pose a bigger risk than intentional interference. Even sophisticated users fall victim to malware; sometimes it just requires the attacker to try often enough. Have you ever clicked on an email you shouldn't have? Once malware infects one computer in a safe zone, it spreads more easily to its neighbors. A sophisticated attack may require several steps like this to reach its ultimate target.

Organizations grow. What happens when we add a few more people to our trustworthy network? From a security standpoint, it erodes least privilege: unlike Internet visitors, these new employees can reach the server's administrative functions. We often implement least privilege through a balance of safety and convenience. We might feel we can trust ten additional employees, but what if we add another 80 people to our safe zone? It looks a lot less safe. On a smaller enterprise network, an employee may only occasionally turn out to be a villain or a fool, but that's much more common in a larger community.

We reduce our attack surface by moving the administrators to their own, safer network. We use traffic filtering to put a trust boundary between their network and the larger enterprise network, and we add an extra measure of safety by blocking the administrative clients from direct Internet access.
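The arrangement just described can be sketched the same way, as a default-deny set of permitted exchanges. Again, this is an illustrative Python sketch, not a real firewall configuration; the zone names ("ADMIN_NET", "LAN", "DMZ", "INTERNET") and the assumption that ordinary LAN users may browse the Internet are mine, added for the example.

```python
# A hypothetical sketch of the policy with a separate administrative
# network. Zone names and the LAN-browsing rule are assumptions for
# illustration, not taken from a real deployment.

# Default-deny: only the listed (source, destination, traffic) exchanges
# are permitted.
ALLOWED = {
    ("INTERNET",  "DMZ",      "web"),    # Internet clients may only retrieve web pages
    ("ADMIN_NET", "DMZ",      "admin"),  # admin traffic must originate on the trusted network
    ("LAN",       "INTERNET", "web"),    # assumed: ordinary employees may browse the Internet
}

def allowed(src, dst, traffic):
    return (src, dst, traffic) in ALLOWED

# To reach the administrative network, an outsider must cross more than
# one boundary -- there is no direct path from the Internet.
print(allowed("INTERNET", "ADMIN_NET", "web"))   # -> False: no direct path inward
print(allowed("LAN", "DMZ", "admin"))            # -> False: admin traffic only from ADMIN_NET
print(allowed("ADMIN_NET", "INTERNET", "web"))   # -> False: admin clients have no Internet access
print(allowed("ADMIN_NET", "DMZ", "admin"))      # -> True
```

Note that even the larger enterprise LAN can no longer reach the server's administrative functions; only the small administrative network can, which is what restores least privilege as the organization grows.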
We call this arrangement a layered defense. The most serious attacks on the server target its administrative functions, and those must come from the administrative network. To reach the administrative network, attackers must first get inside the enterprise network.

This brings us almost full circle to our original step 4 plan. The administrative computers reside on a trusted network. Administrative services for the servers must originate on the trusted network. Internet clients may only retrieve web pages. We retain our least privilege arrangement from step 3, and we retain separation of duties. There is at least one practical drawback: it's hard to maintain separate networks dedicated to specific tasks.

Step 4 relies on the network structure to enforce our security constraints. The rest of this lesson looks at how we build that structure. First, we look at network addressing: how we distinguish between hosts. Second, we look at how the network protocols are arranged in layers and how that affects network behavior. Third, we look at networking devices and how those interact with the protocol layers. [MUSIC]