Dedicated Interconnect provides a direct physical connection between your on-premises network and Google's network. This enables you to transfer large amounts of data between networks, which can be more cost-effective than purchasing additional bandwidth over the public internet. In order to use Dedicated Interconnect, you need to provision a cross connect between the Google network and your own router in a common colocation facility, as shown in this diagram. To exchange routes between the networks, you configure a BGP session over the interconnect between the Cloud Router and the on-premises router. This allows user traffic from the on-premises network to reach GCP resources on the VPC network, and vice versa. Dedicated Interconnect can be configured to offer a 99.9% or 99.99% uptime SLA.

Creating a Dedicated Interconnect connection is as simple as these four steps: order your Dedicated Interconnect; send the LOA-CFAs to your vendor; create VLAN attachments and establish BGP sessions; and test the interconnect. For a demo on how to create a Dedicated Interconnect, see the link attached to this video.

Dedicated Interconnect supports a minimum of ten gigabits per second per connection. If you need less bandwidth, you should use Partner Interconnect, which is provided through a carrier or service provider. In order to use Dedicated Interconnect, your network must physically meet Google's network in a supported colocation facility. This map shows the locations where you can create dedicated connections. For a full list of these locations, see the documentation attached to this video.

Now, you might look at this map and say, well, I'm nowhere near any of these locations. That's when you want to consider Partner Interconnect. Partner Interconnect provides connectivity between your on-premises network and your VPC network through a supported service provider.
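Before moving on to Partner Interconnect, the four Dedicated Interconnect provisioning steps mentioned earlier can be sketched with the gcloud CLI. The resource names, region, ASNs, and location below are placeholders, and the LOA-CFA and testing steps happen outside the CLI, so treat this as an illustrative sketch rather than an exact runbook:

```shell
# Step 1: Order the Dedicated Interconnect. Google then emails you the
# LOA-CFA documents (step 2), which you send to your facility vendor.
gcloud compute interconnects create my-interconnect \
    --interconnect-type=DEDICATED \
    --link-type=LINK_TYPE_ETHERNET_10G_LR \
    --requested-link-count=1 \
    --location=my-colo-facility \
    --customer-name="Example Corp"

# Step 3: Create a VLAN attachment on a Cloud Router and establish a
# BGP session with the on-premises router.
gcloud compute routers create my-router \
    --network=my-vpc --region=us-central1 --asn=65001
gcloud compute interconnects attachments dedicated create my-attachment \
    --interconnect=my-interconnect --router=my-router --region=us-central1
gcloud compute routers add-interface my-router \
    --interface-name=my-attachment-if \
    --interconnect-attachment=my-attachment --region=us-central1
gcloud compute routers add-bgp-peer my-router \
    --peer-name=on-prem-peer --interface=my-attachment-if \
    --peer-ip-address=169.254.0.2 --peer-asn=65002 --region=us-central1

# Step 4: Test the interconnect, for example by pinging a VM's internal
# IP address from the on-premises network once BGP is established.
```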
This is useful if your data center is in a physical location that can't reach a Dedicated Interconnect colocation facility, or if your data needs don't warrant a Dedicated Interconnect. In order to use Partner Interconnect, you work with a supported service provider to connect your VPC and on-premises networks. See the documentation linked to this video for a full list of providers. These service providers have existing physical connections to Google's network that they make available for their customers to use. After you establish connectivity with a service provider, you can request Partner Interconnect connections from that provider. Then you establish a BGP session between your Cloud Router and on-premises router to start passing traffic between your networks via the service provider's network. Partner Interconnect can be configured to offer a 99.9% or 99.99% uptime SLA between Google and the service provider. See the Partner Interconnect documentation linked to this video.

Let's compare the interconnect options that we just discussed. All of these options provide internal IP address access between resources in your on-premises network and in your VPC network. The main differences are the connection capacity and the requirements for using a service. The IPsec VPN tunnels that Cloud VPN offers have a capacity of 1.5 to 3 gigabits per second per tunnel and require a VPN device on your on-premises network. You can configure multiple tunnels if you want to scale this capacity. Dedicated Interconnect has a capacity of ten gigabits per second per link and requires you to have a connection in a Google-supported colocation facility. You can have up to eight links to achieve multiples of ten gigabits per second, but ten gigabits per second is the minimum capacity. Partner Interconnect has a capacity of 50 megabits per second up to 10 gigabits per second per connection, and the requirements depend on the service provider.
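As a quick sanity check on those capacity figures, here is the arithmetic in a short sketch; the tunnel count and link count are just example values, not recommendations:

```shell
# Rough capacity math for the comparison above, in Mbps.
# Cloud VPN: 1.5-3 Gbps per tunnel; scale out by adding tunnels.
tunnels=4
vpn_total=$((tunnels * 3000))        # 4 tunnels at 3 Gbps each
# Dedicated Interconnect: 10 Gbps per link, up to 8 links.
links=8
dedicated_total=$((links * 10000))   # the maximum of 8 links
echo "VPN: ${vpn_total} Mbps, Dedicated: ${dedicated_total} Mbps"
```

So four full-speed VPN tunnels top out around 12 Gbps, while a maxed-out Dedicated Interconnect reaches 80 Gbps.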
Our recommendation is to start with VPN tunnels. When you need enterprise-grade connections to GCP, switch to Dedicated Interconnect or Partner Interconnect, depending on your proximity to a colocation facility and your capacity requirements.
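That decision rule can be written down as a small helper. The function name and thresholds are illustrative approximations of the capacities discussed above, not official guidance:

```shell
# Hypothetical helper: pick a connectivity option from the required
# bandwidth (in Mbps) and whether you can meet Google's network in a
# supported colocation facility ("yes"/"no").
suggest_connection() {
  local mbps=$1 can_reach_colo=$2
  if [ "$mbps" -le 3000 ]; then
    echo "Cloud VPN"                  # start with VPN tunnels
  elif [ "$can_reach_colo" = "yes" ] && [ "$mbps" -ge 10000 ]; then
    echo "Dedicated Interconnect"     # 10 Gbps links in a colo facility
  else
    echo "Partner Interconnect"       # via a supported service provider
  fi
}

suggest_connection 1500 no     # -> Cloud VPN
suggest_connection 20000 yes   # -> Dedicated Interconnect
suggest_connection 5000 no     # -> Partner Interconnect
```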