Fifty percent of new cloud deployments suffer business-impacting performance issues due to the network*. Cloud networking still has a long way to go to support future cloud application architectures.
At a high level, there are two ways an enterprise can connect to a Cloud Service Provider (CSP) today:
- Internet – Create a VPN tunnel (usually using IPsec) between the enterprise and the cloud provider. An estimated 95% of enterprise-to-cloud traffic uses this model. Most IaaS providers (AWS, Azure, Google) charge based on usage, such as 5 cents per GB (gigabyte, not gigabit) of traffic, and rates go down as volume goes up.
- Direct – A private network connection, usually 1 or 10 Gbps, most commonly using an Ethernet cross connect. Most IaaS providers charge a fixed fee, such as $1,000/month for unlimited usage over a 1 Gbps link. There are multiple ways to do this:
- Private – If you are a big enough company with enough demand, you can get fiber or very high-speed Ethernet services directly.
- Carrier-Neutral Co-lo – Buy space in Equinix, CoreSite, Cologix, or another carrier-neutral co-location facility, run high-speed connections from enterprise data centers to these facilities, and buy cross connects to the cloud providers.
- Network Service Providers (NSPs) – AT&T NetBond and Verizon Secure Cloud Interconnect are examples of NSPs that allow an enterprise to connect its existing VPLS or MPLS networks to the very large cloud providers.
- Cloud Network Aggregators – Use Megaport or another cloud aggregator to connect with cloud providers on demand, with the option of paying based on consumption. Getting a connection from the enterprise to the aggregator takes time and expense to build initially.
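The trade-off between the two connectivity models above can be sketched with a quick break-even calculation, using the illustrative rates from the text ($0.05/GB over the Internet versus a flat $1,000/month for an unlimited 1 Gbps direct connect); actual rates vary by provider, region, and volume tier:

```python
# Break-even sketch for the two connectivity models, using the article's
# illustrative rates. Real pricing is tiered and provider-specific.

INTERNET_RATE_PER_GB = 0.05   # usage-based Internet/VPN model, $/GB
DIRECT_FLAT_MONTHLY = 1000.0  # fixed fee for an unlimited 1 Gbps direct link

def internet_cost(gb_per_month: float) -> float:
    """Monthly cost under the usage-based Internet model."""
    return gb_per_month * INTERNET_RATE_PER_GB

def breakeven_gb() -> float:
    """Traffic volume at which the direct connect starts to pay off."""
    return DIRECT_FLAT_MONTHLY / INTERNET_RATE_PER_GB

if __name__ == "__main__":
    print(f"Break-even: {breakeven_gb():,.0f} GB/month")  # 20,000 GB (~20 TB)
    for gb in (5_000, 20_000, 50_000):
        cheaper = "Internet" if internet_cost(gb) < DIRECT_FLAT_MONTHLY else "Direct"
        print(f"{gb:>7,} GB/month -> Internet ${internet_cost(gb):,.0f}, cheaper: {cheaper}")
```

At these rates the crossover is roughly 20 TB/month; below that, usage-based Internet connectivity is cheaper, and above it the fixed-fee direct connect wins.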
To date, Cloud 1.0 has had enterprises deploying applications used to manage the business. Microsoft's O365 is a great example. If Outlook or SharePoint performs slowly, it is irritating to employees and has a small impact on productivity, but it is not customer facing. Cloud 1.0 networking primarily uses the Internet, where QoS can be problematic.
Cloud 2.0 is the migration of the critical applications that run the business, such as customer-facing contact center and point-of-sale applications, where performance issues impact a company's revenues and reputation. Cloud 2.0 applications hold confidential and financial data that must be highly secure, and they rely mostly on direct connections today.
The problem with both of these connectivity models is that they are point-to-point connections, whether Internet-tunnel based or layer 2 direct connects. Figure 1 is a high-level overview of a Cloud Interconnect configuration.
Figure 1 – Point-to-Point Cloud Interconnects
Cloud 3.0 is a multi-cloud architecture. Enterprises leading the charge to put mission-critical applications in the cloud have found that to keep costs competitive, ensure high reliability, and provide optimal performance, it is best to run these critical applications distributed across multiple cloud providers and many different locations worldwide. Cloud 3.0 will drive a new networking architecture.
Cloud Networking 3.0 inter-networks services by allowing applications to control network performance and security through APIs. Unlike today's Internet or direct connections, which drop you off at the "front door" of a cloud provider and then force you onto its proprietary network and security solutions, networking and security will run alongside the applications inside the private or cloud data center. TLS 1.3 will further ensure that nothing in the middle of the network can see into the packet.
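As a rough illustration of what API-driven network control could look like, the sketch below shows the kind of declarative intent an application might hand to the network layer. Every name and field here is hypothetical, not any vendor's actual API:

```python
# Hypothetical sketch of application-driven network control: the application
# declares the performance and security it needs, and the network layer is
# expected to enforce it. Illustrative only; no real product API is implied.
import json

def network_intent(app: str, max_latency_ms: int, require_tls13: bool) -> str:
    """Build a declarative intent document for the network layer."""
    intent = {
        "application": app,
        "performance": {"max_latency_ms": max_latency_ms},
        "security": {"require_tls_1_3": require_tls13},
    }
    return json.dumps(intent, sort_keys=True)

print(network_intent("contact-center", 150, True))
```

The point of the model is the inversion of control: the application states its requirements, rather than the network team hand-configuring tunnels and ACLs per site.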
Cloud Networking 3.0 will have the following attributes:
- Highly Scalable – Millions of underlying connections and segments that go from containers hosting an application to other containers or users.
- IP Address Independent – A naming schema that uses words and allows users and services to be mobile (location independent). Routing and security policies use words that humans and machines can understand, not a cloud or network provider's fixed IPv4/IPv6 addresses and TCP/UDP port numbers.
- 99.999% Reliable – Deterministic and dynamic routing that is transport agnostic, with sub-second mid-session rerouting to the best path based on link performance.
- Auto-segmentation – Which users or services can talk to which is based on a directory structure that defines identity and access controls.
- Zero Trust – No broadcast domains or default routes. A packet getting on the network does not go anywhere unless there is an explicit policy to allow it.
- Application-Centric Network Enhancements – Higher-level network functions that further ensure the performance and security specific to an application. Multicast and content distribution, accelerated file transfers, Session Border Controllers for voice apps, and API performance and security controls are a few examples of the higher-level network functions that IoT, AR/VR, and other advanced applications will require.
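The IP-address-independent and zero-trust attributes above can be sketched together: services carry human-readable names from a directory, and a packet is forwarded only when an explicit policy allows the source/destination pair. The names and structure below are hypothetical, chosen only to illustrate the model:

```python
# Minimal sketch of name-based, default-deny segmentation as described above.
# All service names and the directory layout are hypothetical.
from typing import Set, Tuple

# Directory of named services; words, not IPs and ports, carry identity.
DIRECTORY: Set[str] = {
    "hr.payroll-app", "hr.payroll-db",
    "sales.pos-terminal", "sales.pos-backend",
}

# Explicit allow-list derived from the directory: auto-segmentation.
POLICY: Set[Tuple[str, str]] = {
    ("hr.payroll-app", "hr.payroll-db"),
    ("sales.pos-terminal", "sales.pos-backend"),
}

def is_allowed(src: str, dst: str) -> bool:
    """Zero trust: a packet goes nowhere unless an explicit policy allows it."""
    return src in DIRECTORY and dst in DIRECTORY and (src, dst) in POLICY

print(is_allowed("hr.payroll-app", "hr.payroll-db"))      # True
print(is_allowed("sales.pos-terminal", "hr.payroll-db"))  # False: no policy
```

Note the absence of any default route or broadcast behavior: an unknown or unlisted pair is simply dropped, which is the "explicit policy to allow it" property from the list above.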
Figure 2 is an example of Cloud Internetworking in which routing and security run not only at the edge of the data center but also in the compute areas with the applications.
Figure 2 – Cloud Internetworking
Where can you buy a cloud internetworking solution today? There is no single turnkey solution yet. Many SD-WAN vendors are pivoting to play in this market, since having the branch office talk directly to the cloud-based application and avoiding network backhaul is one of their core areas of business value.
Today's SD-WAN vendors face three inherent challenges in future cloud inter-networking. First, they rely on IPsec tunnels, which add significant encryption overhead and do not scale well. With every critical application using TLS, the need for network-layer encryption goes away, and with TLS 1.3 their ability to perform any type of firewall or security function in the middle of a network path is severely limited. Second, they rely on VxLAN for segmentation, and layer 2 segmentation is not as robust as layer 3 for network performance or security. Third, most do not offer higher-level application-centric networking services.
So, who will thrive in the cloud inter-networking market?
- Enterprise IT – Low probability – They are inherently slow and risk averse, and while networking and security are the last things enterprise IT fully controls, most app teams want the ownership and control that the cloud provides them. 95% of SD-WAN deployments are appliance-based because enterprise IT does not have a virtualization strategy for the branch office, and corporate politics makes it easier for the network team to own and control an appliance than hardware shared with other functions, such as a local print server.
- NSP – Low probability – Each has its own proprietary solution, and no enterprise can afford, nor desires, to lock into a single NSP. vCPE is something NSPs offer, but it has seen very little market adoption to date. NSPs want multi-year contracts, to upsell commodity bandwidth, and to use their own networks.
- Co-lo – Low probability – While they make good money selling space and power, with the value-add of being on the network "super-highways," everything they do is layer 2 based. Owning and managing hardware and software for third parties is not part of their business plan.
- Cloud Network Aggregators – Maybe – They are a neutral party and can support different co-lo facilities/owners and many NSPs and CSPs. Their APIs for near-real-time provisioning, and consumption-based pricing without investing in co-lo space or hardware, are appealing. They will need to acquire routing software that can run within AWS, Azure, Google, and the rest, not just provide a front-door inter-connection.
- Cloud IaaS Providers – Maybe – All the big cloud providers have built their own large private networks between their data centers and into co-los. They did not rely on traditional network vendors like Cisco, so they are open to low-cost, new approaches. Microsoft and Google are already licensed telecom providers in the U.S.
- Cloud SaaS Providers – Likely – To deliver end-to-end quality of user experience and security for their services and applications, they need to control the network end to end. They can provide application-centric networks that add features to accelerate and further secure their applications. No one wants to support an "application gateway" at the edge of the network, but those who do will provide greater QoE for their users along with additional security controls.
The current network architecture – a simple edge, an intelligent distribution layer where routing and security occur, and a high-speed core – will change. Routing and security will reside at the very edge, where users and applications are. The middle of the network will be just a simple, high-speed IP data plane that passes encrypted and authenticated packets. Who will control and manage the network edge is TBD, but everyone in between the edge points will be just a commodity network bit hauler.