The hybrid cloud by definition opens a channel of communication between an organization’s internal network and one or more cloud providers. Whilst each cloud provider takes a different approach to providing management access, usually via a web management console and an API, this blog post focuses on Amazon AWS.
The Hybrid Cloud: Network Infrastructure
The first point to consider when testing a hybrid cloud is networking.
- How are the corporate environment and the cloud environment connected in order to allow two-way communication between internal corporate services and cloud-hosted services?
- What usability and security implications does that have?
The usual method would be an IPsec VPN tunnel, with a hardware VPN concentrator on the corporate side, and an Amazon “virtual private gateway” in the AWS VPC (Virtual Private Cloud). The IP address ranges of each side would then be transparently advertised for routing, so that systems on either end do not have to be aware of the VPN tunnel to send and receive traffic from the other side.
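Before routes can be advertised transparently, the two address plans must not collide. A minimal sketch of this check, using Python's standard `ipaddress` module with hypothetical corporate and VPC CIDR ranges (the real ranges would come from the routing configuration under review):

```python
import ipaddress

def find_overlaps(corporate_cidrs, vpc_cidrs):
    """Return pairs of corporate/VPC CIDRs that overlap and would
    break transparent routing across the VPN tunnel."""
    overlaps = []
    for corp in map(ipaddress.ip_network, corporate_cidrs):
        for vpc in map(ipaddress.ip_network, vpc_cidrs):
            if corp.overlaps(vpc):
                overlaps.append((str(corp), str(vpc)))
    return overlaps

# Hypothetical address plan: corporate 10.0.0.0/16, VPC 10.1.0.0/16
print(find_overlaps(["10.0.0.0/16"], ["10.1.0.0/16"]))  # disjoint: fine
print(find_overlaps(["10.0.0.0/8"], ["10.1.0.0/16"]))   # a broad /8 collides
```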
Once the VPN configuration has been determined and understood, the configuration at each end can be reviewed in a similar way to a normal VPN configuration review. Some companies may add another layer on top of the VPN, in order to have one subnet across both sides of the tunnel, making the distinction between hosts on either side even more transparent (the IP address alone would not imply where the host resides). This is sometimes called the “underlay and overlay” model, where each host requires a client and a configuration file to join the overlay network. This adds an extra network interface to every host, which can complicate things and lead to vulnerabilities if services are accidentally exposed on both interfaces when they should only be exposed on one, or if firewall rules are only implemented for one of the interfaces.
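The dual-interface pitfall can be checked mechanically. The sketch below is illustrative only: the listener inventory is hypothetical data (in practice it might be gathered from `ss -lntp` on each host), and it simply flags services bound to the wildcard address, which are reachable via both the underlay and overlay interfaces:

```python
# Hypothetical listener inventory: (process, bind_address, port).
LISTENERS = [
    ("sshd", "0.0.0.0", 22),
    ("internal-api", "10.240.0.5", 8080),  # bound to overlay interface only
    ("metrics", "0.0.0.0", 9100),
]

def wildcard_binds(listeners):
    """Flag services listening on all interfaces; on an overlay host
    these are exposed on both the underlay and overlay networks."""
    return [(proc, port) for proc, addr, port in listeners
            if addr in ("0.0.0.0", "::")]

for proc, port in wildcard_binds(LISTENERS):
    print(f"{proc} listens on all interfaces (port {port}) - check firewall rules")
```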
In addition to the VPN tunnel, a company may want to use different network routing for their hybrid cloud. An example of this is AWS Direct Connect, which allows traffic to flow from the company’s datacenter to AWS over dedicated Amazon links, bypassing Internet routing. This usually reduces the number of network hops and the network latency, so is primarily a performance optimization, but can be used to satisfy regulatory requirements if data is not allowed to flow over the public Internet (but is allowed in AWS).
To achieve this, the company must host their corporate systems in a non-Amazon datacenter to which Amazon provides Direct Connect links. A direct fiber cable is then connected between the company’s router and Amazon’s router, with each device advertising its relevant IP address space via BGP. This BGP configuration should be reviewed to determine which parts of the company’s internal network are being advertised to AWS, as this determines the extent of the internal network that can be reached by AWS cloud hosts and services.
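The core of that BGP review can be sketched as a prefix comparison: which sensitive internal subnets fall inside the prefixes advertised over the Direct Connect session? The prefixes below are hypothetical examples; a real review would use the router's actual BGP export configuration:

```python
import ipaddress

def advertised_sensitive(advertised, sensitive):
    """Return sensitive internal subnets that fall inside prefixes
    advertised over the Direct Connect BGP session."""
    findings = []
    for prefix in map(ipaddress.ip_network, advertised):
        for subnet in map(ipaddress.ip_network, sensitive):
            if subnet.subnet_of(prefix):
                findings.append((str(subnet), str(prefix)))
    return findings

# Hypothetical finding: a broad /8 advertisement exposes the management VLAN.
print(advertised_sensitive(["10.0.0.0/8"], ["10.99.0.0/24"]))
```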
Securing the internal network from the cloud
Careful consideration must be given to the networking configuration of both the VPN and Direct Connect options, as they give the third-party cloud provider (in this case Amazon and the services within the VPC) a direct link into the company’s internal network, either virtual or physical. For this reason, the company’s VPN concentrator or Direct Connect router should be considered an ingress point into the internal network and appropriate firewalls should be installed to control the incoming and outgoing traffic, in addition to the routing advertisements.
Administrative services, such as the management interfaces of the switches, routers and firewalls enabling the Direct Connect link, should not be accessible from the corporate network (except from specific management sources) or from within the Amazon VPC. Should a VPC host be compromised, an attacker may be able to gain access to these administrative services to perform Man-in-the-Middle or Denial of Service attacks on the Direct Connect link, allowing them to intercept or block all traffic flowing between the two sides.
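Checking for this class of exposure amounts to auditing firewall rules for management ports reachable from the VPC range. A minimal sketch, assuming a hypothetical VPC CIDR, an assumed set of management ports, and rules represented as simple dictionaries rather than any particular vendor's format:

```python
import ipaddress

MGMT_PORTS = {22, 161, 443}  # SSH, SNMP, HTTPS management (assumed set)
VPC_CIDR = ipaddress.ip_network("10.1.0.0/16")  # hypothetical VPC range

def exposed_mgmt_rules(rules):
    """Flag firewall rules that let VPC-side hosts reach management ports."""
    findings = []
    for rule in rules:
        src = ipaddress.ip_network(rule["source"])
        if rule["port"] in MGMT_PORTS and src.overlaps(VPC_CIDR):
            findings.append(rule)
    return findings

rules = [
    {"source": "10.1.0.0/16", "port": 22, "action": "allow"},  # reachable from VPC
    {"source": "10.0.5.0/24", "port": 22, "action": "allow"},  # mgmt subnet only
]
print(exposed_mgmt_rules(rules))
```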
Attacks from the Internet
Many companies using the hybrid cloud model treat the cloud as an extension of their private internal network, without using any of the cloud’s public Internet-facing components (for example, using private compute resources but not serving public web sites). However, as public cloud providers are necessarily connected to the Internet (to support other customers who do require Internet-facing services), care should be taken when this is the desired configuration. The VPCs should not have direct inbound or outbound Internet access (such as via Internet Gateways), and the EC2 instances should not have public IP addresses attached to them. However, because two-way communication exists between the cloud network and the corporate network, cloud resources may be able to bypass this restriction and access the Internet using gateways or web proxies on the corporate side via the VPN tunnel. If this is not desired, care should be taken to restrict which corporate systems can be accessed by cloud resources.
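The first half of that check can be automated against the account's own inventory. The sketch below runs over a mock VPC description; the field names loosely mirror what the EC2 API returns, but the shape is a simplified assumption, and real data would come from calls such as `describe-internet-gateways` and `describe-instances`:

```python
def audit_private_vpc(vpc):
    """Check a VPC description (shaped loosely like EC2 API output)
    for direct Internet exposure."""
    issues = []
    if vpc.get("InternetGateways"):
        issues.append("VPC has an Internet Gateway attached")
    for inst in vpc.get("Instances", []):
        if inst.get("PublicIpAddress"):
            issues.append(f"{inst['InstanceId']} has a public IP")
    return issues

vpc = {
    "InternetGateways": [],
    "Instances": [
        {"InstanceId": "i-0abc", "PublicIpAddress": None},
        {"InstanceId": "i-0def", "PublicIpAddress": "203.0.113.10"},
    ],
}
print(audit_private_vpc(vpc))
```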
Other channels for infiltrating and exfiltrating data also need to be considered in a cloud setup designed not to have Internet access. The cloud DNS setup may allow for DNS tunnelling directly, or, if cloud DNS requests are forwarded on to the corporate DNS servers via the VPN tunnel, tunnelling may be possible through the corporate resolvers instead.
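A crude detection heuristic for DNS tunnelling is to flag query names with unusually long or high-entropy labels, since tunnelling tools encode data into the hostname. The thresholds below are illustrative assumptions, not tuned values, and the second example query is a made-up encoded-looking name:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character of the string."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def suspicious_query(name, max_label=40, entropy_threshold=3.5):
    """Crude DNS-tunnelling heuristic: very long or high-entropy labels
    often indicate data encoded into the query name."""
    labels = name.rstrip(".").split(".")
    return any(len(label) > max_label or
               (len(label) > 12 and shannon_entropy(label) > entropy_threshold)
               for label in labels)

print(suspicious_query("www.example.com"))
print(suspicious_query("mzxw6ytboi2gk43ufqqho33snrscc4tjon.example.com"))
```

In practice this would run over resolver query logs, and a single flagged name means little; sustained volumes of such queries to one domain are the stronger signal.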
It is likely that any AWS environment will make use of S3 buckets, whether for storing static content related to an application or, as part of the cloud deployment, for configuration files, images and logs. Unlike EC2 instances in a private VPC, S3 buckets are addressed via Internet-facing endpoints, so restrictive IAM and bucket policies must be set up to ensure content isn’t inadvertently shared over the Internet. A number of high-profile attacks in recent years have exploited weak S3 access policies to gain access to customer data. In a hybrid cloud, such an attack could also result in compromise of the corporate network, should content stored in the buckets be imported and trusted by the company. Other such non-VPC services include RDS and public AMIs; relying on third-party AMIs to form the basis of a company’s EC2 instances opens up another threat vector which must be assessed.
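A first-pass review of a bucket policy can be done by parsing the policy document and flagging statements that grant access to everyone. The policy below is a hypothetical example, and this sketch only catches explicit wildcard principals; a full review would also cover ACLs and account-level public-access settings:

```python
import json

def public_statements(policy_json):
    """Flag bucket-policy statements that allow access to any principal."""
    policy = json.loads(policy_json)
    flagged = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        if stmt.get("Effect") == "Allow" and principal in ("*", {"AWS": "*"}):
            flagged.append(stmt.get("Sid", "<no Sid>"))
    return flagged

# Hypothetical policy with one public-read statement.
policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "PublicRead", "Effect": "Allow", "Principal": "*",
         "Action": "s3:GetObject", "Resource": "arn:aws:s3:::example-bucket/*"},
        {"Sid": "AppWrite", "Effect": "Allow",
         "Principal": {"AWS": "arn:aws:iam::123456789012:role/app"},
         "Action": "s3:PutObject", "Resource": "arn:aws:s3:::example-bucket/*"},
    ],
})
print(public_statements(policy))  # flags only the PublicRead statement
```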
The entry point for managing AWS resources is the web console and its underlying API. Failure to properly secure console or API credentials and permissions could permit an attacker to gain control of all resources and data within the AWS account, including setting up further access that persists even after the originally compromised credentials are changed and permissions locked down. If such credentials and permissions are not appropriately reviewed and revoked, they could, for example, allow an employee to maintain command and control of internal services via the Internet after leaving the company.
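Two things such a review would look for are wildcard admin policies and long-lived access keys. A minimal sketch over mock user data; the record shapes and the 90-day key-age threshold are assumptions for illustration, and real input would come from an IAM credential report:

```python
from datetime import datetime, timedelta, timezone

def risky_users(users, max_key_age_days=90):
    """Flag IAM users with wildcard admin policies or stale access
    keys - both useful for maintaining access after off-boarding."""
    now = datetime.now(timezone.utc)
    findings = []
    for user in users:
        for policy in user["policies"]:
            if policy.get("Action") == "*" and policy.get("Resource") == "*":
                findings.append((user["name"], "wildcard admin policy"))
        for key_created in user["access_keys"]:
            if now - key_created > timedelta(days=max_key_age_days):
                findings.append((user["name"], "stale access key"))
    return findings

users = [
    {"name": "ex-employee",
     "policies": [{"Action": "*", "Resource": "*"}],
     "access_keys": [datetime(2020, 1, 1, tzinfo=timezone.utc)]},
    {"name": "app-deploy",
     "policies": [{"Action": "s3:PutObject", "Resource": "arn:aws:s3:::deploys/*"}],
     "access_keys": [datetime.now(timezone.utc)]},
]
print(risky_users(users))
```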
Finally, the use of a shared public cloud will, by default, mean that resources are shared with other customers of the cloud provider. For many companies this is not desired, but not every customer is aware that this is the default configuration, thinking that having their own VPC means not sharing hardware with other customers. This brings the threat of attacks via the hypervisor or the hardware shared between the virtual machines which underpin the cloud. Real-world attacks exploiting shared CPU state have been demonstrated, most notably the speculative-execution vulnerabilities Meltdown and Spectre. Whilst these vulnerabilities have been widely patched, it isn’t inconceivable that further vulnerabilities exist, especially given the increasing breadth of functionality provided by modern CPUs. To mitigate this unknown risk it is possible to use dedicated EC2 instances, which do not share a hypervisor with other AWS customers, but this needs to be explicitly configured and costs extra.
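Verifying this configuration is a matter of checking the tenancy of each instance. A minimal sketch over simplified instance records; in the real EC2 API the value sits under `Placement.Tenancy`, so the flat shape here is an assumption:

```python
def shared_tenancy_instances(instances):
    """List instances not running on dedicated hardware. Tenancy values
    mirror the EC2 API: 'default' (shared), 'dedicated', or 'host'."""
    return [i["InstanceId"] for i in instances
            if i.get("Tenancy", "default") == "default"]

instances = [
    {"InstanceId": "i-0aaa", "Tenancy": "default"},
    {"InstanceId": "i-0bbb", "Tenancy": "dedicated"},
]
print(shared_tenancy_instances(instances))
```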
Even with all of these threats considered, the cloud provider itself still has ultimate control over the resources. The customer needs to be aware of this, and take as many precautions as reasonably possible to protect against malicious employees of the cloud provider accessing their data. This includes the use of encryption wherever possible, particularly for data storage such as EBS volumes, snapshots and S3 buckets.
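The encryption-at-rest posture of block storage can be checked from the same inventory data. A sketch over mock volume and snapshot records; the `Encrypted` boolean mirrors the EC2 API field, but the flat record shape is a simplification:

```python
def unencrypted_storage(volumes, snapshots):
    """Flag EBS volumes and snapshots that are not encrypted at rest
    (the 'Encrypted' boolean mirrors the EC2 API field)."""
    issues = [f"volume {v['VolumeId']}" for v in volumes if not v.get("Encrypted")]
    issues += [f"snapshot {s['SnapshotId']}" for s in snapshots if not s.get("Encrypted")]
    return issues

volumes = [{"VolumeId": "vol-0abc", "Encrypted": True},
           {"VolumeId": "vol-0def", "Encrypted": False}]
snapshots = [{"SnapshotId": "snap-0123", "Encrypted": False}]
print(unencrypted_storage(volumes, snapshots))
```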
In the final installment of this three-part series, we will move up a level of abstraction and look at how containerization can impact the security of a cloud environment.