What’s the most common thing you’ve heard from detractors of the public cloud? “It’s insecure!”
This misconception is fading fast, even among late adopters, as AWS provides a range of products and capabilities to protect your environment, together with complementary third-party tools.
Despite this, it’s still very easy to deploy an insecure platform. In this blog, I’ll discuss how you can use the ‘Onion Principle’ to leverage the security pillar of the AWS Well-Architected Framework and protect your data with multiple layers of security. This layered approach makes it harder for ‘bad actors’ to steal your data, with each layer hindering an attack and helping to prevent data loss.
Among the most common concerns we hear from new customers are: “I have no control as it’s not in my data centre” and “My data centre(s) are more secure than the cloud.”
AWS works with a shared responsibility model for cloud security, splitting responsibilities for security controls between itself and customers (see Figure 2). Everything you can physically touch is secured, monitored, and run by AWS, while anything involving customer data is secured, monitored, and run by the customer. For more information on the assurance programs AWS aligns to for physical and logical security, see the AWS Security and Compliance quick reference guide.
There are five core areas of cloud security where Fanatical Support for AWS can help protect your platform. We’ll look at:
- Identity and access management
- Infrastructure protection
- Data protection
- Detective controls
- Incident response
Identity and access management
All resources in an AWS account can be controlled manually via a web console, or programmatically via APIs and SDKs. This is where the first layer of security needs to be implemented. Without this layer of protection, you can quickly run into issues including:
- Hijacking of accounts for free compute resources (Bitcoin mining or botnets)
- Loss of data
- Corruption of data
- Destruction of running platforms
AWS Identity and Access Management (IAM) enables you to securely control access to AWS services and resources for your users. Using IAM, you can create and manage users and groups and use permissions to allow or deny their access to resources. Securing access to a resource is ultimately controlled by policies you implement. These policies can become very complicated.
The policy in Figure 3 allows listing information for all EC2 objects and launching EC2 instances in a specific subnet via the console only. As certified architects and support engineers, we can take on design and implementation, supporting IAM as part of our Fanatical Support for AWS offering.
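As a rough sketch of what a policy like the one described might look like, the snippet below builds a simplified IAM policy document: read-only listing of EC2 information everywhere, plus permission to launch instances against one subnet. The account ID and subnet ARN are placeholders, and a real RunInstances policy would also need to cover related resources (AMIs, volumes, network interfaces), so treat this as illustrative only.

```python
import json

# Hypothetical, simplified IAM policy: EC2 read-only everywhere, plus
# RunInstances restricted to one (placeholder) subnet ARN.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowEc2Describe",
            "Effect": "Allow",
            "Action": "ec2:Describe*",
            "Resource": "*",
        },
        {
            "Sid": "AllowRunInOneSubnet",
            "Effect": "Allow",
            "Action": "ec2:RunInstances",
            # Placeholder account ID and subnet; not a real resource.
            "Resource": "arn:aws:ec2:eu-west-1:111122223333:subnet/subnet-0example",
        },
    ],
}

policy_json = json.dumps(policy, indent=2)
print(policy_json)
```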
Infrastructure protection

Infrastructure protection is a critical layer of security, and the one where most customers running traditional environments focus their attention. It ensures services within your system are protected against unauthorised access and potential vulnerabilities.
AWS provides several products and services here. Start by concentrating on network boundaries: for example, replace a single edge firewall device with a combination of network access control lists (NACLs) at the subnet layer and security groups at the instance layer. The correct combination of rules will prevent the majority of attacks.
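To make the security-group side of this concrete, here is a sketch of the request you might pass to EC2’s AuthorizeSecurityGroupIngress API (for example via boto3’s `authorize_security_group_ingress`). The rules are built locally as a plain dictionary and no AWS call is made; the group ID and CIDR ranges are placeholders.

```python
# Hypothetical ingress rules: HTTPS open to the world, SSH only from
# inside the VPC. Built as the parameter dict for boto3's
# ec2.authorize_security_group_ingress(**ingress_rule); no API call made.
ingress_rule = {
    "GroupId": "sg-0example",  # placeholder security group ID
    "IpPermissions": [
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
        },
        {
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "SSH from the VPC only"}],
        },
    ],
}
```

Security groups are stateful (return traffic is allowed automatically), while NACLs are stateless and evaluate rules in order, which is why the two complement each other as separate layers.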
It’s important to understand that networking in AWS lets you use similar constructs to a normal physical network, but comes with one important difference: you can’t monitor network traffic within a VPC. Traditionally this would mean implementing an inline Intrusion Detection System (IDS), which goes against many cloud design principles and introduces potential performance and availability bottlenecks. We can solve this issue for you by deploying agent-based tools to prevent unwanted intrusions, managed by our own Security Operations Centre.
Another example of infrastructure protection is a well-defined system security configuration and maintenance plan. To assist with this, you can leverage Amazon Inspector, which provides a way of regularly checking that your EC2 instances meet benchmarks such as the Center for Internet Security (CIS) configuration and patching guidelines. Keeping your instances patched can then be managed by Amazon EC2 Systems Manager. If this seems like a lot of work, Rackspace Managed Security with compliance assistance covers all of this for you.
Data protection

It’s important to classify your data and focus on the level of encryption required for each class, enabling you to manage cost and performance more effectively.
Protecting your data at rest, for example, is typically done by encrypting your data volumes. AWS provides several ways to encrypt your data and manage the security keys: customer-provided keys, AWS-managed keys, AWS KMS, and AWS CloudHSM offer rising levels of protection as your data sensitivity increases.
AWS Key Management Service (KMS) is designed for dealing with encryption keys, like CloudHSM. The major difference is that KMS is multi-tenant, shared across all customers, while CloudHSM is a dedicated hardware security module (HSM) appliance deployed in an availability zone with direct connectivity to your VPC. CloudHSM helps you meet more stringent corporate, contractual, and regulatory compliance requirements for data security.
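As an example of encryption at rest with KMS, the snippet below sketches the parameters for EC2’s CreateVolume call with encryption enabled under a customer master key (boto3: `ec2.create_volume(**encrypted_volume)`). The key ARN and availability zone are placeholders, and no AWS call is made here.

```python
# Hypothetical parameters for an encrypted EBS volume. The KmsKeyId ARN
# and availability zone are placeholders; pass this dict to boto3's
# ec2.create_volume(**encrypted_volume) in a real environment.
encrypted_volume = {
    "AvailabilityZone": "eu-west-1a",
    "Size": 100,            # GiB
    "VolumeType": "gp2",
    "Encrypted": True,      # encrypt the volume's data at rest
    "KmsKeyId": "arn:aws:kms:eu-west-1:111122223333:key/00000000-0000-0000-0000-000000000000",
}
```

If you omit `KmsKeyId`, the default AWS-managed key for EBS is used; supplying your own KMS key gives you control over rotation and access policies.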
Any data transmitted from one system to another, including communication between servers within your environment as well as communication with other services and end users, should be encrypted where possible. The best-practice approach secures all communications with Transport Layer Security (TLS); the most common form of this is HTTPS (HTTP over TLS, historically SSL). AWS now provides trusted certificates, free of charge, for use with selected services in your account.
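On the client side, enforcing TLS is straightforward in most languages. As a minimal generic sketch (plain Python, not AWS-specific), the snippet below builds a TLS context with certificate verification, hostname checking, and TLS 1.2 as the minimum protocol version; server-side, a service like ACM can supply the certificate for you.

```python
import ssl

# A minimal client-side TLS context with sensible defaults:
# certificate verification on, hostname checking on, TLS 1.2 floor.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```

You would then pass `ctx` to whatever networking code you use (e.g. `http.client.HTTPSConnection(host, context=ctx)`) so that connections refusing verification or old protocol versions fail fast.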
Defining a data backup and recovery approach protects you from accidental data deletion. AWS provides multiple features and capabilities for data backup, including EBS snapshots. We can manage full volume snapshots with defined retention periods in a secure and automated manner. Database storage can be protected in the same way using RDS snapshots.
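The retention side of snapshot management boils down to simple date arithmetic. The sketch below shows that core logic: given snapshot records shaped like the `SnapshotId`/`StartTime` fields that EC2’s DescribeSnapshots returns, it selects those older than the retention window for deletion. The snapshot IDs are made up, and wiring this to boto3 is left out.

```python
from datetime import datetime, timedelta, timezone

# Pure retention logic: return IDs of snapshots older than retention_days.
# Records mimic DescribeSnapshots' SnapshotId/StartTime fields.
def expired_snapshots(snapshots, retention_days, now=None):
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [s["SnapshotId"] for s in snapshots if s["StartTime"] < cutoff]

# Fixed clock and placeholder snapshot IDs for the example.
now = datetime(2017, 6, 1, tzinfo=timezone.utc)
snaps = [
    {"SnapshotId": "snap-old", "StartTime": now - timedelta(days=40)},
    {"SnapshotId": "snap-new", "StartTime": now - timedelta(days=3)},
]
print(expired_snapshots(snaps, retention_days=30, now=now))  # ['snap-old']
```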
You can also leverage private, encrypted Amazon S3 buckets for object-level backups. S3 is designed for 11 nines of durability for objects stored in the service. With built-in versioning, MFA-authorised deletes, and IAM-based access control, S3 can provide a high level of data protection. Automated cross-region copies can be made to further secure your data.
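Turning on versioning with MFA Delete is a single API call. The sketch below shows the request shape for S3’s PutBucketVersioning (boto3: `s3.put_bucket_versioning(**versioning_request)`); the bucket name is a placeholder, no call is made, and note that enabling MFA Delete additionally requires the root account’s MFA device serial and code in the request.

```python
# Hypothetical parameters for enabling versioning plus MFA Delete on a
# bucket. Pass to boto3's s3.put_bucket_versioning(**versioning_request);
# MFA Delete also needs an MFA serial/code supplied at call time.
versioning_request = {
    "Bucket": "example-backup-bucket",   # placeholder bucket name
    "VersioningConfiguration": {
        "Status": "Enabled",
        "MFADelete": "Enabled",
    },
}
```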
Detective controls

Without a secure and immutable logging system, you are unlikely to detect compromises to your system or prevent further issues. Detective controls are not only an essential component of a governance framework; they can also be used for other purposes, such as quality control, threat identification, and response efforts.
Each of the first three layers of protection covered above comes with its own level of detective controls.
At the AWS account level, a powerful audit service called AWS CloudTrail monitors and captures console and API activity globally, centralising the data for storage and analysis.
To monitor your network, enable VPC Flow Logs, plus access logs for your ELBs and CloudFront distributions (if used). Finally, your application log data should also be pushed to a centrally managed system, for example by using the Amazon CloudWatch Logs agent to push data from your instances to CloudWatch. This prevents losing log data in the event of a scaling or recovery action.
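For reference, shipping an application log with the CloudWatch Logs agent is driven by a small config file. A sketch of one entry might look like the fragment below; the file path, group name, and timestamp format are placeholders you would adapt to your application.

```ini
; Hypothetical CloudWatch Logs agent entry: ship one application log.
; Paths, group name, and datetime_format are placeholders.
[/var/log/app/app.log]
file = /var/log/app/app.log
log_group_name = /myapp/application
log_stream_name = {instance_id}
datetime_format = %Y-%m-%d %H:%M:%S
```

Using `{instance_id}` as the stream name keeps logs separated per instance while the group aggregates them, so nothing is lost when Auto Scaling terminates an instance.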
So, how do you detect if your data has been manipulated or stolen? If your data is held in an S3 bucket, it’s imperative you enable access logs and validate the integrity of each object by checking its MD5 hash; when uploading with the s3api CLI tool, use the `--content-md5` and `--metadata` arguments with appropriate parameters.
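The `--content-md5` argument expects the base64-encoded MD5 digest of the object body, which S3 uses to verify the upload server-side. Computing that value is a two-liner:

```python
import base64
import hashlib

# Compute the base64-encoded MD5 digest that s3api's --content-md5
# argument expects, letting S3 verify the upload's integrity.
def content_md5(data: bytes) -> str:
    return base64.b64encode(hashlib.md5(data).digest()).decode("ascii")

print(content_md5(b"hello world"))
```

If the digest you send doesn’t match what S3 computes on receipt, the upload is rejected, so a corrupted transfer never lands in the bucket.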
If you are storing data on EFS or an EBS volume, you need to turn to third-party tools that run within the EC2 instances mounting those volumes or shares. These File Integrity Monitoring (FIM) tools range from open-source solutions such as OSSEC, through commercial products like Alert Logic Threat Manager, to fully managed services such as Rackspace Managed Security.
Incident response

Key to incident response is how automated your platform is: the higher the level of automation, the more scope you have to isolate services when a threat is detected.
You should leverage automation to practise elements of your incident response plan, including restoring services to a known good state. Doing this regularly also provides valuable reassurance that your backups are valid and working. Finally, you should use IAM controls to give your security team the access they need, and automate the capture of data and state for forensics.
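One common automated isolation step is to swap a compromised instance’s security groups for a single “quarantine” group with no inbound or outbound rules, cutting it off while preserving its state for forensics. The sketch below builds the request shape for EC2’s ModifyInstanceAttribute (boto3: `ec2.modify_instance_attribute(**quarantine_request(...))`); the group and instance IDs are placeholders and no call is made.

```python
# Placeholder ID for an empty "quarantine" security group.
QUARANTINE_SG = "sg-0quarantine"

# Build the ModifyInstanceAttribute request that replaces all of an
# instance's security group associations with the quarantine group.
def quarantine_request(instance_id):
    return {
        "InstanceId": instance_id,
        "Groups": [QUARANTINE_SG],  # replaces existing group associations
    }

print(quarantine_request("i-0example"))
```

Because the instance keeps running, memory and disk remain available for capture, which is why isolation is usually preferred over immediate termination.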
At Rackspace, we have a team of security-focused experts who can stop bad actors at your doorstep, so you can rest assured your data is safe. For more detail, go here, or get in touch and drop us a message.