When it comes to getting the most out of AWS — particularly for businesses that are new to the cloud — tapping into Rackspace and our proven best practices can save users time and help them stay focused on their core business. In our whitepaper, Architectural Design on AWS: 3 Commonly Missed Best Practices, we detail some of the best practices AWS users often (and unfortunately) tend to neglect.
We have compiled some of those neglected nuggets here, including advice from Rackspace Senior Solutions Architect Jerry Hargrove about specific scenarios, failures and architectural must-dos. Hargrove is an AWS expert with more than 20 years of architecture and software development experience and multiple AWS certifications.
Architect for failure
The first and most common mistake of new and existing AWS users is trying to design an architecture that never fails. In reality, no such architecture exists; what they have actually built is an architecture that cannot handle failure. This generally results in an infrastructure that is not properly backed up and cannot automatically heal itself when problems arise. Worse, it can mean data loss and downtime.
As we note in the whitepaper, “Building your AWS environment to automatically fix itself in response to failure, dramatically increases the likelihood that your applications will continue to operate even if one or more individual functional component of your system fails, and can prove to be a valuable business decision in terms of both cost reduction and risk avoidance.”
The extent of the damage depends on the type of failure and the type of business, but Gartner estimates the average cost of network downtime at $5,600 per minute. How can you sidestep the issue? Assume failures will occur and build plans ahead of time for how they’ll be handled.
AWS best practice recommendation: design, implement and deploy for automated recovery from failure.
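One concrete way to automate recovery is a CloudWatch alarm whose action recovers an EC2 instance that fails its system status check. As a minimal sketch, the function below builds the alarm parameters; the instance ID is a hypothetical placeholder, and in practice the resulting dict would be passed to boto3's `cloudwatch.put_metric_alarm()`.

```python
def recovery_alarm_params(instance_id: str, region: str = "us-east-1") -> dict:
    """Build a PutMetricAlarm request that auto-recovers an impaired instance."""
    return {
        "AlarmName": f"auto-recover-{instance_id}",
        "Namespace": "AWS/EC2",
        # System-level status check: fails on underlying hardware/network issues.
        "MetricName": "StatusCheckFailed_System",
        "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
        "Statistic": "Maximum",
        "Period": 60,            # evaluate the metric every minute
        "EvaluationPeriods": 2,  # require two consecutive failures before acting
        "Threshold": 1,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        # The ec2:recover action migrates the instance to healthy hardware,
        # preserving its instance ID, private IP and attached EBS volumes.
        "AlarmActions": [f"arn:aws:automate:{region}:ec2:recover"],
    }

params = recovery_alarm_params("i-0123456789abcdef0")
```

Because the alarm acts on the system status check rather than on application metrics, it handles exactly the class of failures (host hardware, networking) that no amount of in-instance monitoring can fix.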
Take advantage of the scalability of AWS services
A second common mistake is that users still think in terms of provisioning all required resources at launch time. This generally results in one of two scenarios: either business is booming and they have not provisioned enough resources to handle the load, or they have over-provisioned and are paying for more resources than they need.
As noted in the whitepaper, “Several AWS services are specifically designed to promote the scalability of your AWS environment […] but customers will often run applications without Auto Scaling enabled, which can limit the ease of scalability, affect their experience within an application and lead to wasted costs associated with applications that are not designed to scale to match demand.”
Why risk losing data and customers by failing to architect for scalability, to say nothing of the Gartner-estimated $5,600 per minute that network downtime can cost? Instead, optimize spend by paying only for the resources you need, when you need them.
AWS best practice recommendation: implement intelligent elasticity into your architecture wherever possible. Furthermore, take advantage of AWS services like AWS Lambda or Amazon ElastiCache to offload infrastructure maintenance and scaling entirely.
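A common way to add that elasticity is a target-tracking scaling policy, which lets Auto Scaling add and remove instances to hold a metric (such as average CPU) at a chosen target. As a minimal sketch, the function below builds the policy configuration; the group name and target value are hypothetical, and in practice the dict would be passed to boto3's `autoscaling.put_scaling_policy()`.

```python
def cpu_target_tracking_policy(group_name: str, target_cpu: float = 50.0) -> dict:
    """Build a scaling policy that keeps the group's average CPU near target_cpu."""
    return {
        "AutoScalingGroupName": group_name,
        "PolicyName": f"{group_name}-cpu-target",
        "PolicyType": "TargetTrackingScaling",
        "TargetTrackingConfiguration": {
            # Predefined metric: average CPU utilization across the group.
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            # Auto Scaling launches or terminates instances to hold this value.
            "TargetValue": target_cpu,
        },
    }

policy = cpu_target_tracking_policy("web-tier-asg", target_cpu=60.0)
```

With a policy like this in place, capacity follows demand automatically, which avoids both of the failure modes above: under-provisioning during a traffic spike and paying for idle instances during a lull.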
Use separate AWS accounts, not just VPCs, for separate environments
Successful businesses want to maximize security, which can unwittingly come at the expense of agility. Perhaps that’s why businesses make another common mistake: using a single AWS account for all workload environments and segmenting the environments within that account by using multiple Virtual Private Clouds (VPCs).
Using multiple VPCs for this purpose does not provide any additional security benefit, but it does have the potential to complicate operations and can even limit the usage capacity of some AWS services.
AWS best practice recommendation: design your AWS account strategy to maximize security and follow your business and governance requirements.
As the whitepaper notes, “Rackspace takes this recommendation a step further and suggests that users create a separate account for each deployment environment (development, testing, staging, production) with a single VPC per account.”
Segmenting the AWS environment this way still meets rigorous security requirements and provides visibility into resource usage, but it eliminates the operational complications that can come with segmenting via multiple VPCs (the whitepaper details specific configurations custom fit to your organization and work environments).
This approach also gives users additional headroom as their usage scales up, and that room to grow reduces the risk of hitting an account limit in any single resource area (the whitepaper details real-life consequences of not separating environments into their own accounts with a single VPC each, and the potential benefits of doing so).
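The account-per-environment pattern above can be automated with AWS Organizations, which lets a management account create member accounts programmatically. As a minimal sketch, the function below builds one CreateAccount request per environment; the team name and email domain are hypothetical placeholders, and in practice each dict would be passed to boto3's `organizations.create_account()` (every AWS account requires its own unique root email address).

```python
# The four deployment environments named in the whitepaper's recommendation.
ENVIRONMENTS = ["development", "testing", "staging", "production"]

def account_requests(team: str, email_domain: str) -> list:
    """Build one CreateAccount request per environment for a given team."""
    return [
        {
            "AccountName": f"{team}-{env}",
            # Unique root email per account, e.g. via plus-addressing or aliases.
            "Email": f"aws-{team}-{env}@{email_domain}",
        }
        for env in ENVIRONMENTS
    ]

requests = account_requests("payments", "example.com")
```

Each resulting account would then hold a single VPC, so environment isolation comes from account boundaries (with their own IAM policies, billing and service limits) rather than from VPC segmentation alone.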
Building architectures according to your application requirements is not a one-time activity, but a continuous process as your users’ needs evolve. As AWS evolves its platform, you will also want to take advantage of new features.
Find out more in Architectural Design on AWS: 3 Commonly Missed Best Practices.
Let Rackspace manage AWS for you
Let Fanatical Support for AWS design and build an AWS architecture specific to your application needs. Our broad infrastructure expertise allows us to architect for any application on AWS and offer a staged approach for your evolving requirements. Our certified experts go beyond the initial architecture design to provide regular recommendations for improving your architecture.
Visit Rackspace to find out more.