Last week, AWS announced that Christmas had come early this year in the form of per-second billing on EC2 instances running Linux and EBS volumes.
You can read the AWS announcement here. Before this announcement, you would have been billed for every hour an instance was up and running, even if you only used a few seconds of that hour. With this change, you are only billed for the actual seconds the instance is in the “running” state.
There are a few things I want to note about per-second billing to help you take full advantage of it. First, the pricing change will initially apply only to Linux instances launched from “free” AMIs. This excludes RHEL, SLES and Windows, as well as any Marketplace AMI.
Hopefully, once vendors have had a chance to work with per-second billing, this will change. The change applies to on-demand, reserved and spot instances. Second, while billing is granular to the second, there is a minimum charge of one minute per instance.
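The difference between the two billing models, including the one-minute minimum, is easy to see with a little arithmetic. Here is a minimal sketch; the hourly rate is a placeholder, not a real instance price:

```python
import math

HOURLY_RATE = 0.10  # dollars per hour -- illustrative only

def old_cost(seconds_running):
    """Hourly billing: every started hour was billed in full."""
    hours = math.ceil(seconds_running / 3600)
    return hours * HOURLY_RATE

def new_cost(seconds_running):
    """Per-second billing, with the one-minute minimum per instance."""
    billed_seconds = max(seconds_running, 60)
    return billed_seconds * HOURLY_RATE / 3600

# An instance that runs for 10 minutes:
print(old_cost(600))  # a full hour's charge
print(new_cost(600))  # ~0.0167: just the ten minutes
```

Note that `new_cost(30)` and `new_cost(60)` come out the same, which is the one-minute minimum at work.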
What does this mean to you?
If your consumption patterns are largely static and unchanging, you probably don’t have the ability to take advantage of this change and won’t notice a difference. However, if your consumption is at all variable or spiky, you can take advantage of this and may see a slight reduction in cost.
Beyond the immediate win, there are several areas in which this should affect the way we plan and execute computing on AWS. While this is not an exhaustive list, I wanted to cover a few areas that come to mind immediately.
Amazon Elastic MapReduce (EMR) and Spot Instances
When talking about temporary instances, EMR is one of the first things that come to mind. When running an EMR cluster, the general pattern is to spin up the required instances, run the job, and then terminate or stop the instances.
However, whether the job took 10 minutes or 50 minutes, you paid for the full hour. Now that instances are billed at a per-second rate, it changes how you might configure your cluster. First off, it can make sense to run more instances for a shorter amount of time.
Take, for example, a job that uses c4.8xlarge instances. These instances come in at $1.591 per hour. If my job can be run in an hour with 10 instances, then my cost is $15.91 for the job.
However, with the hourly minimum gone, I can now look at doubling my compute power to 20 instances and paying the same price for results in half the time. Or I can get crazy and run 600 instances and get my result in one minute, still for the same price. Until now, when cost was the limiting factor, data scientists stopped optimization efforts at an hour. This billing change encourages them to revisit their optimization process to take advantage of these smaller billing windows.
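This trade-off is easy to sanity-check: cost is instance count times runtime times rate, so any combination with the same instance-hours costs the same. A quick sketch using the $1.591 c4.8xlarge rate from the example:

```python
RATE = 1.591  # c4.8xlarge on-demand rate, dollars per hour

def job_cost(instances, minutes):
    """Total job cost under per-second billing (ignoring the 60s minimum)."""
    return instances * (minutes / 60) * RATE

print(f"${job_cost(10, 60):.2f}")   # $15.91 -- 10 instances for an hour
print(f"${job_cost(20, 30):.2f}")   # $15.91 -- twice the fleet, half the time
print(f"${job_cost(600, 1):.2f}")   # $15.91 -- 600 instances for one minute
```

Same instance-hours, same bill; the only thing that changes is how long you wait for your results.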
Finally, if you are not already doing so, another change would be to use spot instances for your computing power. If your data processing needs have flexibility in schedule, using a greater number of spot instances for a short amount of time can greatly reduce your bill.
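To put a rough number on the spot idea: spot prices fluctuate with supply and demand, so the discount below is purely an illustrative assumption, not a quoted AWS price.

```python
ON_DEMAND = 1.591     # c4.8xlarge on-demand rate, dollars per hour
SPOT_DISCOUNT = 0.70  # assumption: spot running ~70% below on-demand

def spot_job_cost(instances, minutes):
    """Same instance-hours math as on-demand, at the assumed spot rate."""
    return instances * (minutes / 60) * ON_DEMAND * (1 - SPOT_DISCOUNT)

# The 20-instance, 30-minute job from above, on spot:
print(f"${spot_job_cost(20, 30):.2f}")  # $4.77, versus $15.91 on-demand
```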
Bootstrapping is another area where per-second billing allows us to optimize for price savings. Bootstrapping is the process of preparing an instance at startup. Some methods start with a basic instance and install all required dependencies, applications and configuration at startup.
Other methods start with a pre-baked image that only needs the latest configuration. In the past, bootstrapping could take anywhere from one to 10 minutes, but it didn’t matter because we were paying for the hour anyway. Going forward, it is important to factor in that we are paying for the spin-up time of an instance. It’s time to reassess the process and optimize for faster, cheaper startup times.
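Since spin-up time is now billed, bootstrap time has a measurable price. Here is a rough sketch of what shaving it down is worth; the rate, fleet size and relaunch frequency are all illustrative assumptions:

```python
RATE = 1.591  # dollars per hour -- illustrative (c4.8xlarge on-demand)

def bootstrap_cost(instances, bootstrap_seconds, launches_per_month):
    """Monthly cost of bootstrap time alone, under per-second billing."""
    return instances * bootstrap_seconds * launches_per_month * RATE / 3600

# 20 instances relaunched daily: 10-minute bootstrap vs. 1-minute bootstrap.
slow = bootstrap_cost(20, 600, 30)
fast = bootstrap_cost(20, 60, 30)
print(f"${slow - fast:.2f} saved per month")
```

Under these assumptions, cutting a 10-minute bootstrap to one minute is worth real money every month, which is a good argument for baking more into your AMIs.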
Fat fingering may not be the first thing that comes to mind when talking about a change in pricing. First of all, if you aren’t familiar, fat fingering is the art of hitting the wrong key on the keyboard. If you use a command line interface or Amazon CloudFormation to provision infrastructure, fat fingering can be the difference between life and death. For example, if you wanted to change the desired capacity of an Auto Scaling group from 10 to 20 using the command line, it would look like this:
aws autoscaling set-desired-capacity --auto-scaling-group-name my-auto-scaling-group --desired-capacity 20 --honor-cooldown
However, the law of fat fingering says that it strikes at the most critical time possible: you know, the end of the month for a startup waiting on just-in-time financing. The command might look something like this:
aws autoscaling set-desired-capacity --auto-scaling-group-name my-auto-scaling-group --desired-capacity 200 --honor-cooldown
Now, instead of spinning up 10 more machines to reach the desired capacity of 20, you have kicked off 190 more machines to reach a desired capacity of 200. At ten times the intended capacity, you are sure to meet, and drastically exceed, your demand. In the old billing model, if these instances hit running status, which they generally would, you would be stuck paying the hourly rate times 200:
200 * hourly rate = possible resume update
In the new pricing model, grace is a little more abundant. If you are alert and catch the issue early enough, you can get it down to paying for just a minute of each instance:
200 * hourly rate/60 = loss of free soda for a week
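One cheap defense against a fat-fingered capacity value is a sanity check before the call ever reaches AWS. This is a hypothetical wrapper of my own, not part of the AWS CLI or SDK; the 3x threshold is an arbitrary guardrail:

```python
def check_desired_capacity(current, requested, max_ratio=3):
    """Refuse scaling requests that jump more than max_ratio times current.

    Illustrative guardrail only -- tune max_ratio to your own traffic patterns.
    """
    if current > 0 and requested > current * max_ratio:
        raise ValueError(
            f"Requested capacity {requested} is more than {max_ratio}x "
            f"current capacity {current}; refusing."
        )
    return requested

check_desired_capacity(10, 20)     # fine: the change you intended
# check_desired_capacity(10, 200)  # raises ValueError: the fat-fingered one
```

Calling `set-desired-capacity` only after a check like this passes turns a resume-updating event back into a typo.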
Amazon EBS volumes
The last thing I would like to mention here is the change to Amazon EBS. Because of the permanent nature of storage, this may not be quite as impactful as EC2, but it is worth noting. Some uses of EBS involve creating temporary volumes for one reason or another. As we automate these processes, or even perform them manually, it is important to start paying close attention to our cleanup process. If a volume is no longer needed, delete it.
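Finding forgotten volumes is easy to automate. Here is a sketch using boto3’s `describe_volumes`; the client is passed in as a parameter so the logic can be exercised without AWS credentials, and actual deletion is deliberately left commented out:

```python
def find_unattached_volumes(ec2):
    """Return the IDs of EBS volumes in the "available" (unattached) state.

    `ec2` is an EC2 client, e.g. boto3.client("ec2").
    """
    resp = ec2.describe_volumes(
        Filters=[{"Name": "status", "Values": ["available"]}]
    )
    return [vol["VolumeId"] for vol in resp["Volumes"]]

# Usage (requires boto3 and AWS credentials):
#   import boto3
#   ec2 = boto3.client("ec2")
#   for vol_id in find_unattached_volumes(ec2):
#       print("deletion candidate:", vol_id)
#       # ec2.delete_volume(VolumeId=vol_id)  # only after review!
```

An unattached volume is not always garbage, so treat the list as candidates to review, not an automatic kill list.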
For the Rackspace customer
One of the services we offer as part of our Aviator level managed service for AWS is Passport. Passport manages the provisioning of short-lived, access-limited, fully-audited bastion servers within your AWS account’s VPC that can either be used directly or as a jump host for direct connectivity to other EC2 instances in the same VPC. Passport solves for both network connectivity and authentication into your environment.
Part of the magic behind Passport is its transient nature. When you need it, a bastion is stood up; when you are done, it is torn down. Until now, you paid a minimum of an hour’s rate on the instance. With this change in billing, you only pay for the actual time it was in use. In the spirit of “every penny counts,” this is a win.
As stated before, this is certainly not an exhaustive look at how a change in pricing can affect the way we compute. From rethinking how far we optimize data-processing jobs to making sure we clean up after ourselves, computing is becoming more temporary and ephemeral than ever before.