AWS re:Invent — A Final Recap

Editor’s note: Datapipe was acquired by Rackspace in 2017.


It’s been two weeks since AWS re:Invent 2017, and Rackspace AWS Evangelist Eric Johnson provided great coverage of the action while he was there (check out Day 1, Day 2 and Day 3). Now that we’ve had some more time to digest and reflect on everything, we want to offer additional Racker insights about some of our favorite announcements.

I’ll break these highlights down using the same categories Amazon groups its announcements into:

Compute

AWS Fargate and Amazon Elastic Container Service for Kubernetes (EKS)

Amazon already has ECS, but EKS and Fargate give customers more options for running containers. To help choose the best fit, the first important question to ask is whether you want to manage the underlying instances your containers run on.

With both ECS and EKS, you’re able to use your own reserved instances, or even spot instances. Managing resources at this level offers the most control and ability to customize each instance, but it also increases operational burden — picking instance types, scheduling containers and optimizing utilization — as well as capital expense, should you choose to purchase certain kinds of reserved instances.

AWS Fargate, on the other hand, takes away that instance management entirely, freeing you up to focus on the application rather than the infrastructure. With a partner such as Rackspace helping with infrastructure management, this may matter less to you, but customers at our Navigator service level may find the reduced management burden is worth potentially paying a bit more for the Fargate managed service, rather than managing ECS or EKS cluster instances on their own.

In fact, for customers with highly variable demand, or those who are just getting started with containers, Fargate is likely to be the most cost-efficient choice, since billing is at per-second granularity and you only pay for what you use. As Fargate rolls out to additional AWS regions over the next several months, Rackspace will be here to help evaluate whether it’s the right fit for your needs.
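To make the difference concrete, here’s a minimal boto3 sketch of launching a container on Fargate; the cluster, task definition, subnet and security group are hypothetical placeholders, and the task definition is assumed to be registered with Fargate compatibility and awsvpc networking.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Run a task on Fargate: no EC2 instances to pick, patch, scale or bin-pack.
# "demo-cluster", "demo-app:1" and the network IDs below are placeholders.
response = ecs.run_task(
    cluster="demo-cluster",
    launchType="FARGATE",
    taskDefinition="demo-app:1",  # must be registered with FARGATE compatibility
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["lastStatus"])
```

The same call with launchType set to EC2 would instead place the task on container instances you manage yourself, which is exactly the trade-off described above.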

Whether or not you choose Fargate, the next decision is between ECS and EKS. Even Fargate users will eventually face this choice, since Fargate launched with ECS support and will add support for EKS in 2018.

EKS runs upstream Kubernetes, so choosing it gives those who want to manage their own clusters native integration with the Kubernetes ecosystem of tools. The value EKS adds is handling the difficult heavy lifting of operating a highly available Kubernetes control plane spanning three availability zones.

If you’re looking to stay as cloud-agnostic as possible, or if you have a multi-cloud strategy that includes Kubernetes as an enabling technology, EKS is a solid choice, since all three major public clouds (Amazon, Azure and of course GCP) now offer managed Kubernetes services. EKS will be available in 2018, and Amazon is actively working to integrate it more tightly into the AWS ecosystem.

What if I’m already on ECS?

Unless you have a compelling need to run your workloads across clouds or leverage Kubernetes, there is no reason to change. ECS and Fargate can be a good combination for different workloads and they both offer control and flexibility.

New Deployment Options for AWS Lambda Functions

Martin Smith, a principal engineer on the Fanatical Support for AWS team, explains this best:

“Everyone is looking for safer ways to deploy application updates, easily battle-test them on a small percentage of users and safely roll back when problems are found. With CodeDeploy incrementally rolling out new Lambda function versions, our customers now have the same mechanisms to safely deploy applications that have been proven by cloud-native juggernauts like Netflix and Google.”
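CodeDeploy automates the traffic shifting Martin describes, and the primitive underneath it is Lambda’s new weighted alias routing. As a rough illustration (the function name and version numbers are hypothetical), here’s what a manual canary split looks like in boto3:

```python
import boto3

lam = boto3.client("lambda", region_name="us-east-1")

# Keep the "live" alias on the stable version 7 while sending 10 percent of
# invocations to the newly published version 8. CodeDeploy drives this same
# mechanism on a schedule and rolls back automatically if alarms fire.
lam.update_alias(
    FunctionName="orders-api",  # hypothetical function
    Name="live",
    FunctionVersion="7",
    RoutingConfig={"AdditionalVersionWeights": {"8": 0.10}},
)

# Once the canary looks healthy, promote version 8 and clear the split.
lam.update_alias(
    FunctionName="orders-api",
    Name="live",
    FunctionVersion="8",
    RoutingConfig={"AdditionalVersionWeights": {}},
)
```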

Databases

Amazon Aurora Serverless (Preview)

This announcement is right up there with Kubernetes support in terms of the excitement it generated among our technical Rackers. With Aurora Serverless, Amazon is extending its serverless leadership from compute into databases. The beauty of Aurora Serverless is the cost savings some customers will see by truly paying only for what they use on their MySQL (first half of 2018) and PostgreSQL (second half of 2018) databases hosted in this new product. Customers with unpredictable or variable workloads will benefit from the reduced operational burden of capacity management, and from no longer paying for under-utilized resources.

Amazon DynamoDB On-Demand Backup

This is a big step up for DynamoDB users, because backing up data is, well, important! We’re happy to see this feature announcement, and we’ll start working with our DynamoDB customers right away to help them understand whether and how to take advantage of this new capability.
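Taking and restoring an on-demand backup is only a couple of API calls; here’s a minimal boto3 sketch with placeholder table and backup names:

```python
import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

# Create a full backup of the table; on-demand backups complete quickly and
# don't consume the table's provisioned throughput.
backup = ddb.create_backup(
    TableName="orders",                # placeholder table name
    BackupName="orders-pre-migration",
)
backup_arn = backup["BackupDetails"]["BackupArn"]

# Restores always go to a new table rather than overwriting the original.
ddb.restore_table_from_backup(
    TargetTableName="orders-restored",
    BackupArn=backup_arn,
)
```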

Point-in-time recovery is coming soon, which will be great news for customers who need better disaster recovery options for their DynamoDB data. The best part of this announcement is that customers who couldn’t use DynamoDB before for compliance reasons, or who had to manage the complexity of their own backup solutions, will find it much easier to get started with or continue using this impressively scalable data store.

Amazon DynamoDB Global Tables

Under the hood, this feature builds on DynamoDB Streams to replicate data across regional tables, reducing the complexity of managing a global footprint. More importantly for all DynamoDB users, it unlocks a whole new level of high availability by providing fault tolerance at the AWS region level. Customers will need to be careful to understand the implications of eventual consistency across regions, as well as the increased storage costs that can come from keeping tables in multiple regions.
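Once identical tables exist in each region, creating the global table is a single call. Here’s a hedged boto3 sketch with a placeholder table name and regions; each regional table must already have DynamoDB Streams enabled with new-and-old images:

```python
import boto3

ddb = boto3.client("dynamodb", region_name="us-east-1")

# "orders" must already exist, with the same key schema and with DynamoDB
# Streams (NEW_AND_OLD_IMAGES) enabled, in every region listed below.
ddb.create_global_table(
    GlobalTableName="orders",
    ReplicationGroup=[
        {"RegionName": "us-east-1"},
        {"RegionName": "eu-west-1"},
    ],
)
```

Writes then replicate in both directions with last-writer-wins conflict resolution, which is why the eventual-consistency caveat above matters.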

Amazon Neptune (Preview)

Continuing Amazon’s trend of wrapping complicated-to-operate open source technologies in easy-to-consume managed services, Amazon’s embrace of graph database technology will be exciting for customers with highly connected data that benefits from graph-based queries. With Amazon Neptune, the heavy lifting of security, backups, availability and scalability becomes something you can stop worrying about.
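Neptune speaks the open Apache TinkerPop Gremlin and W3C SPARQL query interfaces, so existing graph tooling should carry over. Here’s a minimal sketch using the gremlinpython driver against a hypothetical cluster endpoint:

```python
from gremlin_python.driver.client import Client

# Hypothetical Neptune endpoint; Gremlin is served over WebSockets on port 8182.
client = Client(
    "wss://my-graph.cluster-abc123xyz.us-east-1.neptune.amazonaws.com:8182/gremlin",
    "g",
)

# A two-hop "friends of friends" traversal, the kind of highly connected query
# that gets painful to express and slow to run as relational joins.
names = client.submit(
    "g.V().has('person','name','alice')"
    ".out('knows').out('knows').values('name').dedup().limit(10)"
).all().result()
print(names)

client.close()
```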

Machine Learning

Amazon SageMaker

Hiring and training experts in machine learning is expensive and time-consuming. Once these Ph.D. employees start work, they still have to spend time and effort getting the data to the right place, cleaning, merging and analyzing data, training models, tuning model parameters and analyzing results. These tasks are not only compute-intensive but also labor-intensive. Amazon SageMaker promises to streamline a lot of the hard work involved in creating a machine learning model. As AWS CEO Andy Jassy said during his re:Invent 2017 keynote: “This isn’t about skating to where the puck is going to be, it is about hitting the puck in front of you.” We think machine learning and big data analytics are going to be an increasing differentiator for businesses, and SageMaker will give companies a big head start in the race to the puck.
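To make that concrete, here’s a hedged boto3 sketch of kicking off a managed training job; the container image, IAM role and S3 paths are placeholders, and a real job would also set algorithm hyperparameters:

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# SageMaker provisions the instance, pulls the algorithm container, streams
# training data from S3 and writes the resulting model artifacts back to S3.
# Every ARN, image and bucket below is a placeholder, not a real resource.
sm.create_training_job(
    TrainingJobName="churn-model-2017-12-15",
    AlgorithmSpecification={
        "TrainingImage": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-algo:latest",
        "TrainingInputMode": "File",
    },
    RoleArn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    InputDataConfig=[{
        "ChannelName": "train",
        "DataSource": {
            "S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://example-bucket/churn/train/",
            }
        },
    }],
    OutputDataConfig={"S3OutputPath": "s3://example-bucket/churn/output/"},
    ResourceConfig={
        "InstanceType": "ml.m4.xlarge",
        "InstanceCount": 1,
        "VolumeSizeInGB": 10,
    },
    StoppingCondition={"MaxRuntimeInSeconds": 3600},
)
```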

Now, with that optimism out of the way, here’s our caution: SageMaker is not a panacea for your big data and machine learning needs. There is still a very non-trivial learning curve to the more detailed layers of technology that SageMaker exposes. Our in-house data scientists are coming up to speed on SageMaker and would love to talk with you about ways Rackspace can support your own investigation of this promising technology.

IoT

It’s still early days for IoT at most of the businesses that might eventually make use of this emerging technology. Across our broad customer base, those asking for help with IoT are generally focused on data ingestion and analytics at this point. Amazon says it sees the same areas of focus from its own customers.

AWS IoT Analytics is a fantastic example of Amazon listening to what customers need and solving for it by taking on the undifferentiated heavy lifting of cleansing, processing and storing IoT data at scale. We expect Amazon’s support for both a SQL interface and time-series data structures will be a huge leg up in getting started with IoT analytics at scale.
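As a rough sketch of the shape of the service (IoT Analytics is still in preview, so treat the client name and parameter shapes below as assumptions rather than a definitive API), messages land in a channel, flow through a pipeline into a datastore, and datasets are defined with plain SQL:

```python
import boto3

# Assumed preview API: the "iotanalytics" client and camelCase parameters
# below reflect the announced model and may change before general availability.
iota = boto3.client("iotanalytics", region_name="us-east-1")

iota.create_channel(channelName="sensor_channel")
iota.create_datastore(datastoreName="sensor_datastore")

# A pipeline moves (and can cleanse or transform) data from channel to datastore.
iota.create_pipeline(
    pipelineName="sensor_pipeline",
    pipelineActivities=[
        {"channel": {"name": "ingest", "channelName": "sensor_channel", "next": "store"}},
        {"datastore": {"name": "store", "datastoreName": "sensor_datastore"}},
    ],
)

# The SQL interface: a dataset is just a query over the datastore.
iota.create_dataset(
    datasetName="avg_temperature",
    actions=[{
        "actionName": "query",
        "queryAction": {
            "sqlQuery": "SELECT deviceid, AVG(temperature) FROM sensor_datastore GROUP BY deviceid"
        },
    }],
)
```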

AWS IoT Device Management and AWS IoT Device Defender complement IoT Analytics nicely by offloading some of the trickiest aspects of device and security management from enterprises focused on IoT.

We think there is huge potential for IoT, and we’d love to understand what problems Rackspace can help you solve. Click here to set up a time to speak with a Rackspace Product Manager.

What’s next

Our teams are in the process of testing the new AWS services and determining how each will be managed. Current Fanatical Support for AWS customers should contact their Technical Account Manager with any questions about the new services. Stay tuned for more information about an EKS Webinar in February!

Ben Truitt is a Principal Architect on the Fanatical Support for AWS team at Rackspace. He creates solutions that proactively provide value to Rackspace customers, such as identifying opportunities to optimize costs or improve security. His previous experience includes enterprise architecture and software engineering. He serves as chairman of the board for the Rackspace Technical Career Track (TCT), our career path for engaging top technical talent on solving problems for Rackspace customers. In his free time, Ben contributes to the community by providing technical guidance to non-profits participating in The Big Give, South Texas’ biggest annual day of online charitable giving.
