Tips to reduce cloud latency when dealing with data gravity

We’ve been revisiting the concept of data gravity lately and how it affects where applications are placed. As a quick refresher, the concept is fairly simple: as organizations adopt and migrate infrastructure to the cloud, data that remains outside the cloud starts to gravitate toward the applications running in the cloud. Pulling data closer to that infrastructure can reduce latency, increase efficiency and speed, and improve application performance, all of which can positively impact the end user’s experience.

However, some in the industry note that cloud latency can be a concern. In his latest article, Keith Townsend of TechRepublic outlines some possible ways to counter the problem:

In multi-data center designs, data center managers place workloads closest to the data that is commonly accessed, minimizing the impact of latency. An application hosted in the cloud has the same considerations. The simplest technical solution is to host workloads requiring cloud-based data in the same cloud service.

Another simple solution is to co-locate your non-cloud workloads in a Cloud Exchange… Switch’s Cloud Exchange is a value-add Switch offers to its cloud provider and enterprise customers hosting equipment in their data center. Switch provides the capability of running cross connects from customer equipment to cloud providers. The closer proximity eliminates the need for dedicated circuits between a cloud provider and a customer.

Another option is to purchase on-premises cloud services… Since the data is local to the customer’s data center, data gravity doesn’t factor into application performance.

One more tip: some cloud providers are evasive when it comes to disclosing the location of their data centers, which can complicate latency troubleshooting. As Wired notes, to really understand latency you should know the answers to the following questions:

  • Are your VMs stored on different SANs or different hypervisors, for example?
  • Do you have any say in decisions that will impact your own latency?
  • How many router hops are in your cloud provider’s internal network and what bandwidth is used in their own infrastructure?
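Beyond asking your provider these questions, you can measure part of the picture yourself. As a starting point, the sketch below times TCP connection setup to a candidate endpoint, which approximates network round-trip latency from your site to a provider's region. The host and port are placeholders to substitute with your own provider's endpoint; this is a rough client-side measurement, not a full diagnosis of hops or internal bandwidth.

```python
import socket
import statistics
import time

def measure_tcp_latency(host, port, samples=5, timeout=2.0):
    """Time TCP connection setup to host:port, in milliseconds.

    TCP connect time is a reasonable proxy for network round-trip
    latency, since the three-way handshake requires one full round trip.
    """
    times_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        # Open and immediately close a connection; we only care about setup time.
        with socket.create_connection((host, port), timeout=timeout):
            pass
        times_ms.append((time.perf_counter() - start) * 1000.0)
    return {
        "min": min(times_ms),
        "median": statistics.median(times_ms),
        "max": max(times_ms),
    }
```

Running this from both your data center and a prospective colocation or cloud-exchange site, against the same endpoint, gives a quick before/after comparison of the proximity benefit discussed above.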

The idea of being able to transform one’s operations through the use and placement of data is a revolutionary one. And while some processes and workloads are physically restricted from moving to the cloud, with the likes of AWS, Google, Microsoft, and others continuing to refine what it means to physically store data, it makes increasing sense for enterprises to move more workloads and applications to the cloud.

For other ways to reduce latency, here are some favorite insights:

Rich Dolan briefly served as Senior Director of Corporate Marketing at Rackspace. Before that he was Senior Vice President of Marketing for Datapipe, where he spent 16 years developing and driving its world-class marketing team as the company grew from a start-up into one of the leading global managed service providers. He built and oversaw digital marketing and demand generation, event management, content management, internal communications, brand management, partner marketing, and web properties. Rich managed the consolidation of marketing resources from five acquisitions and led Datapipe to thirteen Gartner Magic Quadrant recognitions across the globe, including four placements in the Leaders quadrant. Before Datapipe, Rich spent four years with web design firm RareMedium, where he designed and architected web presences and client-facing applications for Madison Square Garden, JP Morgan, Atlantis, XM Radio, and Credit Suisse First Boston. Rich graduated summa cum laude from the New York Institute of Technology with a BA in computer graphics. He lives in New York with his wife and daughter.