In speaking with potential customers for our Managed Cloud Big Data Solution, we heard feedback that many were looking for a fast and easy way to try out our service, with some type of free trial period. In response to these requests, we recently teamed up with Hortonworks to create a Sandbox Stack, which allows our joint customers to try out HDP and our Managed Cloud Service free of charge for up to two weeks.
The new Rackspace-optimized Spark Stack is now faster and even more productive, with memory optimization and easier data exploration.
In the past, Rackspace Managed Big Data users had to provision their Hadoop or Spark clusters based on a handful of pre-defined stacks we had created.
A decade ago, biomedical scientist Tim El-Sheikh found himself spending countless hours searching through research papers and scientific journals for sources and citations.
In the world of Big Data, bare metal is king. Many companies are seeking an architecture that allows for full utilization of resources like I/O and throughput, but we often hear from you that when it comes to Big Data you are forced to trade the advantages of cloud (elastic, on-demand, flexible) for the consistency and predictability of bare metal. We don’t think you should have to sacrifice one for the other.
So you managed to survive the first post and are still hungering for more? Don’t worry, I’ve got you covered. This time around, we’ll get into more of the peripheral, optional components that might be useful to you. The format will largely be the same as the first post, so let’s get right to it.
So you wanna learn you some Hadoop, eh? Well, get ready to drink from the firehose, because the Hadoop ecosystem is crammed full of software, much of which duplicates efforts, and much of which is named so similarly that it’s very confusing for newcomers.
Data processing platforms do not exist in silos, which is why it is increasingly important for Rackspace to provide interoperability between datastores. This enables customers to choose the best technology for their data processing needs without being restricted to a single tool.
The world of data platforms is moving forward with increasing velocity. To stay relevant in today’s Big Data conversation, technologies must ship features and enhancements at a swifter cadence than legacy technology. The only way this is possible is through an open, worldwide ecosystem of participants. Consider Apache Hadoop: this level of advancement would not be possible without a broad network of developers and engineers working together to rapidly solve new problems. Beyond fixing the issues users have with Hadoop, the community is changing the perception of how users can leverage it. Once a go-to tool for large batch processing jobs, Hadoop is evolving to address multiple workloads simultaneously, such as streaming and interactive workloads, at the same scale as the original batch jobs.
Last week, we hosted a live webinar, “Making Choices: What Kind of Relationship are you Seeking with your Database?,” in which we dug into the options available for the database tier of modern applications, particularly in the cloud.
©2016 Rackspace US, Inc.