Five Trends to Watch in High Performance Computing

High Performance Computing Lab, George Washington University

When most folks hear the term high performance computing, they think supercomputers.

After all, the definition of high performance computing “generally refers to the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer,” according to the blog insideHPC.

And for decades, HPC and supercomputing were intrinsically linked. Specialized computing resources were necessary to help researchers and scientists extract insights from massive data sets. The large budgets required for infrastructure kept HPC in the domain of top-tier research universities, global banks and the energy industry.

In the last few years, however, HPC has become more accessible, with parallel computing on a large number of servers proving to be more efficient than specialized systems. And that’s exciting, because I believe the common definition misses the real power of high performance computing: the ability to quickly generate insights from massive data sets, allowing for breakthroughs like more immediate cancer detection, as well as advancing promising technologies like artificial intelligence.
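The parallel approach described above — splitting one large job across many ordinary machines instead of one specialized system — can be sketched in miniature with Python's standard library. This is a hedged illustration, not production HPC code: `parallel_sum_of_squares` and `partial_sum` are hypothetical names, and real clusters would distribute work across servers with a scheduler rather than across local processes.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Worker: reduce one slice of the data set independently."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    """Split the data across workers, then combine the partial results.

    The same divide-and-combine pattern underlies cluster-scale HPC,
    where each 'worker' is a whole server rather than a local process.
    """
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))
```

For example, `parallel_sum_of_squares(list(range(1_000_000)))` returns the same answer as a single-machine loop, but each chunk is reduced in a separate process — the essence of scaling out rather than up.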

That’s why it’s worth following these five major HPC trends in 2019:

The democratization of high performance computing

While a majority of HPC work continues to be done in-house, in dedicated or private clouds, HPC workloads in the public cloud are growing. More HPC-friendly options from large public cloud providers like Amazon Web Services and Microsoft Azure are attracting traditional HPC users, who can now use the public cloud to extend what they do on-premises. Non-traditional HPC users are also leveraging public cloud HPC solutions to solve machine learning and artificial intelligence challenges.

Recent innovations in on-demand bare metal computing and networking will continue to make running HPC outside of on-premises data centers more attractive. And while it’s still an honor to have access to the world’s fastest supercomputers, for many users, cloud computing offers enough HPC access to start solving major problems today.

From data collection to insight

It used to take years for a researcher or scientist to collect a data set large enough to merit HPC. Not any longer. Thanks to the rise of the internet, WiFi and mobile devices, not only are we more connected than ever, we’re also generating massive amounts of data. Our daily interactions on mobile, social media and the Internet of Things are creating more data than at any point in our history. By one oft-cited estimate, 90 percent of the data in the world was generated in the past two years.

With the challenge of collecting massive data sets minimized, we now have the opportunity to spend more time refining the data and extracting insights from it. The challenge today is moving from an avalanche of data to a world where we can easily extract insights from the massive data sets our customers and systems are providing. High performance computing is key to that.

Rise in GPU computing

When it comes to high performance computing today, the conversation centers on the graphics processing unit, or GPU. Originally built for high-resolution gaming, GPUs are now used for data-intensive work ranging from machine learning to self-driving cars. GPUs have proven to be superior chips for many HPC workloads because their massively parallel architecture is purpose-built for numeric computation.
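The data-parallel model that makes GPUs so effective can be sketched without GPU hardware at all. Below is a minimal, hypothetical emulation in plain Python: `saxpy_kernel` mirrors the classic SAXPY (`alpha*x + y`) kernel from GPU programming tutorials, and `launch` stands in for a GPU grid launch. On real hardware the per-element invocations run concurrently across thousands of cores; here they run sequentially, purely to show the pattern.

```python
def saxpy_kernel(i, alpha, x, y, out):
    """One GPU 'thread': compute a single element of alpha*x + y."""
    out[i] = alpha * x[i] + y[i]

def launch(kernel, n, *args):
    """Emulate a GPU grid launch: one kernel invocation per index.

    A GPU would execute these n invocations simultaneously; the
    sequential loop here only demonstrates the programming model.
    """
    for i in range(n):
        kernel(i, *args)
```

Usage: with `x = [1.0, 2.0, 3.0]`, `y = [10.0, 20.0, 30.0]` and `out = [0.0] * 3`, calling `launch(saxpy_kernel, 3, 2.0, x, y, out)` fills `out` with `[12.0, 24.0, 36.0]`. Because every element is computed independently, the work parallelizes perfectly — exactly the property GPU workloads exploit.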

With the rise of GPU computing, Nvidia, the largest maker of GPUs, has become synonymous with artificial intelligence and high performance computing. Never one to sit on the sidelines, Google recently launched its tensor processing unit, or TPU, built specifically for machine learning. While a battle between GPUs and TPUs may be in the cards, for now the GPU is king. Demand for GPUs grew so strong this year, in fact, that industry-wide shortages led to record-high pricing.

Artificial intelligence goes mainstream

AI isn’t new; it’s been around for nearly 65 years. So why is it suddenly so hot? Mostly because the ecosystem surrounding AI is finally mature enough for us to do more with it. Massive data sets are now being used to train machine learning models, while computing capacity has increased to train larger and more complex models faster. It is no longer cutting edge to create an AI chatbot that can answer customer questions, build recommendation engines that suggest “what to buy next” or develop voice recognition software for the masses.
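The core idea above — using a data set to train a model — can be shown with the simplest possible case. The sketch below, under the assumption of a linear relationship and a hypothetical `train_linear_model` helper, fits `y ≈ w*x + b` by gradient descent on mean squared error; production systems apply the same loop at vastly larger scale, which is exactly where GPU-backed HPC comes in.

```python
def train_linear_model(xs, ys, lr=0.05, epochs=1000):
    """Fit y ~ w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of the mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        # Step both parameters downhill
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

Trained on points drawn from `y = 2x + 1`, the loop recovers weights close to `w = 2` and `b = 1`. The expensive part is the repeated pass over the data — which is why bigger data sets and bigger models pushed AI onto GPUs and HPC clusters.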

Technologies like Siri and shopping experiences on Amazon.com brought highly sophisticated AI concepts into the mainstream and have upped the game on what customers expect from businesses, and what businesses can offer. Today, startups can easily sign up for a “Free Trial of Google Cloud AI” online or get started on deep learning by purchasing AWS’ DeepLens from Amazon.com for $249.99.

From the data center to the edge

As self-driving cars and Internet of Things devices like the Amazon Echo take off, our computing models have shifted from centralization back to decentralization. This is known as edge computing.

With the rise of public cloud, workloads have moved out of company data centers and into large centralized data centers across the globe. But those centralized data centers are too far away for modern applications, which need compute, storage and network capacity near the application or device to deliver the necessary fast response times.

Edge computing speeds up interactions by cutting latency and reducing the network and compute load sent back to centralized data centers. The less processing that must be done in the cloud, the faster the response time. And in high performance computing, speed is always key.

We are in an extremely exciting time, now that the tools available to synthesize and process data have matured to take high performance computing from the research labs to the mainstream.

At Rackspace, we’ve been working to democratize high performance computing for years – from our work in Open Compute to innovation in both private cloud and public cloud technologies. We’re excited to see what advances HPC enables; we bet it will be one of the top stories to watch in 2019.

Becky Trevino is a Senior Director of Product Marketing at Rackspace, where she works with customers to dispel the myth that there is a “one-size-fits-all” journey into the cloud. In her current role, Becky leads marketing efforts for the Managed Hosting and Hybrid Cloud business. Becky’s experience at Rackspace includes roles in marketing as well as in technical customer service for our OpenStack Public Cloud Fanatical Support team. Prior to Rackspace, Becky worked at Dell EMC as an Operations Engineer and Product Marketer. Becky earned a BS and an MS in Engineering from The University of Michigan and has an MBA from Northwestern's Kellogg School of Management. You can follow Becky on LinkedIn at linkedin.com/in/btrevino and Twitter @rebecca_trevino.
