This guest post was written by Suzanne Aldrich, a web technologist specializing in security, performance, and usability. She works as a Solutions Engineer at CloudFlare, a Rackspace Marketplace partner and Referral partner, providing a variety of third-party capabilities around website optimization, DDoS mitigation, DNS, security and analytics.
Suzanne will be our guest on Office Hours Hangout this Thurs., Feb. 18 at 1 p.m. Central to talk about the development of HTTP/2.
As someone who has been programming websites since the last century, I have a strong tendency to look at many of the latest and greatest web technologies as potential fads that may soon become outmoded. So many acronyms for must-know features and techniques have become part of my working vocabulary that it is quite hard to justify making space in my brain for even one more tech buzzword.
But one web standard that has remained almost completely unchanged since I began tapping out my first <HTML> tags with notepad.exe is the HTTP protocol.
Hypertext Transfer Protocol is the method by which web browsers and web servers trade website data in an organized fashion, and it has remained on version HTTP/1.1 since 1997. Although there have been some adjustments, the HTTP/1.1 protocol has been a dependable standard for communicating between servers and clients for nearly two decades. And while it’s been a steady workhorse, it has presented real performance and security challenges.
First, downloading files over an HTTP/1.1 connection is a serial transaction, and web browsers will only open a handful of simultaneous connections to each host. That means with HTTP/1.1, you have to constantly worry about how many total files are going to be downloaded, and the order in which they’re requested, or else you’ll end up with pages that appear to load slowly because the transmission of some resources is blocking the download of other, more visually important elements. Over the years, pro web developers have come up with clever hacks to get around the performance limitations of HTTP/1.1, like concatenating files, inlining assets and sharding domains, but these measures take extra time to implement and feel more like temporary kludges than permanent solutions.
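To make the connection limit concrete, here is a rough back-of-the-envelope sketch (my own illustration, not from the original post) of why domain sharding helped: if each resource costs roughly one round trip and the browser opens about six connections per host, the number of serial "rounds" shrinks as you spread assets across more hostnames.

```python
import math

def download_rounds(num_resources, connections_per_host=6, shards=1):
    """Rough number of serial request rounds under HTTP/1.1, assuming
    each resource costs one full round trip and the browser opens
    `connections_per_host` parallel connections to each sharded host."""
    parallel = connections_per_host * shards
    return math.ceil(num_resources / parallel)

# A page with 48 assets on a single host needs 8 serial rounds,
# but sharding the same assets across 4 hostnames cuts that to 2.
print(download_rounds(48))            # -> 8
print(download_rounds(48, shards=4))  # -> 2
```

This is exactly the kind of math HTTP/2 makes unnecessary, since multiplexing lets all of those requests share one connection.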
The time had clearly arrived for a new and better web communication protocol.
With the web rapidly gaining global popularity, often superseding packaged software as the medium of choice for application delivery, the need to produce highly interactive and rich experiences for clients increased dramatically. Simultaneously, concerns about the secrecy, integrity and authenticity of the data we transmit have raised the bar for developing standards that encrypt our communications. Therefore, it became necessary to develop new protocols that could enable that kind of functionality.
For that reason, Google in 2009 developed SPDY as a means to optimize and secure HTTP transactions over TLS/SSL for more efficient use of bandwidth. As the experimental specification took off in popularity, in 2012 an Internet Engineering Task Force working group began taking proposals for a new standard, based on lessons learned from SPDY, which became the official HTTP/2 specification in 2015.
The main goal of HTTP/2, as described in High Performance Browser Networking, is to reduce latency. This is achieved in four primary ways:
Request and response multiplexing — binary framing over single TCP connection
Compression of HTTP header fields — HPACK encoding reduces overhead
Request prioritization — setting stream priorities and dependencies
Server push — pushing resources before they’re requested
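The first two items on that list are easy to picture at the byte level. As a hedged illustration (my own sketch, not code from the post), HTTP/2 frames every message with a fixed 9-byte binary header per RFC 7540, and HPACK (RFC 7541) can compress a common header line like `:method: GET` down to a single byte by referencing its static table:

```python
import struct

def pack_frame_header(length, frame_type, flags, stream_id):
    """Pack the 9-byte HTTP/2 frame header (RFC 7540, section 4.1):
    24-bit payload length, 8-bit type, 8-bit flags,
    then 1 reserved bit plus a 31-bit stream identifier."""
    return (struct.pack(">I", length)[1:]          # low 3 bytes = 24-bit length
            + struct.pack(">BBI", frame_type, flags, stream_id & 0x7FFFFFFF))

# A HEADERS frame (type 0x1) carrying END_STREAM|END_HEADERS flags on stream 1:
header = pack_frame_header(length=4, frame_type=0x1, flags=0x05, stream_id=1)
assert len(header) == 9  # every HTTP/2 frame starts with exactly 9 bytes

def hpack_indexed(index):
    """HPACK indexed header field (RFC 7541, section 6.1): high bit set,
    remaining 7 bits carry the table index (single-byte form only)."""
    assert 1 <= index < 0x80
    return bytes([0x80 | index])

# ':method: GET' is entry 2 in the HPACK static table, so the whole
# header line compresses to the single byte 0x82.
assert hpack_indexed(2) == b"\x82"
```

Because every frame carries a stream identifier, many requests and responses can be interleaved on one TCP connection, which is what makes the multiplexing in the first bullet possible.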
One of the strongest advantages of HTTP/2 is how much faster websites are served over HTTPS. Reducing the number of connections means fewer expensive TLS/SSL handshakes and better reuse of sessions, so you can maintain secure sites while achieving a high level of performance at the same time. It’s really a win-win situation for everyone. We ran some performance tests and measured our site loading with HTTP/1.1 at 9.07 seconds, while loading over SPDY took 7.06 seconds, and finally HTTP/2 blew both out of the water with an average page load time of only 4.27 seconds.
Over the last few months, CloudFlare has been at the forefront of a movement to bring HTTP/2 to the entire world wide web. Rather than worrying about tuning your own TCP stack, you can use CloudFlare’s unique Nginx implementation for your reverse proxy, allowing users to connect to your site with either SPDY or HTTP/2, depending on which protocol their browser supports. Between November 2015 and the end of January 2016, the global market share of HTTP/2 capable browsers rose from 27 percent to 57 percent.
Meanwhile, CloudFlare’s December 2015 HTTP/2 rollout caused the number of Alexa 1M sites supporting true HTTP/2 to skyrocket from 14,017 to 75,288 in a single month.
CloudFlare is the first CDN to score green across the board on TLS performance, with support for session identifiers, session tickets, OCSP stapling, dynamic record sizing, ALPN, forward secrecy, and of course our new friend, HTTP/2.
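The ALPN support mentioned above is how a browser and server agree on HTTP/2 during the TLS handshake itself. As a small hedged sketch (not from the post, and using only Python's standard library), this is how a TLS client advertises HTTP/2 with an HTTP/1.1 fallback:

```python
import ssl

# Build a TLS client context that offers HTTP/2 first, then HTTP/1.1.
# During the handshake the server picks one via the ALPN extension:
# "h2" if it speaks HTTP/2, otherwise "http/1.1".
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])

# After a real handshake over a wrapped socket, the negotiated protocol
# would be read with conn.selected_alpn_protocol() -> "h2" or "http/1.1".
```

Because the negotiation rides along with the handshake, no extra round trip is spent upgrading the connection, which is part of why HTTP/2 over TLS performs so well.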
In the future, we may look back on these times as fleeting glimpses of the far more immersive experience we’re bound to achieve, once high speed networking and distributed application design evolve and spread globally. If the history of the web proves anything, it’s that HTTP/2 is just the kind of technology with the power to transform our communication into something a lot greater than the sum of its parts.
For more information, keep up with the CloudFlare blog on HTTP/2.