Why Turning on HTTP/2 Was a Mistake

HTTP/2 is a great leap forward for HTTP. It increases bandwidth efficiency by using a binary compressed format for headers, decreases latency by multiplexing requests on the same TCP connection, and allows the client to specify priorities for requests. In many cases moving to HTTP/2 is the right thing to do, but for some applications, HTTP/2 can cause significant problems.

Last year, at Lucidchart, we enabled HTTP/2 on the load balancers for some of our services. Shortly thereafter, we noticed that the servers behind our HTTP/2 load balancers had higher CPU load and slower response times than our other servers. At first, the cause was unclear: the volume of traffic appeared normal, and the problems seemed unrelated to any code changes. On closer inspection, we realized that the average number of requests was the same as usual, but the flow of requests had become spikier. Instead of a steady stream, we were getting short bursts of many requests at once. Although we had overprovisioned capacity based on previous traffic patterns, it wasn’t enough to absorb the new spikes, so responses were delayed and requests timed out.

What was happening? Most browsers limit the number of concurrent connections to a given origin (a combination of scheme, host, and port), and HTTP/1.1 requests must be processed in series on a single connection. As a result, browsers speaking HTTP/1.1 effectively cap the number of concurrent requests to an origin: each user’s browser throttled its requests to our servers and kept our traffic smooth.

Remember how I said HTTP/2 adds multiplexing? With HTTP/2, the browser can send all of its HTTP requests concurrently over a single connection. From the web client’s perspective, this is great: in theory, the client gets all the resources it needs sooner, since it no longer has to wait for responses from the server before issuing additional requests. In practice, however, multiplexing substantially increased the strain on our servers. First, they received requests in large batches instead of smaller, more spread-out ones. Second, because the requests were all sent together rather than staggered as they were with HTTP/1.1, their timeout deadlines fell at nearly the same moment, so whenever the servers fell behind, entire batches of requests timed out at once.
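
To make the difference concrete, here is a minimal Go sketch of what a multiplexing client effectively does. The URL is a placeholder; Go’s default client negotiates HTTP/2 automatically for HTTPS origins that support it. Every request starts at the same instant, and a single connection carries the entire burst:

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	const url = "https://example.com/resource" // placeholder origin
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ { // all 100 requests start together
		wg.Add(1)
		go func() {
			defer wg.Done()
			resp, err := http.Get(url)
			if err != nil {
				fmt.Println("request failed:", err)
				return
			}
			resp.Body.Close()
		}()
	}
	wg.Wait() // the server sees one dense burst, not a smooth stream
}
```

Under HTTP/1.1, the browser’s small per-origin connection pool would have forced most of those requests to wait their turn; under HTTP/2, nothing does.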

How to fix it

Fortunately, it is possible to solve this problem without having to increase computing capacity. In fact, there are a few potential solutions, although they take a little more effort than just enabling HTTP/2 in the load balancer.

Throttle at the load balancer

Perhaps the most obvious solution is to have the load balancer throttle requests to the application servers, so that from the application servers’ point of view, traffic patterns look much like they did under HTTP/1.1. How difficult this is depends on your infrastructure. For example, AWS ALBs don’t offer any mechanism to throttle requests at the load balancer (at least not at the moment). Even with load balancers like HAProxy and Nginx, getting the throttling right can be tricky. If your load balancer doesn’t support throttling, you can still put a reverse proxy between the load balancer and your application servers and do the throttling there.
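
If you take the reverse-proxy route, the core of it is just a concurrency cap in front of the application. Here is a minimal Go sketch, assuming a hypothetical app server on localhost:8081; a production version would also need queue limits, timeouts, and real error handling:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical app server sitting behind the proxy.
	backend, err := url.Parse("http://localhost:8081")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(backend)

	// A buffered channel acts as a semaphore: at most maxInFlight
	// requests are forwarded at once; the rest wait here instead of
	// piling up on the app server.
	const maxInFlight = 32
	sem := make(chan struct{}, maxInFlight)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		sem <- struct{}{}        // acquire a slot (blocks when full)
		defer func() { <-sem }() // release it once the response is written
		proxy.ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":8080", handler))
}
```

The effect is the same smoothing that HTTP/1.1’s connection limits used to provide for free, just enforced on your side of the wire instead of the browser’s.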

Re-architect the application to better handle spiky requests

Another (and perhaps better) option is to change the application so that it can handle the traffic a load balancer passes along once it accepts HTTP/2. Depending on the application, this probably means introducing or tweaking a queueing mechanism so that the application can accept connections freely but only process a limited number of requests at a time. We actually already had a queueing mechanism in place when we switched to HTTP/2, but because of some earlier code decisions, we were not properly limiting concurrent request processing. If you do queue requests, be careful not to process a request after its client has already timed out waiting for a response; there is no need to waste resources computing a response no one will receive.
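
As a sketch of that pattern (not our actual code), the following Go middleware accepts every connection but processes at most a fixed number of requests at a time, and it uses the request’s context so it stops waiting on behalf of clients that have already given up:

```go
package main

import (
	"log"
	"net/http"
	"time"
)

// limitConcurrency lets at most maxInFlight requests run at once; the
// rest queue until a slot frees up or the client abandons the request.
func limitConcurrency(maxInFlight int, next http.Handler) http.Handler {
	sem := make(chan struct{}, maxInFlight)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		select {
		case sem <- struct{}{}: // got a processing slot
			defer func() { <-sem }()
		case <-r.Context().Done(): // client disconnected or timed out while queued
			return // drop it: no one is waiting for this response
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	work := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(100 * time.Millisecond) // stand-in for real work
		w.Write([]byte("done\n"))
	})
	log.Fatal(http.ListenAndServe(":8080", limitConcurrency(32, work)))
}
```

Checking the context before doing the work is what keeps a backlog from turning into wasted CPU: a request that sat in the queue past its client’s timeout gets dropped instead of processed.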

Before you turn on HTTP/2, make sure that your application can handle the differences. HTTP/2 has different traffic patterns than HTTP/1.1, and your application may have been designed specifically for HTTP/1.1 patterns, whether intentionally or not. HTTP/2 has a lot of benefits, but it also has some differences that can bite you if you aren’t careful.

Postscript

Another “gotcha” to look out for is that software supporting HTTP/2 is not fully mature yet. Although it is getting better all the time, software for HTTP/2 just hasn’t had enough time to become as mature and solid as existing HTTP/1.1 software.

In particular, server support for HTTP/2 prioritization is spotty at best. Many CDNs and load balancers either don’t support it at all or have buggy implementations, and even when they do support it, buffering can limit its effectiveness.

Also, many implementations support HTTP/2 only over HTTPS. From a security point of view, this is great. However, it complicates development and debugging inside secure networks that don’t need or want TLS between components: you have to manage a CA and certificates for localhost during local development, log TLS session secrets in order to inspect HTTP/2 traffic with Wireshark or similar tools, and potentially spend additional compute on encryption.
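
For the Wireshark case specifically, most TLS stacks can export session secrets in the NSS key log format that Wireshark reads. Here is a minimal Go client sketch using crypto/tls’s KeyLogWriter; the URL and file name are placeholders, and this should only ever be enabled in development, since the log file lets anyone decrypt the captured sessions:

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
	"os"
)

func main() {
	// Session secrets are appended here in NSS key log format.
	keyLog, err := os.OpenFile("sslkeys.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0600)
	if err != nil {
		log.Fatal(err)
	}
	defer keyLog.Close()

	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig:   &tls.Config{KeyLogWriter: keyLog},
			ForceAttemptHTTP2: true, // negotiate HTTP/2 where the server supports it
		},
	}

	resp, err := client.Get("https://example.com/") // placeholder URL
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
	log.Println("status:", resp.Status)
}
```

Point Wireshark’s TLS “(Pre)-Master-Secret log filename” preference at sslkeys.log and it can decrypt and dissect the captured HTTP/2 session.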

