Where's the fastest place to put my server? How much does it matter?

February 2021

Using my own web server accesslogs and public latency data to get a quantitative answer, and why roundtrips are such a pain.

[Chart: estimated load times for example "fat" and "thin" sites at increasing network latencies]

As network latencies grow, strange things can happen: "fat" sites can become fast (especially if served completely from CDN) and "thin" sites that use APIs can become slow. A typical latency for a desktop/laptop user is 200ms, for a 4G mobile user, 300-400ms.

I've assumed 40 megabit bandwidth, TLS, latency to CDN of 40ms and no existing connections.

"Origin" here means the primary webserver (as opposed to "edge" CDN caches).

What's the fastest place to put my server? Beyond the time taken for servers to respond to requests, it takes time just to traverse the internet - just to get a packet from A to B.

To estimate the theoretical best physical place to put my own server, I've combined publicly available data on latencies with my own web server accesslogs. I'm aiming for a rough, quantitative answer that's based on a real data set.

Why location matters

Time taken to traverse the internet is added to the time taken to respond to a request. Even if your API can respond to a request in 1ms, if the user is in London and your API server is in California the user still has to wait ~130 milliseconds for the response.

It's a bit worse than just 130 milliseconds. Depending on what a user is doing they may end up making a number of those roundtrips. To download a web page usually requires five full roundtrips: one to resolve the domain name via DNS, one to establish the TCP connection, two more to set up an encrypted session with TLS and one, finally, for the page you wanted in the first place.

Subsequent requests can (but don't always) reuse the DNS, TCP and TLS setup but a new roundtrip is still needed each time the server is consulted, for example for an API call or a new page.

130ms sounded fast at first, but the rigmarole of just getting a page and then making a couple of API calls can easily end up taking most of a second just in terms of time waiting for the network. All the other time required - for the server to decide what response to send, for downloading the thing and then rendering whatever it is in your browser - is extra on top of that.
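
To put rough numbers on that, here's a back-of-the-envelope sketch in Python (my own illustrative model, not a measurement from this article's dataset) of the network wait for a cold page load followed by a couple of API calls, at a 130ms roundtrip:

```python
def first_page_load_wait_ms(rtt_ms: float) -> float:
    """Network wait for a cold first page load: one roundtrip each for
    DNS and the TCP handshake, two for TLS, one for the HTTP request."""
    dns, tcp, tls, http = 1, 1, 2, 1
    return (dns + tcp + tls + http) * rtt_ms

def api_calls_wait_ms(rtt_ms: float, n_calls: int) -> float:
    """Later calls reuse the DNS/TCP/TLS setup but still cost a roundtrip each."""
    return n_calls * rtt_ms

rtt = 130  # roughly London to California
print(first_page_load_wait_ms(rtt) + api_calls_wait_ms(rtt, n_calls=2))  # 910ms on the network alone
```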

The two kinds of "fast" for networks

One of the confusing things about networking is the imprecise way in which people talk about getting "faster" networking: "faster" residential broadband, for example, or "fast ethernet" (100 megabits per second, no longer impressive).

This kind of "faster" is not in fact talking about speed. Greater speed would be reduced latency - so faster roundtrips. Instead "faster" networking is really about greater bandwidth: more bytes per second.

APIs or CDNs

One thing that does make things faster: a Content Distribution Network (or CDN). Instead of going all the way to California, perhaps you can retrieve some of the web page from a cache in central London. Doing this saves time - perhaps taking just 50 milliseconds, a saving of about 60%. Caches work great for CSS files, images and Javascript - stuff that doesn't change for each user. They work less well for responses to API calls, which are different for each user and, sometimes, each request.

A quantitative approach

A happy few can serve everything from their CDN. News sites, for example, show the exact same thing to everyone. Others are less lucky and can make only limited, or no, use of caching. These poor people have to pick a location for their main server to help them get their bytes to the users who want them as fast as possible. If they want to make that choice with the sole aim of reducing latency, where should they pick?

Here's what I did:

  1. I took my own accesslogs for a two week period in September just after I'd published something new. I got about a million requests during this period from 143k unique IPs. I excluded obvious robots (which was ~10% of requests).
  2. I used Maxmind's GeoIP database to geocode each IP address in those accesslogs to geographic co-ordinates.
  3. I then used WonderNetwork's published latency data for internet latencies between ~240 world cities.
  4. I mapped those cities (semi-manually, which was pretty painful) from their names to Geonames ids - which gave me co-ordinates for the cities.
  5. Then I loaded all of the above into a Postgres database with the PostGIS extension installed so I could do geographical queries.
  6. I queried to estimate how long, by percentile, requests would have taken if I'd had my server in each of the 200 cities (a simplified sketch of this step is below).
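
The real version of step 6 was a PostGIS query, but a simplified Python sketch of the idea looks like this (`requests` and `latency_ms` are hypothetical stand-ins for the geocoded accesslog IPs and the WonderNetwork city-to-city latencies):

```python
from statistics import quantiles

def city_percentiles(candidate_city: str,
                     requests: list[str],
                     latency_ms: dict[str, dict[str, float]]) -> dict[str, float]:
    """Estimate the roundtrip each request would have seen had the server
    been in candidate_city, then summarise the distribution by percentile.

    requests: the nearest measured city for each request's IP address
    latency_ms: pairwise roundtrip times between measured cities
    """
    rtts = [latency_ms[candidate_city][nearest] for nearest in requests]
    cuts = quantiles(rtts, n=100)  # 99 percentile cut points
    return {"p50": cuts[49], "p75": cuts[74], "p99": cuts[98]}
```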

The results

In the table below I've recorded the outcome: how long users would take to complete a single roundtrip to my server if it were in each city. I've done this by percentiles so you have:

  • the median ("p50")
  • for three quarters of requests ("p75")
  • and for 99% of requests ("p99")

All numbers are in milliseconds.

See the full results as a table, or download them as a csv if that's easier.

The result: east coast of North America good, right on the Atlantic better

The best places are all in North America, which is probably not a total surprise given that it's a pretty dense cluster of English speakers with another cluster not all that far away (in latency terms) in the UK/ROI and then a lot of English-as-a-second-language speakers in Europe. Being right on the Atlantic is best of all: New Jersey and New York state have many of the best places for p99 and it doesn't vary too much, at the top, between p50 and p99.

If you're wondering why small New Jersey towns like Secaucus and Piscataway are so well connected - they have big data centres used by America's financial sector.

As it stands, my server is currently in Helsinki. That's because, unusually for Finland, it was the cheapest option. I only pay about three quid a month for this server. If I moved it to somewhere in New Jersey, and spent more, users would definitely save time in aggregate: half of roundtrips would be completed in 75ms rather than 105ms, a saving of 30%. Over several roundtrips that would probably mount up to around a sixth of a second off the average of first-time page loads, which is not too bad. In case you can't tell, this website isn't hugely taxing for web browsers to render so cuts in the network wait time would make it considerably quicker.

Since I don't dynamically generate anything on this site, the truth is that I'd be best off with a CDN. That would really save a lot of time for everyone: it's nearly twice as fast to be served from a CDN (~40ms) as to be in the fastest place (71ms).

How this might change over time

Latencies aren't fixed and they might improve over time. Here's a table of roundtrip latencies from London to other world cities with more than 5 million people, compared against the theoretical best case - a roundtrip at the speed of light:

City name        Distance (km)   Real latency (ms)   Theoretical best (ms)   Slowdown factor
New York         5,585           71                  37                      1.9
Lima             10,160          162                 68                      2.4
Jakarta          11,719          194                 78                      2.5
Cairo            3,513           60                  23                      2.6
St Petersburg    2,105           38                  14                      2.7
Bangalore        8,041           144                 54                      2.7
Bogota           8,500           160                 57                      2.8
Buenos Aires     11,103          220                 74                      3.0
Lagos            5,006           99                  33                      3.0
Moscow           2,508           51                  17                      3.0
Sao Paulo        9,473           193                 63                      3.1
Bangkok          9,543           213                 64                      3.3
Hong Kong        9,644           221                 64                      3.4
Istanbul         2,504           60                  17                      3.6
Lahore           6,298           151                 42                      3.6
Tokyo            9,582           239                 64                      3.7
Hangzhou         9,237           232                 62                      3.8
Shanghai         9,217           241                 61                      3.9
Mumbai           7,200           190                 48                      4.0
Taipei           9,800           268                 65                      4.1
Dhaka            8,017           229                 53                      4.3
Seoul            8,880           269                 59                      4.5

(Please note, a correction: the above table previously compared real roundtrips with theoretical straight-line journeys - this has now been corrected. See these two comments for discussion and more detail, including how part of the slowdown is down to the nature of fibre optic cables and the curvature of submarine cable routes.)

As you can see, the London to New York latency is within a factor of 2 of the speed of light, but routes to other places like Dhaka and Seoul are much slower: more than four times slower than light. There are probably understandable reasons why the London to New York route has been so well optimised, though I doubt it hurts that it's mostly ocean between them, so undersea cables can run fairly directly. Getting to Seoul or Dhaka involves a more circuitous route.
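
For what it's worth, here's how the "Theoretical best" column and the slowdown factor can be reproduced - assuming the signal travels the straight-line distance there and back at the speed of light in a vacuum:

```python
SPEED_OF_LIGHT_KM_PER_MS = 299_792.458 / 1000  # ~300 km per millisecond

def theoretical_best_ms(distance_km: float) -> float:
    """Roundtrip time at the speed of light in a vacuum, with no detours."""
    return 2 * distance_km / SPEED_OF_LIGHT_KM_PER_MS

def slowdown_factor(distance_km: float, real_rtt_ms: float) -> float:
    return real_rtt_ms / theoretical_best_ms(distance_km)

# London to New York, using the figures from the table above
print(round(theoretical_best_ms(5_585)))      # 37
print(round(slowdown_factor(5_585, 71), 1))   # 1.9
```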

I should probably mention that new protocols promise to reduce the number of roundtrips. TLS 1.3 can establish an encrypted session in one roundtrip rather than two, and HTTP/3 can club together the HTTP roundtrip with the TLS one, meaning you now only need three: one for DNS, a single roundtrip for both the connection and the encrypted session, and finally a third for the subject of your request.
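
Reusing the back-of-the-envelope roundtrip figure from earlier, the saving on a cold first load is worth having:

```python
rtt_ms = 130  # roughly London to California
print(5 * rtt_ms)  # DNS + TCP + two TLS roundtrips + HTTP request: 650ms
print(3 * rtt_ms)  # DNS + combined connection/TLS 1.3 setup + HTTP request: 390ms
```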

One false hope some people seem to have is that new protocols like HTTP/3 do away with the need for Javascript/CSS bundling. That is based on a misunderstanding: while HTTP/3 will remove some initial roundtrips, it does not remove subsequent roundtrips for extra Javascript or CSS. So bundling is sadly here to stay.

Data weaknesses

While I think this is an interesting exercise - and hopefully indicative - I should be honest and say that the quality of the data I'm using is solidly in the "medium-to-poor" category.

Firstly, the GeoIP database's ability to predict the location of an IP address is mixed. Stated (ie: probably optimistic) accuracy ranges up to about 1000 kilometers in some cases, though for my dataset it thinks the average accuracy is 132km with a standard deviation of 276km - so not that accurate but I think still useful.

My source of latency data, WonderNetwork, are really reporting point-in-time latency from when I got it (30th November 2020) as opposed to long term data. Sometimes the internet does go on the fritz in certain places.

WonderNetwork have a lot of stations but their coverage isn't perfect. In the West it's excellent - in the UK even secondary towns (like Coventry) are represented. Their coverage worldwide is still good but more mixed. They don't have a lot of locations in Africa or South America and some of the latencies in South East Asia seem odd: Hong Kong and Shenzhen are 140ms away from each other when they're only 50km apart - a slowdown factor of several hundred compared to the speed of light. Other mainland China pings are also strangely bad, though not on that scale. Perhaps the communists are inspecting each ICMP packet by hand?

The other problem with the latency data is that I don't have the true co-ordinates for the datacentres that the servers are in - I had to geocode that myself with some scripting and a lot of manual data entry in Excel (I've published that sheet on github to save anyone from having to redo it). I've tried hard to check these but there still might be mistakes.

By far the biggest weakness, though, is that I'm assuming that everyone is starting right from the centre of their closest city. This isn't true in practice and the bias this adds can vary. Here in the UK, residential internet access is a total hack based on sending high frequency signals over copper telephone lines. My own latency to other hosts in London is about 9ms - which sounds bad for such a short distance but is still 31ms better than average. Many consumer level routers are not very good and add a lot of latency. The notorious bufferbloat problem is also a common source of latency, particularly affecting things that need a consistent latency level to work well - like videoconferencing and multiplayer computer games. Using a mobile phone network doesn't help either: 4G networks add circa 100ms of lag in good conditions and are of course much worse when the signal is poor and there are a lot of link-level retransmissions.

I did try assuming the global average latency per kilometer (about 0.03ms) to compensate for distance from the closest city but I found this just added a bunch of noise to my results as for many IPs in my dataset this is an unrealistic detour: the closest city I have for them isn't that close at all.

Generality

It's fair to wonder to what extent my results would change for a different site. It's hard to say but I suspect that the results would be approximately the same for other sites which are in English and don't have any special geographical component to them. This is because I reckon that people reading this blog are probably pretty uniformly distributed over the English speaking population of the world.

If I was writing in Russian or Italian the geographic base of readers would be pretty different and so the relative merits of different cities from a latency point of view would change.

It wasn't too hard for me to run this test and I've released all the little bits of code I wrote (mostly data loading and querying snippets) so you could easily rerun this on your own accesslogs without too much effort. Please write to me if you do that, I'd love to know what results you get.

Gratuitous roundtrips

Picking a good spot for your server only goes so far. Even in good cases you will still have nearly a hundred milliseconds of latency for each roundtrip. As I said above there can be as many as five roundtrips when you visit a page.

Having any unnecessary roundtrips will really slow things down. A single extra roundtrip would negate a fair chunk of the gains from putting your server in a fast place.

It's easy to add roundtrips accidentally. A particularly surprising source of roundtrips is cross-origin (CORS) preflight requests. For security reasons to do with policing cross-origin requests, browsers will "check" certain HTTP requests made from Javascript. This is done by sending a request to the same url beforehand with the special OPTIONS verb; the response to this decides whether the original request is allowed or not. The rules for when exactly preflighting is done are complicated, but a surprising number of requests are caught up in the net - notably including JSON POSTs to subdomains (such as api.foo.com when you're on foo.com) and third party webfonts. CORS preflights use a different set of caching headers from the rest of HTTP caching, which are rarely set correctly and anyway only apply to subsequent requests.
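
If you can't avoid preflights entirely, you can at least let the browser cache them. Here's a sketch of the kind of response an API might send to the preflight OPTIONS request - the header names are the standard CORS ones, while the values and the api.foo.com setup are just illustrative:

```python
# Headers returned for an OPTIONS preflight from foo.com to api.foo.com.
# Access-Control-Max-Age tells the browser how long (in seconds) it may
# cache the preflight result, sparing later requests that extra roundtrip.
# Browsers cap this value, so it won't be cached indefinitely.
preflight_response_headers = {
    "Access-Control-Allow-Origin": "https://foo.com",
    "Access-Control-Allow-Methods": "GET, POST",
    "Access-Control-Allow-Headers": "Content-Type",
    "Access-Control-Max-Age": "7200",
}
```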

A lot of sites these days are written as "single page apps", where you load some static bundle of Javascript (hopefully from a CDN) which then makes a (hopefully low) number of API requests from your browser to decide what to show on the page. The hope is that this is faster after the first request, as you don't have to redraw the whole screen when a user asks for a second page. Usually it doesn't end up helping much, because a single HTML page tends to get replaced with multiple chained API calls. A couple of chained API calls to an origin server is almost always slower than redrawing the whole screen - particularly over a mobile network.
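
A rough comparison, assuming the connection is already set up and a 300ms mobile roundtrip (the 4G figure from earlier):

```python
rtt_ms = 300  # a typical 4G roundtrip
print(1 * rtt_ms)  # server-rendered second page: one roundtrip, 300ms
print(2 * rtt_ms)  # SPA making two chained API calls: 600ms
```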

I always think it's a bit rubbish when I get a loading bar on a web page - you already sent me a page, why didn't you just send the page I wanted! One of the great ironies of the web is that while Google don't do a good job of crawling these single page apps they certainly produce a lot of them. The "search console" (the website formerly known as "webmaster tools") is particularly diabolical. I suppose Google don't need to worry overly about SEO.

Bandwidth improves quickly but latency improves slowly

Internet bandwidth just gets better and better. You can shove a lot more bytes down the line per second than you could even a few years ago. Latency improvements, however, are pretty rare and as we get closer to the speed of light the improvement will drop off completely.

100 megawhats per second is less compelling when you still have to wait the same half a second for each page to load.

Contact/etc

Please do feel free to send me an email about this article, especially if you disagreed with it.

If you liked it, you might like other things I've written.

You can get notified when I write something new by email alert or by RSS feed.

If you have enjoyed this article and as a result are feeling charitable towards me, please test out my side project, Quarchive, a FOSS social bookmarking style site, and email me your feedback!

See also

Last year APNIC analysed CDN performance across the world and concluded that 40ms is typical. I wish they'd included percentile data in their post, but I can still get the vague impression that CDNs perform best in the West and less well in South America, China and Africa - which is a problem given that most servers are based in the West.

While I was writing this post there was an outbreak of page-weight-based "clubs", like the "1MB club" and the, presumably more elite, 512K Club. I suppose I approve of the sentiment (and it's all in the name of fun, I'm sure) but I think they're over-emphasising the size of the stuff being transferred. If you're in London, asking for a dynamically generated page from California, it will still take most of a second (130ms times 5 roundtrips) regardless of how big the thing is.

The submarine cable map is always fun to look at. If you want to see a sign of the varying importance of different places: the Channel Islands (population 170 thousand) have 8 submarine cables, including two that simply connect Guernsey and Jersey. Madagascar (population 26 million) has just four. I also think it's funny that even though Alaska and Russia are pretty close there isn't a single cable between them.

If you want to reproduce my results I've published my code and data on Github. I'm afraid that does not include my accesslogs which I can't make public for privacy reasons. Please don't expect me to have produced a repeatable build process for you: that takes a lot more time and effort so it's provided on a "some assembly required" basis. :)

