High-performance dynamic maps with OpenLayers, MapServer and nginx

Dynamic maps are maps whose underlying data changes over time. Election results are a good example. You can see such a map here: http://maps.nrc.nl/ps2015/ps2015-100pct.php. In this article I'll describe our solution to the problem of changing data: recreating map tiles while still keeping a reliable cache. The cache is essential in our case, because the vast majority of users view the map while it is in flux, and that means extremely high server load. As long as a tile is in cache, the impact of a client hitting the server is minimal: we can serve the still-valid tile to the user and spare our limited resources.

Three layers of cache

But before we can solve the cache riddle (keep a tile in cache as long as possible while keeping the client's view as fresh as possible), we first need a good understanding of which types of cache we actually have to manage. Because, and this is interesting, there are three layers of cache involved, and each needs to be invalidated as soon as new data arrives. First, there is the webserver cache. This is the most important one, because all clients come here to request tiles.
Second, there is the browser cache. The browser will refuse to request a tile from the server while its own copy is still valid. To keep the clients as fresh as possible we allow the browser to cache tiles for 30 seconds. A direct refresh will not hit the server, but every click on the map will result in a request to the server, which is what we want.
The last layer of cache was my biggest puzzle: OpenLayers, the JavaScript library we use to show the map in the browser (we use version 3). I noticed that once a tile has been requested, OpenLayers keeps its own cache of tiles, with the URL as the key. At the end of this article I'll show you how to overcome this cache without too much impact on the server.

First we have the webserver, nginx in our case, which holds the shared cache of all tiles viewed at least once. This is the most important cache, because every tile is created only once by the server and all other clients are served the cached version. Creating a tile costs 1 to 5 seconds in our case; serving it from cache is negligible. To be on the safe side the cache is automatically invalidated after 24 hours, just in case. Experiments showed that we needed at least 4 GB of disk to hold the tiles and 30 MB for the key zone.

One consideration: we kept the nginx defaults for the cache key and left the Accept-Encoding header in it. The downside is that different browsers can produce different cache keys while the same tile is generated, but it avoids problems with browsers that don't accept gzipped content, and other quirks. Finally, we have two locations inside our nginx configuration: /cgi-bin is where the tiles are generated, and tiles can stay in cache there for 24 hours; all other locations keep their responses in cache for only 5 minutes.

Our nginx configuration for tile cache is thus as follows:

proxy_cache_path /var/www/nginx_cache levels=1:2 keys_zone=main:20m max_size=10g inactive=7d;
set $cache_key $scheme$host$uri$is_args$args$http_accept_encoding;
location /cgi-bin {
        proxy_ignore_headers "Set-Cookie";
        proxy_ignore_headers "Cache-Control";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://localhost:8888;
        proxy_cache main;
        proxy_cache_key $cache_key;
        proxy_cache_valid 24h; # 200, 301 and 302 will be cached.
        add_header X-Cached-cgi $upstream_cache_status;
        expires 30s;
}

The last line tells the browser to keep the tile in its own cache for at most 30 seconds. This neutralizes the browser cache roughly 95% of the time, which is good. We rely on the brute force of nginx, which can handle gazillions of connections; this was proven right on election night.
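For completeness, here is a minimal sketch of what the second location mentioned earlier could look like, with its much shorter cache lifetime. The port and header name mirror our /cgi-bin location above; treat the exact values as an assumption, not a prescription:

```nginx
# all non-tile responses stay in cache for only 5 minutes
location / {
        proxy_set_header Host $host;
        proxy_pass http://localhost:8888;
        proxy_cache main;
        proxy_cache_key $cache_key;
        proxy_cache_valid 5m;
        add_header X-Cached $upstream_cache_status;
}
```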

With these two caches configured correctly, the last puzzle proved the hardest: there was no easy way to tell OpenLayers that it had to refresh its tiles. The best solution I found, and the one I used, is to add a 'random' parameter to the query string sent to the server to fetch the tile. When drawing another layer, or when refreshing one, you can easily tell your WMS source to update its parameters like so:

wmsSource.updateParams({'LAYERS': activeLayer}); // where activeLayer holds the name of the layer to be shown

But this only works the first time. The second time a layer becomes active, OpenLayers uses its cache, no matter what the Expires header of the image says. So we need to fool OpenLayers into thinking it's handling a new tile. Adding a random string to the query solves this problem. So this:
wmsSource.updateParams({'LAYERS': activeLayer, 'RAND': Date.now()});

makes OpenLayers go back to the server to get a fresh tile. The problem with this solution is that it invalidates the server cache as well, since almost every request now carries a unique timestamp. Not ideal, but we're almost done.

Putting it all together

There is one thing I did not mention before: the server knows when the tiles need to be refreshed. So instead of letting the client insert a random value, I decided to generate a fixed value on the server that stays valid as long as there is no new result available. The only thing that has changed is that we poll the server every 10 seconds to see if a new election result is available, and if so, refresh the map.
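How that fixed value is produced is up to the server side; here is a hypothetical sketch, assuming the value is simply the timestamp of the last result import (the file name matches the URL polled by the client code):

```shell
# hypothetical sketch: run this after every result import;
# in production the file is served as /ps2015/ps2015-last-update.json
ts=$(date +%s)
printf '{"RAND": "%s"}\n' "$ts" > ps2015-last-update.json
```

Any value works, as long as it only changes when a new result arrives.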

nmt.mapserver.check_tile_lastupdate = function() {
  jQuery.ajax({
    dataType : 'json',
    url      : '/ps2015/ps2015-last-update.json',
    data     : 'v=' + Math.ceil(Date.now() / 10000),
    success  :
      function (json) {
        nmt.mapserver.CURRENT_TILE_VERSION = nmt.mapserver.TILE_VERSION;
        nmt.mapserver.TILE_VERSION = json.RAND;
        // only refresh the layer when the server reports a new version
        if (nmt.mapserver.TILE_VERSION != nmt.mapserver.CURRENT_TILE_VERSION) {
          wmsSource.updateParams({
            'LAYERS': actieveLaag, // the currently active layer
            'RAND': nmt.mapserver.TILE_VERSION
          });
        }
      }
  });
};

The line data: 'v=' + Math.ceil(Date.now() / 10000) adds a new value to the request every 10 seconds. Because we have a 5-minute cache on the normal location, we need this to make sure we actually poll the tile version every 10 seconds instead of hitting the cache.
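To see why that expression changes only every 10 seconds: Date.now() returns milliseconds, so dividing by 10000 and rounding up buckets time into 10-second windows. A small illustration:

```javascript
// Date.now() is in milliseconds; ceil(ms / 10000) stays constant
// within each 10-second window, so the query string (and thus the
// nginx cache key) only changes every 10 seconds.
function pollVersion(nowMs) {
  return Math.ceil(nowMs / 10000);
}

console.log(pollVersion(15000)); // 2 — 15 s falls in the second window
console.log(pollVersion(20000)); // 2 — still the same window, same cache key
console.log(pollVersion(25000)); // 3 — new window, so a fresh request
```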
And there you have it: three layers of cache, all working reliably together with a snappy map, until the server-side value of TILE_VERSION changes. At that moment all the clients (within a timeframe of 10 seconds) refresh their viewport, all tiles get recreated by MapServer, and nginx stores the result in its cache for as long as possible.
The result? Well, I can only show you how it ended; the dynamic part is over and the results are in, but I have a gif that shows exactly what happened between 21:00 and 13:00 the next day.
The servers held up just fine. During the election the server load could get as high as 200, but it became stable at 1 or 2 within 5 minutes, and all the requested tiles were generated and available from cache.
We served our map to over 100,000 viewers in 12 hours, using a total of 10 servers with 20 CPUs and 20 GB of RAM.
We've been making this kind of election map for a decade, and this was the first time we did not lose the site to popular demand.
