
The guide explains how to relieve an overloaded backend by configuring Nginx’s built‑in proxy cache to serve static assets directly, using both disk‑based and memory‑only caches. It walks through creating a shared‑memory cache zone with proxy_cache_path, enabling the cache in a location block, setting appropriate TTLs with proxy_cache_valid, and verifying hits via the X-Cache header. It also shows how to mount a RAM cache under /dev/shm for ultra‑fast reads while warning against common mistakes such as caching dynamic API responses or oversizing memory zones. Finally, it provides a quick sanity checklist (directory permissions, shared‑memory allocation, hit/miss headers) to ensure the configuration works after each reload.



Tuning Nginx with caching – get static files off the backend

If you’ve ever watched a busy site grind to a halt because every image, CSS file and JS bundle is still being pulled from an upstream app server, this guide will show you how to make Nginx actually cache those assets. We’ll walk through setting up both disk‑based and RAM‑only caches, explain the knobs you really need to turn, and point out the pitfalls that waste CPU cycles.

Why the default reverse‑proxy setup feels slow

I’ve seen this happen after a major WordPress update: the site’s theme loads fine, but every request for /wp-content/uploads/* still hits PHP‑FPM. The result is higher latency and an overloaded backend that could have been sleeping on those static files. Nginx can intercept those requests and serve them from its own cache, but most people leave the feature disabled because the docs look like a novel.

Enable a disk cache for general static content

  1. Create a cache zone – This tells Nginx where to store cached responses and how much space it may use.

    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=STATIC:100m max_size=10g inactive=60m use_temp_path=off;

    Why? keys_zone reserves shared memory for the cache index; at roughly 8,000 keys per megabyte, 100 MiB indexes on the order of 800,000 entries. inactive=60m purges objects that haven’t been accessed in an hour, keeping the disk tidy.

  2. Activate caching for the location block – Only turn it on where you really need it.

    location /static/ {
        proxy_pass http://backend;
        proxy_cache STATIC;
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid 404 1m;
        add_header X-Cache $upstream_cache_status;
    }

    Why? proxy_cache_valid defines how long different status codes stay fresh; a short TTL for 404s prevents caching bogus URLs forever. The custom header lets you verify hits in the browser dev tools.

  3. Test it – Request a known static file twice and watch the X-Cache header change from MISS to HIT. If you still see MISS, check whether the upstream response carries headers that disable caching – Nginx skips caching responses with Set-Cookie or Cache-Control: private/no-cache – and, if the cookie is harmless for these assets, add proxy_ignore_headers Set-Cookie;.
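The MISS-to-HIT check in step 3 is easy to script instead of eyeballing dev tools. A small sketch – on a live box you would feed it curl -sI output for a real URL; here the response headers are inlined with printf so the parsing logic stands on its own:

```shell
# Pull the X-Cache value out of a block of response headers.
# On a real server: curl -sI http://example.com/static/style.css | x_cache
x_cache() {
  awk -F': ' 'tolower($1) == "x-cache" { gsub(/\r/, "", $2); print $2 }'
}

# Stand-ins for the first (uncached) and second (cached) request:
printf 'HTTP/1.1 200 OK\nX-Cache: MISS\n' | x_cache   # → MISS
printf 'HTTP/1.1 200 OK\nX-Cache: HIT\n'  | x_cache   # → HIT
```

Run it twice against the same asset; anything other than MISS followed by HIT means the response is being bypassed for one of the header reasons above.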

RAM‑only cache for ultra‑fast assets

If your server has a generous amount of memory, you can keep hot objects in RAM and skip the disk altogether:

proxy_cache_path /dev/shm/nginx levels=1:2 keys_zone=MEMORY:200m max_size=5g inactive=30m;

Placing the cache under /dev/shm puts it on Linux’s tmpfs, turning every hit into a memory read. The trade‑off is that a reboot flushes the entire cache, but for CDN‑like static assets that’s acceptable.
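Defining the zone only reserves it; a location block still has to opt in. A minimal sketch, assuming the same backend upstream as above and that /thumbs/ holds your hottest objects (both names are placeholders):

```nginx
location /thumbs/ {
    proxy_pass http://backend;
    proxy_cache MEMORY;                         # the tmpfs-backed zone defined above
    proxy_cache_valid 200 5m;                   # short TTL: RAM is scarce
    add_header X-Cache $upstream_cache_status;  # same MISS/HIT verification as before
}
```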

Common missteps and when to skip caching

  • Caching dynamic API responses – If you blindly enable proxy_cache on /api/, you’ll start serving stale JSON to users. Unless the backend sets explicit Cache-Control: max-age=… headers, keep those endpoints uncached.
  • Over‑sizing the memory zone – I once set keys_zone=STATIC:2g on a 4 GiB VPS; Nginx tried to allocate more shared memory than the kernel allowed and failed to start. Keep the zone size realistic for your RAM budget.
  • Using third‑party cache managers – Tools that claim “auto‑tune Nginx caches” often add extra cron jobs and log noise without measurable gain. A well‑tuned proxy_cache_path does the job.
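The over‑sizing mishap above is avoidable with a back‑of‑the‑envelope check: the nginx docs estimate roughly 8,000 cache keys per megabyte of keys_zone, so size the zone from the number of objects you expect, not from how much RAM you have:

```shell
# Rule of thumb from the nginx docs: ~8,000 keys per 1 MiB of keys_zone.
# A 100 MiB zone therefore indexes on the order of 800,000 objects --
# far more static files than most sites serve, so start small.
zone_mb=100
echo "$(( zone_mb * 8000 )) keys"   # → 800000 keys
```

A 2 GiB zone, by the same arithmetic, would index sixteen million objects – almost certainly wasted shared memory on a 4 GiB VPS.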

Quick sanity checklist

Check                          Command
Cache directory writable?      ls -ld /var/cache/nginx
Shared-memory zone allocates?  nginx -t (the config test fails if the zone cannot be created)
Header showing hit/miss?       curl -I http://example.com/static/style.css

Run through the table after each config reload; if anything looks off, revert the last change and try again.

That’s it – a lean set of directives that squeeze every ounce of speed out of Nginx’s built‑in cache. Give it a spin, watch your backend CPU drop, and enjoy those snappy page loads.