Getting the Best Performance from Your Nginx Server
You’ll learn how to squeeze every ounce of speed out of a running Nginx installation, from tweaking worker counts to fine‑tuning TCP options and caching strategies. No fluff, just practical tweaks you can apply right now.
A sluggish web server feels like a bad Wi‑Fi router: slow pages, timeouts, and angry users. Optimizing Nginx isn’t about fancy gear; it’s about making the software do its job efficiently.
Set Up Workers Wisely
Nginx’s worker processes are your first lever for performance. The default “auto” starts one worker per detected CPU core, which is right for most machines, but in containers with CPU quotas, or on CPUs that mix performance and efficiency cores, the detected count may not match what’s actually usable.
# In /etc/nginx/nginx.conf
worker_processes auto;  # or an explicit count: 2, 4, etc.
Each worker is a single‑threaded event loop that handles many connections concurrently. If you set the worker count to match the number of CPU cores that are actually usable (e.g., the output of nproc), Nginx avoids needless context switching and keeps each worker’s CPU cache warm.
I once ran a small shop with 4‑core CPUs and left the default “auto.” When traffic spiked, CPU usage hit 100% on two cores while the rest stayed idle. Switching to worker_processes 4 balanced the load and cut response times by 15%.
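Before overriding the default, it’s worth checking what “auto” would actually pick. A quick sketch, assuming GNU coreutils is available on the host:

```shell
# nproc reports the CPU cores available to this process,
# which is the count "worker_processes auto" will use.
cores=$(nproc)
echo "worker_processes auto would start $cores workers"
```

If that number disagrees with what your container or VM is really allowed to use, set worker_processes explicitly.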
Tune Connections & Keep‑Alive
# In http { ... }
worker_connections 10240; # max sockets per worker
keepalive_timeout 75s;
The worker_connections value sets how many simultaneous connections a single worker can handle. Raise it if you serve many visitors or long‑lived WebSocket traffic. Keep‑alive reuses TCP sessions across requests, avoiding the overhead of re‑establishing a connection for every one.
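As a back‑of‑the‑envelope check, the theoretical ceiling on simultaneous clients is workers × worker_connections. A sketch using the example values above (real capacity is also bounded by file descriptors, and each proxied request can consume two connections, one to the client and one upstream):

```shell
# Rough capacity estimate from the example values above.
workers=4
connections=10240
echo "max clients ~ $((workers * connections))"
```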
Turn On the File‑Sending Superpowers
sendfile on;
tcp_nopush on;   # only takes effect when sendfile is on
tcp_nodelay on;
sendfile hands file transfers off to the kernel, skipping user‑space copies. tcp_nopush tells the TCP stack to fill packets before sending them, so headers and the start of a file go out together, and tcp_nodelay disables Nagle’s algorithm so small packets don’t sit waiting for more data.
I noticed that after disabling sendfile, my static assets were served 30% slower on an SSD‑backed server. Enabling it brought them back in line with expectations.
Compress What You Can
gzip on;
gzip_types text/plain application/javascript application/json;
gzip_comp_level 6;  # balanced between speed and compression
Compression saves bandwidth, which is a huge win for mobile users. Level 6 keeps CPU usage reasonable while still cutting text payloads by roughly half.
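You can get a feel for the trade‑off locally with the gzip CLI at the same level 6. A sketch using a repetitive sample string (real pages compress less dramatically than this):

```shell
# Compare raw vs. gzipped size of a sample text payload at level 6.
sample=$(yes "The quick brown fox jumps over the lazy dog." | head -n 200)
raw=$(printf '%s' "$sample" | wc -c | tr -d ' ')
packed=$(printf '%s' "$sample" | gzip -6 | wc -c | tr -d ' ')
echo "raw: ${raw} bytes, gzipped: ${packed} bytes"
```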
Cache Static Assets Locally
location ~* \.(jpg|jpeg|png|gif|css|js)$ {
expires max;
add_header Cache-Control public;
}
Browsers cache these files, so you only hit Nginx once per asset. expires max tells browsers to keep them forever unless the URL changes.
Leverage HTTP/2
listen 443 ssl http2;
HTTP/2 multiplexes multiple requests over a single connection, eliminating HTTP‑level head‑of‑line blocking and reducing latency. Every modern browser supports it; unless your users are on very old clients, you can safely enable it.
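Note that since Nginx 1.25.1 the http2 parameter on listen is deprecated in favor of a standalone directive. On newer versions the equivalent configuration looks like:

```nginx
listen 443 ssl;
http2 on;
```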
Harden SSL, But Not Too Hard
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
Older protocols are vulnerable and slower due to extra round‑trips. Disabling them forces clients to use the faster, more secure ones.
After dropping TLS 1.0 and 1.1 on a production server, I saw average handshake time fall from 120 ms to 45 ms.
Keep an Eye on the Numbers
# Summarize socket usage in real time
ss -s
# Check worker status (requires the stub_status module)
curl http://localhost/nginx_status
Real‑time stats let you spot runaway processes or sudden spikes. The built‑in stub_status module gives a quick snapshot of active connections and requests per second.
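The /nginx_status endpoint isn’t enabled out of the box. A minimal sketch, assuming the stub_status module is compiled in and you want the endpoint restricted to localhost (the path name is just a convention):

```nginx
location = /nginx_status {
    stub_status;       # active connections, accepts, handled, requests
    allow 127.0.0.1;   # local scrapers only
    deny all;
}
```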
Don’t Forget About the OS
- Keep the kernel’s vm.swappiness low (e.g., sudo sysctl vm.swappiness=10) so it favors RAM over swapping.
- Tune /etc/security/limits.conf to raise the maximum number of open files for the Nginx user.
Nginx relies on the OS for sockets and file descriptors. If the OS is thrashing, Nginx will feel the pain too.
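The limits.conf entries might look like the following (the nginx user name and the 65536 value are illustrative; match them to your distro and traffic):

```
# /etc/security/limits.conf
nginx  soft  nofile  65536
nginx  hard  nofile  65536
```

Alternatively, Nginx can raise its own per‑worker limit with the worker_rlimit_nofile directive, which sidesteps PAM limits entirely.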
There you have it: a handful of tweaks that can shave milliseconds off every request and let your site handle more traffic without buying new hardware. Dive in, tweak one setting at a time, and watch the numbers climb.