
The article explains how to adjust Nginx timeout settings to prevent 504 Gateway Timeout errors, with step‑by‑step config changes. It covers editing server or location blocks, adding proxy_connect_timeout, proxy_send_timeout, and proxy_read_timeout directives for slow API endpoints, as well as fastcgi_read_timeout for PHP scripts. It also reminds readers to check backend limits like PHP’s max_execution_time and warns about potential side effects of overly high timeouts on resource usage. Finally, the author shares a real‑world example where increasing timeouts resolved persistent 504 errors in a data‑intensive dashboard.



How to Increase the Requests Timeout on Nginx – A No‑Fuss Guide

If your site keeps returning 504 Gateway Timeout errors because Nginx is cutting off slow connections, you’re not alone. The fix is a handful of lines in the config file and a quick reload. Let’s get it done.

How to Increase the Request Timeout on Nginx for Slow APIs

1. Locate the right context

Open your main nginx.conf or the site‑specific file under /etc/nginx/sites-available/.

You want the timeout tweak to apply only where it matters, usually inside the server {} or location {} block that talks to your backend.
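If you're not sure which file actually holds that block, a quick grep for the proxy or FastCGI directives usually points you at it (the path below assumes the standard Debian/Ubuntu layout):

   # Find which config file talks to your backend
   grep -RnE "proxy_pass|fastcgi_pass" /etc/nginx/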

2. Add proxy timeouts

   location /api/ {
       proxy_pass http://backend:8080;
       proxy_connect_timeout 60s;   # time allowed to establish a connection with the upstream
       proxy_send_timeout    90s;   # time allowed between successive writes while sending the request upstream
       proxy_read_timeout    120s;  # time allowed between successive reads of the upstream response (the usual 504 culprit)
   }

Why each matters: proxy_connect_timeout prevents Nginx from hanging on a dead upstream. proxy_send_timeout covers large uploads, and proxy_read_timeout is the one that stops you from seeing those dreaded 504s.
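If several locations proxy to slow backends, the same three directives can also sit one level up, in the server {} (or even http {}) block, where they act as defaults for everything below. The values here are just illustrative, not recommendations:

   server {
       listen 80;
       server_name your.domain;

       # Defaults inherited by every proxied location in this server block
       proxy_connect_timeout 60s;
       proxy_send_timeout    90s;
       proxy_read_timeout    120s;

       location /api/ {
           proxy_pass http://backend:8080;
       }
   }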

3. If you’re using FastCGI (PHP‑FPM, etc.)

   location ~ \.php$ {
       fastcgi_pass unix:/run/php/php7.4-fpm.sock;
       fastcgi_read_timeout 180s;   # time Nginx waits for PHP to finish processing
   }

I ran into this when a nightly report script that crunches millions of rows was being killed after 30 seconds, even though PHP itself had no limits set. Adding the directive saved me a whole night.

4. Check your upstream’s own limits

Nginx only enforces its own timeouts; if the backend has a hard limit of its own (say max_execution_time in PHP), bump that too. Otherwise requests will still die early even with generous Nginx settings.
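For PHP-FPM that usually means raising max_execution_time, either in php.ini or per pool. The pool file path below matches the php7.4 socket from the earlier snippet, so adjust it for your version; note that the pool's request_terminate_timeout, if set, kills scripts regardless of php.ini, so it needs to be at least as long:

   ; /etc/php/7.4/fpm/pool.d/www.conf
   php_admin_value[max_execution_time] = 180
   request_terminate_timeout = 180

Reload PHP-FPM afterwards (sudo systemctl reload php7.4-fpm) so the pool picks up the change.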

5. Reload Nginx and verify

   sudo nginx -t          # test the syntax first so a typo or missing semicolon doesn't break the reload
   sudo systemctl reload nginx   # or: sudo nginx -s reload

After that, hit your endpoint with curl -v http://your.domain/api/slow-endpoint and watch the timing. If a request that used to blow past the old limit now completes without a 504, you're good.
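If you'd rather see the numbers than scroll through verbose output, curl can print the timing directly; the endpoint here is of course a placeholder for one of your own slow routes:

   # Print the status code and total time taken by the request
   curl -sS -o /dev/null -w 'HTTP %{http_code} in %{time_total}s\n' http://your.domain/api/slow-endpoint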

6. Watch for unintended side effects

Raising timeouts too high can keep connections open longer than necessary, tying up worker threads and sockets. Keep an eye on your logs—if 504s drop but you start seeing an uptick in memory use or slow overall throughput, dial back a bit.
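One way to keep that eye on the logs: Nginx exposes $request_time and $upstream_response_time, so a custom log_format (the name timing here is arbitrary) shows how close requests are running to the new limits:

   # In the http {} block
   log_format timing '$remote_addr "$request" $status '
                     'req=$request_time upstream=$upstream_response_time';
   access_log /var/log/nginx/timing.log timing;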

Real‑World Scenario: From 504s to Smooth Sailing

I had a data‑intensive dashboard that pulled reports from an internal service. The API would sometimes take up to two minutes for heavy queries. With the default proxy_read_timeout of 60 seconds, every time I refreshed the page after a peak hour, Nginx threw a 504. After adding the three timeout directives above and bumping PHP’s max_execution_time from 30 s to 180 s, the dashboard loaded reliably—even under load—and no more 504 errors.

Give that a whirl and let me know how it goes.