Nginx Load Balancer Example Setup and Config

Step-by-step NGINX load balancer setup for distributing HTTP traffic, with upstream groups, health checks, and reload testing.

Why use NGINX as a load balancer

Because you want more than a single server fainting under load. NGINX is a battle tested reverse proxy and load balancer that plays nice with web apps. It is lightweight, fast, and familiar to most sysadmins who survived college infrastructure projects. This guide shows how to create an upstream pool, proxy traffic with proxy_pass, tune health checks and timeouts, and reload without breaking anything important.

Install NGINX on Linux

Install NGINX with your favorite package manager. On Debian and Ubuntu it looks like this. Yes, it is boring, but it works.

sudo apt update
sudo apt install -y nginx
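
Before going further, it is worth confirming the install took and the service is actually running:

nginx -v
sudo systemctl status nginx --no-pager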

Make sure you have at least two backend servers running real content. If you only have one backend you do not have a load balancer, you have optimism.
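
If you do not have real backends yet, a throwaway pair is enough for testing. A minimal sketch, assuming Python 3 is available on each backend host; the port and paths here are purely illustrative:

# On each backend (10.0.0.11 and 10.0.0.12), serve a page that names the host
mkdir -p /tmp/www && hostname > /tmp/www/index.html
cd /tmp/www && python3 -m http.server 8080

If you test this way, remember to list the port in the upstream block below, for example server 10.0.0.11:8080;.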

Create the upstream backend pool

Define an upstream group in the http context of the main nginx.conf, or in a site file under /etc/nginx/sites-available. Give the pool a logical name and list your backend servers so NGINX can juggle requests.

Example upstream block

upstream myapp_backend {
    server 10.0.0.11 weight=5 max_fails=3 fail_timeout=30s;
    server 10.0.0.12 weight=3 max_fails=3 fail_timeout=30s;
    server 10.0.0.13 backup;
    keepalive 16;
}

Weights let you steer more traffic to beefier machines. The backup server stays quiet until things go wrong. Keepalive here reduces TCP churn between NGINX and backends.
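
Round robin is the default distribution method. NGINX ships a couple of alternatives you can drop into the same block; a quick sketch of the same pool with a different method:

upstream myapp_backend {
    least_conn;    # send each request to the backend with the fewest active connections
    # ip_hash;     # or: pin each client IP to one backend for crude session stickiness
    server 10.0.0.11;
    server 10.0.0.12;
    keepalive 16;
}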

Configure the proxy server block

Now add a server block that listens on port 80 or 443 and forwards requests to the upstream pool. Preserve important headers so your backend can log real client IPs and behave like a civilized app.

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://myapp_backend;
        proxy_http_version 1.1;
        # clear Connection so upstream keepalive actually engages
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        proxy_connect_timeout 5s;
        proxy_send_timeout 30s;
        proxy_read_timeout 30s;

        proxy_buffering on;
        proxy_buffers 16 4k;
        proxy_busy_buffers_size 8k;
    }
}

proxy_pass is the key. The rest makes behavior predictable and keeps slow backends from holding connections forever.
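
If you terminate TLS at the load balancer, the same pattern works on port 443. A minimal sketch, assuming you already have a certificate; the paths here are placeholders for your real files:

server {
    listen 443 ssl;
    server_name example.com;

    # Placeholder paths, point these at your actual certificate and key
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_pass http://myapp_backend;
        # reuse the same proxy_set_header, timeout and buffering
        # settings from the port 80 block above
    }
}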

Preserve headers and tune timeouts

  • Preserve Host so application level routing still works.
  • Forward client IPs so logs and rate limits stay accurate; the log_format sketch after this list shows one way to surface them.
  • Tune connect and read timeouts to match backend behavior and expected latency.
  • Use keepalive between NGINX and backends for performance; it only kicks in with proxy_http_version 1.1 and an empty Connection header, as set above.
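
One way to see the forwarded IPs and the chosen backend in one place is a custom access log. A sketch for the http context; the format name lb_combined is just illustrative:

log_format lb_combined '$remote_addr fwd="$http_x_forwarded_for" [$time_local] '
                       '"$request" $status -> $upstream_addr';
access_log /var/log/nginx/lb_access.log lb_combined;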

Health checks and failure handling

Open source NGINX relies on passive health checks driven by max_fails and fail_timeout: a backend is marked down after repeated failures and quietly retried later. If you need active probes, look at NGINX Plus or an external probe service. Passive checks are simple and effective for most setups.

Passive failure settings example

server 10.0.0.11 max_fails=3 fail_timeout=30s;

That marks a server as down after three failed attempts within the 30 second window, and keeps it out of rotation for another 30 seconds before trying again. It is not magic. It is pragmatic.
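
You can also tell NGINX what counts as a failed attempt and how hard to retry on another backend. A sketch of the relevant directives inside the location block:

location / {
    proxy_pass http://myapp_backend;
    # Retry on the next server for connection errors, timeouts and selected 5xx codes
    proxy_next_upstream error timeout http_502 http_503 http_504;
    proxy_next_upstream_tries 2;      # at most two attempts per request
    proxy_next_upstream_timeout 10s;  # stop retrying after 10 seconds
}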

Test reload and verify

Never reload NGINX blindfolded. Test the config and then reload gracefully so worker processes pick up the change.

sudo nginx -t
sudo systemctl reload nginx

Verify distribution by curling the load balancer multiple times and checking backend logs or a custom response header that returns server identity. If you are feeling fancy, add a small endpoint on each backend that returns its hostname for easy checking.
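
A quick loop makes the distribution obvious, assuming each backend returns something that identifies it, like the hostname page suggested earlier:

# Hit the load balancer ten times and watch the responses alternate
for i in $(seq 1 10); do
  curl -s http://example.com/
done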

Recap and tips

  • Set up an upstream group to define your backend pool.
  • Use proxy_pass with preserved headers so the app behaves correctly.
  • Tune max_fails and fail_timeout for passive health checks.
  • Consider NGINX Plus or external probes for active health checks.
  • Enable keepalive for better throughput between NGINX and backends.

There you have it. NGINX as a load balancer that shares traffic across backends with minimal drama and very mild heroics. For high availability, pair this with virtual IPs or a cluster manager and you will be far less likely to wake up at 3 a.m. to answer pager tickets.

