Reverse Proxying

Decoupling the public interface from application processes

What Is a Reverse Proxy?

A reverse proxy sits between clients and your application servers, forwarding requests and responses. Clients connect to the proxy; they never communicate directly with your application.

   Internet                   Your Infrastructure
     │
     │               ┌────────────────────────────────────┐
     ▼               │                                    │
┌──────────┐         │    ┌──────────┐    ┌───────────┐   │
│          │         │    │          ├───►│  Node.js  │   │
│  Client  │─────────┼───►│  Nginx   │    │  App #1   │   │
│          │         │    │  :443    │    │  :3000    │   │
└──────────┘         │    │          ├─┐  └───────────┘   │
                     │    └──────────┘ │                  │
                     │                 │  ┌───────────┐   │
                     │                 ├─►│  Node.js  │   │
                     │                 │  │  App #2   │   │
                     │                 │  │  :3001    │   │
                     │                 │  └───────────┘   │
                     │                 │                  │
                     │                 │  ┌───────────┐   │
                     │                 └─►│  Static   │   │
                     │                    │  Files    │   │
                     │                    └───────────┘   │
                     │                                    │
                     └────────────────────────────────────┘
Nginx reverse proxy routing to different backends

Why Use a Reverse Proxy?

Putting a proxy in front of your application processes gives you one public entry point and a lot of leverage behind it: a single place to terminate TLS, load balancing across multiple backend processes, direct serving of static files, path-based routing to different services, and buffering that shields slow backends from slow clients. The rest of this tutorial works through these capabilities one at a time.

Basic Proxy Configuration

The simplest proxy forwards all requests to a backend server:

Nginx:

server {
    listen 80;
    server_name app.example.com;

    location / {
        # Forward all requests to Node.js on port 3000
        proxy_pass http://127.0.0.1:3000;

        # Pass along useful headers
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Apache:

<VirtualHost *:80>
    ServerName app.example.com

    # Requires mod_proxy and mod_proxy_http to be enabled
    # Keep the client's original Host header
    ProxyPreserveHost On

    # Forward all requests to Node.js
    ProxyPass / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/

    # Pass client info (mod_proxy adds X-Forwarded-For itself)
    RequestHeader set X-Forwarded-Proto "http"
</VirtualHost>
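To confirm what actually reaches the backend, a throwaway echo server helps. This is a sketch using Node's built-in http module; run it on port 3000 and request the site through the proxy to inspect the forwarded headers:

import http from "node:http";

// Echo every request's headers back as JSON so you can see
// exactly what the proxy forwarded (Host, X-Forwarded-For, ...)
http
  .createServer((req, res) => {
    res.setHeader("Content-Type", "application/json");
    res.end(JSON.stringify(req.headers, null, 2));
  })
  .listen(3000);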

Header Forwarding

When proxying, the backend sees the proxy's IP, not the client's. You need to forward client information via headers:

Header               Purpose                           Example Value
X-Real-IP            Client's actual IP address        203.0.113.50
X-Forwarded-For      Chain of IPs (client, proxies)    203.0.113.50, 10.0.0.1
X-Forwarded-Proto    Original protocol (http/https)    https
X-Forwarded-Host     Original Host header              app.example.com
Host                 Destination host (for routing)    app.example.com

Trust but verify: Your application should only trust these headers if the request actually came from your proxy. Otherwise, an attacker can spoof any IP simply by sending the header themselves. Configure your app to accept forwarded headers only from trusted proxy IPs.
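With an Express backend, for example, the built-in trust proxy setting does exactly this. A minimal sketch (the /whoami route is just for illustration; "loopback" assumes the proxy runs on the same host):

import express from "express";

const app = express();

// Trust X-Forwarded-* headers only when the request comes from
// localhost, i.e. from our own Nginx instance
app.set("trust proxy", "loopback");

app.get("/whoami", (req, res) => {
  // req.ip resolves to the client address from X-Forwarded-For
  // only because the immediate peer matched the trust setting
  res.send(req.ip);
});

app.listen(3000);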

Load Balancing

Distribute requests across multiple backend servers for scalability and redundancy:

Nginx:

# Define backend servers
upstream app_servers {
    # Load balancing method (default: round-robin)
    # least_conn;   # Send to server with fewest connections
    # ip_hash;      # Same client always goes to same server

    server 127.0.0.1:3001 weight=3;   # Gets 3x traffic
    server 127.0.0.1:3002;
    server 127.0.0.1:3003 backup;     # Only used if others are down
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
Apache:

# Requires mod_proxy_balancer and a matching lbmethod module
<Proxy balancer://app_servers>
    BalancerMember http://127.0.0.1:3001 loadfactor=3
    BalancerMember http://127.0.0.1:3002
    # Hot standby: used only when the other members are down
    BalancerMember http://127.0.0.1:3003 status=+H

    # Load balancing method (byrequests = round-robin)
    ProxySet lbmethod=byrequests
    # Alternative: lbmethod=bybusyness (least connections)
</Proxy>

<VirtualHost *:80>
    ServerName app.example.com
    ProxyPass / balancer://app_servers/
    ProxyPassReverse / balancer://app_servers/
</VirtualHost>

Load Balancing Methods

Method               How It Works                                     Best For
Round-robin          Rotate through servers sequentially              Uniform servers, stateless apps
Least connections    Send to server with fewest active connections    Long-running requests, varied load
IP hash              Same client IP always goes to same server        Session affinity without sticky cookies
Weighted             Proportional distribution by server weight       Servers with different capacities

Health Checks

Automatically remove unhealthy servers from the pool:

upstream app_servers {
    # Passive health checks (included in Nginx OSS):
    # mark a server as down after 3 failures, retry after 30s
    server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:3002 max_fails=3 fail_timeout=30s;
}

server {
    location / {
        proxy_pass http://app_servers;

        # On failure, retry the request on the next upstream
        proxy_connect_timeout 5s;
        proxy_next_upstream error timeout http_502 http_503;
        proxy_next_upstream_tries 2;
    }
}

Active vs Passive: Passive checks detect failures when proxying real requests. Active checks (Nginx Plus, or use a separate tool) periodically probe backends regardless of traffic. Active checks catch problems before users do.
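If you are not on Nginx Plus, a small external prober can provide active checks. A minimal sketch, assuming Node 18+ (built-in fetch and AbortSignal.timeout) and a hypothetical GET /health endpoint on each backend:

const backends = ["http://127.0.0.1:3001", "http://127.0.0.1:3002"];

async function isHealthy(base: string): Promise<boolean> {
  try {
    // Fail fast on hung backends rather than waiting forever
    const res = await fetch(`${base}/health`, {
      signal: AbortSignal.timeout(2_000),
    });
    return res.ok;
  } catch {
    return false;
  }
}

// Probe every 10 seconds, independent of real traffic
setInterval(async () => {
  for (const base of backends) {
    if (!(await isHealthy(base))) {
      console.error(`[probe] ${base} failed its health check`);
      // In practice: alert, or mark the server down and reload Nginx
    }
  }
}, 10_000);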

WebSocket Proxying

WebSocket connections require special handling due to the HTTP Upgrade mechanism:

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;

        # WebSocket support
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Increase timeouts for long-lived connections
        proxy_read_timeout 86400s;
        proxy_send_timeout 86400s;
    }
}
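Hardcoding Connection "upgrade" forces that header onto every request through this location, including plain HTTP ones. A common refinement from the Nginx documentation is to derive it with a map block (placed in the http context):

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        # "upgrade" only when the client actually asked for it
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}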

Path-Based Routing

Route different URL paths to different backends:

server {
    listen 80;
    server_name example.com;

    # API requests → API server
    location /api/ {
        proxy_pass http://127.0.0.1:3000/;
        proxy_set_header Host $host;
    }

    # WebSocket connections → real-time server
    location /ws/ {
        proxy_pass http://127.0.0.1:3001/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # Static files → serve directly
    location /static/ {
        alias /var/www/static/;
        expires 30d;
    }

    # Everything else → frontend app
    location / {
        proxy_pass http://127.0.0.1:3002;
    }
}
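One detail worth calling out: the trailing slash in proxy_pass determines whether the matched location prefix is stripped before the request reaches the backend. A minimal contrast (only one variant would appear in a real config):

# With a URI part, the location prefix is replaced:
location /api/ {
    proxy_pass http://127.0.0.1:3000/;   # /api/users → /users
}

# Without one, the original path is forwarded unchanged:
location /api/ {
    proxy_pass http://127.0.0.1:3000;    # /api/users → /api/users
}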

Proxy Buffering and Timeouts

Tune proxy behavior for your workload:

location / {
    proxy_pass http://backend;

    # Buffering (on by default)
    proxy_buffering on;
    proxy_buffer_size 4k;           # For response headers
    proxy_buffers 8 16k;            # 8 buffers of 16k each
    proxy_busy_buffers_size 32k;

    # Disable buffering for streaming responses
    # proxy_buffering off;

    # Timeouts
    proxy_connect_timeout 5s;       # Time to establish connection
    proxy_send_timeout 60s;         # Time to send request to backend
    proxy_read_timeout 60s;         # Time to read response from backend
}

When to disable buffering: Disable proxy_buffering for Server-Sent Events (SSE), streaming responses, or when you need immediate response delivery. With buffering on, Nginx waits to accumulate data before sending to the client, which adds latency for real-time data.
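In practice, an SSE endpoint often gets its own location with buffering off. A sketch (the /events/ path and port are placeholders):

location /events/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Connection "";   # Allow keep-alive to the backend
    proxy_buffering off;              # Forward chunks as they arrive
    proxy_read_timeout 1h;            # SSE streams stay open a long time
}

Backends can also opt out per response: Nginx honors an X-Accel-Buffering: no response header even when proxy_buffering is on.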

What's Next

Reverse proxying is often combined with static file serving for optimal performance. The next tutorial covers how servers efficiently serve static assets with caching, compression, and byte-range requests.