How to Install and Configure Nginx as a Reverse Proxy on Linux
A reverse proxy sits between client requests and your backend servers, providing load balancing, SSL termination, caching, and security. Nginx is one of the most popular choices for this role thanks to its performance and flexibility. This guide covers installing Nginx and configuring it as a reverse proxy on Ubuntu, Debian, RHEL, CentOS, and AlmaLinux.
📑 Table of Contents
- What is a Reverse Proxy?
- Installing Nginx
  - On Ubuntu/Debian
  - On RHEL/CentOS/AlmaLinux
- Basic Reverse Proxy Configuration
  - Step 1: Create Configuration File
  - Step 2: Add Reverse Proxy Configuration
  - Step 3: Enable the Site
- Proxy Configuration for Different Applications
  - Node.js/Express Application
  - Python Flask/Django Application
  - PHP-FPM with FastCGI
- Adding SSL with Let's Encrypt
- Load Balancing Multiple Backend Servers
- Caching for Better Performance
- Security Headers
- WebSocket Support
- Rate Limiting
- Monitoring and Logging
- Testing Your Configuration
- Troubleshooting Common Issues
  - 502 Bad Gateway
  - 504 Gateway Timeout
- Complete Production Configuration Example
- Conclusion
What is a Reverse Proxy?
A reverse proxy accepts requests from clients and forwards them to one or more backend servers. Unlike a forward proxy (which clients use to access the internet), a reverse proxy handles incoming traffic to your infrastructure.
Benefits of using Nginx as a reverse proxy:
- Load balancing – Distribute traffic across multiple backend servers
- SSL termination – Handle HTTPS at the proxy, reducing backend load
- Caching – Cache static content to reduce backend requests
- Security – Hide backend server details and add security headers
- Compression – Compress responses to reduce bandwidth
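In its simplest form, the configuration is just a server block that hands every request to proxy_pass. The sketch below is a bare-bones preview (the hostname app.example.com and backend port 8080 are placeholders); the rest of this guide builds on exactly this pattern:

server {
    listen 80;
    server_name app.example.com;

    location / {
        # Hand every incoming request to the backend application
        proxy_pass http://127.0.0.1:8080;
        # Preserve the original Host header for the backend
        proxy_set_header Host $host;
    }
}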
Installing Nginx
On Ubuntu/Debian
# Update package index
sudo apt update
# Install Nginx
sudo apt install nginx -y
# Start and enable Nginx
sudo systemctl start nginx
sudo systemctl enable nginx
# Verify installation
nginx -v
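If UFW is active on your Ubuntu/Debian server, also allow web traffic through the firewall (the Nginx application profiles are installed along with the nginx package):

# Allow HTTP and HTTPS through UFW (only needed if UFW is enabled)
sudo ufw allow 'Nginx Full'
sudo ufw status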
On RHEL/CentOS/AlmaLinux
# Install the EPEL repository (only needed if Nginx is not available in your base repositories)
sudo dnf install epel-release -y
# Install Nginx
sudo dnf install nginx -y
# Start and enable Nginx
sudo systemctl start nginx
sudo systemctl enable nginx
# Open firewall ports
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
Basic Reverse Proxy Configuration
Let’s configure Nginx to proxy requests to a backend application running on port 3000 (common for Node.js apps).
Step 1: Create Configuration File
# Create a new site configuration
sudo nano /etc/nginx/sites-available/myapp
Step 2: Add Reverse Proxy Configuration
server {
    listen 80;
    server_name example.com www.example.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}
Step 3: Enable the Site
# Create symbolic link to enable site
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/
# Test configuration
sudo nginx -t
# Reload Nginx
sudo systemctl reload nginx
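The sites-available/sites-enabled layout is a Debian/Ubuntu convention. On RHEL, CentOS, and AlmaLinux the packaged Nginx loads configuration from /etc/nginx/conf.d/ instead, so create the file there and skip the symlink:

# RHEL/CentOS/AlmaLinux: place the config in conf.d (no symlink needed)
sudo nano /etc/nginx/conf.d/myapp.conf
# Then test and reload as above
sudo nginx -t
sudo systemctl reload nginx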
Proxy Configuration for Different Applications
Node.js/Express Application
location / {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
Python Flask/Django Application
location / {
    proxy_pass http://127.0.0.1:8000;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
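Flask and Django applications are normally served by a WSGI server such as Gunicorn or uWSGI rather than the framework's development server, and Nginx proxies to that. As an illustration matching the config above (assuming a Django project named myproject served by Gunicorn):

# Run the app with Gunicorn, bound to the address Nginx proxies to
gunicorn --workers 3 --bind 127.0.0.1:8000 myproject.wsgi:application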
PHP-FPM with FastCGI
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php/php8.2-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
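Note that PHP-FPM uses FastCGI rather than HTTP proxying, and the socket path depends on your PHP version and pool configuration (php8.2-fpm.sock here). A minimal surrounding server block might look like this, assuming the site's document root is /var/www/example.com/public:

server {
    listen 80;
    server_name example.com;

    root /var/www/example.com/public;   # Assumed document root
    index index.php index.html;

    location / {
        # Try static files first, then fall back to the PHP front controller
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php/php8.2-fpm.sock;
    }
}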
Adding SSL with Let’s Encrypt
Secure your reverse proxy with free SSL certificates from Let’s Encrypt:
# Install Certbot
sudo apt install certbot python3-certbot-nginx -y # Ubuntu/Debian
sudo dnf install certbot python3-certbot-nginx -y # RHEL/CentOS
# Obtain and install certificate
sudo certbot --nginx -d example.com -d www.example.com
# Verify auto-renewal
sudo certbot renew --dry-run
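On most distributions the Certbot package also installs a systemd timer (or cron job) that renews certificates automatically; depending on how Certbot was installed, it may appear as certbot.timer or snap.certbot.renew.timer:

# Check that a renewal timer is scheduled
systemctl list-timers | grep -i certbot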
After running Certbot, your configuration will be automatically updated with SSL settings:
server {
    listen 443 ssl;
    server_name example.com www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$server_name$request_uri;
}
Load Balancing Multiple Backend Servers
Configure Nginx to distribute traffic across multiple backend servers:
# Define upstream servers
upstream backend_servers {
    least_conn;   # Load balancing method
    server 192.168.1.10:3000 weight=3;
    server 192.168.1.11:3000 weight=2;
    server 192.168.1.12:3000 backup;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://backend_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
Load balancing methods:
- round-robin (default) – Distributes requests evenly
- least_conn – Sends to server with fewest active connections
- ip_hash – Routes based on client IP (session persistence)
- hash – Custom hash based on specified variable
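If your application stores session state on individual backends, ip_hash gives you simple session persistence; a sketch using the same example servers:

upstream backend_servers {
    ip_hash;   # Requests from the same client IP always go to the same backend
    server 192.168.1.10:3000 max_fails=3 fail_timeout=30s;   # Passive health checking
    server 192.168.1.11:3000 max_fails=3 fail_timeout=30s;
}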
Caching for Better Performance
Enable caching to reduce backend load:
# Add to http block in nginx.conf
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=1g inactive=60m;
# Example server block using the cache zone
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_cache my_cache;
        proxy_cache_valid 200 60m;
        proxy_cache_valid 404 1m;
        proxy_cache_use_stale error timeout updating;
        add_header X-Cache-Status $upstream_cache_status;
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
    }

    # Bypass cache for dynamic content
    location /api/ {
        proxy_pass http://localhost:3000;
        proxy_no_cache 1;
        proxy_cache_bypass 1;
    }
}
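With the X-Cache-Status header added above, you can verify caching from the command line; the first request should report MISS and a repeat request HIT:

# Check whether responses are served from the cache
curl -sI http://example.com/ | grep -i x-cache-status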
Security Headers
Add security headers to protect your application:
server {
    listen 443 ssl;
    server_name example.com;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;
    add_header Content-Security-Policy "default-src 'self';" always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # Hide Nginx version
    server_tokens off;

    location / {
        proxy_pass http://localhost:3000;
        # ... other proxy settings
    }
}
WebSocket Support
For applications using WebSockets (real-time apps, chat, etc.):
location /socket.io/ {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_read_timeout 86400;
}
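If the same location also serves ordinary HTTP requests, the pattern recommended in the Nginx documentation is a map block (placed in the http context) so that Connection: upgrade is only sent when the client actually requests an upgrade:

# In the http block
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# In the location block, replace the hard-coded header with:
#   proxy_set_header Connection $connection_upgrade;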
Rate Limiting
Protect your backend from abuse with rate limiting:
# Add to http block
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
server {
    listen 80;
    server_name example.com;

    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://localhost:3000;
    }
}
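By default Nginx rejects rate-limited requests with a 503; if you prefer the more descriptive 429 Too Many Requests (supported since Nginx 1.3.15), add limit_req_status:

location /api/ {
    limit_req zone=api_limit burst=20 nodelay;
    limit_req_status 429;   # Respond with 429 instead of the default 503
    proxy_pass http://localhost:3000;
}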
Monitoring and Logging
Configure detailed logging for troubleshooting:
# Custom log format (define in the http block)
log_format proxy_log '$remote_addr - $remote_user [$time_local] '
                     '"$request" $status $body_bytes_sent '
                     '"$http_referer" "$http_user_agent" '
                     'upstream: $upstream_addr response_time: $upstream_response_time';

server {
    access_log /var/log/nginx/proxy_access.log proxy_log;
    error_log /var/log/nginx/proxy_error.log warn;

    location / {
        proxy_pass http://localhost:3000;
    }
}
Testing Your Configuration
# Test Nginx configuration syntax
sudo nginx -t
# Reload Nginx (graceful)
sudo systemctl reload nginx
# Check Nginx status
sudo systemctl status nginx
# View error logs
sudo tail -f /var/log/nginx/error.log
# Test reverse proxy
curl -I http://example.com
Troubleshooting Common Issues
502 Bad Gateway
- Backend server not running
- Wrong proxy_pass URL or port
- SELinux blocking connections (RHEL/CentOS)
# Check if backend is running
sudo ss -tlnp | grep 3000
# Fix SELinux (if needed)
sudo setsebool -P httpd_can_network_connect 1
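It also helps to query the backend directly, bypassing Nginx, to confirm which side is failing:

# Test the backend directly (bypassing Nginx)
curl -I http://127.0.0.1:3000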
504 Gateway Timeout
- Backend taking too long to respond
- Increase timeout values
# Increase proxy timeouts (values in seconds) inside the server or location block
proxy_connect_timeout 60;
proxy_send_timeout 60;
proxy_read_timeout 60;
Complete Production Configuration Example
upstream app_servers {
    least_conn;
    server 127.0.0.1:3000;
    keepalive 32;
}

server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com www.example.com;

    # SSL Configuration
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers off;

    # Security Headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Strict-Transport-Security "max-age=31536000" always;
    server_tokens off;

    # Logging
    access_log /var/log/nginx/example.com.access.log;
    error_log /var/log/nginx/example.com.error.log;

    # Gzip Compression
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_types text/plain text/css application/json application/javascript;

    location / {
        proxy_pass http://app_servers;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 90;
    }

    # Static files (optional)
    location /static/ {
        alias /var/www/example.com/static/;
        expires 30d;
        add_header Cache-Control "public, immutable";
    }
}
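One subtlety with the upstream keepalive setting: the Nginx documentation notes that, for connections to the upstream to be reused, proxy_http_version 1.1 should be used with the Connection header cleared. Hard-coding Connection 'upgrade' keeps WebSocket support but means ordinary requests won't reuse keepalive connections. If you don't need WebSockets on this location, a keepalive-friendly sketch of the proxy headers:

location / {
    proxy_pass http://app_servers;
    proxy_http_version 1.1;
    proxy_set_header Connection "";   # Clear the header so upstream keepalive connections can be reused
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_read_timeout 90;
}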
Conclusion
Nginx as a reverse proxy is essential for modern web infrastructure. It provides load balancing, SSL termination, caching, and security features that protect and optimize your backend applications. Start with a basic configuration and gradually add features like caching and rate limiting as your needs grow.
For high-availability setups, consider combining Nginx with tools like Keepalived for failover, and use monitoring tools like Prometheus with the nginx-exporter to track performance metrics.