
Linux sysctl Performance Tuning: Complete Kernel Optimization Guide for Servers (2026)

🎯 Key Takeaways

  • What Is sysctl?
  • Understanding the Parameter Namespaces
  • Network Performance Tuning
  • Memory Management Tuning
  • File System and I/O Tuning


The Linux kernel ships with conservative defaults that are safe for a wide range of hardware and workloads. For a production server (whether it is a web server handling thousands of concurrent connections, a database server processing large amounts of data, or a Kubernetes node running dozens of containers) these defaults leave significant performance on the table. sysctl is the interface to the running kernel's parameter space: over 1,000 tunable values that control everything from how many connections can queue at a listening socket to how aggressively the kernel reclaims memory. This guide covers the most impactful sysctl parameters for Linux server performance, with explanations of what each parameter does and why the change helps.

What Is sysctl?

sysctl reads and writes kernel parameters at runtime through the /proc/sys/ virtual filesystem. Every file under /proc/sys/ is a kernel variable. Reading the file reads the current value; writing to it changes the live kernel behaviour immediately without a reboot. The sysctl command is a clean interface over this filesystem.

# Read a parameter
sysctl net.ipv4.tcp_max_syn_backlog
# or equivalently
cat /proc/sys/net/ipv4/tcp_max_syn_backlog

# Set a parameter (takes effect immediately, lost on reboot)
sysctl -w net.ipv4.tcp_max_syn_backlog=8192
# or equivalently
echo 8192 > /proc/sys/net/ipv4/tcp_max_syn_backlog

# List all parameters
sysctl -a

# List all network parameters
sysctl -a | grep net.ipv4

# Search for a parameter
sysctl -a | grep somaxconn

Making Changes Persistent

Changes made with sysctl -w are lost when the system reboots. To persist them:

# Create a custom configuration file (recommended; survives package upgrades)
nano /etc/sysctl.d/99-server-tuning.conf

# Apply all files in /etc/sysctl.d/ immediately
sysctl --system

# Or apply a specific file
sysctl -p /etc/sysctl.d/99-server-tuning.conf

# Verify a value was applied
sysctl net.core.somaxconn

Files in /etc/sysctl.d/ are processed in lexicographic order. The file named 99-server-tuning.conf runs last, overriding distribution defaults from files with lower numbers.
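A quick way to convince yourself of the last-definition-wins behaviour is to emulate it (file names and values here are purely illustrative):

```shell
# Two hypothetical fragments that both set the same key
printf 'net.core.somaxconn = 1024\n'  > /tmp/10-base.conf
printf 'net.core.somaxconn = 65535\n' > /tmp/99-override.conf

# sysctl --system reads files in sorted order, so the last definition wins.
# Emulate that by concatenating in sorted order and keeping the final value:
cat /tmp/10-base.conf /tmp/99-override.conf |
  awk -F' = ' '/somaxconn/ {v=$2} END {print "effective:", v}'
# prints: effective: 65535
```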

Understanding the Parameter Namespaces

sysctl parameters are organised into namespaces that map to their function:

Namespace         Controls
net.core.*        Core network stack (buffers, queues, socket limits)
net.ipv4.*        IPv4 networking (TCP behaviour, routing, ICMP)
net.ipv6.*        IPv6 networking
vm.*              Virtual memory: memory management, swap, OOM
fs.*              Filesystem: file handle limits, inotify, AIO
kernel.*          Core kernel: panic behaviour, scheduler, hugepages
net.netfilter.*   Netfilter/firewall connection tracking

Network Performance Tuning

Networking is the most impactful area for server tuning. The kernel's default socket buffer sizes and queue lengths were designed for modest workloads. A busy web server or database can exhaust these limits, causing dropped connections and latency spikes.

Socket Receive and Send Buffers

Every TCP connection has a send buffer and a receive buffer. The kernel controls the minimum, default, and maximum size of these buffers. Larger buffers mean more data can be in flight, which is critical for high-bandwidth or high-latency links (think: 10Gbps NIC, or connections to distant data centres).

# Default socket receive buffer size (bytes)
# Default: 212992 (~208KB); too small for 10Gbps links
net.core.rmem_default = 1048576       # 1MB

# Maximum socket receive buffer size
# Default: 212992; raise to 128MB for high-throughput servers
net.core.rmem_max = 134217728         # 128MB

# Default socket send buffer
net.core.wmem_default = 1048576       # 1MB

# Maximum socket send buffer
net.core.wmem_max = 134217728         # 128MB

# TCP-specific: min/default/max receive buffer (bytes)
# The kernel auto-tunes within this range
net.ipv4.tcp_rmem = 4096 1048576 134217728

# TCP-specific: min/default/max send buffer
net.ipv4.tcp_wmem = 4096 1048576 134217728

# Total memory available to all TCP sockets (pages)
# Default: kernel calculates based on RAM; usually fine
# net.ipv4.tcp_mem = auto
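A useful sanity check for the maximum buffer sizes is the bandwidth-delay product (BDP) of your worst-case link. The figures below (10 Gbps, 50 ms RTT) are illustrative, not measured:

```shell
# BDP = bandwidth (bytes/s) x round-trip time (s): the amount of data
# that can be in flight on the link at any moment
BITS_PER_SEC=10000000000   # 10 Gbps
RTT_MS=50                  # e.g. a cross-continent path
BDP_BYTES=$(( BITS_PER_SEC / 8 * RTT_MS / 1000 ))
echo "BDP: ${BDP_BYTES} bytes"
# prints: BDP: 62500000 bytes
```

That is roughly 60MB, comfortably inside the 128MB maximum set above; a maximum well below the BDP would cap throughput on that link.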

Connection Queue Tuning: Critical for High-Traffic Servers

When a client connects to a listening socket, it first lands in the SYN queue (half-open connections during the TCP handshake), then moves to the accept queue (fully established, waiting for an accept() syscall from the application). If either queue fills up, new connections are dropped. Under load, this is a common source of "connection refused" or "connection reset" errors.
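The two limits interact: the effective accept-queue depth for a socket is the smaller of the application's listen() backlog and net.core.somaxconn. A small sketch (511 is Nginx's default backlog):

```shell
# Effective accept queue = min(listen() backlog, net.core.somaxconn)
somaxconn=$(cat /proc/sys/net/core/somaxconn 2>/dev/null || echo 4096)
app_backlog=511   # nginx default unless 'backlog=' is set on the listen directive
effective=$(( app_backlog < somaxconn ? app_backlog : somaxconn ))
echo "effective accept queue: ${effective}"
```

This is why raising somaxconn alone is not enough: the application's backlog must be raised too.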

# Maximum connections in the accept queue per listening socket
# Default: 4096 (kernel), but also limited by application's listen() backlog
# Nginx default listen backlog is 511; change in nginx.conf if needed
net.core.somaxconn = 65535

# Maximum remembered half-open (SYN_RECV) connections per listening socket
# Default: scales with memory (often 1024-4096); too low for a busy server
net.ipv4.tcp_max_syn_backlog = 65535

# Maximum packets queued per CPU on the input side when an interface
# receives packets faster than the kernel can process them
# Default: 1000
net.core.netdev_max_backlog = 65535

TCP Connection Handling

# Allow TIME_WAIT sockets to be reused for new connections
# Reduces TIME_WAIT accumulation on busy servers (web, proxy, load balancer)
net.ipv4.tcp_tw_reuse = 1

# Seconds an orphaned connection stays in FIN_WAIT_2 before the kernel
# closes it (note: this does NOT shorten TIME_WAIT, which is fixed at 60s)
# Default: 60; reduce on high-connection-rate servers
# WARNING: too low causes issues; 30s is a safe reduction
net.ipv4.tcp_fin_timeout = 30

# Maximum number of TIME_WAIT sockets before they are forcibly destroyed
# Default: 131072; increase for very busy connection termination
net.ipv4.tcp_max_tw_buckets = 1440000

# Maximum orphaned TCP sockets (closed by the application but not yet
# destroyed), not all connections combined
# Default: calculated by kernel from memory; usually sufficient but verify under load
# net.ipv4.tcp_max_orphans = 65536

# Enable TCP Fast Open: reduces handshake overhead for repeat connections
# TFO allows data in the SYN packet (client must also support it)
net.ipv4.tcp_fastopen = 3

# Keepalive: detect dead connections faster
# How long idle before sending keepalive probes (seconds)
net.ipv4.tcp_keepalive_time = 300       # Default: 7200 (2 hours), far too long

# How many keepalive probes before declaring dead
net.ipv4.tcp_keepalive_probes = 5       # Default: 9

# Interval between keepalive probes (seconds)
net.ipv4.tcp_keepalive_intvl = 15      # Default: 75

TCP Congestion Control

Congestion control is the algorithm that decides how fast to send data through a TCP connection. The default in modern Linux is cubic. For high-bandwidth, high-latency links (wide-area networks, cloud), bbr (Bottleneck Bandwidth and Round-trip propagation time), developed by Google in 2016, can significantly improve throughput and latency:

# Check available congestion control algorithms
sysctl net.ipv4.tcp_available_congestion_control

# Check current algorithm
sysctl net.ipv4.tcp_congestion_control

# Enable BBR (requires Linux 4.9+; all modern distros qualify)
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

# Verify BBR is active after applying
sysctl net.ipv4.tcp_congestion_control
# Should output: net.ipv4.tcp_congestion_control = bbr

BBR works especially well for: VPN servers, CDN nodes, video streaming, any server talking to distant clients over variable-quality links.
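Before persisting the setting, it is worth confirming bbr is actually available on the kernel in question; a defensive check might look like this (the messages are illustrative):

```shell
# bbr ships as a module (tcp_bbr) on most distros; confirm before enabling
if grep -qw bbr /proc/sys/net/ipv4/tcp_available_congestion_control 2>/dev/null; then
  echo "bbr already available"
else
  modprobe tcp_bbr 2>/dev/null && echo "bbr module loaded" \
    || echo "bbr unavailable: keeping cubic"
fi
```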

# Enable SYN cookies: when the SYN queue overflows, respond without a queue entry
# This prevents SYN flood attacks from exhausting your connection queues
net.ipv4.tcp_syncookies = 1    # Default: 1 on most distros (keep enabled)

# Disable ICMP redirects: not needed on servers, potential security issue
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0

# Disable source routing: packets that specify their own route
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0

Memory Management Tuning

Swappiness: How Aggressively to Swap

vm.swappiness controls how willing the kernel is to swap application memory to disk. A value of 100 means "swap aggressively, prefer keeping page cache". A value of 0 means "never swap unless absolutely necessary; prefer evicting page cache instead". For servers where application latency matters, swapping is catastrophic: a process waiting for swapped-in memory is frozen:

# Default: 60; too swap-happy for most server workloads
# For database servers, application servers: 10
vm.swappiness = 10

# For servers with no swap (SSD-only, all RAM workloads): 1
# (0 is not recommended; the kernel may still need to swap in emergencies)
vm.swappiness = 1

# Check current swappiness
sysctl vm.swappiness

# Check current swap usage
free -h
swapon --show
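When free -h shows swap in use, /proc can tell you which processes own it; VmSwap in /proc/&lt;pid&gt;/status is that process's swapped-out memory:

```shell
# List processes with swapped-out memory, largest first
# (no output means nothing is currently swapped)
for f in /proc/[0-9]*/status; do
  awk '/^VmSwap:/ && $2 > 0 {print $2 " kB  " FILENAME}' "$f" 2>/dev/null
done | sort -rn | head
```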

Page Cache and Dirty Memory

The kernel caches disk reads in memory (page cache) and batches disk writes (dirty pages). The dirty page parameters control how much data can be queued for writing before the kernel forces a flush. For write-heavy workloads, allowing more dirty data improves throughput; for safety-critical data, flush more aggressively:

# Maximum % of total memory that can be dirty before writeback starts
# Default: 20%; on a 64GB server this is 12.8GB of dirty data
vm.dirty_ratio = 15

# % of memory when background writeback starts (below dirty_ratio)
# Default: 10%
vm.dirty_background_ratio = 5

# Maximum time dirty data can stay in memory before being written (centiseconds)
# Default: 3000 (30 seconds)
vm.dirty_expire_centisecs = 3000

# How often the dirty page writeback daemon runs (centiseconds)
# Default: 500 (5 seconds)
vm.dirty_writeback_centisecs = 500

# For SSDs or NVMe: you can reduce dirty_ratio to flush more frequently
# since the write penalty is lower than spinning disk
# vm.dirty_ratio = 10
# vm.dirty_background_ratio = 3
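Percent-based thresholds scale with RAM, which can surprise you on large machines; translating the ratio into bytes makes the limit concrete (15 matches the vm.dirty_ratio value above). On very large hosts the absolute counterparts vm.dirty_bytes and vm.dirty_background_bytes avoid this scaling entirely:

```shell
# What does dirty_ratio = 15 mean in bytes on this machine?
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "dirty writeback threshold: $(( mem_kb * 15 / 100 / 1024 )) MB"
```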

OOM Killer Tuning

The Out-Of-Memory (OOM) killer activates when the system runs out of memory and must kill a process to free memory. By default it chooses based on a score, but you can influence which processes survive:

# Prevent a critical process from being OOM-killed
# Set its oom_score_adj to -1000 (minimum = never kill)
echo -1000 > /proc/$(pidof nginx)/oom_score_adj

# Make it permanent in a systemd service unit
# Add under [Service]:
# OOMScoreAdjust=-1000

# Enable panic on OOM instead of killing processes
# (useful for cluster nodes where rebooting is safer than running degraded)
# vm.panic_on_oom = 1

# Overcommit memory: how much virtual memory to allow beyond physical RAM
# 0 = kernel decides (default, heuristic)
# 1 = allow unlimited overcommit (used by Redis, some applications)
# 2 = no overcommit (vm.overcommit_ratio controls how much)
vm.overcommit_memory = 0

# Log memory statistics when OOM kill occurs (essential for debugging)
# Already enabled by default on most distros
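The OOMScoreAdjust route can be applied without editing the packaged unit file by using a drop-in; a sketch for nginx (the service name and drop-in file name are illustrative):

```shell
# Equivalent to running 'systemctl edit nginx' and adding the [Service] lines
mkdir -p /etc/systemd/system/nginx.service.d
cat > /etc/systemd/system/nginx.service.d/oom.conf << 'EOF'
[Service]
OOMScoreAdjust=-1000
EOF
systemctl daemon-reload
systemctl restart nginx
```

Unlike the echo into /proc, this survives both reboots and service restarts.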

Huge Pages for Databases and High-Memory Applications

By default the kernel manages memory in 4KB pages. For applications working with gigabytes of data (databases like PostgreSQL, MySQL, Oracle; JVM applications), the CPU spends significant time managing millions of tiny page table entries. Huge pages (2MB or 1GB pages) dramatically reduce this overhead:

# Check if huge pages are available
grep -i hugepage /proc/meminfo

# Set number of 2MB huge pages at runtime
echo 512 > /proc/sys/vm/nr_hugepages   # 512 x 2MB = 1GB reserved

# Make it persistent
echo "vm.nr_hugepages = 512" >> /etc/sysctl.d/99-hugepages.conf

# Check allocation status
grep HugePages /proc/meminfo

# Transparent Huge Pages (THP): automatic huge page management
# For databases: DISABLE THP (PostgreSQL, MySQL, MongoDB all recommend this)
# For general applications: can leave as "madvise"
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

# Make THP disabled persistent (add to /etc/rc.local or systemd unit):
# echo never > /sys/kernel/mm/transparent_hugepage/enabled
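One way to persist the THP setting is a small oneshot systemd unit; the unit name and path below are illustrative, not a standard file:

```shell
cat > /etc/systemd/system/disable-thp.service << 'EOF'
[Unit]
Description=Disable Transparent Huge Pages
After=sysinit.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'
ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/defrag'

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now disable-thp.service
```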

File System and I/O Tuning

File Descriptor Limits

# Maximum number of file descriptors system-wide
# Default on modern kernels: 9223372036854775807 (effectively unlimited)
# Note: on such kernels this line LOWERS the limit; only set it on older
# kernels, and remember the per-process limit (ulimit -n) may still be low
fs.file-max = 2097152

# Check current usage
cat /proc/sys/fs/file-nr   # columns: allocated, unused, max

# Maximum number of inotify watches (used by file watching tools, IDEs, systemd)
# Default: 8192; increase if you see "inotify watch limit reached" errors
fs.inotify.max_user_watches = 524288

# Maximum inotify instances per user
fs.inotify.max_user_instances = 256

# Maximum number of async I/O requests in flight system-wide
fs.aio-max-nr = 1048576

Virtual Memory and Memory-Mapped Files

# Maximum number of memory-mapped areas per process
# Default: 65530; increase for Java applications, Elasticsearch, databases
# Elasticsearch requires at least 262144
vm.max_map_count = 1048576

# Check current value
sysctl vm.max_map_count

# If Elasticsearch fails with "max virtual memory areas vm.max_map_count [65530] is too low"
echo "vm.max_map_count = 262144" >> /etc/sysctl.d/99-elasticsearch.conf
sysctl -p /etc/sysctl.d/99-elasticsearch.conf

Kernel and Process Tuning

# Kernel panic: automatically reboot after a kernel panic
# Default: 0 (do not reboot; the kernel hangs waiting for someone to see the message)
# For production servers: auto-reboot after 10 seconds
kernel.panic = 10

# Panic on oops (non-fatal kernel errors): optional, aggressive
# kernel.panic_on_oops = 1

# Enable ASLR (Address Space Layout Randomisation) for security
# Default: 1 (partial) or 2 (full); should be 2 on all servers
kernel.randomize_va_space = 2

# Core dump settings
# Disable core dumps for setuid programs (security)
fs.suid_dumpable = 0

# Scheduler tuning: how long a process can run before preemption (nanoseconds)
# Moved to debugfs in kernel 5.13 and removed by the EEVDF scheduler (6.6+),
# so this sysctl only applies to older kernels
# For latency-sensitive workloads (real-time, audio, trading): reduce
# kernel.sched_latency_ns = 6000000    # 6ms

# Number of CPUs to use for RPS (Receive Packet Steering) is set in /sys, not sysctl

Security-Focused sysctl Parameters

# Restrict which processes can attach with ptrace (debuggers, strace)
# kernel.yama.ptrace_scope controls ptrace access:
# 0 = any process can ptrace others running as the same user
# 1 = only a parent can ptrace its children (default)
# 2 = only processes with CAP_SYS_PTRACE (effectively root)
# 3 = ptrace disabled entirely
kernel.yama.ptrace_scope = 1

# Disable magic SysRq key (allows privileged ops from keyboard)
# On servers: disable it (no keyboard attached anyway)
kernel.sysrq = 0

# Restrict dmesg to root only (hides kernel pointers from unprivileged users)
kernel.dmesg_restrict = 1

# Hide kernel pointer addresses in /proc
kernel.kptr_restrict = 2

# kernel.modules_disabled = 1 blocks ALL further module loading, even by
# root, until reboot; keep 0 unless every needed module loads at boot
kernel.modules_disabled = 0

# Disable IPv4 forwarding (unless this is a router/gateway)
net.ipv4.ip_forward = 0

# Ignore ICMP broadcast (prevents being used in Smurf attacks)
net.ipv4.icmp_echo_ignore_broadcasts = 1

# Protect against IP spoofing with reverse path filtering
# 1 = strict (drop packets whose best return route is not the arrival interface)
# 2 = loose (drop only if the source is unreachable via any interface;
#     needed for asymmetric routing)
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1

# Log martian packets (source/destination addresses that should not appear on this interface)
net.ipv4.conf.all.log_martians = 1

netfilter Connection Tracking Tuning

If you run a firewall (nftables/iptables/firewalld), the netfilter connection tracking table can fill up under high connection rates, causing "nf_conntrack: table full, dropping packet" errors in dmesg. When the table is full, packets for new connections are dropped, which takes connectivity down completely:

# Check current conntrack usage
cat /proc/sys/net/netfilter/nf_conntrack_count    # Current tracked connections
cat /proc/sys/net/netfilter/nf_conntrack_max       # Maximum allowed

# Check if you are hitting the limit
dmesg | grep "nf_conntrack: table full"
conntrack -C 2>/dev/null    # If conntrack tool is installed

# Increase the maximum connections table size
# Default: varies, often 65536; increase for busy servers/gateways
net.netfilter.nf_conntrack_max = 524288

# Reduce connection tracking timeouts to free entries faster
# TIME_WAIT connections: how long to track after TCP close
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 30       # Default: 120s
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 15      # Default: 60s
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 30        # Default: 120s

# Established connections: reduce if you have short-lived connections
# net.netfilter.nf_conntrack_tcp_timeout_established = 3600  # Default: 432000 (5 days)
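A one-liner utilization check is handy for monitoring scripts; the 80% alert threshold here is an arbitrary example:

```shell
# Print conntrack table utilization; warn above 80%
count=$(cat /proc/sys/net/netfilter/nf_conntrack_count 2>/dev/null || echo 0)
max=$(cat /proc/sys/net/netfilter/nf_conntrack_max 2>/dev/null || echo 1)
pct=$(( count * 100 / max ))
echo "conntrack: ${count}/${max} (${pct}%)"
[ "$pct" -ge 80 ] && echo "WARNING: conntrack table nearly full" || true
```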

Complete Production Server Configuration

This is a production-ready sysctl configuration for a Linux web/application server. Copy to /etc/sysctl.d/99-production.conf:

cat > /etc/sysctl.d/99-production.conf << 'EOF'
# --- NETWORK: Socket Buffers ---
net.core.rmem_default = 1048576
net.core.rmem_max = 134217728
net.core.wmem_default = 1048576
net.core.wmem_max = 134217728
net.ipv4.tcp_rmem = 4096 1048576 134217728
net.ipv4.tcp_wmem = 4096 1048576 134217728

# --- NETWORK: Connection Queues ---
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535
net.core.netdev_max_backlog = 65535

# --- NETWORK: TCP Behaviour ---
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_fastopen = 3
net.ipv4.tcp_syncookies = 1

# --- NETWORK: BBR Congestion Control ---
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

# --- NETWORK: Security ---
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1

# --- MEMORY ---
vm.swappiness = 10
vm.dirty_ratio = 15
vm.dirty_background_ratio = 5
vm.max_map_count = 1048576

# --- FILESYSTEM ---
fs.file-max = 2097152
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 256
fs.aio-max-nr = 1048576

# --- KERNEL ---
kernel.panic = 10
kernel.randomize_va_space = 2
kernel.dmesg_restrict = 1
kernel.kptr_restrict = 2
kernel.sysrq = 0

# --- NETFILTER ---
net.netfilter.nf_conntrack_max = 524288
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 30
EOF

# Apply immediately
sysctl -p /etc/sysctl.d/99-production.conf

# Verify key parameters
sysctl net.core.somaxconn net.ipv4.tcp_congestion_control vm.swappiness

Workload-Specific Tuning Profiles

PostgreSQL / MySQL Database Server

cat > /etc/sysctl.d/99-database.conf << 'EOF'
# Large shared memory for database buffer pools
kernel.shmmax = 17179869184       # 16GB; must be >= shared_buffers in PostgreSQL
kernel.shmall = 4194304           # Pages (4194304 x 4KB = 16GB)
kernel.shmmni = 4096

# Reduce swappiness to near zero; databases must stay in RAM
vm.swappiness = 5

# Flush dirty pages more aggressively for data safety
vm.dirty_ratio = 10
vm.dirty_background_ratio = 3
vm.dirty_expire_centisecs = 500

# NUMA balancing can hurt database performance
kernel.numa_balancing = 0

# Huge pages for buffer pool (set to cover shared_buffers size)
# vm.nr_hugepages = 4096   # 4096 x 2MB = 8GB; adjust to your shared_buffers

# Disable transparent huge pages (PostgreSQL and MySQL recommend this)
# Must be done via rc.local or systemd unit:
# echo never > /sys/kernel/mm/transparent_hugepage/enabled
EOF

High-Traffic Web Server (Nginx/HAProxy)

cat > /etc/sysctl.d/99-webserver.conf << 'EOF'
# Very high connection queue to handle bursts of concurrent connections
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535

# Aggressive TIME_WAIT reuse; web servers generate many short-lived connections
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_max_tw_buckets = 2000000

# Large backlog for incoming packets before kernel starts dropping
net.core.netdev_max_backlog = 65535

# BBR for better throughput to distant clients
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr

# More sockets per second: widen the local port range
net.ipv4.ip_local_port_range = 1024 65535
EOF

Kubernetes / Container Node

cat > /etc/sysctl.d/99-kubernetes.conf << 'EOF'
# Required for Kubernetes networking (bridges)
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1

# Required for Kubernetes IP forwarding
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1

# Large inotify watches; containers use many files
fs.inotify.max_user_watches = 1048576
fs.inotify.max_user_instances = 8192

# Large map count; container workloads use many memory-mapped areas
vm.max_map_count = 1048576

# Large conntrack table; many containers means many connections
net.netfilter.nf_conntrack_max = 1048576
EOF

# Load required kernel module for bridge netfilter
modprobe br_netfilter
echo "br_netfilter" >> /etc/modules-load.d/kubernetes.conf

Measuring the Impact of Your Tuning

# Measure TCP throughput before and after tuning
# On server (listening):
iperf3 -s

# On client (sending):
iperf3 -c server-ip -t 30 -P 4    # 4 parallel streams

# Check connection queue overflow (counters should not be increasing after tuning)
# ListenOverflows = accept queue full; ListenDrops = dropped connection attempts
nstat -az TcpExtListenOverflows TcpExtListenDrops

# Real-time socket statistics
ss -s                    # Summary
ss -tln                  # Listening TCP sockets with backlog info
ss -tn | wc -l           # Count active connections
ss -tn state time-wait   # Count TIME_WAIT sockets

# Memory pressure indicators
vmstat 1 10              # Virtual memory stats every 1 second
free -h                  # Memory usage
grep -E "MemFree|Cached|SwapFree|Dirty" /proc/meminfo

# Disk I/O statistics
iostat -x 1 10           # Extended I/O stats
iotop -o                 # Top processes by I/O

# Overall system performance
sar -u 1 5               # CPU
sar -n DEV 1 5           # Network
sar -r 1 5               # Memory

# Check for dropped packets at NIC level
ip -s link show eth0 | grep -A5 "RX\|TX"
ethtool -S eth0 | grep -i drop    # NIC-level drops (if ethtool available)

Common Mistakes and Anti-Patterns

  • Applying changes with sysctl -w and forgetting to persist them.
    Why it is wrong: lost on reboot, giving false confidence that the server is tuned.
    Instead: always write to /etc/sysctl.d/ and run sysctl -p.

  • Setting vm.swappiness = 0 on database servers.
    Why it is wrong: the kernel may still need to swap under pressure; 0 can cause OOM kills instead of swapping.
    Instead: use vm.swappiness = 5 or 10.

  • Setting tcp_tw_recycle = 1.
    Why it is wrong: removed in Linux 4.12, and on older kernels it breaks NAT environments.
    Instead: use tcp_tw_reuse = 1.

  • Increasing buffers without measuring.
    Why it is wrong: wastes memory that could be used for page cache.
    Instead: profile first with iperf3/netperf, then tune.

  • Disabling transparent huge pages globally.
    Why it is wrong: helps databases but hurts general workloads.
    Instead: disable only when running databases (via a systemd drop-in for specific services).

  • Setting net.ipv4.ip_forward = 1 on non-routers.
    Why it is wrong: turns the server into a router; traffic may be forwarded unexpectedly.
    Instead: leave at 0 unless the server is a gateway/NAT device.
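The tcp_tw_recycle mistake in particular is easy to catch with a quick audit of the persisted configs:

```shell
# Flag any persisted reference to the removed tcp_tw_recycle knob
grep -rn "tcp_tw_recycle" /etc/sysctl.conf /etc/sysctl.d/ 2>/dev/null \
  && echo "remove the lines above" || echo "clean: no tcp_tw_recycle found"
```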

Checking Your Current Tuning Against Benchmarks

# Check if your key parameters match recommendations
echo "=== Socket Queue Limits ==="
sysctl net.core.somaxconn net.ipv4.tcp_max_syn_backlog

echo "=== TCP Congestion Control ==="
sysctl net.ipv4.tcp_congestion_control

echo "=== Memory ==="
sysctl vm.swappiness vm.max_map_count

echo "=== File Limits ==="
sysctl fs.file-max fs.inotify.max_user_watches

echo "=== Current file handles in use ==="
cat /proc/sys/fs/file-nr

echo "=== Current TIME_WAIT count ==="
ss -tn state time-wait | wc -l

echo "=== Listen queue overflows (should be 0) ==="
netstat -s | grep -i "listen\|overflow\|drop" 2>/dev/null || \
ss -s | grep listen

Conclusion

sysctl tuning is not magic: it is about understanding what the kernel is doing and removing constraints that exist only because the defaults must suit a wide range of hardware. A server running PostgreSQL has completely different optimal settings from a web proxy or a container node. The approach that works is: profile first (identify the bottleneck), apply targeted changes, measure the impact, and persist only what helps. The production configuration in this guide covers the parameters that matter for the vast majority of Linux server workloads. Apply it as a baseline, then layer on workload-specific settings from the profiles section, and use the measurement commands to verify that your changes are actually making a difference.

About Ramesh Sundararamaiah

Red Hat Certified Architect

Expert in Linux system administration, DevOps automation, and cloud infrastructure. Specializing in Red Hat Enterprise Linux, CentOS, Ubuntu, Docker, Ansible, and enterprise IT solutions.
