How to Deploy and Configure Caddy Web Server for Production Applications
Caddy is a modern, open-source web server that automatically handles HTTPS certificates and offers a simplified configuration syntax compared to traditional servers like Apache or NGINX. Built in Go, Caddy excels at serving static files, reverse proxying applications, and managing SSL/TLS certificates without manual intervention.
The key advantage of Caddy lies in its zero-configuration HTTPS approach: while other web servers require complex certificate management workflows, Caddy automatically obtains, renews, and manages Let's Encrypt certificates. This makes it particularly valuable for production environments where security and operational simplicity are priorities.
Why Use It?
The Caddyfile configuration format uses human-readable syntax that's significantly simpler than Apache's virtual hosts or NGINX's server blocks.
Performance-wise, Caddy handles thousands of concurrent connections efficiently while maintaining a smaller memory footprint than many alternatives. It's particularly well-suited for:
- Static site hosting - Perfect for JAMstack applications with built-in compression and caching
- Reverse proxy scenarios - Excellent for containerized applications and microservices
- API gateways - Handles routing, load balancing, and SSL termination seamlessly
- Development environments - Quick setup for local HTTPS testing
Choose Caddy when you prioritize operational simplicity, automatic security, and modern web standards. It's especially valuable for teams that want production-ready HTTPS without certificate management complexity.
Prerequisites and Environment Setup
Before deploying Caddy, ensure your environment meets these requirements:
System Requirements:
- Linux server (Ubuntu 20.04+ or CentOS 8+ recommended)
- Minimum 1GB RAM, 1 CPU core
- Docker and Docker Compose installed
- Domain name pointing to your server's public IP
Configure your firewall to allow necessary ports:
# Allow HTTP and HTTPS (both are needed for ACME challenges and normal traffic)
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
# The admin API (port 2019) listens on localhost by default and should not be opened externally
sudo ufw enable
Verify Docker installation:
docker --version
docker-compose --version
For automatic certificate provisioning, ensure your domain's DNS A record points to your server. Let's Encrypt requires this for domain validation during certificate issuance.
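Before requesting certificates, confirm the record has propagated. A quick check with dig, where example.com stands in for your domain and 203.0.113.10 for your server's public IP:
# Confirm the A record resolves to your server's public IP
dig +short A example.com
# Expected output: 203.0.113.10 (your server's address)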
Quick Start: Basic Caddy Installation with Docker
Start with a simple static file server to understand Caddy's core concepts. Create a project directory and basic file structure:
mkdir caddy-demo && cd caddy-demo
mkdir -p www data config
echo "Hello from Caddy!" > www/index.html
Create a basic Caddyfile configuration:
cat > Caddyfile << 'EOF'
# the explicit http:// prefix keeps this local test on plain HTTP (no automatic HTTPS)
http://localhost:8080 {
root * /srv
file_server
}
EOF
Run Caddy using Docker:
docker run -d \
--name caddy \
-p 8080:8080 \
-v $PWD/Caddyfile:/etc/caddy/Caddyfile \
-v $PWD/www:/srv \
caddy:latest
This command creates a container that:
- Mounts your Caddyfile as the configuration
- Serves files from the www directory
- Exposes the service on port 8080
Test the deployment by visiting http://localhost:8080. You should see your HTML content served successfully.
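You can also verify from the command line; a plain curl request should return the contents of index.html:
curl http://localhost:8080
# Expected output: Hello from Caddy!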
Common startup errors:
- Port already in use: Change the host port in the docker run command
- Permission denied: Ensure the www directory has appropriate read permissions
- Configuration errors: Check Caddyfile syntax using docker exec caddy caddy validate --config /etc/caddy/Caddyfile
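If the container exits immediately or the errors above persist, the container logs usually point to the offending Caddyfile line:
# Inspect startup output and configuration errors
docker logs caddy
# Follow logs live while reproducing the problem
docker logs -f caddy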
Production Docker Deployment with Docker Compose
For production deployments, Docker Compose provides better configuration management and persistence. Create a comprehensive docker-compose.yml:
version: '3.8'
services:
caddy:
image: caddy:2-alpine
container_name: caddy
restart: unless-stopped
ports:
- "80:80"
- "443:443"
- "443:443/udp" # HTTP/3 support
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile:ro
- ./www:/srv:ro
- caddy_data:/data
- caddy_config:/config
networks:
- caddy
healthcheck:
test: ["CMD", "caddy", "version"]
interval: 30s
timeout: 10s
retries: 3
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
volumes:
caddy_data:
external: true
caddy_config:
networks:
caddy:
external: true
Create the required Docker volumes and network:
docker volume create caddy_data
docker network create caddy
Update your Caddyfile for production with a real domain:
example.com {
root * /srv
file_server
# Enable compression
encode gzip
# Security headers
header {
# Enable HSTS
Strict-Transport-Security max-age=31536000
# Prevent MIME sniffing
X-Content-Type-Options nosniff
# Clickjacking protection
X-Frame-Options DENY
# XSS protection
X-XSS-Protection "1; mode=block"
}
# Logging
log {
output file /var/log/caddy/access.log {
roll_size 100mb
roll_keep 5
roll_keep_for 720h
}
}
}
Deploy the stack:
docker-compose up -d
Monitor the deployment:
# Check container status
docker-compose ps
# View logs
docker-compose logs -f caddy
# Verify certificate provisioning (ACME activity shows up in the logs)
docker-compose logs caddy | grep -i certificate
Caddyfile Configuration Deep Dive
The Caddyfile uses a hierarchical structure where site blocks define how Caddy handles requests for specific domains. Understanding this syntax is crucial for effective configuration.
Basic Structure:
# Global options (optional)
{
admin localhost:2019
email your-email@example.com
}
# Site block
example.com {
# Directives go here
root * /var/www
file_server
}
# Multiple domains
app.example.com, api.example.com {
reverse_proxy backend:8080
}
Essential Directives:
The root directive sets the document root:
root * /srv/public
root /api/* /srv/api
The file_server directive enables static file serving with optional parameters:
file_server {
hide .htaccess .git
index index.html index.htm
browse # Enable directory listing
}
The reverse_proxy directive forwards requests to backend services:
reverse_proxy /api/* backend:8080 {
header_up Host {upstream_hostport}
header_up X-Real-IP {remote_host}
}
Environment-Specific Configuration:
Use environment variables for flexible deployments:
{$DOMAIN:localhost} {
root * /srv
file_server
@api path /api/*
reverse_proxy @api {$BACKEND_URL:http://localhost:3000}
tls {$TLS_EMAIL:internal}
}
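One way to supply these variables is directly on the docker run command line; the domain, backend address, and email below are placeholders for your own values:
docker run -d \
  --name caddy \
  -p 80:80 -p 443:443 \
  -e DOMAIN=example.com \
  -e BACKEND_URL=http://backend:3000 \
  -e TLS_EMAIL=admin@example.com \
  -v $PWD/Caddyfile:/etc/caddy/Caddyfile \
  -v $PWD/www:/srv \
  -v caddy_data:/data \
  caddy:2-alpine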
Wildcard Certificates:
For subdomains, configure the DNS challenge. Note that DNS provider modules such as caddy-dns/cloudflare are not bundled with the official image, so this requires a custom Caddy build (see the xcaddy sketch after this block):
*.example.com {
tls {
dns cloudflare {env.CLOUDFLARE_API_TOKEN}
}
@app host app.example.com
reverse_proxy @app app-backend:8080
@api host api.example.com
reverse_proxy @api api-backend:3000
}
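A common way to get the Cloudflare DNS module is to build a custom image with xcaddy, following the pattern from the official caddy image documentation; the caddy-cloudflare tag below is just an example name:
# Build a custom image that bundles the Cloudflare DNS module
cat > Dockerfile << 'EOF'
FROM caddy:2-builder AS builder
RUN xcaddy build --with github.com/caddy-dns/cloudflare
FROM caddy:2-alpine
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
EOF
docker build -t caddy-cloudflare .
Remember to pass CLOUDFLARE_API_TOKEN into the container at runtime, for example via an environment entry in docker-compose.yml.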
Automatic HTTPS and Certificate Management
Caddy's automatic HTTPS uses the ACME protocol to obtain certificates from Let's Encrypt. This process happens transparently during startup and continues with automatic renewals.
How It Works:
- Caddy detects HTTPS-eligible domains in your configuration
- Performs domain validation via HTTP-01 or TLS-ALPN-01 challenge
- Stores certificates in the data directory
- Automatically renews certificates before expiration
Certificates are stored in Docker volumes, ensuring persistence across container restarts. Monitor certificate status:
# List certificates stored in the data volume
docker-compose exec caddy ls -R /data/caddy/certificates
# Inspect the certificate currently served for a domain
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null | openssl x509 -noout -subject -issuer -dates
Custom Certificate Authority:
For internal networks or testing, configure a custom CA:
{
acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
# Or for internal CA:
# acme_ca https://internal-ca.company.com/acme/directory
}
internal.company.com {
tls internal # Use Caddy's internal CA
reverse_proxy backend:8080
}
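Clients on your internal network need to trust Caddy's internal root CA before tls internal sites validate cleanly. A sketch for exporting it, assuming the default /data storage layout of the official image:
# Copy the internal root certificate out of the container
docker-compose exec caddy cat /data/caddy/pki/authorities/local/root.crt > caddy-local-root.crt
# Verify the exported certificate
openssl x509 -in caddy-local-root.crt -noout -subject -dates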
Certificate Backup Strategy:
Backup the certificate data volume regularly:
# Create backup
docker run --rm -v caddy_data:/data -v $PWD:/backup alpine \
tar czf /backup/caddy-certificates-$(date +%Y%m%d).tar.gz -C /data .
# Restore backup
docker run --rm -v caddy_data:/data -v $PWD:/backup alpine \
tar xzf /backup/caddy-certificates-20231201.tar.gz -C /data
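To run this backup on a schedule, a cron entry along these lines works; /opt/caddy-demo is a placeholder for your project directory:
# Nightly certificate backup at 02:00 (add with: crontab -e)
0 2 * * * cd /opt/caddy-demo && docker run --rm -v caddy_data:/data -v "$PWD":/backup alpine tar czf /backup/caddy-certificates-$(date +\%Y\%m\%d).tar.gz -C /data .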
Reverse Proxy Configuration for Applications
Caddy excels as a reverse proxy, handling SSL termination while forwarding requests to backend applications. This pattern is essential for containerized deployments.
Basic Reverse Proxy:
api.example.com {
reverse_proxy backend:8080 {
        # Caddy already forwards the client's Host and sets X-Forwarded-* headers by default;
        # the header_up lines below make those values explicit
        header_up Host {host}
header_up X-Real-IP {remote_host}
header_up X-Forwarded-For {remote_host}
header_up X-Forwarded-Proto {scheme}
}
}
Load Balancing Multiple Backends:
app.example.com {
reverse_proxy backend1:8080 backend2:8080 backend3:8080 {
lb_policy round_robin
# Health checks
health_uri /health
health_interval 30s
health_timeout 5s
# Failover configuration
fail_duration 30s
max_fails 3
unhealthy_status 5xx
}
}
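You can confirm what the proxy sees by querying each backend's health endpoint from inside the Caddy container; this assumes the backends expose /health as configured above and relies on the alpine image's busybox wget:
# Probe each backend's health endpoint from the Caddy container
for b in backend1 backend2 backend3; do
  docker-compose exec caddy wget -qO- "http://$b:8080/health" && echo " <- $b healthy"
done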
WebSocket Support:
WebSocket connections require specific handling:
chat.example.com {
@websockets {
header Connection *Upgrade*
header Upgrade websocket
}
reverse_proxy @websockets websocket-backend:8080
reverse_proxy * web-backend:8080
}
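A quick way to confirm the upgrade path is a manual handshake with curl; a 101 Switching Protocols response means the request reached the WebSocket backend (the Sec-WebSocket-Key value is an arbitrary example):
curl -i -N https://chat.example.com/ \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ=="
# Expect: HTTP/1.1 101 Switching Protocols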
API Gateway Pattern:
gateway.example.com {
# Authentication service
@auth path /auth/*
reverse_proxy @auth auth-service:8080
# User service
@users path /users/*
reverse_proxy @users user-service:8081 {
# Add authentication headers
header_up X-Auth-Token {header.Authorization}
}
# Orders service
@orders path /orders/*
reverse_proxy @orders order-service:8082
    # Rate limiting (requires the third-party caddy-ratelimit module)
    rate_limit {
        zone static_ip {
            key {remote_host}
            events 100
            window 1m
        }
    }
}
Security Hardening and Best Practices
Production deployments require comprehensive security measures. Caddy provides built-in security features that should be properly configured.
Security Headers Configuration:
example.com {
header {
# HSTS with preload
Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
# Content Security Policy
Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'"
# Prevent clickjacking
X-Frame-Options "SAMEORIGIN"
# MIME type sniffing protection
X-Content-Type-Options "nosniff"
# XSS protection
X-XSS-Protection "1; mode=block"
# Referrer policy
Referrer-Policy "strict-origin-when-cross-origin"
# Remove server information
-Server
}
# Request size limits
request_body {
max_size 10MB
}
root * /srv
file_server
}
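After reloading, verify that the headers actually reach clients:
# Show only the security-related response headers
curl -sI https://example.com | grep -iE 'strict-transport|x-frame|x-content-type|content-security|referrer-policy'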
Access Control and IP Restrictions:
admin.example.com {
# Restrict to specific IP ranges
@blocked not remote_ip 192.168.1.0/24 10.0.0.0/8
respond @blocked "Access denied" 403
# Admin interface protection
@admin path /admin/*
basicauth @admin {
admin $2a$14$hashed_password_here
}
reverse_proxy backend:8080
}
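The bcrypt hash in the basicauth block is a placeholder; generate a real one with Caddy's built-in helper and paste the output into the Caddyfile:
# Generate a bcrypt hash for the basicauth password
docker-compose exec caddy caddy hash-password --plaintext 'your-strong-password'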
Rate Limiting:
Rate limiting is not built into the standard Caddy distribution; it requires a build that includes the caddy-ratelimit module (for example via xcaddy). The zone-based syntax below follows that module:
api.example.com {
    # Zones are defined inside a single rate_limit block
    rate_limit {
        # Global limit shared by all clients
        zone api_global {
            key static
            events 1000
            window 1m
        }
        # Per-IP limit
        zone api_per_ip {
            key {remote_host}
            events 100
            window 1m
        }
    }
reverse_proxy backend:8080
}
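A simple way to check the limiter is to hammer the endpoint and watch for rejected requests once the window is exhausted; the exact rejection status depends on the module's configuration, but 429 is typical:
# Send 120 requests and count the response codes
for i in $(seq 1 120); do
  curl -s -o /dev/null -w '%{http_code}\n' https://api.example.com/
done | sort | uniq -c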
Container Security:
Enhance Docker security in your compose file:
services:
caddy:
image: caddy:2-alpine
user: "1000:1000" # Non-root user
read_only: true
security_opt:
- no-new-privileges:true
cap_drop:
- ALL
cap_add:
- NET_BIND_SERVICE
tmpfs:
- /tmp
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile:ro
- caddy_data:/data
- caddy_config:/config
Performance Optimization and Caching
Optimize Caddy for high-performance production workloads through compression, caching, and connection tuning.
Compression Configuration:
example.com {
# Enable multiple compression algorithms
encode {
gzip 6
zstd
minimum_length 1024
match {
header Content-Type text/*
header Content-Type application/json*
header Content-Type application/javascript*
header Content-Type application/xml*
}
}
root * /srv
file_server
}
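Verify that responses are actually compressed by sending an Accept-Encoding header and checking what comes back:
# Check which encoding Caddy selected for the response
curl -sI -H 'Accept-Encoding: gzip, zstd' https://example.com/ | grep -i content-encoding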
Static Asset Caching:
example.com {
# Cache static assets
@static path *.css *.js *.png *.jpg *.jpeg *.gif *.ico *.woff *.woff2
    header @static {
        Cache-Control "public, max-age=31536000, immutable"
    }
# Cache HTML with shorter duration
@html path *.html
header @html Cache-Control "public, max-age=3600"
root * /srv
file_server {
precompressed gzip br
}
}
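The precompressed option serves sibling .gz and .br files when they exist; one way to generate them at deploy time, assuming the gzip and brotli CLIs are installed on the build host:
# Pre-compress static assets so file_server can serve them directly
find www -type f \( -name '*.css' -o -name '*.js' -o -name '*.html' \) \
  -exec gzip -k -9 {} \; \
  -exec brotli -k -q 11 {} \;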
Connection Optimization:
Configure global options for better performance:
{
    servers {
        # Increase default timeouts for high-load scenarios
        timeouts {
            read_body 30s
            read_header 30s
            write 30s
            idle 5m
        }
        # Cap request header size
        max_header_size 16KB
    }
}
CDN Integration:
Configure Caddy to work effectively with CDNs:
example.com {
    # Trust CDN edge addresses (partial Cloudflare list; keep in sync with Cloudflare's published ranges)
    @cdn remote_ip 103.21.244.0/22 103.22.200.0/22
    # Forward the real client IP supplied by the CDN to the backend (request_header modifies the incoming request)
    request_header @cdn X-Real-IP {header.CF-Connecting-IP}
# Optimize for CDN caching
@api path /api/*
header @api {
Cache-Control "private, no-cache"
Vary "Accept-Encoding, Authorization"
}
reverse_proxy backend:8080
}
Monitoring, Logging, and Observability
Production deployments require comprehensive monitoring and logging for troubleshooting and performance analysis.
Structured Logging:
example.com {
log {
output file /var/log/caddy/access.log {
roll_size 100MB
roll_keep 10
roll_keep_for 2160h # 90 days
}
format json {
time_format "2006-01-02T15:04:05.000Z07:00"
message_key "msg"
level_key "level"
time_key "timestamp"
}
level INFO
}
reverse_proxy backend:8080
}
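With JSON logs in place, jq makes ad-hoc analysis straightforward; the field names below (status, request.uri) match Caddy's default JSON access-log structure:
# Show recent 5xx responses and the URIs that caused them
docker-compose exec caddy sh -c 'tail -n 200 /var/log/caddy/access.log' \
  | jq -r 'select(.status >= 500) | "\(.status) \(.request.uri)"'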
Health Check Endpoints:
example.com {
# Health check endpoint
respond /health 200 {
body "OK"
close
}
    # Metrics endpoint (restricted to internal networks)
    @blockedMetrics {
        path /metrics
        not remote_ip 10.0.0.0/8 192.168.0.0/16
    }
    respond @blockedMetrics "Forbidden" 403
    # Serve Prometheus metrics for allowed requests
    metrics /metrics
reverse_proxy backend:8080
}
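These endpoints are easy to wire into an external uptime monitor; from the command line:
# Health endpoint should return 200 with the body "OK"
curl -s -o /dev/null -w 'health: %{http_code}\n' https://example.com/health
# Metrics should be denied from outside the internal ranges
curl -s -o /dev/null -w 'metrics: %{http_code}\n' https://example.com/metrics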
Prometheus Integration:
Add monitoring to your Docker Compose stack:
services:
caddy:
image: caddy:2-alpine
# ... existing configuration
prometheus:
image: prom/prometheus:latest
ports:
- "9090:9090"
volumes:
- ./prometheus.yml:/etc/prometheus/prometheus.yml
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--web.console.libraries=/etc/prometheus/console_libraries'
- '--web.console.templates=/etc/prometheus/consoles'
volumes:
prometheus_data:
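The compose file above mounts a prometheus.yml that is not shown; a minimal sketch that scrapes Caddy's admin endpoint follows. It assumes the admin API is reachable from the prometheus container (you may need admin 0.0.0.0:2019 in the Caddyfile global options and both services on the same network):
cat > prometheus.yml << 'EOF'
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: caddy
    static_configs:
      - targets: ['caddy:2019']
EOF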
Troubleshooting Common Issues
Understanding common Caddy issues and their solutions helps maintain reliable production deployments.
Certificate Provisioning Failures:
Check certificate status and debug issues:
# Validate configuration
docker-compose exec caddy caddy validate --config /etc/caddy/Caddyfile
# Check which certificates are stored on disk
docker-compose exec caddy ls -R /data/caddy/certificates
# Reload the configuration; renewal itself is automatic, so watch the logs for ACME errors
docker-compose exec caddy caddy reload --config /etc/caddy/Caddyfile
docker-compose logs --tail=100 caddy | grep -i acme
Common certificate issues:
- DNS not propagated: Verify the domain points to the correct IP with dig example.com
- Port 80 blocked: Ensure the firewall allows HTTP for the ACME challenge
- Rate limiting: Let's Encrypt has rate limits; use staging environment for testing
Performance Issues:
Debug performance problems:
# Monitor resource usage
docker stats caddy
# Check connection limits
ss -tuln | grep :443
# Analyze access logs
docker-compose exec caddy tail -f /var/log/caddy/access.log | jq '.'
Configuration Debugging:
Use Caddy's admin API for runtime inspection:
# Get current configuration
curl http://localhost:2019/config/ | jq '.'
# Inspect the built-in local CA (used by "tls internal")
curl http://localhost:2019/pki/ca/local | jq '.'
Migration from Apache/NGINX
Migrating from traditional web servers to Caddy requires understanding configuration equivalents and testing thoroughly.
Apache Virtual Host to Caddy:
Apache configuration:
<VirtualHost *:443>
ServerName example.com
DocumentRoot /var/www/html
SSLEngine on
SSLCertificateFile /path/to/cert.pem
SSLCertificateKeyFile /path/to/key.pem
</VirtualHost>
Caddy equivalent:
example.com {
root * /var/www/html
file_server
# HTTPS automatic - no SSL configuration needed
}
NGINX Server Block to Caddy:
NGINX configuration:
server {
listen 443 ssl;
server_name api.example.com;
location / {
proxy_pass http://backend:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
}
Caddy equivalent:
api.example.com {
    reverse_proxy backend:8080 {
        # Caddy keeps the original Host header by default (the equivalent of proxy_set_header Host $host)
        header_up X-Real-IP {remote_host}
    }
}
Migration Strategy:
- Parallel testing: Run Caddy on alternate ports during testing (see the curl sketch after this list)
- Configuration validation: Test all endpoints and SSL certificates
- Performance comparison: Benchmark both configurations under load
- Gradual rollout: Use DNS or load balancer for traffic splitting
- Rollback plan: Keep original configuration ready for quick reversion
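For the parallel-testing step, curl's --resolve flag lets you exercise the new Caddy instance through the real hostname before changing DNS; the IP and port below are placeholders:
# Point example.com at the Caddy test instance for this request only
curl -vk --resolve example.com:8443:203.0.113.10 https://example.com:8443/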
Next Steps and Production Considerations
With Caddy successfully deployed, consider these advanced topics for production excellence:
Scaling Considerations:
- Implement horizontal scaling with multiple Caddy instances behind a load balancer
- Use shared certificate storage for multi-instance deployments
- Consider Caddy clustering for high availability scenarios
Advanced Features to Explore:
- Custom middleware development for specific business logic
- Integration with service discovery systems like Consul
- Advanced traffic routing based on headers, geography, or user agents
- WebAssembly plugins for extending Caddy functionality
Operational Excellence:
- Implement comprehensive monitoring with Grafana dashboards
- Set up automated certificate expiration alerts
- Create disaster recovery procedures for certificate and configuration data
- Establish security scanning and vulnerability management processes
Caddy's combination of automatic HTTPS, simple configuration, and robust performance makes it an excellent choice for modern web applications. Its Docker-friendly architecture and operational simplicity reduce the complexity typically associated with production web server deployments, allowing teams to focus on application development rather than infrastructure management.