A few months ago I read about how great Varnish is for speeding up websites and I wanted to give it a try. The first tutorial I found worked really well, except for one very important detail: HTTPS traffic. I have no idea why, but it wasn’t easy for me to find a good example of how to use Varnish + Nginx + SSL.
After reading several articles and putting it all together, I came up with a configuration that works well enough for me. It may not be the best, but it works: I can speed up my sites, I can pass the client IP to the backend server for statistics or ACL purposes, and I can bypass the Varnish cache for certain URLs if necessary.
I know there are more fancy solutions out there, but this is just the basic configuration to get you started so you don’t have to spend so much time looking for the same information I’ve already gathered.
For this example I used Varnish 6.6.1 with Nginx as the backend.
The idea is to configure Varnish to listen on port 80 (HTTP) so it receives requests from the internet, while the real web server (the backend) listens on a different port. Varnish then makes a request to the backend on behalf of the client (a classic reverse proxy) and delivers the response. If you have experience with reverse proxies, you already know how this works. For HTTPS traffic, I configured nginx to act as a reverse proxy as well: it terminates SSL on port 443 and forwards the decrypted traffic to Varnish.
The traffic would look like this:
- HTTP traffic: request -> varnish:80 -> nginx (127.0.0.1:8080)
- HTTPS traffic: request -> nginx:443 -> varnish:80 -> nginx (127.0.0.1:8080)
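Once both services are configured (the configs follow below), a quick way to confirm this layout is to check which process owns each port. This is only a sanity check; the exact output format and process details will differ on your system:

sudo ss -ltnp | grep -E ':80 |:443 |:8080 '
# Expected, roughly:
#   0.0.0.0:80        varnishd   (public HTTP entry point)
#   0.0.0.0:443       nginx      (SSL termination)
#   127.0.0.1:8080    nginx      (the real backend, local only)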
Nginx configuration:
/etc/nginx/sites-available/YOURSITE
# Backend virtual host: only reachable from Varnish on 127.0.0.1:8080
server {
    listen 127.0.0.1:8080;
    server_name WWW.YOURSERVER.COM;
    root /var/www/MYSERVER;
    index index.php index.htm index.html;

    access_log /var/log/nginx/YOURSERVER.COM_access_log;
    error_log /var/log/nginx/YOURSERVER.COM_error_log;

    # Restore the real client IP from the X-Forwarded-For header set by Varnish/nginx
    set_real_ip_from 127.0.0.1;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;

    add_header Vary X-Forwarded-Proto;

    fastcgi_param GATEWAY_INTERFACE CGI/1.1;
    fastcgi_param SERVER_SOFTWARE nginx;
    fastcgi_param QUERY_STRING $query_string;
    fastcgi_param REQUEST_METHOD $request_method;
    fastcgi_param CONTENT_TYPE $content_type;
    fastcgi_param CONTENT_LENGTH $content_length;
    fastcgi_param SCRIPT_FILENAME "/var/www/MYSERVER$fastcgi_script_name";
    fastcgi_param SCRIPT_NAME $fastcgi_script_name;
    fastcgi_param REQUEST_URI $request_uri;
    fastcgi_param DOCUMENT_URI $document_uri;
    fastcgi_param DOCUMENT_ROOT /var/www/MYSERVER;
    fastcgi_param SERVER_PROTOCOL $server_protocol;
    fastcgi_param REMOTE_ADDR $remote_addr;
    fastcgi_param REMOTE_PORT $remote_port;
    fastcgi_param SERVER_ADDR $server_addr;
    fastcgi_param SERVER_PORT $server_port;
    fastcgi_param SERVER_NAME $server_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    # $fastcgi_https is not a built-in nginx variable; it needs to be defined elsewhere,
    # commonly via: map $scheme $fastcgi_https { default off; https on; }
    fastcgi_param HTTPS $fastcgi_https;

    location ^~ /.well-known/ {
        try_files $uri /;
    }

    location ~ "\.php(/|$)" {
        # Split the request into the script name and its PATH_INFO
        fastcgi_split_path_info "^(.+\.php)(/.+)$";
        try_files $uri $fastcgi_script_name =404;
        default_type application/x-httpd-php;
        fastcgi_pass unix:/run/php/17356767172661609.sock;
    }

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
}

# Redirect the bare domain (HTTP) to the canonical HTTPS host
server {
    listen 127.0.0.1:8080;
    server_name YOURSERVER.COM;
    return 301 https://WWW.YOURSERVER.COM$request_uri;
}

# Redirect the bare domain (HTTPS) to the canonical HTTPS host
server {
    listen 443 ssl;
    server_name YOURSERVER.COM;
    ssl_certificate /etc/letsencrypt/live/YOURSERVER/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/YOURSERVER/privkey.pem;
    return 301 https://WWW.YOURSERVER.COM$request_uri;
}

# HTTPS front end: terminates SSL and proxies to Varnish on port 80
server {
    listen 443 ssl http2;
    server_name WWW.YOURSERVER.COM;

    access_log /var/log/nginx/YOURSERVER.COM_access_log;
    error_log /var/log/nginx/YOURSERVER.COM_error_log;

    ssl_certificate /etc/letsencrypt/live/YOURSERVER/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/YOURSERVER/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:80;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port 443;
        proxy_set_header Host $host;
    }
}
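If you want to check that the backend virtual host answers correctly before putting Varnish in front of it, you can talk to port 8080 directly. WWW.YOURSERVER.COM is just the placeholder from the config above; use your real domain:

# Query the nginx backend directly, bypassing Varnish
curl -I -H "Host: WWW.YOURSERVER.COM" http://127.0.0.1:8080/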
/etc/nginx/conf.d/varnish_client_ip.php
<?php
// Restore the real client IP for PHP applications when the request
// arrived through the proxy chain (Varnish/nginx set X-Forwarded-For)
if ( isset( $_SERVER['HTTP_X_FORWARDED_FOR'] ) ) {
    $_SERVER['REMOTE_ADDR'] = $_SERVER['HTTP_X_FORWARDED_FOR'];
}
Make sure you have this line in /etc/nginx/nginx.conf:
include /etc/nginx/conf.d/*.conf;
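You can also ask nginx to validate the whole configuration before applying it; if there is a typo in any of the files above, this will point at it:

sudo nginx -t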
Don’t forget to restart or reload nginx:
systemctl restart nginx or systemctl reload nginx
Varnish configuration:
/etc/varnish/default.vcl
# 4.0 or 4.1 syntax.
vcl 4.1;

import proxy;
import std;

backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # HTTP to HTTPS
    if ((req.http.X-Forwarded-Proto != "https") ||
        (req.http.Scheme && req.http.Scheme != "https")) {
        return (synth(750));
    } elseif (!req.http.X-Forwarded-Proto && !req.http.Scheme && !proxy.is_ssl()) {
        return (synth(750));
    }

    # Bypass Varnish
    if (req.url ~ "/WHATEVER YOU NEED/") {
        return (pass);
    }

    # Forward the client's IP to the backend
    if (req.restarts == 0) {
        if (req.http.X-Real-IP) {
            set req.http.X-Forwarded-For = req.http.X-Real-IP;
        } else if (req.http.X-Forwarded-For) {
            set req.http.X-Forwarded-For = req.http.X-Forwarded-For + ", " + client.ip;
        } else {
            set req.http.X-Forwarded-For = client.ip;
        }
    }

    # Only cache GET and HEAD requests
    if (req.method != "GET" && req.method != "HEAD") {
        set req.http.X-Cacheable = "NO:REQUEST-METHOD";
        return (pass);
    }
}

sub vcl_synth {
    if (resp.status == 750) {
        set resp.status = 301;
        set resp.http.location = "https://" + req.http.Host + req.url;
        set resp.reason = "Moved";
        return (deliver);
    }
}

sub vcl_hash {
    if (req.http.X-Forwarded-Proto) {
        # Create cache variations depending on the request protocol
        hash_data(req.http.X-Forwarded-Proto);
    }
}

sub vcl_backend_response {
    # Inject URL & Host header into the object for asynchronous banning purposes
    set beresp.http.x-url = bereq.url;
    set beresp.http.x-host = bereq.http.host;

    # If we don't get a Cache-Control header from the backend
    # we default to 1h cache for all objects
    if (!beresp.http.Cache-Control) {
        set beresp.ttl = 1h;
        set beresp.http.X-Cacheable = "YES:Forced";
    }

    # If the file is marked as static we also force-cache it for 1h and drop its cookies
    if (bereq.http.X-Static-File == "true") {
        unset beresp.http.Set-Cookie;
        set beresp.http.X-Cacheable = "YES:Forced";
        set beresp.ttl = 1h;
    }

    # Remove the Set-Cookie header when a specific Wordfence cookie is set
    if (beresp.http.Set-Cookie ~ "wfvt_|wordfence_verifiedHuman") {
        unset beresp.http.Set-Cookie;
    }

    if (beresp.http.Set-Cookie) {
        set beresp.http.X-Cacheable = "NO:Got Cookies";
    } elseif (beresp.http.Cache-Control ~ "private") {
        set beresp.http.X-Cacheable = "NO:Cache-Control=private";
    }
}

sub vcl_deliver {
    # Debug header
    if (req.http.X-Cacheable) {
        set resp.http.X-Cacheable = req.http.X-Cacheable;
    } elseif (obj.uncacheable) {
        if (!resp.http.X-Cacheable) {
            set resp.http.X-Cacheable = "NO:UNCACHEABLE";
        }
    } elseif (!resp.http.X-Cacheable) {
        set resp.http.X-Cacheable = "YES";
    }

    # Cleanup of headers
    unset resp.http.x-url;
    unset resp.http.x-host;
}
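The x-url and x-host headers stored on every object are what make the asynchronous banning mentioned in the VCL practical: you can invalidate whole groups of cached objects without knowing their exact cache keys. A rough sketch (the host and URL pattern are placeholders, adjust them to the Host header your site actually receives):

# Ban every cached object for the www host whose URL starts with /blog/
sudo varnishadm ban "obj.http.x-host == WWW.YOURSERVER.COM && obj.http.x-url ~ ^/blog/"
# List the outstanding bans to confirm it was registered
sudo varnishadm ban.list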
/etc/default/varnish
Look for the line DAEMON_OPTS="-a :6081 …" and replace "-a :6081" with "-a :80". This is how I configured mine:
DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -p feature=+http2 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,1g"
The "-s malloc,1g" parameter sets the cache size; change it to something more suitable for your server.
/etc/systemd/system/varnish.service
Again, replace "-a :6081" with "-a :80". This is how I configured mine:
ExecStart=/usr/sbin/varnishd \
          -j unix,user=vcache \
          -F \
          -a :80 \
          -p feature=+http2 \
          -T localhost:6082 \
          -f /etc/varnish/default.vcl \
          -S /etc/varnish/secret \
          -s malloc,1g
Adjust the cache size parameter (-s malloc) here too.
If you don’t have the /etc/systemd/system/varnish.service file you can create it by executing:
sudo cp /lib/systemd/system/varnish.service /etc/systemd/system/
Then:
systemctl daemon-reload
systemctl restart varnish
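At this point you can verify that requests really flow through Varnish and get cached. WWW.YOURSERVER.COM is the placeholder used throughout; the exact headers you see depend on your backend's Cache-Control:

# First request: expect a miss (Age: 0) plus the X-Cacheable debug header set in the VCL
curl -sI https://WWW.YOURSERVER.COM/ | grep -iE 'x-cacheable|x-varnish|age'
# Repeat it: on a cache hit the Age header should be greater than 0
curl -sI https://WWW.YOURSERVER.COM/ | grep -iE 'x-cacheable|x-varnish|age'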
And that’s it. I did my best to include all the information; I’m sorry if I missed any details.
Good luck.