Improving snac Performance with Nginx Proxy Cache

A few days ago, I migrated my personal Fediverse instance from Akkoma to snac. I appreciate snac a lot and believe it is the best available solution for many use cases.

Akkoma is an excellent tool, but I noticed that even for a small instance like mine, the database grows rapidly and database activity never stops. Despite the low load, my disks are continuously "flashing" - not a problem in itself, but a clear sign of ongoing activity. For a single-user instance like mine, that activity seems unnecessary.

snac has shown excellent capabilities for managing the FediMeteo project (which I'll write about in detail soon), is lightweight, and has very few dependencies. Moreover, a dedicated snac instance handles sending updates from this blog to the Fediverse.

After successfully transferring my followers from Akkoma to snac without major issues, I started using the new instance. However, as soon as I posted a photo (approximately 4 MB), something happened that I had somewhat expected, just not in this form. My home internet connection (upload speed: 20 Mbit/s) became saturated, and I also noticed that new connections and requests for smaller resources were failing with 499 errors - meaning clients were giving up before nginx could get a response from snac. After some investigation, I found the reason: for every remote instance fetching the post, nginx was requesting the multimedia file from snac. With the connection saturated, each transfer took several seconds, and since snac caps the number of active threads (the maximum is configurable), its thread pool was quickly exhausted. Subsequent nginx requests then ended in 499 errors because snac could no longer allocate a thread to serve them.

To resolve this, I decided to implement caching directly in nginx. My reverse proxy (which runs in a separate FreeBSD jail, though that doesn't change the approach) can cache multimedia files: it stores them on first request and serves them directly to everyone who asks, without contacting snac every time. This is similar to what I do for media on the BSD Cafe's Mastodon instance, where I use Varnish to keep everything in RAM.

In snac, images and multimedia files are served from a specific path, such as: https://example.com/user/s/filename.png

The key here is the /s/ segment. By instructing nginx to cache all files containing /s/ in their URL, I can offload some of the work from snac.
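As a quick sanity check of that idea, here is which paths such a rule would and wouldn't match. The pattern below is the same one used in the nginx location block later on; for a simple expression like this, grep's extended regex behaves the same as nginx's PCRE:

```shell
# Only paths with a non-empty first segment followed by /s/ should
# match - i.e., snac's media URLs, not regular pages.
printf '%s\n' /user/s/photo.png /user/index.html /s/top-level.png \
  | grep -E '^/.+/s/'
# Only /user/s/photo.png is printed.
```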

I modified my nginx.conf file to add a caching setup inside the http block:

# Caching configuration for snac
proxy_cache_path /var/cache/nginx/snac_cache levels=1:2 keys_zone=snac:10m max_size=1g inactive=1440m use_temp_path=off;

This defines a cache stored under /var/cache/nginx/snac_cache with the parameters I need. It allocates 10 MB of shared memory for cache keys and metadata, with a maximum on-disk cache size of 1 GB (useful if you decide to post some videos). Entries that aren't accessed for 1440 minutes (24 hours) are evicted (inactive=1440m). This keeps frequently accessed content (like profile avatars and banners) in the cache, while less popular content makes room for newer files. The goal isn't to cache everything, but to let nginx serve files on its own during peak times, preventing multiple remote instances from overwhelming snac simultaneously.
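One operational note: the nginx workers must be able to write to the cache path. A minimal setup sketch, assuming a FreeBSD system where nginx runs as the www user (adjust the path and ownership for your own setup; these are privileged commands, so this is a fragment to adapt, not a script to copy blindly):

```shell
# Create the cache directory and hand it to the nginx worker user
# ("www" on FreeBSD; typically "www-data" on Debian-based systems).
mkdir -p /var/cache/nginx/snac_cache
chown -R www:www /var/cache/nginx
# Validate the configuration before applying it
nginx -t && service nginx reload
```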

In the virtual host configuration for snac, I added this specific override for multimedia content:

# Caching rules for /s/ path
location ~ ^/.+/s/ {
    proxy_cache snac;
    proxy_pass http://snac-jail-ip:8001;
    proxy_set_header Host $host;
    proxy_cache_valid 200 1d;
    proxy_cache_valid 404 1h;
    proxy_ignore_headers "Cache-Control" "Expires";
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    proxy_cache_lock on;
    add_header X-Proxy-Cache $upstream_cache_status;
}

After restarting nginx, the multimedia files will be cached on their first access and served by nginx, leaving snac's threads free to handle everything else. This setup ensures smoother performance and prevents resource exhaustion during periods of high activity or when new content is shared.
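To verify that the cache is doing its job, you can watch the X-Proxy-Cache header added in the location block above. Here example.com and filename.png are placeholders - substitute your own instance and a real media file:

```shell
# $upstream_cache_status is MISS on the first request (nginx fills the
# cache) and HIT on a repeat, when nginx answers without asking snac.
curl -sI https://example.com/user/s/filename.png | grep -i '^x-proxy-cache'
curl -sI https://example.com/user/s/filename.png | grep -i '^x-proxy-cache'
```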