<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"><channel xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><title>IT Notes - backup</title><link>https://it-notes.dragas.net/categories/backup/</link><description>Articles in category backup</description><language>en</language><lastBuildDate>Wed, 28 Jan 2026 08:52:00 +0000</lastBuildDate><atom:link href="https://it-notes.dragas.net/categories/backup/feed.xml" rel="self" type="application/rss+xml"></atom:link><item><title>Time Machine inside a FreeBSD jail</title><link>https://it-notes.dragas.net/2026/01/28/time-machine-freebsd-jail/</link><description>&lt;p&gt;&lt;img src="https://unsplash.com/photos/W32yvc0JJjw/download?force=true&amp;w=640" alt="Time Machine inside a FreeBSD jail"&gt;&lt;/p&gt;&lt;p&gt;Many of my clients do not use Microsoft systems on their desktops; they use Linux-based systems or, in some cases, FreeBSD. Many use Apple systems - macOS - and are generally satisfied with them.
While I wash my hands of Microsoft systems (telling those clients they have to manage their desktops on their own), I can often lend a hand with macOS. And one of their main requests is managing the backups of their individual workstations.&lt;/p&gt;
&lt;p&gt;macOS, thanks to its Unix base, offers good native tools. Time Machine is transparent and effective, and allows a certain freedom of management. APFS, Apple's current file system, supports snapshots, so each backup is effectively taken from a snapshot. Time Machine also supports multiple destination devices, so you can even build a certain redundancy into the backup itself.&lt;/p&gt;
&lt;p&gt;Having many FreeBSD servers, I am often asked to use their resources and storage - to build, in practice, a Time Machine inside one of the servers. It is a simple and practical operation, quick and "painless". There are many guides, including the excellent one by &lt;a href="https://freebsdfoundation.org/our-work/journal/browser-based-edition/storage-and-filesystems/samba-based-time-machine-backups/"&gt;Benedict Reuschling&lt;/a&gt;, from which I took inspiration for this one. Below I describe the steps I usually follow to set it all up in just a few minutes.&lt;/p&gt;
&lt;p&gt;I usually use &lt;a href="https://bastillebsd.org"&gt;BastilleBSD&lt;/a&gt; to manage my jails, so the first step is to create a new jail dedicated to the purpose. Here you have to decide on the approach: I suggest either a VNET jail or an "inherit" jail - one that attaches to the host's network stack. The inherit approach is less secure but, as often happens, the right choice depends on the complexity of the situation. If, for example, we are using a Raspberry Pi dedicated to the purpose, there is no reason to complicate things with bridges and the like; we can attach directly to the network card with a creation command like:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille create tmjail 15.0-RELEASE inherit igb0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Where &lt;code&gt;igb0&lt;/code&gt; is the network interface we want to attach to.&lt;/p&gt;
&lt;p&gt;If we instead want a VNET jail attached directly to the network interface, we use this syntax:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille create -V tmjail 15.0-RELEASE 192.168.0.42/24 igb0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Or, if our server already has a bridge (in this case it's &lt;code&gt;bridge0&lt;/code&gt;, but yours might be named differently):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille create -B tmjail 15.0-RELEASE 192.168.0.42/24 bridge0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;At this point, there's a choice to make: do we want to keep the backups inside the jail or in a separate dataset - which can even be on another pool? In some cases, this can be extremely useful: I often have jails running on fast disks (SSD or NVMe) but abundant storage on slower devices. In this example, therefore, I will create an external dataset for the backups (directly from the host) and mount it in the jail. You could also delegate the entire management of the dataset to the jail, which is a different approach.&lt;/p&gt;
&lt;p&gt;Let's create a 600 GB space - reserved up front - on the chosen pool. 600 GB is not much, but it's fine for an example:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs create -o quota=600G -o reservation=600G bigpool/tmdata
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;We can also create separate datasets inside for each user and assign a specific space:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs create -o refquota=500g -o refreservation=500g bigpool/tmdata/stefano
&lt;/code&gt;&lt;/pre&gt;
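
&lt;p&gt;A quick check that quotas and reservations landed where we expect - the dataset names follow the example above:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Show space accounting for the backup datasets
zfs list -r -o name,used,avail,refquota,refreservation bigpool/tmdata
&lt;/code&gt;&lt;/pre&gt;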

&lt;p&gt;We can enter the jail and install what we need, remembering also to create the "mountpoint" for the dataset we just created:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille console tmjail 

pkg install -y samba419
mkdir /tmdata
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Exit the jail and instruct Bastille to mount the dataset inside the jail every time it is launched:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;exit
bastille mount tmjail /bigpool/tmdata /tmdata nullfs rw 0 0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Let's go back into the jail and start with the actual configuration. First, for each Time Machine user, we will create a system user. In my example, I will create the user "stefano", giving him &lt;code&gt;/var/empty&lt;/code&gt; as the home directory. This will print an error, because in a Bastille thin jail some system paths are read-only or not manageable as they are on a full base system - but it's not a problem: the user is only needed for ownership and Samba login.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-text"&gt;root@tmjail:~ # adduser
Username: stefano
Full name: Stefano
Uid (Leave empty for default):
Login group [stefano]:
Login group is stefano. Invite stefano into other groups? []:
Login class [default]:
Shell (sh csh tcsh nologin) [sh]: nologin
Home directory [/home/stefano]: /var/empty
Home directory permissions (Leave empty for default):
Use password-based authentication? [yes]: no
Lock out the account after creation? [no]:
Username    : stefano
Password    : &amp;lt;disabled&amp;gt;
Full Name   : Stefano
Uid         : 1001
Class       :
Groups      : stefano
Home        : /var/empty
Home Mode   :
Shell       : /usr/sbin/nologin
Locked      : no
OK? (yes/no) [yes]: yes
pw: chmod(var/empty): Operation not permitted
pw: chown(var/empty): Operation not permitted
adduser: INFO: Successfully added (stefano) to the user database.
Add another user? (yes/no) [no]: no
Goodbye!
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Give the correct permissions to the user:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# If you've not created specific datasets for the users, you'd better create their home directories now
mkdir /tmdata/stefano
chown -R stefano /tmdata/stefano/
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now we configure Samba for Time Machine. The file to create/modify is &lt;code&gt;/usr/local/etc/smb4.conf&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-ini"&gt;[global]
workgroup = WORKGROUP
security = user
passdb backend = tdbsam
fruit:aapl = yes
fruit:model = MacSamba
fruit:advertise_fullsync = true
fruit:metadata = stream
fruit:veto_appledouble = no
fruit:nfs_aces = no
fruit:wipe_intentionally_left_blank_rfork = yes
fruit:delete_empty_adfiles = yes

[TimeMachine]
path = /tmdata/%U
valid users = %U
browseable = yes
writeable = yes
vfs objects = catia fruit streams_xattr zfsacl
fruit:time machine = yes
create mask = 0600
directory mask = 0700
&lt;/code&gt;&lt;/pre&gt;
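
&lt;p&gt;Before moving on, it's worth validating the configuration with &lt;code&gt;testparm&lt;/code&gt;, which ships with Samba:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Parse smb4.conf and print the effective configuration; syntax errors are reported immediately
testparm -s /usr/local/etc/smb4.conf
&lt;/code&gt;&lt;/pre&gt;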

&lt;p&gt;We have configured Samba to support all the features macOS requires and to advertise the share as "TimeMachine". Having set &lt;code&gt;path = /tmdata/%U&lt;/code&gt;, each user will only see their own path.&lt;/p&gt;
&lt;p&gt;At this point, we create the Samba user (meaning the one we will have to type on macOS when we configure the Time Machine):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;smbpasswd -a stefano
&lt;/code&gt;&lt;/pre&gt;
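
&lt;p&gt;To verify that the Samba user exists, list the entries in the password database:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# List Samba users; &amp;quot;stefano&amp;quot; should appear
pdbedit -L
&lt;/code&gt;&lt;/pre&gt;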

&lt;p&gt;macOS discovers the Time Machine because the share announces itself via mDNS on the network. This announcement is handled by Avahi. Although not strictly necessary (we can always reach the Time Machine by connecting directly to its IP, and macOS will remember everything), seeing it announced will help other non-expert users - and ourselves - when we have to configure another Mac in the future.&lt;/p&gt;
&lt;p&gt;Recent Samba releases register the share with Avahi on their own, so no dedicated Avahi service file is needed and we can skip that configuration step.&lt;/p&gt;
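&lt;p&gt;For reference, if you're stuck on an older Samba release that doesn't register itself, a classic Avahi service file does the job. A sketch - the share name and model match the configuration above, and the path is where the FreeBSD Avahi port keeps its service files:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Only needed for older Samba releases that do not announce themselves via Avahi
cat &amp;gt; /usr/local/etc/avahi/services/timemachine.service &amp;lt;&amp;lt;'EOF'
&amp;lt;?xml version=&amp;quot;1.0&amp;quot; standalone='no'?&amp;gt;
&amp;lt;!DOCTYPE service-group SYSTEM &amp;quot;avahi-service.dtd&amp;quot;&amp;gt;
&amp;lt;service-group&amp;gt;
  &amp;lt;name replace-wildcards=&amp;quot;yes&amp;quot;&amp;gt;%h&amp;lt;/name&amp;gt;
  &amp;lt;service&amp;gt;
    &amp;lt;type&amp;gt;_smb._tcp&amp;lt;/type&amp;gt;
    &amp;lt;port&amp;gt;445&amp;lt;/port&amp;gt;
  &amp;lt;/service&amp;gt;
  &amp;lt;service&amp;gt;
    &amp;lt;type&amp;gt;_device-info._tcp&amp;lt;/type&amp;gt;
    &amp;lt;port&amp;gt;0&amp;lt;/port&amp;gt;
    &amp;lt;txt-record&amp;gt;model=MacSamba&amp;lt;/txt-record&amp;gt;
  &amp;lt;/service&amp;gt;
  &amp;lt;service&amp;gt;
    &amp;lt;type&amp;gt;_adisk._tcp&amp;lt;/type&amp;gt;
    &amp;lt;txt-record&amp;gt;dk0=adVN=TimeMachine,adVF=0x82&amp;lt;/txt-record&amp;gt;
    &amp;lt;txt-record&amp;gt;sys=waMa=0,adVF=0x100&amp;lt;/txt-record&amp;gt;
  &amp;lt;/service&amp;gt;
&amp;lt;/service-group&amp;gt;
EOF
&lt;/code&gt;&lt;/pre&gt;
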
&lt;p&gt;We are now ready to enable everything.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;service dbus enable
service dbus start
service avahi-daemon enable
service avahi-daemon start
service samba_server enable
service samba_server start
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Et voilà. If everything went according to plan, the Time Machine will announce itself on your network (if you have different networks, remember to configure the mDNS proxy on your router) and you will be able to log in (with the smb user you created) and start your first backup.&lt;/p&gt;
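&lt;p&gt;On the Mac side, instead of clicking through System Settings, the destination can also be set from the terminal. A sketch - the user and share name come from this example, while the host name and password are placeholders to replace with your own:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Run on the Mac: point Time Machine at the Samba share (URL-style destination)
sudo tmutil setdestination &amp;quot;smb://stefano:PASSWORD@backupserver.local/TimeMachine&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
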
&lt;p&gt;I suggest encrypting the backups for maximum security and observing, from time to time, your Mac as it silently makes its backups to your trusted FreeBSD server.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Wed, 28 Jan 2026 08:52:00 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2026/01/28/time-machine-freebsd-jail/</guid><category>freebsd</category><category>timemachine</category><category>apple</category><category>backup</category><category>data</category><category>zfs</category><category>server</category><category>tutorial</category><category>ownyourdata</category></item><item><title>Make Your Own Backup System – Part 2: Forging the FreeBSD Backup Stronghold</title><link>https://it-notes.dragas.net/2025/07/29/make-your-own-backup-system-part-2-forging-the-freebsd-backup-stronghold/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/hard_disk.webp" alt="A hard disk - ready to host our backups"&gt;&lt;/p&gt;&lt;p&gt;With the &lt;a href="https://it-notes.dragas.net/2025/07/18/make-your-own-backup-system-part-1-strategy-before-scripts/"&gt;primary backup strategies and methodologies introduced&lt;/a&gt;, we've reached the point where we can get specific: the Backup Server configuration.&lt;/p&gt;
&lt;p&gt;When choosing the type of backup server to use, I tend to favor specific setups: either I trust a professional backup service provider (like Colin Percival's &lt;a href="https://www.tarsnap.com/"&gt;Tarsnap&lt;/a&gt;), or I want full control over the disks where the backups will be hosted. In both cases, for the past twenty years, my operating system of choice for backup servers has been FreeBSD. With a few rare exceptions for clients with special requests, it covers all my needs. When I require Linux-based solutions, such as the &lt;a href="https://www.proxmox.com/en/products/proxmox-backup-server/overview"&gt;Proxmox Backup Server&lt;/a&gt;, I create a VM and manage it within.&lt;/p&gt;
&lt;p&gt;I typically use both IPv4 and IPv6. For IPv4, I "play" with NAT and port forwarding. For IPv6, I tend to assign a public IPv6 address to each jail or VM, which is then filtered by the physical server's firewall. Unfortunately, every provider, server, and setup has a different approach to IPv6, making it impossible to cover them all in this article. When a provider allows for routed setups, I use this approach: &lt;a href="https://it-notes.dragas.net/2023/09/23/make-your-own-vpn-freebsd-wireguard-ipv6-and-ad-blocking-included/"&gt;Make your own VPN: FreeBSD, WireGuard, IPv6, and ad-blocking included&lt;/a&gt; - assigning a /72 to the bridge for the jails and VMs.&lt;/p&gt;
&lt;p&gt;In my opinion, FreeBSD is a perfect all-rounder for backups, thanks to its ability to completely partition services. You can separate backup services (or specific servers/clients) into different jails or even VMs. Furthermore, using ZFS greatly enhances both flexibility and the range of tools you can use.&lt;/p&gt;
&lt;p&gt;The main distinction is usually between local backup servers (physically accessible, though not always attended, and in locations deemed secure) and remote ones, such as leased external servers. I personally use a combination of both. If the services I need to back up are external, in a datacenter, and need to be quickly restorable, I prefer to always have a copy on another server in a different datacenter with good outbound connectivity. This guarantees good bandwidth for restores, which isn't always available from a local connection to the outside world. However, an internal, nearby, and accessible backup server (even a Raspberry Pi or a mini PC) ensures physical access to the data. Whenever possible, I maintain both an external and an internal copy - and they are autonomous, meaning the internal copy is &lt;em&gt;not&lt;/em&gt; a replica of the external one, but an additional, independent backup. This ensures that if a problem occurs with the external backup, it won't automatically propagate to the internal one. In any case, the backup must always be in a different datacenter from the one containing the production data. When &lt;a href="https://www.reuters.com/article/idUSKBN2B20NT/"&gt;the fire at the OVH datacenter in Strasbourg&lt;/a&gt; caused the entire complex to shut down, many people found themselves in trouble because their backups were in the same, now unreachable, location. I had a copy with another provider, in a different datacenter and country, as well as a local copy.&lt;/p&gt;
&lt;p&gt;Despite it being "just" a backup server, I almost always use some form of disk redundancy. If I have two disks, I set up a mirror. With three or more, I use RaidZ1 or RaidZ2. This is because, in my view, backups are nearly as important as production data. The inability to recover data from a backup means it's lost forever. And it happens often, very often, that someone contacts me to recover a file (or a database, etc.) days or weeks after its accidental loss or deletion. Usually, pulling out a file from a two-month-old backup generates a mix of disbelief, admiration, but above all, a sense of security in the person requesting it. And that is what our work should instill in the people we collaborate with.&lt;/p&gt;
&lt;p&gt;The backup server should be hardened. If possible, it should be protected and unreachable from the outside. My best backup servers are those accessible only via VPN, capable of pulling the data on their own. If they are on a LAN, it's even better if they are completely disconnected from the Internet.&lt;/p&gt;
&lt;p&gt;For this very reason, &lt;strong&gt;backups must always be encrypted&lt;/strong&gt;. Having a backup means having full access to the data, and the backup server is the prime target for being breached or stolen if the goal is to get your hands on that data. I've seen healthcare facilities' backup servers being targeted (in a rather trivial way, to be honest) by journalists looking for health details of important figures. It is therefore critical that the backup server be as secure as possible.&lt;/p&gt;
&lt;p&gt;Based on the type of access, I use two types of encryption:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;If the server is local&lt;/strong&gt; (especially if the ZFS pool is on external disks), I usually install FreeBSD on UFS in read-only mode, as &lt;a href="https://it-notes.dragas.net/2024/05/31/freebsd-tips-and-tricks-native-ro-rootfs/"&gt;I've described in a previous article&lt;/a&gt;, and encrypt the backup disks with &lt;a href="https://man.freebsd.org/cgi/man.cgi?geli(8)"&gt;GELI&lt;/a&gt;. This ensures that in the event of a "dirty" shutdown (more likely in unattended environments), I can reconnect to the host and then reactivate the ZFS pool. This approach makes it nearly impossible to retrieve even the pool's metadata if the disks are stolen, as GELI performs a full-device encryption. For example, an employee of a company I work with stole one of the secondary backup disks (which was located at a different, unmonitored company site) to steal information. He got nothing but a criminal complaint. With this approach, it's also not necessary to further encrypt the datasets, which avoids some issues (which I'll discuss later, in a future post).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;If the server is remote&lt;/strong&gt;, in a datacenter, I usually use ZFS native encryption, encrypting the main backup dataset (and &lt;a href="https://bastillebsd.org/"&gt;BastilleBSD&lt;/a&gt;'s, if applicable). Consequently, all child datasets containing backups will also be encrypted. In this case as well, a password will be required after a reboot to unlock those datasets, ensuring that the data cannot be extracted if control of the disks is lost.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Here is an example of how to use GELI to encrypt an entire partition and then create a ZFS pool on it (in the example, the disk is &lt;code&gt;da1&lt;/code&gt; - do not follow these commands blindly, or you will erase all content on the &lt;code&gt;da1&lt;/code&gt; device!):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# WARNING: This destroys the existing partition table on disk da1
gpart destroy -F da1

# Create a new GPT partition table
gpart create -s gpt da1

# Add a freebsd-zfs partition that spans the entire disk
# The -a 1m flag ensures proper alignment
gpart add -t freebsd-zfs -a 1m da1

# Initialize GELI encryption on the new partition (da1p1)
# We use AES-XTS with 256-bit keys and a 4k sector size
# The -b flag means &amp;quot;boot,&amp;quot; prompting for the passphrase at boot time
geli init -b -l 256 -s 4096 da1p1
# You will be prompted for a passphrase: choose a strong one and save it!

# Attach the encrypted partition. A new device /dev/da1p1.eli will be created.
# You will be prompted for the passphrase you just set
geli attach da1p1

# (Optional) Check the status of the encrypted device
geli status da1p1

# Create the ZFS pool &amp;quot;bckpool&amp;quot; on the encrypted device
# We enable zstd compression (an excellent compromise) and disable atime
zpool create -O compression=zstd -O atime=off bckpool da1p1.eli
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In this setup, the reference pool for everything related to backups will be &lt;code&gt;bckpool&lt;/code&gt; - and you'll need to keep this in mind for the next steps. Additionally, after every server reboot, you'll need to "unlock" the disk and import the pool:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Enter the passphrase when prompted
geli attach da1p1

# Import the ZFS pool, which is now visible
zpool import bckpool
&lt;/code&gt;&lt;/pre&gt;
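
&lt;p&gt;Since these two steps recur after every reboot, I find it handy to wrap them in a small helper script. A sketch - the script name is arbitrary, and the device and pool names follow this example:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;#!/bin/sh
# unlock-backups.sh - attach the GELI provider and import the backup pool.
# Stop at the first error, so a wrong passphrase doesn't leave things half done.
set -e

geli attach da1p1
zpool import bckpool
zfs mount -a
echo &amp;quot;bckpool unlocked and imported&amp;quot;
&lt;/code&gt;&lt;/pre&gt;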

&lt;p&gt;With this method, it's not necessary to encrypt the ZFS datasets, as the underlying disk (or, more precisely, the partition containing the ZFS pool) is already encrypted.&lt;/p&gt;
&lt;p&gt;If, instead, you choose to encrypt the ZFS dataset (for example, if you install FreeBSD on the same disks that will hold the data and don't want to use a multi-partition approach), you should create a base encrypted dataset. Inside it, you can create the various backup datasets, VMs, and the BastilleBSD mountpoint. Due to property inheritance, they will all be encrypted as well.&lt;/p&gt;
&lt;p&gt;To create an encrypted dataset, a command like this will suffice:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Creates a new dataset with encryption enabled.
# keylocation=prompt will ask for the passphrase whenever the key is loaded (e.g., after a reboot).
# keyformat=passphrase specifies the key type.
zfs create -o encryption=on -o keylocation=prompt -o keyformat=passphrase zfspool/dataset
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In this case, after every reboot, you will need to load the key and mount the dataset:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs load-key zfspool/dataset
zfs mount zfspool/dataset
&lt;/code&gt;&lt;/pre&gt;
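
&lt;p&gt;Child datasets created underneath inherit the encryption automatically, which is easy to verify:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Create a child dataset and confirm encryption and key status are inherited
zfs create zfspool/dataset/backups
zfs get -r encryption,keystatus zfspool/dataset
&lt;/code&gt;&lt;/pre&gt;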

&lt;p&gt;Keep in mind the setup you choose, as many of the subsequent choices and commands will depend on it.&lt;/p&gt;
&lt;h2&gt;Base System Setup&lt;/h2&gt;
&lt;p&gt;I'll install BastilleBSD - a useful tool for separating services into jails. It will be helpful for isolating our backup services:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;pkg install -y bastille
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If you used ZFS for the root filesystem, you can proceed directly with the setup. Otherwise (i.e., ZFS on other disks), you'll need to edit the &lt;code&gt;/usr/local/etc/bastille/bastille.conf&lt;/code&gt; file and specify the correct dataset on which to install the jails. Then run:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille setup
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Once the automatic setup is complete, check the &lt;code&gt;/etc/pf.conf&lt;/code&gt; file - it will be automatically configured to only accept SSH connections. Ensure the network interface is set correctly. When you activate &lt;code&gt;pf&lt;/code&gt;, you will be kicked out of the server, but you can then reconnect.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;service pf start
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Let's bootstrap a FreeBSD release for the jails - this will be useful later.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille bootstrap 14.3-RELEASE update
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, we create a local bridge. Jails and VMs can be attached to it, making them fully autonomous. Using VNET jails, for example, will allow the creation of VPNs or &lt;code&gt;tun&lt;/code&gt; interfaces inside them, simplifying potential future setups (and increasing security by using a dedicated network stack).&lt;/p&gt;
&lt;p&gt;Modify the &lt;code&gt;/etc/rc.conf&lt;/code&gt; file and add:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Add lo1 and bridge0 to the list of cloned interfaces
cloned_interfaces=&amp;quot;lo1 bridge0&amp;quot;
# Assign an IP address and netmask to the bridge
ifconfig_bridge0=&amp;quot;inet 192.168.0.1 netmask 255.255.255.0&amp;quot;
# Enable gateway functionality for routing
gateway_enable=&amp;quot;yes&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
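
&lt;p&gt;To apply these settings without rebooting, the cloned interfaces can be created and forwarding enabled by hand - assuming nothing else is managing the network on this host:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Create the cloned interfaces listed in rc.conf and bring up the bridge
service netif cloneup
service netif start bridge0
# Enable IP forwarding immediately (gateway_enable does this at boot)
sysctl net.inet.ip.forwarding=1
&lt;/code&gt;&lt;/pre&gt;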

&lt;p&gt;Let's also modify &lt;code&gt;/etc/pf.conf&lt;/code&gt; to allow the &lt;code&gt;192.168.0.0/24&lt;/code&gt; subnet to access the Internet via NAT. We will skip packet filtering on &lt;code&gt;bridge0&lt;/code&gt; and enable NAT. This isn't the most secure setup, but it's sufficient to get started:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;#...
# Skip PF processing on the internal bridge interface
set skip on bridge0
#...
# NAT traffic from our internal network to the outside world
nat on $ext_if from 192.168.0.0/24 to any -&amp;gt; ($ext_if:0)
#...
&lt;/code&gt;&lt;/pre&gt;
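
&lt;p&gt;Before trusting the reboot, the ruleset can be syntax-checked and loaded in place:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Parse the ruleset without loading it; no output means no syntax errors
pfctl -nf /etc/pf.conf
# Load the new ruleset
pfctl -f /etc/pf.conf
&lt;/code&gt;&lt;/pre&gt;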

&lt;p&gt;To ensure the new settings are correct, it's a good idea to test with a reboot.&lt;/p&gt;
&lt;p&gt;Since I often configure &lt;a href="https://github.com/freebsd/vm-bhyve"&gt;vm-bhyve&lt;/a&gt; in my setups, I prefer to install it right away, creating the dataset that will contain the VMs and installation templates. Remember that &lt;code&gt;zroot&lt;/code&gt; is only valid if you installed the entire system on ZFS; otherwise, you'll need to change it to your own dataset:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Install required packages
pkg install vm-bhyve grub2-bhyve bhyve-firmware
# Create a dataset to store VMs
zfs create zroot/VMs
# Enable the vm service at boot
sysrc vm_enable=&amp;quot;YES&amp;quot;
# Set the directory for VMs, using the ZFS dataset
sysrc vm_dir=&amp;quot;zfs:zroot/VMs&amp;quot;
# Initialize vm-bhyve
vm init
# Copy the example templates
cp /usr/local/share/examples/vm-bhyve/* /zroot/VMs/.templates/
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;At this point, I usually enable the console via &lt;code&gt;tmux&lt;/code&gt;. This means that when a VM is launched, it won't open a VNC port by default, but a &lt;code&gt;tmux&lt;/code&gt; session connected to the VM's serial port. Let's install and configure &lt;code&gt;tmux&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;pkg install -y tmux
vm set console=tmux
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Let's also attach the switch we created (&lt;code&gt;bridge0&lt;/code&gt;) to &lt;code&gt;vm-bhyve&lt;/code&gt; so we can use it:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;vm switch create -t manual -b bridge0 public
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, &lt;code&gt;vm-bhyve&lt;/code&gt; is ready.&lt;/p&gt;
&lt;p&gt;The basic infrastructure is complete. We now have:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;ZFS&lt;/strong&gt; to ensure data integrity, which will also handle redundancy, etc.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;BastilleBSD&lt;/strong&gt; to manage jails, useful for backing up Linux, NetBSD, OpenBSD, and non-ZFS FreeBSD machines.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;vm-bhyve&lt;/strong&gt; to install specific systems (like Proxmox Backup Server).&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Backup Strategies&lt;/h2&gt;
&lt;p&gt;I use various backup tools, too many to list in this article. So I'll make a broad distinction, describing how to use this server to achieve our goal: securing data.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;FreeBSD servers with ZFS&lt;/strong&gt; (hosts, VMs, jails, hypervisors, and their respective VMs), I use an extremely useful, efficient, and reliable tool: &lt;a href="https://github.com/psy0rz/zfs_autobackup"&gt;zfs-autobackup&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Linux servers (without ZFS), NetBSD, OpenBSD&lt;/strong&gt;, etc. (any non-ZFS OS), I usually use &lt;a href="https://www.borgbackup.org/"&gt;BorgBackup&lt;/a&gt;. There are other fantastic tools like &lt;a href="https://restic.net/"&gt;restic&lt;/a&gt;, &lt;a href="https://kopia.io/"&gt;Kopia&lt;/a&gt;, etc., but BorgBackup has never let me down and has served me well even on low-power devices and after incredibly complex disasters.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Proxmox servers&lt;/strong&gt; (a solution I've used with satisfaction in production since 2013, although I'm recently migrating to FreeBSD/bhyve where possible), I use two approaches, often both at the same time: if the storage is ZFS, I use the &lt;code&gt;zfs-autobackup&lt;/code&gt; approach; in any case, the most practical solution is the Proxmox Backup Server. PBS is also one of the reasons I proposed installing &lt;code&gt;vm-bhyve&lt;/code&gt;: running it in a VM and storing the data on the FreeBSD host gives you the best of both worlds. Some time ago, I tried running it in a FreeBSD jail (via Linuxulator), but it didn't work.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Backups using zfs-autobackup&lt;/h3&gt;
&lt;p&gt;&lt;code&gt;zfs-autobackup&lt;/code&gt; is an extremely useful and effective tool. It allows for "pull" type backups, as well as having an intermediary host that connects to both the source and destination, which is useful if you don't want direct contact between the source and destination. I won't describe the latter setup, but the documentation is clear, and I have several of them in production, ensuring that the production server and its backup server cannot communicate with each other.&lt;/p&gt;
&lt;p&gt;I usually create a dataset for each server and instruct &lt;code&gt;zfs-autobackup&lt;/code&gt; to keep that server's backups in that dataset. The snapshots taken and transferred will all be from the same instant, so as not to create a time skew (some tools of this kind snapshot a dataset, then transfer it, which can result in minutes of difference between two different datasets from the same server).&lt;/p&gt;
&lt;p&gt;I've described in detail how I perform this type of backup in a &lt;a href="https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/"&gt;previous post&lt;/a&gt;, so I suggest reading that post for reference.&lt;/p&gt;
&lt;p&gt;Let's install zfs-autobackup on the FreeBSD server:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;pkg install py311-zfs-autobackup mbuffer
&lt;/code&gt;&lt;/pre&gt;
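
&lt;p&gt;As a reference, the pull flow boils down to tagging datasets on the source and running a single command on the backup server. A minimal sketch - the property name &lt;code&gt;data&lt;/code&gt;, the host &lt;code&gt;servera&lt;/code&gt;, and the target dataset are illustrative:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# On the source server: mark the datasets to back up (children inherit the property)
zfs set autobackup:data=true zroot

# On the backup server: pull every dataset tagged &amp;quot;data&amp;quot; from the source over SSH
zfs-autobackup -v --ssh-source root@servera data bckpool/backups/servera
&lt;/code&gt;&lt;/pre&gt;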

&lt;h3&gt;Backups for other servers using BorgBackup&lt;/h3&gt;
&lt;p&gt;When I don't have ZFS available or need to perform a file-based backup (all or partial), I use a different technique. &lt;code&gt;BorgBackup&lt;/code&gt; backups are primarily "push" based, meaning the client will connect to the backup server. This is not optimal or the most secure approach, as the backup server should, in theory, be hardened. Even when protecting everything via VPN, the risk remains that a compromised server could connect to its backup server and alter or delete the backups. I have seen this happen in ransomware cases (especially in the Microsoft world), and so I try to be careful to minimize this type of problem, mainly through snapshots of the backup server (an operation that will be described later).&lt;/p&gt;
&lt;p&gt;To ensure the highest possible security, I create a FreeBSD jail on the backup server for each server I need to back up. The advantage of this approach is the complete separation of all servers from each other. By using a regular user inside a jail, a compromised server that connects to its backup server would only be able to reach its own backups, as it would be confined to a user account and, even if it managed to escalate privileges, still be inside a jail.&lt;/p&gt;
&lt;p&gt;Let's say, for example, we want to back up a server called "ServerA" (great imagination, I know). We create a dedicated jail on the backup server:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Create a new VNET jail named &amp;quot;servera&amp;quot; attached to our bridge
bastille create -B servera 14.3-RELEASE 192.168.0.101/24 bridge0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;BastilleBSD will automatically set the host's gateway for the jail. In our case, this is incorrect, so we need to modify it and set the jail's gateway to &lt;code&gt;192.168.0.1&lt;/code&gt; in the &lt;code&gt;/usr/local/bastille/jails/servera/root/etc/rc.conf&lt;/code&gt; file:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# ...
defaultrouter=&amp;quot;192.168.0.1&amp;quot;
# ...
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Restart the jail and connect to it:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille restart servera
bastille console servera
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, inside the jail, we install &lt;code&gt;borgbackup&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;pkg install py311-borgbackup
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;BorgBackup doesn't run a daemon: the remote server (the one sending its data) launches it on the backup side over SSH, so it's important that the installed version is compatible with the one on the remote host.&lt;/p&gt;
&lt;p&gt;Since we'll be using SSH, let's enable it:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;service sshd enable
service sshd start
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And create a non-privileged user for this purpose:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# The 'adduser' utility provides an interactive way to create a user.
root@servera:~ # adduser
Username: servera
Full name: Server A
Uid (Leave empty for default): 
Login group [servera]: 
Login group is servera. Invite servera into other groups? []: 
Login class [default]: 
Shell (sh csh tcsh nologin) [sh]: 
Home directory [/home/servera]: 
Home directory permissions (Leave empty for default): 
Use password-based authentication? [yes]: 
Use an empty password? (yes/no) [no]: 
Use a random password? (yes/no) [no]: yes
Lock out the account after creation? [no]: 
Username    : servera
Password    : &amp;lt;random&amp;gt;
Full Name   : Server A
Uid         : 1001
Class       : 
Groups      : servera 
Home        : /home/servera
Home Mode   : 
Shell       : /bin/sh
Locked      : no
OK? (yes/no) [yes]: yes
adduser: INFO: Successfully added (servera) to the user database.
adduser: INFO: Password for (servera) is: JIkdq8Ex
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The user is created and can receive SSH connections. After setting everything up, I suggest disabling password-based login in the jail's SSH configuration, using only public key authentication.&lt;/p&gt;
&lt;p&gt;As mentioned, the biggest risk of a "push" backup is that a compromised client could access the backup server and delete or encrypt the backup history, rendering it useless.&lt;/p&gt;
&lt;p&gt;To drastically mitigate this risk, we can configure SSH to force the client to operate in a special Borg mode called &lt;strong&gt;append-only&lt;/strong&gt;. In this mode, the SSH key used by the client will only have permission to create new archives, not to read or delete old ones. However, this approach could complicate some client-side operations (like &lt;code&gt;mount&lt;/code&gt;, &lt;code&gt;prune&lt;/code&gt;, etc.), forcing them to be done on the server. For this reason, I won't describe it in this setup, "limiting" myself to taking snapshots of the repositories. It can be a very good practice, so I recommend considering it.&lt;/p&gt;
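&lt;p&gt;For reference only, the gist of it is a forced command in the jail user's &lt;code&gt;authorized_keys&lt;/code&gt;. A sketch - the repository path follows this example, and the key is truncated:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# /home/servera/.ssh/authorized_keys on the backup jail:
# the client's key may only run &amp;quot;borg serve&amp;quot; in append-only mode, confined to its repository
command=&amp;quot;borg serve --append-only --restrict-to-path /home/servera/servera&amp;quot;,restrict ssh-ed25519 AAAA... servera-backup-key
&lt;/code&gt;&lt;/pre&gt;
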
&lt;p&gt;Let's initialize the BorgBackup repository. In this example, for simplicity, I won't set up repository encryption. If the jails are on an encrypted dataset or GELI-encrypted disks, there will still be data encryption on the disks, but there will be no protection against someone who could physically access the server while the disks are mounted. As usual, security is like an onion: every layer helps. Personally, I suggest enabling and using it ALWAYS.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Switch to the new user
su -l servera
# Initialize a new Borg repo named &amp;quot;servera&amp;quot; with no encryption (for this example)
borg init -e none servera
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The jail is ready, but it's unreachable from the outside. There are two ways to make it accessible:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Install a VPN system inside the jail itself.&lt;/strong&gt; Using tools like Zerotier or Tailscale (which don't need to expose ports) will immediately create the conditions to connect to the jail, which will remain inaccessible from the outside. As the jail is a VNET jail, we're free to choose any of the supported VPN technologies.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Expose a port on the backup server&lt;/strong&gt;, i.e., on the host, to allow external connections. Many advise against this path as they consider it less secure. It is, but sometimes we don't have the luxury of installing whatever we want on the server we're backing up.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To expose the port, go back to the host and modify the &lt;code&gt;/etc/pf.conf&lt;/code&gt; file, creating the &lt;code&gt;rdr&lt;/code&gt; and &lt;code&gt;pass&lt;/code&gt; rules to let packets in:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# ...
# Redirect incoming traffic on port 1122 to the jail's SSH port (22)
rdr on $ext_if inet proto tcp from any to any port = 1122 -&amp;gt; 192.168.0.101 port 22
# ...
# Allow incoming traffic on port 1122
pass in inet proto tcp from any to any port 1122 flags S/SA keep state
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Reload the &lt;code&gt;pf&lt;/code&gt; configuration:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;service pf reload
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The jail will now be reachable on the server's public IP, on port 1122. Obviously, this port number is for illustrative purposes, and I used &lt;code&gt;from any&lt;/code&gt;, but for better security, you should replace &lt;code&gt;any&lt;/code&gt; with the IP address of the server that will be connecting to perform the backup.&lt;/p&gt;
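&lt;p&gt;From ServerA - assuming BorgBackup is installed there as well - a first test archive confirms the whole chain works; user, repository, and port follow this example:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Run on ServerA: create a small test archive in the remote repository
borg create --stats ssh://servera@backupServerIP:1122/~/servera::test-{now} /etc
&lt;/code&gt;&lt;/pre&gt;
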
&lt;p&gt;By repeating this process for each server and creating a separate jail for each, you can have isolated jails in separate datasets with their backups, potentially setting space limits using ZFS quotas.&lt;/p&gt;
&lt;p&gt;It's important to remember that backing up a live filesystem (i.e., without a snapshot or dumps) has a very high probability of being impossible to restore completely. Databases hate this approach because files will change while being copied and tend to get corrupted. Of course, it depends on the nature of the data (a backup of a static website will have no issues, but a WordPress database probably will), but it's crucial to think about a technique to snapshot the filesystem before proceeding. For example, I have already written about how to create snapshots on FreeBSD with UFS in a previous article: &lt;a href="https://it-notes.dragas.net/2024/06/04/freebsd-tips-and-tricks-creating-snapshots-with-ufs/"&gt;FreeBSD tips and tricks: creating snapshots with UFS&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I will cover other operating systems in a future, dedicated post.&lt;/p&gt;
&lt;h3&gt;Proxmox Backup Server in a Dedicated VM&lt;/h3&gt;
&lt;p&gt;Starting with version 4.0 (which is still in beta at the time of this writing), Proxmox Backup Server (PBS) supports storing its data in an S3 bucket. This is excellent news as it decouples the server from the data. There are great open-source S3 implementations, like &lt;a href="https://min.io/"&gt;Minio&lt;/a&gt; or &lt;a href="https://github.com/seaweedfs/seaweedfs"&gt;SeaweedFS&lt;/a&gt;, which allow for clustering, replication, etc. In this "simple" case, we will install Proxmox Backup Server in a small VM, while for the data, we'll install Minio in a native FreeBSD jail. The advantage is undeniable: the VM will only serve as an "intermediary", but the data will rest directly on the FreeBSD host's dataset, natively. It will also be possible to specify other external endpoints, other repositories, etc.&lt;/p&gt;
&lt;p&gt;As a philosophy, I tend not to use external providers unless for specific needs, so installing Minio in a jail is a perfect solution to manage this situation.&lt;/p&gt;
&lt;p&gt;Let's install PBS by downloading the ISO from their website (https://enterprise.proxmox.com/iso/) - at this moment, the version that supports this setup is 4.0 Beta.&lt;/p&gt;
&lt;p&gt;The download destination is the &lt;code&gt;vm-bhyve&lt;/code&gt; ISOs directory. It's not strictly necessary, but it keeps the ISO from getting "lost" somewhere. So, go to the directory and download it:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;cd /zroot/VMs/.iso
fetch https://enterprise.proxmox.com/iso/proxmox-backup-server_4.0-BETA-1.iso
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now let's create a VM with &lt;code&gt;vm-bhyve&lt;/code&gt;. We can start from the Debian template, but we'll make some modifications to optimize performance. In this example, I'm giving it 30 GB of disk space, 2 GB of RAM, and 2 cores.&lt;/p&gt;
&lt;p&gt;If you want to store all backups inside the VM, you'll need to size the virtual disk correctly (or create and attach another one). In this case, I will focus on the "clean" VM that will store its data on a dedicated jail with Minio.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;vm create -t debian -s 30G -m 2G -c 2 pbs
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Once the empty VM is created, let's modify its options:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;vm configure pbs
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;We will modify the VM to be UEFI and to use the NVME disk driver - bhyve &lt;a href="https://it-notes.dragas.net/2024/06/10/proxmox-vs-freebsd-which-virtualization-host-performs-better/"&gt;performs significantly better on NVME than virtio, as previously tested&lt;/a&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;loader=&amp;quot;uefi&amp;quot;
cpu=&amp;quot;2&amp;quot;
memory=&amp;quot;2G&amp;quot;
network0_type=&amp;quot;virtio-net&amp;quot;
network0_switch=&amp;quot;public&amp;quot;
disk0_type=&amp;quot;nvme&amp;quot;
disk0_name=&amp;quot;disk0.img&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Fortunately, the Proxmox team has provided for the installation of the Backup Server on devices without a graphical interface, so the boot menu will allow installation via serial console. Let's launch the installation and connect to the virtual serial console:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;cd /zroot/VMs/.iso
vm install pbs proxmox-backup-server_4.0-BETA-1.iso
vm console pbs
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Select the installation via &lt;strong&gt;Terminal UI (serial console)&lt;/strong&gt; and proceed normally as if it were a physical host, assigning an IPv4 address from the &lt;code&gt;192.168.0.x&lt;/code&gt; range (in this example, I'll use &lt;code&gt;192.168.0.3&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;This way, the Proxmox Backup Server will run in a VM, with the ability to take snapshots before updates, etc., without any worries.&lt;/p&gt;
&lt;p&gt;Once the installation is complete, PBS will reboot and listen on port 8007 of its IP. Again, as with the jails, we have two options: install a VPN system within the VM itself (thus exposing it automatically only on that VPN - generally a more secure operation) or expose port 8007 on the server's public IP.&lt;/p&gt;
&lt;p&gt;In the latter case, add the relevant lines to the &lt;code&gt;/etc/pf.conf&lt;/code&gt; file on the FreeBSD backup server:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# ...
# Redirect incoming traffic on port 8007 to the PBS VM's web interface
rdr on $ext_if inet proto tcp from any to any port = 8007 -&amp;gt; 192.168.0.3 port 8007
# ...
# Allow that traffic to pass
pass in inet proto tcp from any to any port 8007 flags S/SA keep state
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Reload the &lt;code&gt;pf&lt;/code&gt; configuration:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;service pf reload
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The PBS VM configuration is complete. If you chose to use the PBS's internal disk as a repository, no further operations are necessary (other than the normal repository creation, etc., within PBS).&lt;/p&gt;
&lt;p&gt;In this case, however, we will use a different approach.&lt;/p&gt;
&lt;h4&gt;Creating a Minio Jail as a Data Repository for PBS&lt;/h4&gt;
&lt;p&gt;This approach, in my opinion, has a number of important advantages. The first is that Minio will run in a dedicated jail on the host, at full performance, and will store the data directly on the physical ZFS datapool, thus removing any other layer in between. This jail could potentially be moved to other hosts (by connecting PBS and the jail via VPN or public IP), made redundant thanks to all of Minio's features, etc. Another solution I am successfully testing (in other setups) is SeaweedFS.&lt;/p&gt;
&lt;p&gt;Let's create a dedicated jail with Minio and put it on the bridge, so that PBS can access it on the LAN.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille create -B minio 14.3-RELEASE 192.168.0.11/24 bridge0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If not configured directly, BastilleBSD will use the host's gateway for the jail, which is incorrect in this case. So let's go modify it and restart the jail. Enter the jail with:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille console minio
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And modify the &lt;code&gt;/etc/rc.conf&lt;/code&gt; file to have the correct gateway (following the example addresses):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# ...
ifconfig_vnet0=&amp;quot;inet 192.168.0.11/24&amp;quot;
defaultrouter=&amp;quot;192.168.0.1&amp;quot;
# ...
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Exit the jail and restart it:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille restart minio
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Enter the jail and install Minio:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille console minio
pkg install -y minio
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Minio is already able to start, but PBS, even on the LAN, wants an encrypted connection. Fortunately, there's a handy tool that can generate the certificates for us:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Download the certgen tool
fetch https://github.com/minio/certgen/releases/latest/download/certgen-freebsd-amd64

# Make it executable and run it for our jail's IP
chmod a+rx certgen-freebsd-amd64
./certgen-freebsd-amd64 -host &amp;quot;192.168.0.11&amp;quot;

# Create the necessary directories and set permissions
mkdir -p /usr/local/etc/minio/certs
cp private.key public.crt /usr/local/etc/minio/certs/
chown -R minio:minio /usr/local/etc/minio/certs/
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Let's view the certificate's fingerprint. Since it's self-signed, we'll need it for PBS later. For security reasons, PBS will ask for the fingerprint of non-directly verifiable certificates. Run the following command and take note of the result:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;openssl x509 -in /usr/local/etc/minio/certs/public.crt -noout -fingerprint -sha256
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;At this point, enable and configure Minio in &lt;code&gt;/etc/rc.conf&lt;/code&gt;. 
&lt;strong&gt;WARNING&lt;/strong&gt;: The username and password (access key and secret) used in this example are insecure and for testing purposes only. It is strongly recommended to use different values:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Enable Minio service
minio_enable=&amp;quot;YES&amp;quot;
# Set the address for the Minio console
minio_console_address=&amp;quot;:8751&amp;quot;
# Set the root user and password as environment variables
minio_env=&amp;quot;MINIO_ROOT_USER=testaccess MINIO_ROOT_PASSWORD=testsecret&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Start Minio:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;service minio start
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If everything went correctly, Minio is now running (with its certificates) and ready to receive connections.&lt;/p&gt;
&lt;p&gt;It's now time to create the bucket(s) that PBS will use. There are several ways to do this, but to test that everything is working and to configure PBS, I suggest connecting via an SSH tunnel.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Create an SSH tunnel from your local machine to the backup server
# Port 8007 is forwarded to the PBS web UI
# Port 8751 is forwarded to the Minio console
ssh user@backupServerIP -L8007:192.168.0.3:8007 -L8751:192.168.0.11:8751
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This way, we'll create a tunnel between the FreeBSD backup server and our workstation, mapping &lt;code&gt;127.0.0.1:8007&lt;/code&gt; to &lt;code&gt;192.168.0.3:8007&lt;/code&gt; (the PBS web interface) and &lt;code&gt;127.0.0.1:8751&lt;/code&gt; to &lt;code&gt;192.168.0.11:8751&lt;/code&gt; (the Minio console port).&lt;/p&gt;
&lt;p&gt;Now, connect to &lt;code&gt;https://127.0.0.1:8751&lt;/code&gt;, enter the credentials specified in &lt;code&gt;/etc/rc.conf&lt;/code&gt;, and create a bucket.&lt;/p&gt;
&lt;p&gt;Once the bucket is created, you can configure PBS to use it. Connect to PBS via &lt;code&gt;https://127.0.0.1:8007&lt;/code&gt; and go to &lt;strong&gt;S3 Endpoints&lt;/strong&gt;. Set a name, use &lt;code&gt;192.168.0.11&lt;/code&gt; as the IP and &lt;code&gt;9000&lt;/code&gt; as the port, enter the access and secret keys, and the certificate fingerprint we generated earlier. &lt;strong&gt;Enable "Path Style"&lt;/strong&gt; or it will not work.&lt;/p&gt;
&lt;p&gt;Then go to &lt;strong&gt;Datastores&lt;/strong&gt; and add it, as you would for any other S3 datastore, by specifying the created bucket and a local directory where the system will keep its cache.&lt;/p&gt;
&lt;p&gt;If everything was set up correctly, PBS will create its structure in the bucket, and from that moment on, you can use it. Always keep in mind that this is still a "technology preview", so there may be issues, but from my tests, it is sufficiently reliable.&lt;/p&gt;
&lt;h3&gt;Taking Local Snapshots of Backups&lt;/h3&gt;
&lt;p&gt;One of the most common techniques used in ransomware attacks is to also delete or encrypt backups. They often use automated methods, relying on the fact that many (too many!) consider a "backup" to be a simple copy of files to a network share. However, it's not impossible that, in specific cases, they might compromise the machine and connect to the backup server. This is nearly impossible with a "pull" type backup (like the one managed by &lt;code&gt;zfs-autobackup&lt;/code&gt;) but is still possible with the "push" approach, which involves using BorgBackup or similar tools.&lt;/p&gt;
&lt;p&gt;This happened to one of my clients once - in that case, the problem originated internally, from an employee who wanted to cover up his mistake, inadvertently creating a disaster - but that will be material for another post.&lt;/p&gt;
&lt;p&gt;Fortunately, the client had a system that solved the problem: thanks to ZFS, we can have local snapshots on the backup server, which are invisible and inaccessible to the production server. Since we have already installed &lt;code&gt;zfs-autobackup&lt;/code&gt;, it's easy to use it for this purpose as well. I've already talked about this in a &lt;a href="https://it-notes.dragas.net/2024/08/21/automating-zfs-snapshots-for-peace-of-mind/"&gt;previous article&lt;/a&gt; and won't rewrite the steps here. Just consult that article, keeping in mind that in this case, it's not advisable to snapshot all the datasets on the backup server (the space would grow exponentially), but only those at risk. In the cases analyzed in this post, this applies only to the &lt;code&gt;push&lt;/code&gt; part, as PBS will also be accessible only from the Proxmox servers and not from the VMs they contain. If, in this case too, you don't trust those who manage the Proxmox servers, just set up snapshots for the Minio jail as well.&lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;This long post aims to analyze, in a general way, how I believe one can manage reasonably secure backups of their data. Obviously, there are many variables, additional precautions, possible optimizations, hardening, etc., that must be studied on a case-by-case basis. There are old rules, new rules, old and new philosophies. Recently, many people who have embraced the cloud have often stopped thinking about backups, only to realize it when something happens and the data has, indeed, vanished... into the clouds.&lt;/p&gt;
&lt;p&gt;In this post, I have generically covered the setup of the backup server, and this demonstrates how FreeBSD, thanks to its features, can be considered an ideal platform for this type of task.&lt;/p&gt;
&lt;p&gt;In the next articles in this series, I will examine the client side, i.e., how to structure them for a sufficiently reliable backup, and how to monitor everything - because I've seen this too: people resting easy because the backup was supposedly running every night, but in fact, the backup had been failing every night for more than 4 years.&lt;/p&gt;
&lt;p&gt;Stay Tuned and stay...backupped!&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Tue, 29 Jul 2025 08:00:00 +0200</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2025/07/29/make-your-own-backup-system-part-2-forging-the-freebsd-backup-stronghold/</guid><category>backup</category><category>freebsd</category><category>jail</category><category>bhyve</category><category>borg</category><category>data</category><category>server</category><category>vps</category><category>filesystems</category><category>proxmox</category><category>snapshots</category><category>sysadmin</category><category>virtualization</category><category>ownyourdata</category><category>zfs</category><category>series</category><category>tutorial</category></item><item><title>Make Your Own Backup System – Part 1: Strategy Before Scripts</title><link>https://it-notes.dragas.net/2025/07/18/make-your-own-backup-system-part-1-strategy-before-scripts/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/hard_disk.webp" alt="A hard disk - ready to host our backups"&gt;&lt;/p&gt;&lt;h3&gt;Backup: Beyond the Simple Copy&lt;/h3&gt;
&lt;p&gt;For as long as I can remember, backup is something that has been underestimated by far too many people. Between flawed techniques, "Schrödinger's backups" (i.e., never tested, thus both valid and invalid at the same time), and conceptual errors about what they are and how they work (RAID is not a backup!), too much data has been lost due to deficiencies in this area.&lt;/p&gt;
&lt;p&gt;Nowadays, backup is often an afterthought. Many rely entirely on "the cloud" without ever asking how - or if - their data is actually protected. It's a detail many overlook, but even major cloud providers operate on a shared responsibility model. Their terms often clarify that while they secure the infrastructure, the ultimate responsibility for protecting and backing up your data lies with you. By putting everything "in the cloud", on clusters owned by other companies, or on distributed Kubernetes systems, backup seems unnecessary. When I sometimes ask developers or colleagues how they handle backups for all this, they look at me as if I'm speaking an archaic, unknown, and indecipherable language. The thought has simply never crossed their minds. But data is not ephemeral; it must be preserved in every way possible.&lt;/p&gt;
&lt;p&gt;I've always had a philosophy: data must always be restorable (and as quickly as possible), in an open format (meaning you shouldn't have to buy anything to restore or consult it), and consistent. These points may seem obvious, but they are not.&lt;/p&gt;
&lt;p&gt;I have encountered various scenarios of data loss:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://www.reuters.com/article/idUSKBN2B20NT/"&gt;Datacenters destroyed by fire&lt;/a&gt; – I had 142 servers there, but they were all restored in just a few hours.&lt;/li&gt;
&lt;li&gt;Server rooms flooded.&lt;/li&gt;
&lt;li&gt;Servers destroyed in earthquakes, often due to collapsing walls.&lt;/li&gt;
&lt;li&gt;Increasing incidents of various ransomware attacks.&lt;/li&gt;
&lt;li&gt;Intentional damage by entities seeking to create problems.&lt;/li&gt;
&lt;li&gt;Mistakes made by administrators, which can happen to anyone.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The risk escalates for servers connected to the internet, like e-commerce and email servers. Here, not only is data integrity crucial, but so is the uninterrupted operation of services. This series of posts will revisit some of my old articles to explain my core ideas on the subject and describe, at least in part, my primary techniques.&lt;/p&gt;
&lt;p&gt;Many consider a backup to be a simple copy. I often hear people say they have backups because they "copy the data", but this is often wrong and extremely dangerous, providing a false sense of security. Copying the files of a live database is an almost useless operation, as the result will nearly always be impossible to restore. It is essential to at least perform a proper dump and then transfer that file. Yet, many people do this and will only realize their mistake when they face an emergency and need to restore.&lt;/p&gt;
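&lt;p&gt;For example, for MySQL/MariaDB the bare minimum is something along these lines - the destination path is illustrative:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Take a consistent dump (InnoDB) without locking the whole server, then back up the resulting file
mysqldump --single-transaction --all-databases &amp;gt; /var/backups/all-databases.sql
&lt;/code&gt;&lt;/pre&gt;
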
&lt;h3&gt;The Backup Plan: Asking the Right Questions&lt;/h3&gt;
&lt;p&gt;Before touching a single file, you must start with a plan, and that plan starts with asking the right questions:&lt;/p&gt;
&lt;p&gt;"How much risk am I willing to take? What data do I need to protect? What downtime can I tolerate in case of data loss? What type and amount of storage space do I have available?"&lt;/p&gt;
&lt;p&gt;The first question is particularly critical. A common but risky approach is to store a backup on the same machine that requires backing up. While convenient, this method fails in the event of a machine failure. Even relying on a classic USB drive for daily backups is not foolproof, as these devices are as susceptible to failure as any other hardware component. And contrary to popular belief, even high-end uninterruptible power supplies (UPS) are not immune to catastrophic failures.&lt;/p&gt;
&lt;p&gt;Thus, the initial step is to establish a management plan, balancing security and cost. The safest backup is often the one stored farthest from the source machine. However, this approach introduces challenges related to space and bandwidth. While local area network (LAN) backups are relatively straightforward, off-network backups involve additional connectivity considerations. This might lead to a compromise on the amount of data backed up to maintain operational speed during both backup and recovery processes.&lt;/p&gt;
&lt;p&gt;Safety doesn't always equate to practicality. For instance, with a 200 Mbit/sec connection and 2 TB of backup data, the recovery time is significant: even over a fully saturated link, the transfer alone takes roughly 22 hours. However, if the goal is not rapid restorability but simply ensuring the data is available, the safest backup is often the one closest to us. That is, a backup we can "touch", disconnect, and consult even when offline.&lt;/p&gt;
&lt;p&gt;Therefore, it is essential to develop a backup policy tailored to specific needs, keeping in mind that no 'perfect' solution exists.&lt;/p&gt;
&lt;h3&gt;The Core Decision: Full Disk vs. Individual Files&lt;/h3&gt;
&lt;p&gt;When planning a backup strategy, one key decision is whether to back up the entire disk or just individual files. Or both of them. Each approach has its pros and cons.&lt;/p&gt;
&lt;h4&gt;Entire Disk (or Storage) Backup&lt;/h4&gt;
&lt;p&gt;Advantages:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Complete Recovery: Restoring a full disk backup can quickly revert a system to its exact previous state, boot loader included.&lt;/li&gt;
&lt;li&gt;Integration in Virtualization Systems: If your VM is a single file on a filesystem like ZFS or btrfs, you can simply copy that file (after taking a snapshot) to get a complete backup of the VM. Solutions like Proxmox offer easy management of full disk backups, accessible via command line or web interface.&lt;/li&gt;
&lt;li&gt;Flexibility in Virtual Environments: Products like the Proxmox Backup Server offer the ability to recover individual files from a full backup, combining the benefits of both approaches.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Disadvantages:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Downtime for Physical Machines: Often, it's necessary to shut down the machine to create a full disk backup, leading to operational interruptions. A hybrid approach, if the physical host is running FreeBSD for example, is to take a snapshot and copy all the host's datasets. The restore process, however, will require some manual operations.&lt;/li&gt;
&lt;li&gt;Large Space Requirements: Full disk backups can consume substantial space, including unnecessary data.&lt;/li&gt;
&lt;li&gt;Potential Slowdowns and Compatibility Issues: The backup process can be slow and may encounter issues with non-standard file system configurations.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;Individual File Backup&lt;/h4&gt;
&lt;p&gt;While it might seem simpler, backing up individual files can get complicated.&lt;/p&gt;
&lt;p&gt;Advantages:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Basic Utility Use: Possible with standard system utilities like tar, cp, rsync, etc.&lt;/li&gt;
&lt;li&gt;Granular Backups: Allows for backing up specific files and comparing them to previous versions.&lt;/li&gt;
&lt;li&gt;Delta Copying: Only modified parts of the files are backed up, saving space and reducing data transfer.&lt;/li&gt;
&lt;li&gt;Portability and Partial Recovery: Files can be moved individually and partially restored as needed.&lt;/li&gt;
&lt;li&gt;Compression and Deduplication: These features are often available at the file or block level.&lt;/li&gt;
&lt;li&gt;Operational Continuity: Can be done without shutting down the machine.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Disadvantages:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Storage Space Requirements: Simple copy solutions might require significant storage.&lt;/li&gt;
&lt;li&gt;Need for File System Snapshot: For efficient and consistent backups, a snapshot (like native ZFS snapshots, btrfs, LVM Volume snapshots, or Microsoft's VSS) is highly recommended before copying.&lt;/li&gt;
&lt;li&gt;Hidden Pitfalls: Issues may not become apparent until a backup is needed. And by then, it may be too late.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;The Key to Consistency: The Power of Snapshots&lt;/h3&gt;
&lt;p&gt;Backing up a "live" file system involves a "start" and an "end" moment. During this time, the data can change, leading to fatal inconsistencies. I've encountered such issues in the past: a large MySQL database was compromised, and I was tasked with its recovery. I confidently took the client's last file-based backup and restored various files (not a native dump). Unsurprisingly, the database failed to restart. The large data file had changed too much between the start and end of the copy, rendering it inconsistent. Fortunately, I also had a proper dump, so I managed to recover from that.&lt;/p&gt;
&lt;p&gt;The issue is evident: backing up a live file system is risky. An open database, even a basic one like a browser's history, is highly likely to get corrupted, making the backup useless.&lt;/p&gt;
&lt;p&gt;The solution is to create a snapshot of the entire file system before beginning the backup. This approach freezes a consistent "point-in-time" view of the data. To date, using snapshots, I have managed to recover everything.&lt;/p&gt;
&lt;p&gt;Here are the methods I've explored over the years:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Native File System Snapshot (e.g., &lt;a href="https://it-notes.dragas.net/2020/06/28/btrfs-automatic-snapshots-and-remote-backups/"&gt;BTRFS&lt;/a&gt; or &lt;a href="https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/"&gt;ZFS&lt;/a&gt;): If your file system inherently supports snapshots, it's wise to use this feature. It's likely to be the most efficient and technically sound option.&lt;/li&gt;
&lt;li&gt;LVM Snapshot: For those using LVM, creating a snapshot of the logical volume is a viable approach. This method can lead to some space wastage and, while I still use it, has occasionally caused the file system to freeze during the snapshot's destruction, necessitating a reboot. This has been a rare but recurring issue across different hardware setups, especially under high I/O load.&lt;/li&gt;
&lt;li&gt;DattoBD: I've tracked this tool since its inception. I used it extensively in the past, but I sometimes faced stability issues (kernel panics or the inability to delete snapshots, forcing a reboot). For snapshots with Datto, I often use UrBackup scripts, which are convenient and efficient.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;The Architecture: Push or Pull?&lt;/h3&gt;
&lt;p&gt;A longstanding debate among experts is whether backups should be initiated by the client (push) or requested by the server (pull). In my view, it depends.&lt;/p&gt;
&lt;p&gt;Generally, I prefer centralized backup systems on dedicated servers, maintained in highly secure environments with minimal services running. Therefore, I lean towards the "pull" method, where the server connects to the client to initiate the backup.&lt;/p&gt;
&lt;p&gt;Ideally, the backup server should not be reachable from the outside. It should be protected, hardened, and only be able to reach the setups it needs to back up. The goal is to minimize the possibility that the backup data could be compromised or deleted in case the client machine itself is attacked (which, unfortunately, is not so rare).&lt;/p&gt;
&lt;p&gt;This is not always possible, but there are ways to mitigate this problem. One way is to ensure that machines that must be backed up via "push" (i.e., by contacting the backup server themselves) can only access their own space. More importantly, the backup server, for security reasons, should maintain its own filesystem snapshots for a certain period. In this way, even in the worst-case scenario (workload compromised -&amp;gt; connection to backup server -&amp;gt; deletion of backups to demand a ransom), the backup server has its own snapshots. These server-side snapshots should not be accessible from the client host and should be kept long enough to ensure any compromise can be detected in time.&lt;/p&gt;
&lt;h3&gt;My Guiding Principles for a Good Backup System&lt;/h3&gt;
&lt;p&gt;Over the years, I've favored having granular control over backups, often finding the need to recover specific files or emails accidentally deleted by clients. A good backup system, in my opinion, should possess these key features:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Instant Recovery and Speed: The system should enable quick recovery and operate at a high processing speed.&lt;/li&gt;
&lt;li&gt;External Storage: Backups must be stored externally, not on the same system being backed up. Still, local snapshots are a good idea for immediate rollbacks.&lt;/li&gt;
&lt;li&gt;Security: I avoid using mainstream cloud storage services like Dropbox or Google Drive for primary backups. Own your data! &lt;/li&gt;
&lt;li&gt;Efficient Space Management: This includes features like compression and deduplication.&lt;/li&gt;
&lt;li&gt;Minimal Invasiveness: The system should require minimal additional components to function.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Conclusion: What's Next&lt;/h3&gt;
&lt;p&gt;The approach to backup is varied, and in this series, I will describe the main scenarios I usually face. I will start with the backup servers and their primary configurations, then move on to the various software and techniques I use.&lt;/p&gt;
&lt;p&gt;But that will begin with &lt;a href="https://it-notes.dragas.net/2025/07/29/make-your-own-backup-system-part-2-forging-the-freebsd-backup-stronghold/"&gt;the next post, where I'll talk about building the backup server which, of course, will be powered by FreeBSD&lt;/a&gt; - like all my backup servers for the last 20 years.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Fri, 18 Jul 2025 09:00:00 +0200</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2025/07/18/make-your-own-backup-system-part-1-strategy-before-scripts/</guid><category>backup</category><category>data</category><category>server</category><category>vps</category><category>filesystems</category><category>ownyourdata</category><category>series</category><category>tutorial</category></item><item><title>Automating ZFS Snapshots for Peace of Mind</title><link>https://it-notes.dragas.net/2024/08/21/automating-zfs-snapshots-for-peace-of-mind/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/hard_disk.webp" alt="Automating ZFS Snapshots for Peace of Mind"&gt;&lt;/p&gt;&lt;p&gt;One feature I couldn't live without anymore is snapshots. As system administrators, we often find ourselves in situations where we've made a mistake, need to revert to a previous state, or need access to a log that has been rotated and disappeared. Since I started using ZFS, all of this has become incredibly simple, and I feel much more at ease when making any modifications.&lt;/p&gt;
&lt;p&gt;However, since I don't always remember to create a manual snapshot before starting to work, I use an automatic snapshot system. For this type of snapshot, I use the &lt;a href="https://github.com/psy0rz/zfs_autobackup"&gt;excellent &lt;code&gt;zfs-autobackup&lt;/code&gt; tool&lt;/a&gt; - which I also &lt;a href="https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/"&gt;use for backups&lt;/a&gt;. The goal is to have a single, flexible, and configurable tool without having to learn different syntaxes.&lt;/p&gt;
&lt;h2&gt;Why zfs-autobackup?&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;zfs-autobackup&lt;/code&gt; has several advantages that make it perfect (or nearly so) for my purpose:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;It operates based on "tags" set on individual datasets. I don't have to specify the dataset; I just assign a specific tag (of my choice) to the datasets, and &lt;code&gt;zfs-autobackup&lt;/code&gt; will operate on those datasets, transparently with respect to others. This ensures it will work even on datasets in different zpools.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;It's extremely flexible in management. For example, by setting the correct tag to "zroot", it will automatically manage all underlying datasets. However, it's possible to exclude some for more granular snapshot management.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;It works well on both FreeBSD and Linux - I use it with satisfaction on both platforms.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Different tags allow for different levels of data retention and operation. For example, the "mylocalsnap" tag will be for local snapshots, while "backup_offsite" will be for backups that will be copied off-site. The two tags (and related snapshots) will be independent, even though they operate on the same datasets.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Installation&lt;/h2&gt;
&lt;p&gt;On FreeBSD, installation is straightforward, as there's a ready-made package:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;pkg install py311-zfs-autobackup
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;On Linux, it will depend on the specific distribution. Since it's written in Python, it can always be &lt;a href="https://github.com/psy0rz/zfs_autobackup/wiki"&gt;installed using pip&lt;/a&gt;.&lt;/p&gt;
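&lt;p&gt;For example, something along these lines should work on most distributions (the package is published on PyPI as zfs-autobackup):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;pip install --upgrade zfs-autobackup
&lt;/code&gt;&lt;/pre&gt;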
&lt;h2&gt;Configuration&lt;/h2&gt;
&lt;p&gt;Once installed, you just need to assign the tag to the dataset. For example:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs set autobackup:mylocalsnap=true zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This will set a tag called "mylocalsnap" on zroot and underlying datasets, i.e., on the entire main file system of FreeBSD.&lt;/p&gt;
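&lt;p&gt;To exclude a specific child dataset from the tag - the granular management mentioned earlier - set the same property to false on it. A quick sketch, the dataset being just an example:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs set autobackup:mylocalsnap=false zroot/tmp
&lt;/code&gt;&lt;/pre&gt;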
&lt;h2&gt;Usage&lt;/h2&gt;
&lt;p&gt;Now, you just need to run &lt;code&gt;zfs-autobackup&lt;/code&gt;, specifying both the tag and the snapshot retention criteria:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;/usr/local/bin/zfs-autobackup mylocalsnap --keep-source 5min1h,1h1d
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In this case, it will take a (recursive) snapshot of the datasets that have the "mylocalsnap" tag set to true, keeping one snapshot every 5 minutes for an hour, and one every hour for a day.&lt;/p&gt;
&lt;p&gt;On subsequent executions of &lt;code&gt;zfs-autobackup&lt;/code&gt;, snapshots that fall outside the retention criteria above will be deleted.&lt;/p&gt;
&lt;p&gt;After running this command, here's the result:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;root@fbsnap:~ # zfs list -t snapshot
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
zroot@mylocalsnap-20240820150115                  0B      -    96K  -
zroot/ROOT@mylocalsnap-20240820150115             0B      -    96K  -
zroot/ROOT/default@mylocalsnap-20240820150115     0B      -  1.00G  -
zroot/home@mylocalsnap-20240820150115             0B      -    96K  -
zroot/tmp@mylocalsnap-20240820150115              0B      -   104K  -
zroot/usr@mylocalsnap-20240820150115              0B      -    96K  -
zroot/usr/ports@mylocalsnap-20240820150115        0B      -    96K  -
zroot/usr/src@mylocalsnap-20240820150115          0B      -    96K  -
zroot/var@mylocalsnap-20240820150115              0B      -    96K  -
zroot/var/audit@mylocalsnap-20240820150115        0B      -    96K  -
zroot/var/crash@mylocalsnap-20240820150115        0B      -    96K  -
zroot/var/log@mylocalsnap-20240820150115          0B      -   144K  -
zroot/var/mail@mylocalsnap-20240820150115         0B      -    96K  -
zroot/var/tmp@mylocalsnap-20240820150115          0B      -    96K  -
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;As you can see, snapshots have been taken of all datasets with the tag set.&lt;/p&gt;
&lt;h2&gt;Automation&lt;/h2&gt;
&lt;p&gt;To automate the process, simply modify the &lt;code&gt;/etc/crontab&lt;/code&gt; file and add a line like this:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;*/5     *       *       *       *       root    /usr/local/bin/zfs-autobackup mylocalsnap --keep-source 5min1h,1h1d
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, wait a few minutes and check again:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;root@fbsnap:~ # zfs list -t snapshot
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
zroot@mylocalsnap-20240820150115                  0B      -    96K  -
zroot/ROOT@mylocalsnap-20240820150115             0B      -    96K  -
zroot/ROOT/default@mylocalsnap-20240820150115   212K      -  1.00G  -
zroot/ROOT/default@mylocalsnap-20240820151000   128K      -  1.00G  -
zroot/home@mylocalsnap-20240820150115             0B      -    96K  -
zroot/tmp@mylocalsnap-20240820150115             72K      -   104K  -
zroot/tmp@mylocalsnap-20240820151000              0B      -   104K  -
zroot/usr@mylocalsnap-20240820150115              0B      -    96K  -
zroot/usr/ports@mylocalsnap-20240820150115        0B      -    96K  -
zroot/usr/src@mylocalsnap-20240820150115          0B      -    96K  -
zroot/var@mylocalsnap-20240820150115              0B      -    96K  -
zroot/var/audit@mylocalsnap-20240820150115        0B      -    96K  -
zroot/var/crash@mylocalsnap-20240820150115        0B      -    96K  -
zroot/var/log@mylocalsnap-20240820150115         64K      -   144K  -
zroot/var/log@mylocalsnap-20240820151000         60K      -   144K  -
zroot/var/mail@mylocalsnap-20240820150115         0B      -    96K  -
zroot/var/tmp@mylocalsnap-20240820150115         64K      -    96K  -
zroot/var/tmp@mylocalsnap-20240820151000          0B      -    96K  -
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If everything went as it should, you'll notice that unmodified datasets won't have new snapshots, while those that have been modified since the previous manual execution (e.g., zroot/var/log) will contain both the previous snapshot and the automatic one.&lt;/p&gt;
&lt;h2&gt;Recovering Files from Snapshots&lt;/h2&gt;
&lt;p&gt;There are various ways to recover a file from a previous snapshot. One option is to roll the dataset back to the snapshot, but this is often overkill: it performs a complete restore, reverting everything in the dataset.&lt;/p&gt;
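&lt;p&gt;For completeness, such a full restore is a one-liner - but use it with care: it reverts the whole dataset and, with -r, also destroys any snapshots newer than the target:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs rollback -r zroot/var/log@mylocalsnap-20240820151000
&lt;/code&gt;&lt;/pre&gt;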
&lt;p&gt;Another alternative is to go to the hidden snapshot directory. For example:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;root@fbsnap:~ # cd /var/log/.zfs/snapshot/mylocalsnap-20240820151000/
root@fbsnap:/var/log/.zfs/snapshot/mylocalsnap-20240820151000 # ls -l
total 39
-rw-------  1 root wheel     872 Aug 20 15:06 auth.log
-rw-r--r--  1 root wheel   79079 Aug 20 13:19 bsdinstall_log
-rw-------  1 root wheel    5401 Aug 20 15:10 cron
-rw-r--r--  1 root wheel      63 Aug 20 13:20 daemon.log
-rw-------  1 root wheel      63 Aug 20 13:20 debug.log
-rw-r--r--  1 root wheel      63 Aug 20 13:20 devd.log
-rw-r--r--  1 root wheel      63 Aug 20 13:20 lpd-errs
-rw-r-----  1 root wheel      63 Aug 20 13:20 maillog
-rw-r--r--  1 root wheel   17043 Aug 20 15:00 messages
-rw-r-----  1 root network    63 Aug 20 13:20 ppp.log
-rw-------  1 root wheel      63 Aug 20 13:20 security
-rw-r--r--  1 root wheel     197 Aug 20 15:00 utx.lastlogin
-rw-r--r--  1 root wheel     187 Aug 20 15:00 utx.log
-rw-------  1 root wheel      63 Aug 20 13:20 xferlog
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;From here, you can read and recover any file present in the individual snapshot. The snapshots are read-only, so you won't be able to write to them.&lt;/p&gt;
&lt;h2&gt;Creating a Writable Copy of a Snapshot&lt;/h2&gt;
&lt;p&gt;Should you need a read-write copy of a specific snapshot, you can use the zfs clone command:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;root@fbsnap:~ # zfs clone zroot/var/log@mylocalsnap-20240820151000 zroot/recover
root@fbsnap:~ # ls -l /zroot/recover/
total 39
-rw-------  1 root wheel     872 Aug 20 15:06 auth.log
-rw-r--r--  1 root wheel   79079 Aug 20 13:19 bsdinstall_log
-rw-------  1 root wheel    5401 Aug 20 15:10 cron
-rw-r--r--  1 root wheel      63 Aug 20 13:20 daemon.log
-rw-------  1 root wheel      63 Aug 20 13:20 debug.log
-rw-r--r--  1 root wheel      63 Aug 20 13:20 devd.log
-rw-r--r--  1 root wheel      63 Aug 20 13:20 lpd-errs
-rw-r-----  1 root wheel      63 Aug 20 13:20 maillog
-rw-r--r--  1 root wheel   17043 Aug 20 15:00 messages
-rw-r-----  1 root network    63 Aug 20 13:20 ppp.log
-rw-------  1 root wheel      63 Aug 20 13:20 security
-rw-r--r--  1 root wheel     197 Aug 20 15:00 utx.lastlogin
-rw-r--r--  1 root wheel     187 Aug 20 15:00 utx.log
-rw-------  1 root wheel      63 Aug 20 13:20 xferlog
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This creates a new dataset zroot/recover that is a writable copy of the snapshot. You can now modify these files as needed, without affecting the original snapshot or the live filesystem.&lt;/p&gt;
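&lt;p&gt;Once the recovery is complete, the clone can be removed like any other dataset:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs destroy zroot/recover
&lt;/code&gt;&lt;/pre&gt;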
&lt;h2&gt;Cleaning Up&lt;/h2&gt;
&lt;p&gt;Sometimes you may want to delete all snapshots generated by &lt;code&gt;zfs-autobackup&lt;/code&gt;. There are various ways, but the quickest can be to use a simple pipe:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs list -t snapshot -o name | grep -i mylocalsnap | xargs -n 1 zfs destroy -vr
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;By implementing automatic ZFS snapshots, you can work with peace of mind, knowing that you can always revert changes or recover lost files. This setup provides an excellent balance between data protection and system performance.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Wed, 21 Aug 2024 08:41:00 +0200</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2024/08/21/automating-zfs-snapshots-for-peace-of-mind/</guid><category>zfs</category><category>freebsd</category><category>linux</category><category>backup</category><category>data</category><category>filesystems</category><category>snapshots</category><category>recovery</category></item><item><title>From Cloud Chaos to FreeBSD Efficiency</title><link>https://it-notes.dragas.net/2024/07/04/from-cloud-chaos-to-freebsd-efficiency/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/datacenter.webp" alt="From Cloud Chaos to FreeBSD Efficiency"&gt;&lt;/p&gt;&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;A few months ago, a client asked me to take care of their Kubernetes cluster (hosted on AWS and GCP). In their opinion, the costs were exorbitantly high for relatively simple and lean websites. Sure, they had many visits, but nothing too excessive development-wise.&lt;/p&gt;
&lt;p&gt;I kindly declined. Unfortunately, their situation is all too common these days: they hired developers accustomed to working that way, convinced that a system administrator is now unnecessary because "the cloud has infinite potential." They were used to considering optimization as secondary because "we have infinite power" (and this is already a spoiler for the ending).&lt;/p&gt;
&lt;p&gt;Being open to dialogue and new experiences, they asked for my opinion on the matter. We talked for a while, and I explained that, in my view, for the type of setup they had (standard, with various replicas and variants, but primarily based on two platforms), it didn't make sense. I saw it as complicating things. An over-engineering of something simple. Like taking a cruise ship to cross a river.&lt;/p&gt;
&lt;p&gt;They then asked me to create something simple that would serve as a development server and for backups, to understand what kind of solution I had in mind.&lt;/p&gt;
&lt;h2&gt;The Solution&lt;/h2&gt;
&lt;p&gt;So, I started building everything. I began with FreeBSD 13.2-RELEASE, but in the meantime, 14.0-RELEASE came out, so that’s the version I delivered.&lt;/p&gt;
&lt;p&gt;I installed the operating system on a physical server leased from one of the main European providers. Benefiting from one of their auctions (good deals can be found on weekends), they got a sufficiently powerful machine - 128 GB of RAM, two 1 TB NVMe drives, and two 2 TB spinning disks - for less than 100 euros per month. They also took another, less powerful one for additional backups and to back up the first one.&lt;/p&gt;
&lt;h2&gt;Implementation&lt;/h2&gt;
&lt;p&gt;I decided to keep the host as clean as possible and concentrated the services in jails (managed by BastilleBSD) and VMs. The machine was divided as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A series of bridges - to be used for different projects. Jails of the same project and/or type use the same bridge and can communicate with each other, sharing some resources (MariaDB, etc.).&lt;/li&gt;
&lt;li&gt;A bhyve VM with &lt;a href="https://alpinelinux.org/"&gt;Alpine Linux&lt;/a&gt; - in my opinion, the best distribution for running Docker containers. Do we really need systemd just to launch Docker? They mainly use it as a pre-production test bench, connected via VPN to their company LAN. It is the core of their "online" development, i.e., outside their computers. It has 32GB of RAM, 200GB of disk (obviously bhyve is configured with NVMe drivers), and 4 cores assigned.&lt;/li&gt;
&lt;li&gt;A VNET jail with a reverse proxy (nginx) - they know how to modify virtual hosts and generate certificates with certbot, pointing to the underlying jails.&lt;/li&gt;
&lt;li&gt;A series of "empty" VNET jails, to be cloned, for each type of setup (they mainly have CMS based on WordPress and Laravel, so with all dependencies inside - nginx, php, redis, etc. except the databases).&lt;/li&gt;
&lt;li&gt;A VNET jail with MariaDB installed, to be cloned, to be attached to different projects as needed.&lt;/li&gt;
&lt;li&gt;zfs-autobackup performs local snapshots, keeping one every 15 minutes for 3 hours, one per hour for 24 hours, and one per day for 3 days - see the sketch after this list.&lt;/li&gt;
&lt;/ul&gt;
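&lt;p&gt;As a sketch, that retention policy maps onto zfs-autobackup's syntax roughly like this (the tag name is invented for illustration):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;/usr/local/bin/zfs-autobackup localsnap --keep-source 15min3h,1h1d,1d3d
&lt;/code&gt;&lt;/pre&gt;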
&lt;p&gt;Backups &lt;a href="https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/"&gt;are also performed using zfs-autobackup&lt;/a&gt; and, to allow rapid disaster recovery, a zfs send (with a corresponding zfs receive) runs every 10 minutes towards another machine (the other, smaller one, also taken at auction), with the same bridges, firewall rules, BastilleBSD, and bhyve installed - ready to start in case of disaster. Being a test server, we didn't consider implementing proper HA - at the moment, it wouldn't make sense.&lt;/p&gt;
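&lt;p&gt;Under the hood, that 10-minute replica boils down to an incremental send piped into a receive on the standby machine. A minimal sketch, with dataset, snapshot, and host names invented for illustration:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# take a new recursive snapshot, then send everything since the previous one
zfs snapshot -r zroot/projects@sync-new
zfs send -R -I @sync-old zroot/projects@sync-new | ssh standby zfs receive -F zroot/projects
&lt;/code&gt;&lt;/pre&gt;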
&lt;p&gt;They also have another job with zfs-autobackup that performs an additional backup on a server (Debian in their offices). &lt;a href="https://my-notes.dragas.net/posts/2024/who-is-the-real-owner-of-your-data/"&gt;Safe data, in my opinion, are those in storage under your b...ench&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I delivered everything to them and gave a brief course to the more experienced devs on how to manage things. No explanation on the Alpine Linux VM, but I showed them the jails, how to clone, configure, and manage them.&lt;/p&gt;
&lt;h2&gt;Real-world Testing&lt;/h2&gt;
&lt;p&gt;I didn't hear from them for a while. Then, a few weeks in, one of the devs contacted me urgently because a junior had made a mistake and deleted an entire project from one of the jails. I explained that the local snapshots could be restored with a single command, and he was thrilled. He restored both the development jail and the database jail from the snapshots taken two minutes before the "mishap", and everything was running again immediately.&lt;/p&gt;
&lt;p&gt;I realized that this event would change some of their procedures and criteria.&lt;/p&gt;
&lt;p&gt;I hadn't heard from anyone for months. This morning, I received a call from their manager, whom I hadn't heard from since the beginning, and he told me how things had been going these months.&lt;/p&gt;
&lt;h2&gt;Lessons Learned&lt;/h2&gt;
&lt;p&gt;First, this person has good communication and commercial skills but little technical background. He is open-minded and tends to study carefully what is proposed to him. He doesn't discard any solution a priori, without having touched its pros and cons.&lt;/p&gt;
&lt;p&gt;They had leased servers with cPanel and were inserting their content inside them. The devs who arrived a few years ago suggested making a technological transition, eliminating these "obsolete" servers and "outdated" methodologies, pushing everything to the cloud and containerizing everything. When we first talked, he told me how they were "lucky to make that transition because their load had increased enormously and the old servers probably wouldn't have handled the load", instead autoscaling saved them. I had some reservations about autoscaling without particular controls, but clearly, I cannot impose my choices on others.&lt;/p&gt;
&lt;p&gt;To cut a long story short: after seeing what happened with that junior dev's mistake (and how simply everything could be restored), they decided to increase their use of FreeBSD jails and reduce, at least for secondary loads, their reliance on the Kubernetes-managed cloud. As they transitioned to jails, however, they noticed slowdowns that worsened day by day. According to the devs, the answer was to bring back autoscaling ("we need moar powaaaaar!!!") but, fortunately, their boss decided to investigate carefully. They realized that these workloads (based on &lt;a href="https://laravel.com/"&gt;Laravel&lt;/a&gt;) were storing sessions on files. Over time, these millions of files (several gigabytes per day) slowed everything down because, for certain operations, Laravel scanned the entire session directory. In other words, in the "cloud" they were paying for far more power than necessary (and far more disk space, though that was cheaper) to carry a load that was, in fact, unnecessary. Once they realized this, they moved the sessions to Redis. Needless to say, everything became dramatically faster, even compared to the previous setup on Kubernetes with autoscaling.&lt;/p&gt;
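&lt;p&gt;In Laravel terms, the fix boils down to one line in the application's .env file (assuming a reachable Redis instance and the corresponding PHP Redis bindings are installed):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;SESSION_DRIVER=redis
&lt;/code&gt;&lt;/pre&gt;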
&lt;p&gt;At that point, it was clear that one of the problems with their setup is (as often happens) poor optimization. Today, there's a tendency to rush, "throw in" functions, features, libraries, plugins, etc. without considering the interactions and consequences. If it works, it's fine. Even if it increases computational complexity exponentially just to, for example, change the color of an icon (absurd example, but to give an idea).&lt;/p&gt;
&lt;p&gt;Thanks to that optimization, they then moved even the main Laravel workloads and, after that, began migrating some of the WordPress sites, despite serious concerns. In the cluster, every day, at fairly irregular intervals, the load would rise and everything would slow down until autoscaling scaled up to the imposed limits: CPU at 100% on all containers, with the devs noticing that the load came from a series of "php" processes. Recreating the containers helped for a few minutes but did not solve the problem.&lt;/p&gt;
&lt;p&gt;To their great surprise, all this did not happen on the FreeBSD jails. The load was significantly lower, without any of these spikes. Satisfied, they decided to use this as their final setup. One of the devs, however, wanted to get to the bottom of it and decided to run a test: he moved some of these WordPress sites to the Alpine VM, on Docker. At that point, the spikes resumed, saturating the CPU of the Alpine machine.&lt;/p&gt;
&lt;p&gt;Without going into details, they eventually realized that there was a vulnerability in one (or more) of the many plugins installed on the WordPress sites, which was being exploited to inject a process, probably a cryptominer. The process was named "php" - so the devs, not being systems experts, never dug deeper to check whether it was really php or another process masquerading as it. On FreeBSD, none of this happened because the injected executable simply could not run - there was no &lt;a href="https://docs.freebsd.org/en/books/handbook/linuxemu/"&gt;Linux compatibility&lt;/a&gt; activated on the server.&lt;/p&gt;
&lt;p&gt;Until then, they considered these (expensive) spikes as organic and did not worry too much about them. Paying to have their friendly intruders mine.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;They asked me to help, as much as possible, to &lt;a href="https://it-notes.dragas.net/2022/02/05/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-1-system-and-jails-setup/"&gt;move other services to FreeBSD&lt;/a&gt;. It won't be easy - we will probably need to use bhyve a lot - but they have decided that this is the platform they want to focus on in the coming years.&lt;/p&gt;
&lt;p&gt;Undoubtedly, this is a success story of FreeBSD and, indirectly, of correct and careful management of one's resources. Too often today, there is the superficial belief that the cloud, with its "infinite" resources, is the solution to all problems. And that Kubernetes is the best solution for everything. I, on the other hand, have always believed that there is the right tool for everything. You can hammer a nail with a screwdriver, but it's not the most suitable and efficient tool.&lt;/p&gt;
&lt;p&gt;Today they spend about 1/10 of what they used to spend before, they have more control over their data and the tools they use. Undoubtedly, all this was also caused by poor optimization and control by those who manage the infrastructure, but the question is: how often do people decide that, in the end, it is okay to spend more (especially if it is someone else's money) rather than go crazy for hours behind such a situation? While having defined and limited resources (albeit elevated) poses different problems - but of optimization. And in the age of energy and resource savings, it might be wise to give more importance to optimization.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Abundance led to waste&lt;/em&gt;.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Thu, 04 Jul 2024 08:41:00 +0200</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2024/07/04/from-cloud-chaos-to-freebsd-efficiency/</guid><category>freebsd</category><category>zfs</category><category>backup</category><category>data</category><category>filesystems</category><category>snapshots</category><category>recovery</category><category>networking</category><category>security</category><category>server</category><category>hosting</category><category>linux</category><category>ownyourdata</category><category>jail</category><category>virtualization</category><category>alpine</category><category>bhyve</category><category>docker</category></item><item><title>Enhancing FreeBSD Stability with ZFS Pool Checkpoints</title><link>https://it-notes.dragas.net/2024/07/01/enhancing-freebsd-stability-with-zfs-pool-checkpoints/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/hard_disk.webp" alt="Enhancing FreeBSD Stability with ZFS Pool Checkpoints"&gt;&lt;/p&gt;&lt;p&gt;ZFS offers many interesting features, and one of the most widely used is the ability to create and transfer snapshots of entire datasets, even recursively. This approach is useful for &lt;a href="https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/"&gt;backups or maintaining a specific “point in time” for datasets&lt;/a&gt;. For example, on FreeBSD, automatic snapshots of the dataset containing the root file system have been taken with each system upgrade for several releases. This way, thanks to Boot Environments, if there are any problems, it is possible to reboot from a previous clone.&lt;/p&gt;
&lt;p&gt;However, sometimes we might need something more. Local snapshots do not protect against the deletion of entire datasets or the activation of new features that could potentially cause problems or incompatibilities.&lt;/p&gt;
&lt;p&gt;A very useful tool that I have successfully used for some time is the pool checkpoint feature. This feature, imported from Illumos to FreeBSD in 2018, allows creating a sort of snapshot of the entire pool, including features, metadata, etc.&lt;/p&gt;
&lt;p&gt;The checkpoint is different from snapshots of individual datasets. It is not possible to have more than one checkpoint, and some operations like &lt;code&gt;remove&lt;/code&gt;, &lt;code&gt;attach&lt;/code&gt;, &lt;code&gt;detach&lt;/code&gt;, &lt;code&gt;split&lt;/code&gt;, and &lt;code&gt;reguid&lt;/code&gt; will be impossible when a checkpoint exists. This also has a side effect: if there is a checkpoint, deleting a dataset will not release free space because the data will still be physically present in the storage thanks to the checkpoint.&lt;/p&gt;
&lt;p&gt;Additionally, checkpoints are detected by the FreeBSD boot loader. When booting the system, the boot loader will offer the option to perform a "Rewind ZFS checkpoint" and boot from that point, effectively discarding everything that occurred after the checkpoint. This option can be particularly useful in emergencies or when you need to quickly undo recent changes.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://man.freebsd.org/cgi/man.cgi?zpool-checkpoint"&gt;Creating a checkpoint is very simple&lt;/a&gt;. Just use the command:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zpool checkpoint &amp;lt;pool&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The operation is usually quick. When a checkpoint is present, the command &lt;code&gt;zpool status&lt;/code&gt; will show its details. For example:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;pool: zroot
state: ONLINE
scan: scrub repaired 0B in 00:00:12 with 0 errors on Fri May 17 13:27:14 2024
checkpoint: created Sun Jun 30 12:30:51 2024, consumes 1.34M
config:

    NAME        STATE     READ WRITE CKSUM
    zroot       ONLINE       0     0     0
      ada1p4    ONLINE       0     0     0

errors: No known data errors
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To delete the checkpoint, you can use the command:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zpool checkpoint -d &amp;lt;pool&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To roll the pool back to the checkpointed state and discard the checkpoint (the pool must be exported first, since this happens at import time):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zpool import --rewind-to-checkpoint &amp;lt;pool&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To import the pool read-only at the checkpointed state (without rolling back the current data):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zpool import --read-only=on --rewind-to-checkpoint &amp;lt;pool&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;It is therefore possible to generate a checkpoint automatically via cron or manually when necessary, for example, before an operating system upgrade.&lt;/p&gt;
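&lt;p&gt;A minimal sketch of the cron variant, assuming the pool is named zroot: since only one checkpoint can exist at a time, the previous one must be discarded first. In /etc/crontab, something like:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;# refresh the pool checkpoint every Sunday at 03:00
0       3       *       *       0       root    /sbin/zpool checkpoint -d zroot 2&amp;gt;/dev/null; /sbin/zpool checkpoint zroot
&lt;/code&gt;&lt;/pre&gt;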
&lt;p&gt;For more technical details, I suggest reading &lt;a href="https://freebsdfoundation.org/wp-content/uploads/2019/01/ZPool-Checkpoint.pdf"&gt;this excellent article by Serapheim Dimitropoulos&lt;/a&gt;, published in the FreeBSD Journal in January 2019.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Mon, 01 Jul 2024 09:41:00 +0200</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2024/07/01/enhancing-freebsd-stability-with-zfs-pool-checkpoints/</guid><category>freebsd</category><category>zfs</category><category>backup</category><category>data</category><category>filesystems</category><category>snapshots</category><category>recovery</category></item><item><title>FreeBSD Tips and Tricks: Creating Snapshots with UFS</title><link>https://it-notes.dragas.net/2024/06/04/freebsd-tips-and-tricks-creating-snapshots-with-ufs/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/hard_disk.webp" alt="FreeBSD Tips and Tricks: Creating Snapshots with UFS"&gt;&lt;/p&gt;&lt;p&gt;One of the main features that every modern file system should support, in my opinion, is the ability to take snapshots. Snapshots can be useful for many reasons, such as making changes or updates with the ability to revert at any time.&lt;/p&gt;
&lt;p&gt;On Linux, copy-on-write (COW) file systems like ZFS and btrfs can handle snapshots efficiently, but traditional file systems like ext4 lack this support. The only way is to add another layer (like LVM) and take a snapshot of the volume, mounting it elsewhere in read-only mode.&lt;/p&gt;
&lt;p&gt;FreeBSD provides tools for creating snapshots using both ZFS and UFS. I'll focus on UFS. In the base system, &lt;a href="https://docs.freebsd.org/en/books/handbook/disks/#snapshots"&gt;there's a very useful and well-documented command&lt;/a&gt;: &lt;code&gt;mksnap_ffs&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Long story short: this video shows all the steps described below:&lt;/p&gt;
&lt;script src="https://asciinema.org/a/bXHVWlWDVqtvNkm1MhNoZ3OXa.js" id="asciicast-bXHVWlWDVqtvNkm1MhNoZ3OXa" async="true"&gt;&lt;/script&gt;

&lt;p&gt;&lt;code&gt;mksnap_ffs&lt;/code&gt; will create a snapshot of the current state of the specified file system and store it in the file you specify. For example, running &lt;code&gt;mksnap_ffs /snap&lt;/code&gt; will take a snapshot of the "/" file system and store it in the file "/snap". By creating a memory disk and mapping the /snap file to an md, you can then mount it in read-only mode and retrieve the information. Finally, you can unmount, remove the md (&lt;code&gt;mdconfig -du X&lt;/code&gt;, where X is the unit number), and delete the snapshot with a simple &lt;code&gt;rm&lt;/code&gt; (&lt;code&gt;rm /snap&lt;/code&gt;). Note that each file system can have up to 20 such snapshots, so it shouldn't be considered on par with ZFS. However, it can be crucial for making a consistent backup of a live system with tools like Borg, Restic, Kopia, etc., or a "simple" rsync.&lt;/p&gt;
&lt;h3&gt;Steps to Create and Use Snapshots with UFS&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Create the Snapshot&lt;/strong&gt;&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;mksnap_ffs /snap
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This command takes a snapshot of the "/" file system and stores it in the file "/snap".&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Create a Memory Disk&lt;/strong&gt;&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;mdconfig /snap
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This command maps the snapshot file to a memory disk and prints the allocated device name (e.g., md0).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Mount the Snapshot&lt;/strong&gt;&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;mount -o ro /dev/md0 /mnt
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This command mounts the memory disk in read-only mode on /mnt.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Access the Snapshot&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;You can now access the snapshot via /mnt.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Cleanup&lt;/strong&gt;&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;umount /mnt
mdconfig -du 0
rm /snap
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;These commands unmount the snapshot, remove the memory disk, and delete the snapshot file.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;By leveraging this built-in feature of FreeBSD, you can ensure your systems are more resilient and easier to manage, particularly for backups and system updates.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Tue, 04 Jun 2024 06:40:45 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2024/06/04/freebsd-tips-and-tricks-creating-snapshots-with-ufs/</guid><category>freebsd</category><category>tutorial</category><category>server</category><category>hosting</category><category>backup</category><category>series</category><category>tipsandtricks</category></item><item><title>Migrating from an Old Linux Server to a New FreeBSD Machine</title><link>https://it-notes.dragas.net/2023/10/25/migrating-from-an-old-linux-server-to-a-new-freebsd-machine/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/content/images/2023/10/3500d14b-cb7e-4de6-af9e-0ca135985b41.webp" alt="Migrating from an Old Linux Server to a New FreeBSD Machine"&gt;&lt;/p&gt;&lt;p&gt;&lt;em&gt;Preamble:&lt;/em&gt; I believe it's time to bid farewell to this venerable Linux server.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt;  &lt;em&gt;The article chronicles the journey of transitioning from an outdated Linux server, running for 1690 days without updates, to a modern FreeBSD machine. This migration involved using tools like mfsBSD, BastilleBSD, Borg Backup, and bhyve. Despite initial hesitations due to the Linux server's impeccable performance, the transition was smooth, resulting in improved manageability and efficiency. The piece emphasizes the importance of regular system updates and anticipates revisiting the topic in the future with new uptime achievements and updates.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This server loyally served for years as a secondary backup server while also providing a few minor services to users. As it often happens, it remained in operation, neglected and without updates for years. Stable operating systems have the "flaw" of being forgotten, giving the false impression that they don't need maintenance or updates. This machine continued its service without oversight for years. When approached for a service request (not due to malfunctions), I advised the client to upgrade the whole system. A mere update would not suffice, so I suggested starting afresh on new hardware with FreeBSD as the primary OS.&lt;/p&gt;
&lt;p&gt;The client was understandably hesitant given the uptime stats:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;08:58:43 up 1690 days, 21:32, 4 users, load average: 9.57, 10.15, 8.76&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Not a single error, not a single hiccup. From his perspective, a setup similar to the one installed many years ago - which was still working flawlessly - would have been preferable. Nevertheless, he trusted my expertise and let me proceed.&lt;/p&gt;
&lt;p&gt;This server had a plethora of duties, among which:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;One of the pivotal tasks was running &lt;a href="https://www.proxmox.com/en/proxmox-backup-server/overview"&gt;Proxmox Backup Server&lt;/a&gt; via Docker. Proxmox Backup Server requires Debian, but this server was running on Ubuntu 16.04 (previously upgraded from Ubuntu 14.04 – yes, ancient!). Hence, Proxmox Backup Server was still at version 1.x.&lt;/li&gt;
&lt;li&gt;Another critical function was storing backups made through &lt;a href="https://www.borgbackup.org/"&gt;BorgBackup&lt;/a&gt; on its file system. The /home directory used a mirrored btrfs file system, and each backed-up server had its own user on this system. Clients could back up (using a push method) only via VPN and only during specific windows when the server permitted it (by adding specific firewall rules via Jenkins, which also managed connection protocols, snapshots, backups, etc.).&lt;/li&gt;
&lt;li&gt;Among the lesser tasks, the server ran a few Docker containers with HandBrake on various presets. The client processed video conversions by uploading the original files via sftp, and after some hours, fetched the converted files from the destination directory. This will not be replicated on the new FreeBSD server since they now handle this operation locally on their high-performance MacBook Pro with Apple Silicon. However, a future restoration isn't off the table.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The first thing I did was install FreeBSD on the new hardware. Given that it's a physical server at Hetzner (an auction pick, chosen for disk space rather than raw power) and FreeBSD wasn't among the offered installation options, I used &lt;a href="https://mfsbsd.vx.sk/"&gt;mfsBSD&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;After booting the physical server in Linux rescue, I copied mfsBSD onto the disks using &lt;code&gt;dd&lt;/code&gt; and restarted. On boot, I SSHed into mfsBSD and executed the installation using &lt;code&gt;bsdinstall&lt;/code&gt;—a robust and efficient method.&lt;/p&gt;
&lt;p&gt;I set up a raidz1 with all four 6 TB disks, resulting in a final storage space of 21.8T, ample for now without the video files.&lt;/p&gt;
&lt;p&gt;To ensure continuity, I kept a setup similar to the old one. The clients would essentially continue with their usual backup procedure without necessitating drastic changes to backup scripts. To avoid storing these backups directly in the physical machine's /home and to leave the door open for future services, I installed &lt;a href="https://bastillebsd.org/"&gt;BastilleBSD&lt;/a&gt; and began setting up several jails. I replaced the old Linux machine's behavior with a VNET FreeBSD jail. In past scenarios, I've created Linux jails (thanks to BastilleBSD) and transferred the old server into the jail using rsync, making minor configuration tweaks. While this usually works, it doesn't address the underlying issue of an obsolete setup. Given the opportunity, I opted for a modern toolset.&lt;/p&gt;
&lt;p&gt;Thus, I copied every home directory (along with their historic backups) in its entirety, installed BorgBackup, and re-established the VPN. With a VNET jail, I can craft networking devices and fine-tune configurations. After recreating user accounts, inputting the various SSH &lt;code&gt;authorized_keys&lt;/code&gt;, and checking all clients, I set up a snapshot plan on the host. This ensures that if a client is compromised with the potential (however remote) for breach and backup deletion, a ZFS snapshot of the entire jail remains available.&lt;/p&gt;
&lt;p&gt;As mentioned, one of the core tools on the old server was Proxmox Backup Server. It's not natively installable on FreeBSD, necessitating a VM. Enter the fantastic &lt;code&gt;bhyve&lt;/code&gt;, supported by &lt;code&gt;vm-bhyve&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;However, one issue arose: backups would consume vast amounts of space, and I wanted to avoid housing an enormous disk image (or a zvol) with both VMs and backups. So, I opted for a slightly less performant but more flexible solution: installing Debian 12 and Proxmox Backup Server on the VM while placing backups on a separate ZFS dataset on the physical machine, exported via NFS and mounted on the VM.&lt;/p&gt;
&lt;p&gt;Given that the physical server has an internal bridge "vm-public" with IP &lt;code&gt;192.168.124.1&lt;/code&gt; and the VM is at &lt;code&gt;192.168.124.2&lt;/code&gt;, I just created a dataset named &lt;code&gt;zroot/PBS&lt;/code&gt; and added the following line to &lt;code&gt;/etc/exports&lt;/code&gt;:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;/zroot/PBS -alldirs -maproot=root -network 192.168.124.2/32&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;To enable NFS, insert into &lt;code&gt;/etc/rc.conf&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;rpcbind_enable=&amp;quot;YES&amp;quot; 
nfs_server_enable=&amp;quot;YES&amp;quot; 
mountd_flags=&amp;quot;-r&amp;quot; 
rpc_lockd_enable=&amp;quot;YES&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
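&lt;p&gt;To apply this without rebooting, the service can be started right away (on FreeBSD, the nfsd rc script also pulls in rpcbind and mountd; if it doesn't on your setup, start those first):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;service nfsd start
&lt;/code&gt;&lt;/pre&gt;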

&lt;p&gt;Within the VM, create &lt;code&gt;/PBS&lt;/code&gt; and include in &lt;code&gt;/etc/fstab&lt;/code&gt;:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;192.168.124.1:/zroot/PBS /PBS nfs rw,async,soft,intr,noexec 0 0&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;After Proxmox Backup Server's installation, simply set up the datastores in &lt;code&gt;/PBS/&lt;/code&gt;, and they'll directly store on the physical machine's ZFS dataset.&lt;/p&gt;
&lt;p&gt;For firewall configurations, I exposed port 8007, redirecting it towards the VM, and everything started functioning smoothly. I then set a Proxmox Backup Server replica from the old to the new server. After completion, I changed the Proxmox Backup Server IP on all Proxmox hosts to point to the new server. Smooth sailing.&lt;/p&gt;
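&lt;p&gt;With pf, for example, such a redirect is a single rdr rule - a sketch, with the external interface name (igb0) assumed:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;rdr pass on igb0 proto tcp from any to (igb0) port 8007 -&amp;gt; 192.168.124.2 port 8007
&lt;/code&gt;&lt;/pre&gt;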
&lt;p&gt;The old Ubuntu server also managed other minor services, which have become obsolete and weren't replicated.&lt;/p&gt;
&lt;p&gt;The transition was seamless, the client is pleased, and I'm content since each service is now neatly segregated into its jail or VM. The machine's load is minimal, which might pave the way for other tasks, via VPN. Everything now rests on ZFS, and the icing on the cake: I made the client promise not to reach another 1690 days of uptime but to timely update as required.&lt;/p&gt;
&lt;p&gt;I'm not entirely convinced the promise will hold—meaning, in a few years, I might yet be discussing this "new" server, highlighting another impressive uptime and another upgrade journey.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Wed, 25 Oct 2023 16:39:37 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2023/10/25/migrating-from-an-old-linux-server-to-a-new-freebsd-machine/</guid><category>freebsd</category><category>bhyve</category><category>borg</category><category>btrfs</category><category>container</category><category>data</category><category>docker</category><category>filesystems</category><category>jail</category><category>server</category><category>snapshots</category><category>virtualization</category><category>vpn</category><category>proxmox</category><category>backup</category><category>linux</category></item><item><title>How we are migrating (many of) our servers from Linux to FreeBSD - Part 3 - Proxmox to FreeBSD</title><link>https://it-notes.dragas.net/2023/03/14/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-3/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/server_rack.webp" alt="How we are migrating (many of) our servers from Linux to FreeBSD - Part 3 - Proxmox to FreeBSD"&gt;&lt;/p&gt;&lt;p&gt;In recent years, &lt;a href="https://it-notes.dragas.net/2022/01/24/why-were-migrating-many-of-our-servers-from-linux-to-freebsd/"&gt;we've been migrating many of our servers from Linux to FreeBSD&lt;/a&gt; as part of our consolidation and optimization efforts. Specifically, we've been &lt;a href="https://it-notes.dragas.net/2022/02/05/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-1-system-and-jails-setup/"&gt;moving services that were previously deployed using Docker onto FreeBSD&lt;/a&gt;, and it has proven to be a great choice for handling workloads efficiently.&lt;/p&gt;
&lt;p&gt;To this end, we've also been migrating many of our virtual machines (VMs) to FreeBSD, deploying services within FreeBSD jails. In some cases, these jails have even replaced entire VMs and run bare metal. Although we prefer to move to native FreeBSD whenever possible, sometimes it's not the best option for all the services we offer. As a result, one of our most critical physical servers has been left behind for years.&lt;/p&gt;
&lt;div class="hc-toc"&gt;&lt;/div&gt;

&lt;p&gt;This server was a Proxmox server that we installed many years ago and updated to version 6.4. It hosted some critical services, but upgrading to Proxmox 7.x posed some challenges. In particular, &lt;a href="https://forum.proxmox.com/threads/unified-cgroup-v2-layout-upgrade-warning-pve-6-4-to-7-0/"&gt;some of the LXC containers required tweaks&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Unfortunately, this server was quite old, with only four physical disks and 64 GB of RAM. It was located in an OVH data center and had been running well until one of the disks started to malfunction once a week, on Sundays. This would trigger a RAID reconstruction that kept the system busy for about two days.&lt;/p&gt;
&lt;p&gt;Despite my preference for simple setups, this server had been deployed gradually over many years, and everything was tied together. As a result, unraveling the system to resolve the issues was not a simple task. &lt;em&gt;Sometimes the combination of simple things can make everything complex&lt;/em&gt;.&lt;/p&gt;
&lt;h3&gt;The Proxmox Server&lt;/h3&gt;
&lt;p&gt;The &lt;a href="https://www.proxmox.com/en/"&gt;Proxmox&lt;/a&gt; server was configured as the central hub for various services, including primary DNS, web hosting, VOIP, and more. It featured several bridges, each with its own specific purpose, and was connected to a virtual machine running &lt;a href="https://mikrotik.com"&gt;MikroTik CHR&lt;/a&gt;. This machine was responsible for consolidating all incoming VPNs from the MikroTik devices we managed, both ours and those belonging to our clients. Additionally, it provided a series of bridges to manage these devices and all server management VPNs and other services. The Proxmox server also housed several virtual machines running Linux, FreeBSD, OpenBSD, and NetBSD, as well as LXC containers.&lt;/p&gt;
&lt;p&gt;Over the last two years, we've been migrating most of these virtual machines and containers to FreeBSD-based VMs, which feature their own specific jails. Consequently, most of the VMs we've had to move were BSD-based, while only five Linux VMs remained. The LXC containers hosted a range of services, including servers managed by &lt;a href="https://www.virtualmin.com"&gt;Virtualmin&lt;/a&gt;, a large installation of &lt;a href="https://www.zimbra.com"&gt;Zimbra&lt;/a&gt; (which was hosted within an LXC container running CentOS 7), as well as some minor Alpine Linux-based machines. We located all these virtual machines and containers in a LAN created and managed by CHR. All public IPs were managed by CHR, which relied on NAT mappings to establish communication between them. CHR had thus become the heart of our system, and if it experienced any issues, it could potentially take down the entire system. Fortunately, it remained stable for years.&lt;/p&gt;
&lt;h3&gt;Migration - first steps&lt;/h3&gt;
&lt;p&gt;The first step I took was to install FreeBSD on the new server. Easy peasy. The next step was to find a way for the CHR to migrate to the new server (under &lt;a href="https://bhyve.org"&gt;bhyve&lt;/a&gt;) and continue to manage all the public IPs of the original server. The problem is that OVH, with its failover IPs, &lt;a href="https://it-notes.dragas.net/2022/01/14/freebsd-assign-ovh-failover-ips-to-freebsd-jails/"&gt;ties a specific MAC address to each individual IP address&lt;/a&gt;. Therefore, the only way was to create a bridge on the FreeBSD server (on the Proxmox server, I already had the bridge on the physical network card) and create an L2 tunnel between the two servers - I used OpenVPN with tap interfaces, specifically inserted into the bridges. I could have used other methods and techniques, but I wanted to experiment with a setup that could allow, if necessary, bridging a larger number of physical and virtual servers even if the IPs are all mapped to a single server. OVH does not, in fact, allow splitting an IP block, so a move must be made for the entire block, not for a single IP address.&lt;/p&gt;
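&lt;p&gt;As a rough sketch of the FreeBSD side - interface names are assumptions, and the OpenVPN and Proxmox details are left out - the tap interface just needs to be a member of the same bridge as the physical NIC:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# Hypothetical /etc/rc.conf fragment: bridge the public NIC with the tunnel endpoint
cloned_interfaces=&amp;quot;bridge0 tap0&amp;quot;
ifconfig_bridge0=&amp;quot;addm igb0 addm tap0 up&amp;quot;
# OpenVPN then runs in layer-2 mode on that device (&amp;quot;dev tap0&amp;quot; in its config)
&lt;/code&gt;&lt;/pre&gt;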
&lt;p&gt;Initially, MikroTik CHR 7 did not boot on bhyve. In the end, &lt;a href="https://it-notes.dragas.net/2023/03/21/creating-a-mikrotik-chr-routeros-7-bhyve-vm-in-freebsd-2/"&gt;I managed to make it work&lt;/a&gt;, but I had other problems, probably related to the MTU of the interfaces. So I thought about taking the opportunity to unbind the LXC containers and VMs from CHR and remove MikroTik from the setup. With RouterOS version 7, in fact, WireGuard-based VPNs are also supported, so within a few days, it was possible to update the few routers still on 6.x and recreate some VPNs using WireGuard. I mapped both the VMs and LXC containers directly to their respective public IPs, greatly simplifying the steps. Everything worked perfectly.&lt;/p&gt;
&lt;p&gt;The next step was to test the first migrations, starting from the VMs already on FreeBSD. For simplicity, I created a new FreeBSD VM in bhyve and copied (via zfs-send and zfs-receive) the datasets related to &lt;a href="https://bastillebsd.org"&gt;BastilleBSD&lt;/a&gt;. All services are installed in jails managed by Bastille, so this was enough to have, in a short time, a new operating server equivalent to the previous one. At that point, I shut down the original server, connected the VM to the bridge linked to the tunnel (after modifying its MAC address), turned on the new FreeBSD VM (on bhyve), and everything started to work correctly - but from the new physical server.&lt;/p&gt;
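&lt;p&gt;A minimal sketch of that copy - dataset names and the destination hostname are assumptions:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# Hypothetical: replicate the Bastille datasets (and children) into the new VM
zfs snapshot -r zroot/bastille@migration
zfs send -R zroot/bastille@migration | ssh newvm zfs receive -u -d zroot
&lt;/code&gt;&lt;/pre&gt;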
&lt;p&gt;One by one, I moved all the FreeBSD VMs. For Linux, NetBSD, and OpenBSD, I simply copied the images and pointed bhyve to them. After some small vm-bhyve-specific configuration, everything started to work correctly. &lt;a href="https://it-notes.dragas.net/2024/06/10/proxmox-vs-freebsd-which-virtualization-host-performs-better/"&gt;Where possible&lt;/a&gt;, I replaced “virtio” with “nvme” as &lt;a href="https://klarasystems.com/articles/virtualization-showdown-freebsd-bhyve-linux-kvm/"&gt;it performs much better on bhyve&lt;/a&gt;.&lt;/p&gt;
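&lt;p&gt;For reference, a hypothetical vm-bhyve guest configuration using the nvme disk emulation - every value here is a placeholder:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# Hypothetical /vm/linuxvm/linuxvm.conf for vm-bhyve
loader=&amp;quot;grub&amp;quot;
cpu=2
memory=4G
network0_type=&amp;quot;virtio-net&amp;quot;
network0_switch=&amp;quot;public&amp;quot;
disk0_type=&amp;quot;nvme&amp;quot;       # instead of virtio-blk
disk0_name=&amp;quot;disk0.img&amp;quot;
&lt;/code&gt;&lt;/pre&gt;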
&lt;h3&gt;Migration - LXC containers to Virtual Machines&lt;/h3&gt;
&lt;p&gt;For LXC containers, I initially thought of creating an Alpine Linux virtual machine, installing LXD, and copying each individual container. It worked for some of them, but for others, I started to encounter strange issues, similar to those that would have required manual intervention to upgrade from Proxmox 6.x to 7.x. As is often the case with Linux-based solutions, compatibility is not always preserved between updates, so I would have had to fine-tune all the containers, which I didn't feel like doing. The containers had been created (at the time) to optimize RAM usage on the Proxmox machine, but to date, they have caused more problems than benefits. In some cases, certain processes got "stuck," making it impossible to "reboot" the LXC container, requiring the entire physical node to be rebooted. If they had been virtual machines, I could have given a "kill" command from the virtualizer (to the respective KVM process, in that case) and restarted it.&lt;/p&gt;
&lt;p&gt;For greater compatibility and ease of future management, I decided to convert the LXC containers into actual VMs on bhyve. The process was simple (a sketch follows the list):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Creating an empty VM with vm-bhyve and booting the VM with SystemRescueCD.&lt;/li&gt;
&lt;li&gt;Creating destination partitions and file systems in the VM, then doing a complete rsync of the original LXC container.&lt;/li&gt;
&lt;li&gt;Adjusting the fstab file, installing the kernel on the destination VM, and creating the initrd (some containers were already copies of VMs, so the kernel remained installed and updated, even though it wasn't being used. The initrd, on the other hand, did not include the &lt;em&gt;nvme&lt;/em&gt; or &lt;em&gt;virtio&lt;/em&gt; drivers, so I had to regenerate it anyway.)&lt;/li&gt;
&lt;li&gt;Adjusting the bhyve vm configuration file, doing one last rsync after shutting down the services, shutting down the original LXC container, and starting the bhyve VM.&lt;/li&gt;
&lt;/ul&gt;
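&lt;p&gt;A hedged sketch of the copy itself, as run from the SystemRescueCD environment inside the new VM - device names, the container ID and path, and the distro-specific initrd command are all assumptions:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# Hypothetical: pull the LXC rootfs from the Proxmox host into the new VM's disk
mkfs.ext4 /dev/sda2
mount /dev/sda2 /mnt
rsync -aAXH --numeric-ids root@proxmox-host:/var/lib/lxc/101/rootfs/ /mnt/
# adjust /mnt/etc/fstab, then install/regenerate the kernel and initrd from a chroot
mount --bind /dev /mnt/dev; mount --bind /proc /mnt/proc; mount --bind /sys /mnt/sys
chroot /mnt update-initramfs -u -k all   # Debian/Ubuntu; rebuilds the initrd with the nvme/virtio drivers
&lt;/code&gt;&lt;/pre&gt;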
&lt;p&gt;Everything worked correctly, so one by one, I moved all the containers. The largest one ended up on another physical node (also FreeBSD with bhyve) temporarily because the space on the new server was not sufficient to contain it. It didn't need to be on this server, so no problem.&lt;/p&gt;
&lt;p&gt;One by one, the LXC containers started on the new server. Apart from some minor adjustments to the destination VMs (different network interface names, etc.), I didn't encounter any particular problems even after several days. Everything works perfectly.&lt;/p&gt;
&lt;p&gt;At the very end, I re-created the MikroTik CHR VM. I’ll keep this setup separate for now, as it is strictly tied to EoIP interfaces. This was the main reason why I hadn’t performed the migration before: things were too tied together and I had to untie everything, step by step.&lt;/p&gt;
&lt;h3&gt;…and then one of the Linux VMs started to freeze&lt;/h3&gt;
&lt;p&gt;Several Linux VMs are just the basis on which Docker runs. One of them (not even among the busiest) started, every 12-15 hours, to completely freeze. It stopped responding to ping, and it was impossible to give any type of command from the console. In a word: &lt;em&gt;stuck&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Searching the web, I found some references to this problem and, observing the errors of an ssh session that was left connected (stuck, but still showing the last error), I found it to be a problem &lt;a href="https://forums.freebsd.org/threads/bhyve-debian-with-docker-unstable.87956/"&gt;similar to the one described in this post&lt;/a&gt;, namely:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;&amp;quot;watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [khugepaged:67]&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I tried various solutions such as changing the storage driver, the number of cores, the distribution (from Alpine to Debian), etc., but none of these operations solved the issue. I also noticed that the warning appears on all Linux VMs, but only those with a recent kernel (&amp;gt; 5.10.x) actually freeze, while the others continue to work. The problem does not occur, however, with the *BSDs.&lt;/p&gt;
&lt;p&gt;In the end, I:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Reduced the number of cores to 1 for the VMs that did not have a high load (some kept multiple cores), on the hypothesis that the problem was related to allocating cores that were too busy&lt;/li&gt;
&lt;li&gt;Gave the command "&lt;em&gt;/usr/bin/echo 60 &amp;gt; /proc/sys/kernel/watchdog_thresh&lt;/em&gt;" to the VM (see the sketch after this list to make it persistent).&lt;/li&gt;
&lt;/ul&gt;
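&lt;p&gt;A minimal sketch to make that threshold persistent inside the guest - the file name is an assumption:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# Hypothetical: raise the soft-lockup watchdog threshold across reboots
echo 'kernel.watchdog_thresh = 60' &amp;gt; /etc/sysctl.d/99-watchdog.conf
sysctl -p /etc/sysctl.d/99-watchdog.conf
&lt;/code&gt;&lt;/pre&gt;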
&lt;p&gt;The VM became stable, and I have not seen that error/warning on any other machine since. I will investigate further, but I believe it is a problem related to the Linux kernel, which, for some reason, generates a kernel panic under particular CPU-concurrency conditions.&lt;/p&gt;
&lt;h3&gt;The End…and a nice OOM!&lt;/h3&gt;
&lt;p&gt;After moving everything, I was finally able to migrate the entire class of OVH IPs from one physical server to another. The operation was quite quick, but in order to avoid problems, I notified all users and performed the operation on a Sunday and during off-peak hours. The whole process took about 10 minutes and there were no hitches of any kind.&lt;/p&gt;
&lt;p&gt;For safety reasons, I kept the Proxmox machine active for a few more days, but there was no need to use it. However, after a couple of days, I encountered a problem: the largest VM, in some cases, was being "killed" because FreeBSD generated an OOM. I had never seen, from FreeBSD 13.0 onwards, any OOM related to "abuse" of RAM usage by ZFS, but in this case, it actually happened.&lt;/p&gt;
&lt;p&gt;In the end, I understood that ZFS, on FreeBSD, is able to release memory, but not quickly enough to manage any "spikes" in individual VMs. In fact, the VMs do not know the situation of the physical host's RAM, so they will tend to occupy all the space allotted to them (even if only for caching). A sudden spike (e.g. if you create and launch a new VM) could cause a sudden increase in RAM usage by the bhyve process, and FreeBSD could be forced to kill it, even if part of the RAM is only ARC cache. While Proxmox supports HA (i.e., control over whether the VM is running), vm-bhyve only launches the VM (bhyve process). I should manage it with tools like &lt;em&gt;&lt;a href="https://mmonit.com/monit/"&gt;monit&lt;/a&gt;&lt;/em&gt;, but for now, I preferred to simply set limits on ZFS RAM usage using "vfs.zfs.arc_max", and there have been no more problems.&lt;/p&gt;
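&lt;p&gt;For reference, a sketch of that limit - the 16 GB figure is an assumption, to be sized against the host's RAM and the memory promised to the VMs:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# Hypothetical: cap the ZFS ARC so bhyve spikes can't push the host into an OOM
echo 'vfs.zfs.arc_max=&amp;quot;16G&amp;quot;' &amp;gt;&amp;gt; /boot/loader.conf
# on recent FreeBSD the value (in bytes) can also be changed at runtime:
sysctl vfs.zfs.arc_max=17179869184
&lt;/code&gt;&lt;/pre&gt;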
&lt;h3&gt;Final considerations&lt;/h3&gt;
&lt;p&gt;The operation was long but linear. The most complex part was unraveling all the configurations related to MikroTik CHR and the VPNs linked to each individual LXC machine/container. Once everything was implemented on a dedicated VM, the operation was fairly straightforward.&lt;/p&gt;
&lt;p&gt;The hardware specifications of the destination physical server are slightly better than the starting one, but the final performance of the setup has greatly improved. The VMs are very responsive (even those that were previously LXC containers running directly on bare metal) and, thanks to ZFS, I can make local snapshots every 5 minutes. In addition, every 10 minutes, I can copy (using the excellent zfs-autobackup) all the VMs and jails to other nodes &lt;a href="https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/"&gt;both as a backup and as an immediate restart in case of disaster&lt;/a&gt;. I just need to map the IPs, and everything will start working very quickly. Proxmox also allows you to perform this type of operation with ZFS, but you still need to have Proxmox (in a compatible version) on the target machine. With the current setup, I only need any FreeBSD node that supports bhyve.&lt;/p&gt;
&lt;p&gt;Proxmox is an excellent tool, well-developed, open-source, efficient, and stable. We manage many installations, including complex ones (&lt;a href="https://it-notes.dragas.net/2020/06/29/create-automatic-snapshots-on-cephfs/"&gt;ceph clusters&lt;/a&gt;, etc.), and it has never let us down. However, not all tools are ideal for all situations, and for setups like the one described, the new configuration based on FreeBSD has shown noticeably better performance and greater management and maintenance granularity.&lt;/p&gt;
&lt;p&gt;Virtualizing on vm-bhyve is not complex, but it is certainly not comparable, in its current state, to the simplicity of using a clean and complete interface like Proxmox's. A complete HA system is still missing (sure, it's achievable manually, but...), as well as a complete web management interface. However, for knowledgeable users, it is undoubtedly a powerful tool that allows you to have excellent FreeBSD as a base. I'm totally satisfied with my migration and the result is far better than I expected.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Tue, 14 Mar 2023 13:00:00 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2023/03/14/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-3/</guid><category>freebsd</category><category>alpine</category><category>data</category><category>bhyve</category><category>filesystems</category><category>docker</category><category>ha</category><category>hardware</category><category>hosting</category><category>linux</category><category>lxc</category><category>networking</category><category>ovh</category><category>proxmox</category><category>recovery</category><category>restore</category><category>server</category><category>snapshots</category><category>virtualization</category><category>web</category><category>zfs</category><category>backup</category><category>jail</category><category>container</category><category>mikrotik</category><category>ownyourdata</category><category>series</category></item><item><title>How we are migrating (many of) our servers from Linux to FreeBSD - Part 2 - Backups and Disaster Recovery</title><link>https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/hard_disk.webp" alt="How we are migrating (many of) our servers from Linux to FreeBSD - Part 2 - Backups and Disaster Recovery"&gt;&lt;/p&gt;&lt;p&gt;After &lt;a href="https://it-notes.dragas.net/2022/01/24/why-were-migrating-many-of-our-servers-from-linux-to-freebsd/"&gt;my post on why we’re migrating (most of) our servers from Linux to FreeBSD&lt;/a&gt;, I’ve started to &lt;a href="https://it-notes.dragas.net/2022/02/05/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-1-system-and-jails-setup/"&gt;write about how we’re doing it&lt;/a&gt;. After covering a basic installation (we’re making massive use of jails), I’m now going to describe how we’re performing backups.&lt;/p&gt;
&lt;p&gt;Backup is not a tool. Backup is not software you can buy. &lt;a href="https://it-notes.dragas.net/2020/08/05/searching-for-a-perfect-backup-solution/"&gt;Backup is a &lt;em&gt;strategy&lt;/em&gt; you need to study and implement&lt;/a&gt; to be able to solve your specific problems. You need to understand what you’re doing, otherwise you’ll always have a &lt;strong&gt;Schrödinger’s Backup&lt;/strong&gt; - it may or may not work, and if you don’t test it well enough (i.e. by restoring) you’ll find out when it’s too late.&lt;/p&gt;
&lt;p&gt;We’re performing backups in many different ways but, for our physical and virtual FreeBSD servers, we have a dual approach. We need both a “ready to use” backup (that will be described here, useful for a fast disaster recovery or prompt restore of specific jails) and a “colder”, more space efficient backup that can be kept for months (or years), &lt;a href="https://it-notes.dragas.net/tags/borg/"&gt;more similar to the borg approach on previous posts&lt;/a&gt;. Generally speaking, we store our OS (and jails) on ZFS, so I’ll describe this kind of approach here.&lt;/p&gt;
&lt;h2&gt;Disaster recovery backup - ZFS send/receive&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://bastillebsd.org"&gt;BastilleBSD&lt;/a&gt; creates its datasets and mounts them on /usr/local/bastille . There are no databases, all the jails’ configurations are inside that mountpoint so it’s quite easy to backup and restore all the jails or any single jail in one go. More, the “everything-in-a-jail” approach simplifies the restore process as you don’t need to restore the &lt;em&gt;entire&lt;/em&gt; host OS, just install an empty FreeBSD server, install BastilleBSD and restore the jails. Or add the jails to already existing FreeBSD systems.&lt;/p&gt;
&lt;p&gt;We normally use FreeBSD (or Linux with ZFS) backup servers, well protected and encrypted at rest. For the ZFS send/receive approach, our servers are &lt;strong&gt;NOT&lt;/strong&gt; reachable from the outside. We can ssh into them only using a VPN - they’re too precious to be exposed on the World &lt;em&gt;Wild&lt;/em&gt; Web - or, if strictly needed, we expose ssh only using keys, no passwords. We perform the backups using a &lt;em&gt;pull&lt;/em&gt; strategy: the backup server connects to the production servers, gets the data, disconnects. The production servers have NO ACCESS to the main backup server. Should they ever be seriously compromised, the backup is safe.&lt;/p&gt;
&lt;p&gt;There are many tools that can help to set up this kind of configuration. I’ve tried many of them and found that they all have some good and bad points. The one I decided to use for our servers is &lt;a href="https://github.com/psy0rz/zfs_autobackup"&gt;zfs-autobackup&lt;/a&gt;. It’s easy to use, everything can be set via command line and has a good cron (or Jenkins) output, useful to understand if everything is right.&lt;/p&gt;
&lt;p&gt;Let’s consider two servers, one is called “ProdA” and the other is called “Bck” - we obviously want to backup the ProdA into Bck.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://it-notes.dragas.net/2022/02/05/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-1-system-and-jails-setup/"&gt;Installing ProdA has been covered on a previous post&lt;/a&gt;, Bck is quite simple and outside the scope of this post. We just need a protected zfs FreeBSD (or Linux) server. That’s all. Let’s assume that ProdA has a BastilleBSD zfs dataset (and children datasets), with jails and everything needed, as configured in the last post. We now need to install the needed software. On Bck:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;pkg install py311-zfs-autobackup mbuffer
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;On ProdA:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;pkg install mbuffer
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;mbuffer will be used as a ram buffer to avoid read/write spikes (or slowdowns) while sending/receiving the snapshots.&lt;/p&gt;
&lt;p&gt;It’s time to prepare the destination dataset. Assuming that Bck has a zroot base dataset, we’ll create (as root) a zroot/backups/ProdA dataset:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;zfs create -p zroot/backups/ProdA
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Ok, let’s now go to ProdA&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We want to create an unprivileged user that will send the data. &lt;strong&gt;We don’t want to allow Bck to connect as root&lt;/strong&gt;, even if it’s trusted and secure. Let’s create a user called “backupper”.&lt;/p&gt;
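&lt;p&gt;A minimal sketch of the user creation - the shell and home directory are assumptions:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;pw useradd backupper -m -s /bin/sh
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then, we need to give backupper the right permissions:&lt;/p&gt;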
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;zfs allow -u backupper send,snapshot,hold,mount,destroy zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Note: if you want Bck to be able to delete the snapshots on ProdA, backupper needs the destroy permission. That means this user can destroy the whole system, as it can ALSO destroy zroot (or any source dataset you decide). If you’re afraid of this, different approaches must be used (e.g.: local root performing snapshots/cleanups and Bck only transferring them - not hard to achieve with zfs-autobackup). Considering that Bck is safe, secure and protected, we can tolerate this weakness. Just be sure nobody can break the “backupper” user. Do not use a password, use ssh keys, and treat this user with the same care you'd use with root.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Now, as root on ProdA:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;zfs set autobackup:bck_server=true zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;We’re setting a custom property, called “autobackup:bck_server”, allowing the zroot (and children) dataset to be backed up by zfs-autobackup. zfs-autobackup will search for all datasets with that property set to “true” (also on different pools) and will back them up. If there’s a specific dataset you don’t want to back up, just set it to “false”. Or if you don’t want to back up the entire zroot but, for example, only “zroot/bastille” (and children), just set autobackup:bck_server=true for that dataset.&lt;/p&gt;
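&lt;p&gt;For example - the dataset name here is just a placeholder:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# Hypothetical: keep zroot/tmp out of the backup set
zfs set autobackup:bck_server=false zroot/tmp
&lt;/code&gt;&lt;/pre&gt;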
&lt;h3&gt;ssh config&lt;/h3&gt;
&lt;p&gt;zfs-autobackup will connect via ssh and, by default, will try to connect as root. Moreover, even after exchanging the ssh key, Bck will connect many times to ProdA to send its zfs commands (one connection per command). Ssh session initiation is quite slow, so there will be some latency. In order to (greatly) speed this up:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;“You can make your ssh connections persistent and greatly speed up zfs-autobackup:&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;On the server that initiates the backup add this to your ~/.ssh/config:&lt;/em&gt;&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;Host ProdA
User backupper
ControlPath ~/.ssh/control-master-%r@%h:%p
ControlMaster auto
ControlPersist 3600
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;(Taken from &lt;a href="https://github.com/psy0rz/zfs_autobackup/wiki/Performance"&gt;https://github.com/psy0rz/zfs_autobackup/wiki/Performance&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;It's now time to go back to Bck and issue a command like this (one line):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;/usr/local/bin/zfs-autobackup --ssh-source ProdA bck_server zroot/backups/ProdA --zfs-compressed --no-progress --verbose --buffer 32M --keep-source 0 --no-holds  --set-properties readonly=on --clear-refreservation --keep-target 1d1w,1w1m,1m6m  --destroy-missing 30d --clear-mountpoint
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Bck will connect to ProdA, perform the snapshots and start transferring. The most interesting options I used here are:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt; --keep-source 0 (only the last snapshot will be kept on ProdA)
 --set-properties readonly=on (be sure the Bck clone is read only, so we will be able to perform an incremental/differential backup next time)
 --keep-target 1d1w,1w1m,1m6m (keep one backup per day for one week, one per week for one month, one per month for six months)
 --destroy-missing 30d (if we've deleted a dataset, keep it for 30 days before removing it from Bck)
 --clear-mountpoint (do not mount the dataset in Bck, as it will cause problems sooner or later)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The first copy will be slow as it'll need to send all the data. The second will be quite fast as only the differences will be transferred.&lt;/p&gt;
&lt;h2&gt;How to perform a disaster recovery&lt;/h2&gt;
&lt;p&gt;Ok, your dataset (or datasets) is gone. You need to replace it with the last external backup. You have to retransfer the copy into ProdA (or another FreeBSD host, no difference). Connect to Bck and search for the snapshot you want to restore (&lt;em&gt;zfs list -t snapshot&lt;/em&gt; will help). Once identified (one line):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;zfs send -R zroot/backups/ProdA/zroot/bastille/bastille/jails/t1@bck_server@20220528005830 | mbuffer -4 -s 128k -m 32M | ssh root@ProdA &amp;quot;zfs receive -F -x canmount -x readonly zroot/bastille/bastille/jails/t1&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Note the &lt;em&gt;-x canmount -x readonly&lt;/em&gt;  flags. Remember that we altered the canmount and readonly properties of the transferred datasets during the backup, so we must restore them into a normal state.&lt;/p&gt;
&lt;p&gt;Once finished, ProdA (or the other, restored host) will show t1 as an available jail and you'll be able to start it.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Mon, 30 May 2022 03:03:52 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/</guid><category>freebsd</category><category>data</category><category>filesystems</category><category>jail</category><category>linux</category><category>backup</category><category>restore</category><category>borg</category><category>recovery</category><category>snapshots</category><category>tutorial</category><category>container</category><category>server</category><category>security</category><category>zfs</category><category>ownyourdata</category><category>series</category></item><item><title>Efficient backup of lxc containers in Proxmox - ZFS</title><link>https://it-notes.dragas.net/2022/01/20/efficient-backup-of-lxc-containers-in-proxmox-zfs/</link><description>&lt;p&gt;&lt;img src="https://images.unsplash.com/photo-1592946879272-bc79c290b1e5?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDI3fHxjb250YWluZXJ8ZW58MHx8fHwxNjQyMjMzMzI4&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Efficient backup of lxc containers in Proxmox - ZFS"&gt;&lt;/p&gt;&lt;p&gt;I've already written about some of my backup strategies in Proxmox. Proxmox Backup Server is an option, but it's not always the best option, especially if you're using lxc containers.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://it-notes.dragas.net/2020/10/06/efficient-backup-of-lxc-containers-in-proxmox/"&gt;LVM and Ceph RBD baked containers have already been covered&lt;/a&gt; in another post, but one of the (many) great options, if you use Proxmox, is ZFS. I extensively use ZFS both on FreeBSD and Linux (&lt;a href="https://it-notes.dragas.net/2020/06/28/btrfs-automatic-snapshots-and-remote-backups/"&gt;and always wished that BTRFS could reach the same level of reliability&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;When I don't need a networked file system (ceph) and don't want to use LVM, I tend to install Proxmox VMs and lxc containers on ZFS. Let's now focus on backing up lxc containers.&lt;/p&gt;
&lt;p&gt;Proxmox uses ZFS datasets for lxc containers' storage, so you'll find all your files on &lt;em&gt;/poolname/subvol-x-disk-y&lt;/em&gt;. We can back up as we've done in my previous article; we just need a different method for taking snapshots of all those datasets.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;ZFS datasets have a hidden &lt;em&gt;.zfs&lt;/em&gt; directory that contains all the snapshots that currently exist of that specific dataset&lt;/strong&gt;. &lt;em&gt;ls&lt;/em&gt; won't show it, but you can &lt;em&gt;cd&lt;/em&gt; and it will be working.&lt;/p&gt;
&lt;p&gt;Of course we can use native zfs send/receive or a tool like &lt;a href="https://github.com/psy0rz/zfs_autobackup"&gt;&lt;em&gt;zfs-autobackup&lt;/em&gt;&lt;/a&gt;, which I use daily for local snapshots and remote replication, but here we want to save the files, not the zfs dataset, so that we can back up to a different file system. Any file system. So we will be using &lt;a href="https://it-notes.dragas.net/2020/06/30/searching-for-a-perfect-backup-solution-borg-and-restic/"&gt;borg&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Let's suppose our ZFS pool is named "&lt;em&gt;proxzfs&lt;/em&gt;". Here is a suggested script. Of course, &lt;em&gt;this is my script, it works for me and I'm not responsible if it doesn't work for you/destroys all your data/eats your server/etc.&lt;/em&gt;&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;#!/bin/bash

/usr/sbin/zfs snapshot -r proxzfs@forborg

REPOSITORY=yourpath/server/whatever:borgrepository/
TAG=mytag
borg create -v --stats --compression lz4 --progress    \
   $REPOSITORY::$TAG'-{now:%Y-%m-%dT%H:%M:%S}'          \
   /proxzfs/*/.zfs/snapshot/forborg/  \
   --exclude '*subvolYouMayWantToExclude-disk-0*'

/usr/sbin/zfs destroy -vrR proxzfs@forborg

borg prune -v $REPOSITORY --stats --prefix $TAG'-' \
   --keep-daily=31 --keep-weekly=4 --keep-monthly=12
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This small script will create a &lt;em&gt;@forborg&lt;/em&gt; snapshot for any dataset it finds under "&lt;em&gt;proxzfs&lt;/em&gt;", then fire up borg and ask it to traverse the &lt;em&gt;forborg&lt;/em&gt; snapshots automatically mounted inside the &lt;em&gt;.zfs&lt;/em&gt; directory of any dataset.&lt;/p&gt;
&lt;p&gt;After that, it will destroy the '&lt;em&gt;forborg&lt;/em&gt;' snapshots and execute a borg &lt;em&gt;prune&lt;/em&gt;. That will delete the old backups, according to the policy you have established. This step can be avoided here but I prefer to perform it after a backup so my repository is always consistent with my policy.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Thu, 20 Jan 2022 07:05:00 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2022/01/20/efficient-backup-of-lxc-containers-in-proxmox-zfs/</guid><category>proxmox</category><category>borg</category><category>container</category><category>linux</category><category>snapshots</category><category>backup</category><category>tutorial</category><category>lxc</category><category>zfs</category></item><item><title>FreeBSD, Raspberry PI - problems booting with USB powered hub connected</title><link>https://it-notes.dragas.net/2021/12/04/freebsd-raspberry-pi-problems-booting-with-usb-powered-hub-connected/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/server_rack.webp" alt="FreeBSD, Raspberry PI - problems booting with USB powered hub connected"&gt;&lt;/p&gt;&lt;p&gt;I’ve been using Raspberry Pis since 2012, running both Linux and FreeBSD.&lt;/p&gt;
&lt;p&gt;A couple of Raspberry Pi 4 devices are running local services and are performing as backup servers. They’ve been running for one year on Alpine and btrfs but I’ve then decided to convert both of them to FreeBSD and ZFS.&lt;/p&gt;
&lt;p&gt;Everything is ok, I can boot from a read only SD card and use the external usb disks as ZFS pools/volumes. The Rpi4 has two USB3 ports, so I’m using a powered USB 3.0 hub. FreeBSD is able to detect it, but there seems to be a problem with u-boot.&lt;/p&gt;
&lt;p&gt;If u-boot detects there's a powered hub, it seems to go crazy and continues to try to boot from the USB hub. Even if you have a bootable disk/usb pen, u-boot isn’t even able to recognize the partitions on the SD card. The result is that the Rpi4 won’t boot and starts to loop.&lt;/p&gt;
&lt;p&gt;I’ve been able to solve this by recompiling u-boot without any USB support. This means I always have to use an SD card (or a network boot), but it’s ok for me. U-Boot won’t even see the USB stack and will boot directly from the local SD. Then, when the FreeBSD kernel has been loaded, all the USB devices will be detected.&lt;/p&gt;
&lt;p&gt;If you want to try my binary file - for Raspberry Pi 4, compiled on 03/12/2021 - &lt;a href="https://github.com/draga79/binrepo/raw/main/u-boot.bin"&gt;you can find it here&lt;/a&gt;. The file is provided as is; it's working for me, but there's no guarantee that it will be okay for you. &lt;strong&gt;I don't take responsibility at all.&lt;/strong&gt;&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Sat, 04 Dec 2021 10:30:00 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2021/12/04/freebsd-raspberry-pi-problems-booting-with-usb-powered-hub-connected/</guid><category>alpine</category><category>hardware</category><category>server</category><category>backup</category><category>data</category><category>freebsd</category><category>raspberrypi</category></item><item><title>Efficient backup of lxc containers in Proxmox</title><link>https://it-notes.dragas.net/2020/10/06/efficient-backup-of-lxc-containers-in-proxmox/</link><description>&lt;p&gt;&lt;img src="https://images.unsplash.com/photo-1549299096-56b3ebc3259a?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Efficient backup of lxc containers in Proxmox"&gt;&lt;/p&gt;&lt;p&gt;Sometimes an &lt;a href="https://linuxcontainers.org"&gt;lxc container&lt;/a&gt; can be a better alternative than a KVM virtual machine. It has a smaller overhead, easier and more efficient resource management and a lower impact on the physical machine.&lt;/p&gt;
&lt;p&gt;You get (potentially) less isolation from the physical machine and you are forced to use the host kernel. Moreover, you cannot run a different OS.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.proxmox.com/en/"&gt;Proxmox&lt;/a&gt; has a great support of lxc containers and they're often a good choice. I've recently converted some VMs to lxc containers and they're faster, lighter and the Ram occupation is quite lower.&lt;/p&gt;
&lt;p&gt;There are two major drawbacks: &lt;strong&gt;you cannot migrate a running lxc container&lt;/strong&gt; (it will be shut down, moved and restarted - if using a shared storage, it is often a matter of seconds) and  &lt;strong&gt;&lt;em&gt;&lt;a href="https://it-notes.dragas.net/2020/08/23/proxmox-backup-server-hints/"&gt;backups to Proxmox Backup Server&lt;/a&gt; are slower than a VM's backup&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;This is because a VM can monitor, &lt;a href="https://it-notes.dragas.net/2020/08/23/proxmox-backup-server-hints/"&gt;thanks to KVM dirty bitmaps&lt;/a&gt;, its disk operations to know in advance which blocks should be checked and copied. When dealing with lxc containers this can't be done, so every backup will need to check every single file - this proved to be efficient in terms of space and deduplication, but quite slow in terms of time. One of my VMs (750 GB stored but with rare disk access) took just a few minutes to be backed up with dirty bitmaps, but more or less 2 hours to back up since it was transformed into an lxc container.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://it-notes.dragas.net/2020/06/30/searching-for-a-perfect-backup-solution-borg-and-restic/"&gt;borg backup proved to be quite efficient&lt;/a&gt;, so I tried to find a way to backup those containers performing a snapshot, backing up the snapshot and, then, releasing it.&lt;/p&gt;
&lt;p&gt;We cannot use dattobd on the container, so we must snapshot and backup from the host. This has one big advantage: we don't need the container to even know its backup location as every operation will be done by the Proxmox host.&lt;/p&gt;
&lt;p&gt;My containers generally are stored in &lt;strong&gt;Ceph rbd volumes&lt;/strong&gt; or &lt;strong&gt;LVM thin pools&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;I've created a script to deal with both those situations - &lt;em&gt;and it doesn't depend on Proxmox, so it also works if you don't use it&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;First of all, create a user and a borg repository on a remote machine and allow the Proxmox host to log in via ssh using a key. Then, create a directory (&lt;em&gt;/tobackup&lt;/em&gt; in the following scripts, on the host) where the snapshots will be mounted.&lt;/p&gt;
&lt;p&gt;In the following example/scripts, no encryption will be used for the backup storage. If you want to encrypt your backup, you will have to perform small modifications.&lt;/p&gt;
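&lt;p&gt;A hedged sketch of that preparation, reusing the example names from later in this post (borg 1.x syntax; adapt if you choose an encrypted repository):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;# Hypothetical one-time setup, run on the Proxmox host
ssh-copy-id server01@backupserver.mydomain.org
borg init --encryption=none server01@backupserver.mydomain.org:server01/
mkdir -p /tobackup
&lt;/code&gt;&lt;/pre&gt;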
&lt;p&gt;Here's my &lt;em&gt;rbd_borg_bck.sh&lt;/em&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;#!/bin/bash
set -e

export SOURCE_DISK=$1   # e.g. vmrbdpool/vm-101-disk-0
export USERNAME=$2      # ssh user (and repository name) on the backup server
export HOST=$3          # backup server hostname

PATH=&amp;quot;/usr/local/jdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/bin/X11:/usr/pkg/bin:/usr/pkg/sbin&amp;quot;
export PATH

export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=yes;

echo &amp;quot;Creating and mapping snapshot...&amp;quot;
rbd snap create $SOURCE_DISK@tobackup &amp;amp;&amp;amp; DDISK=`rbd map $SOURCE_DISK@tobackup`

echo &amp;quot;Creating backup directory...&amp;quot;
mkdir -p /tobackup/$DDISK

echo &amp;quot;Mounting snapshot...&amp;quot;
mount -o noload $DDISK /tobackup/$DDISK

echo &amp;quot;DOING BACKUP...&amp;quot;

REPOSITORY=$USERNAME@$HOST:$USERNAME/
TAG=daily

borg create -v --stats --progress --compression zlib,9                           \
    $REPOSITORY::$TAG'-{now:%Y-%m-%dT%H:%M:%S}'          \
    /tobackup/$DDISK/                                       \
    --exclude '*/home/*/.cache*'                  \
    --exclude '*/home/*/.local*'                  \
    --exclude '*/home/*/.pki*'                    \
    --exclude '*/home/*/Virtualbox VMs*'          \
    --exclude '*/home/*/.vagrant.d*'              \
    --exclude '*/root/.cache*'                    \
    --exclude '*/var/swap*'

sleep 3s;

echo &amp;quot;Unmounting snapshot...&amp;quot;
umount /tobackup/$DDISK

sleep 3s;

echo &amp;quot;Unmapping snapshot...&amp;quot;
rbd unmap $DDISK

echo &amp;quot;Remove snapshot...&amp;quot;
rbd snap remove $SOURCE_DISK@tobackup

echo &amp;quot;Pruning...&amp;quot;
borg prune -v $REPOSITORY --stats --prefix $TAG'-' \
    --keep-daily=14 --keep-weekly=8 --keep-monthly=12

echo &amp;quot;Done!&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;For example, to back up a disk named '&lt;em&gt;vm-101-disk-0&lt;/em&gt;' stored in pool "&lt;em&gt;vmrbdpool&lt;/em&gt;", via ssh to server &lt;em&gt;backupserver.mydomain.org&lt;/em&gt;, with &lt;em&gt;server01&lt;/em&gt; as the username on the server (and as the repository name inside that user's home directory), you can launch the script this way:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;/usr/local/bin/rbd_borg_bck.sh vmrbdpool/vm-101-disk-0 server01 backupserver.mydomain.org
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This can also be scheduled and launched via cron (on the Proxmox host) or any other way.&lt;/p&gt;
&lt;p&gt;For the LVM thin pools, the script is a bit different. Here's my &lt;em&gt;lvm_thin_borg_bck.sh&lt;/em&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;#!/bin/bash
set -e

export SOURCE_DISK=$1          # e.g. /dev/prox1/vm-120-disk-0
export USERNAME=$2             # ssh user (and repository name) on the backup server
export HOST=$3                 # backup server hostname
export LVM=$4                  # LVM volume group holding the thin pool
export DDISK=$4/$2-borgsnap    # snapshot volume, named after the user

PATH=&amp;quot;/usr/local/jdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/bin/X11:/usr/pkg/bin:/usr/pkg/sbin&amp;quot;
export PATH

export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=yes;

echo &amp;quot;Creating and mapping snapshot...&amp;quot;
lvcreate -n $2-borgsnap -s $1
lvchange -ay -Ky $DDISK


echo &amp;quot;Creating backup directory...&amp;quot;
mkdir -p /tobackup/$DDISK

echo &amp;quot;Mounting snapshot...&amp;quot;
mount -o ro /dev/$DDISK /tobackup/$DDISK

echo &amp;quot;DOING BACKUP...&amp;quot;

REPOSITORY=$USERNAME@$HOST:$USERNAME/
TAG=daily

borg create -v --stats --progress --compression zlib,9                           \
    $REPOSITORY::$TAG'-{now:%Y-%m-%dT%H:%M:%S}'          \
    /tobackup/$DDISK/                                       \
    --exclude '*/home/*/.cache*'                  \
    --exclude '*/home/*/.local*'                  \
    --exclude '*/home/*/.pki*'                    \
    --exclude '*/home/*/Virtualbox VMs*'          \
    --exclude '*/home/*/.vagrant.d*'              \
    --exclude '*/root/.cache*'                    

sleep 3s;

echo &amp;quot;Unmounting snapshot...&amp;quot;
umount /tobackup/$DDISK

sleep 3s;

echo &amp;quot;Remove snapshot...&amp;quot;
yes | lvremove $DDISK

echo &amp;quot;Pruning...&amp;quot;
borg prune -v $REPOSITORY --stats --prefix $TAG'-' \
    --keep-daily=14 --keep-weekly=8 --keep-monthly=12

echo &amp;quot;Done!&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In this example, we want to back up a disk "&lt;em&gt;/dev/prox1/vm-120-disk-0&lt;/em&gt;", the username on the server is &lt;em&gt;server01&lt;/em&gt; (same repository name), the server is &lt;em&gt;backupserver.mydomain.org&lt;/em&gt; and, as the last argument, "&lt;em&gt;prox1&lt;/em&gt;", which is the lvm pool.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;/usr/local/bin/lvm_thin_borg_bck.sh /dev/prox1/vm-120-disk-0 server01 backupserver.mydomain.org prox1
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Borg is efficient and fast and, thanks to this kind of setup, I can create a backup of the 750 GB container in 2 minutes.&lt;/p&gt;
&lt;p&gt;Backups, being taken from snapshots, are also consistent, not just efficient.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Tue, 06 Oct 2020 08:22:44 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2020/10/06/efficient-backup-of-lxc-containers-in-proxmox/</guid><category>proxmox</category><category>data</category><category>linux</category><category>restore</category><category>container</category><category>snapshots</category><category>borg</category><category>server</category><category>backup</category><category>lxc</category></item><item><title>Proxmox Backup Server - hints for a perfect deployment</title><link>https://it-notes.dragas.net/2020/08/23/proxmox-backup-server-hints/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/broken_disk.webp" alt="Proxmox Backup Server - hints for a perfect deployment"&gt;&lt;/p&gt;&lt;p&gt;Proxmox Backup Server (PBS) has been released. It's still in beta but is already perfectly usable. After many years, it's now possible to perform &lt;em&gt;incremental backups of the VMs&lt;/em&gt; and, thanks to the &lt;a href="https://wiki.qemu.org/Features/IncrementalBackup#Dirty_Bitmaps_and_Incremental_Backup"&gt;qemu dirty bitmaps&lt;/a&gt;, backups are also fast.&lt;/p&gt;
&lt;p&gt;The documentation is well-done and clear, so I suggest &lt;a href="https://pbs.proxmox.com/docs/"&gt;having a look at it for all its options and features&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;In short, it allows you to back up any Proxmox VM or Container to a specific PBS server (which may or may not run Proxmox - &lt;strong&gt;it's not wise to back up on the same server where the VMs are running&lt;/strong&gt;). It doesn't require any special file system or setting, as it splits files into chunks (so deduplication is possible and efficient) that can be stored on any supported FS.&lt;/p&gt;
&lt;p&gt;I've been using it since the first day and it's quite efficient and reliable; I've backed up hundreds of servers and restored many of them without any problem.&lt;/p&gt;
&lt;p&gt;Here are some hints (this post will be updated as needed):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The first backup will be slow - at least as slow as a traditional Proxmox backup. Don't worry, it's perfectly normal.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;DO NOT TRIM THE VMs,&lt;/strong&gt; but use the "discard" option at mount time (if Linux): trim will pass all the dirty blocks and zero them, filling the dirty bitmaps database and slowing down the next backup (even if it will not be larger). An example fstab entry follows the list.&lt;/li&gt;
&lt;li&gt;For the same reason, if you want to keep multiple backup servers, remember to backup to &lt;strong&gt;ONLY ONE&lt;/strong&gt;  &lt;strong&gt;PBS&lt;/strong&gt; server and then sync it to another one. Backing up the same VMs to multiple servers will confuse the dirty bitmaps and every backup will be slower (not larger), even if it will work.&lt;/li&gt;
&lt;li&gt;Sometimes syncing to another server can fail and get stuck. At the moment (0.8.11-1), I found that stopping the task won't solve the issue and you have to restart PBS, otherwise the task will still seem to be running.&lt;/li&gt;
&lt;/ul&gt;
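&lt;p&gt;For the "discard" hint above, a hypothetical guest fstab entry - the device and file system are assumptions:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;# mount with continuous discard instead of running trim by hand
/dev/vda1  /  ext4  defaults,discard  0  1
&lt;/code&gt;&lt;/pre&gt;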
&lt;p&gt;Proxmox Backup Server has solved a long-standing problem: performing efficient, deduplicated, incremental Proxmox VM backups.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Sun, 23 Aug 2020 18:16:57 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2020/08/23/proxmox-backup-server-hints/</guid><category>proxmox</category><category>backup</category><category>security</category><category>server</category><category>recovery</category><category>linux</category><category>snapshots</category><category>restore</category></item><item><title>Take automatic snapshots of cephfs file systems</title><link>https://it-notes.dragas.net/2020/06/29/create-automatic-snapshots-on-cephfs/</link><description>&lt;p&gt;&lt;img src="https://images.unsplash.com/photo-1576261240726-b4782dfcd02e?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="Take automatic snapshots of cephfs file systems"&gt;&lt;/p&gt;&lt;p&gt;&lt;a href="https://docs.ceph.com/docs/master/cephfs/"&gt;Cephfs&lt;/a&gt; is a great tool. It enables users to store and share files while taking advantage of the granularity and performance of &lt;a href="https://ceph.io"&gt;Ceph&lt;/a&gt;. One of the reasons why I'm using it is that it can run on erasure coded (slow disk) pools and be cached on fast ssd disks.&lt;/p&gt;
&lt;p&gt;Big amounts of data can be stored and live snapshots can be taken so remote backups can be easily performed. Those snapshots have helped me to restore specific files or directories to specific points in time in a transparent and easy way, multiple times.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Taking a manual snapshot is easy&lt;/strong&gt;: you'll find a ".snap" hidden directory inside any directory of your (cephfs) file system. You can just "mkdir &lt;em&gt;something&lt;/em&gt;" inside the .snap directory and you'll have a full snapshot called "&lt;em&gt;something&lt;/em&gt;". To remove a snapshot, simply use the 'rmdir' command and CephFS will delete and trim the space as soon as possible.&lt;/p&gt;
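&lt;p&gt;For example - the mount point and directory names are assumptions:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;mkdir /mnt/cephfs/projects/.snap/before_upgrade   # take a snapshot
rmdir /mnt/cephfs/projects/.snap/before_upgrade   # remove it; cephfs trims the space
&lt;/code&gt;&lt;/pre&gt;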
&lt;p&gt;&lt;strong&gt;To be able to perform it automatically&lt;/strong&gt;, I've adapted a script called "&lt;em&gt;&lt;a href="https://github.com/nachoparker/btrfs-snp/blob/master/btrfs-snp"&gt;btrfs-snp&lt;/a&gt;&lt;/em&gt;", which I generally use to snapshot a &lt;a href="https://it-notes.dragas.net/2020/06/28/btrfs-automatic-snapshots-and-remote-backups/"&gt;BTRFS&lt;/a&gt; file system, and created &lt;em&gt;"&lt;a href="https://github.com/draga79/cephfs-snp"&gt;cephfs-snp&lt;/a&gt;".&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Using cephfs-snp is quite easy:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;# cephfs-snp
Usage: cephfs-snp &amp;lt;dir&amp;gt; (&amp;lt;tag&amp;gt;) (&amp;lt;limit&amp;gt;) (&amp;lt;seconds&amp;gt;)

  dir     │ create snapshot of &amp;lt;dir&amp;gt;
  tag     │ name the snapshot &amp;lt;tag&amp;gt;_&amp;lt;timestamp&amp;gt;
  limit   │ keep &amp;lt;limit&amp;gt; snapshots with this tag. 0 to disable
  seconds │ don't create snapshots before &amp;lt;seconds&amp;gt; have passed from last with this tag. 0 to disable
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You can just ask to perform a snapshot or specify more options to have a more granular control of the snapshots created.&lt;/p&gt;
&lt;p&gt;It becomes quite interesting when used with cron:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;# cat &amp;gt; /etc/cron.hourly/cephfs-snp &amp;lt;&amp;lt;EOF
#!/bin/bash
/usr/local/sbin/cephfs-snp /home hourly  24 3600
/usr/local/sbin/cephfs-snp /home daily    7 86400
/usr/local/sbin/cephfs-snp /home weekly   4 604800
/usr/local/sbin/cephfs-snp /     weekly   4 604800
/usr/local/sbin/cephfs-snp /home monthly 12 2592000
EOF
chmod +x /etc/cron.hourly/cephfs-snp
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;For more information, you can have a look here: &lt;a href="https://github.com/draga79/cephfs-snp"&gt;https://github.com/draga79/cephfs-snp&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Have fun!&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Mon, 29 Jun 2020 05:00:53 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2020/06/29/create-automatic-snapshots-on-cephfs/</guid><category>cephfs</category><category>backup</category><category>filesystems</category><category>proxmox</category><category>data</category><category>linux</category><category>server</category><category>snapshots</category></item><item><title>BTRFS: performing automatic snapshots and remote backups</title><link>https://it-notes.dragas.net/2020/06/28/btrfs-automatic-snapshots-and-remote-backups/</link><description>&lt;p&gt;&lt;img src="https://images.unsplash.com/photo-1589985270826-4b7bb135bc9d?ixlib=rb-1.2.1&amp;q=80&amp;fm=jpg&amp;crop=entropy&amp;cs=tinysrgb&amp;w=2000&amp;fit=max&amp;ixid=eyJhcHBfaWQiOjExNzczfQ" alt="BTRFS: performing automatic snapshots and remote backups"&gt;&lt;/p&gt;&lt;p&gt;I'm using BTRFS with satisfaction (I mentioned it in 2014 and, &lt;a href="https://it-notes.dragas.net/2018/10/13/btrfs-best-pratices/"&gt;more recently, in the IT-Notes blog&lt;/a&gt;). I don't consider it optimal for all types of loads, but it solves many problems in many situations. One of the features I appreciate most is dynamic &lt;em&gt;snapshot&lt;/em&gt; generation, which makes it possible to have a perfect, immediate copy of a specific volume (or subvolume). When a backup is needed, for example, the use of snapshots is fundamental and having them at file system level is undoubtedly a good help.&lt;/p&gt;
&lt;p&gt;When it comes to BTRFS volumes, however, we have additional options. There are native tools that are able to send and receive data to and from BTRFS volumes in an optimized way, taking advantage of the inherent features of the file system itself. Here is a method to automatically take snapshots and transfer them to another BTRFS volume (local or remote).&lt;/p&gt;
&lt;p&gt;I've been using two interesting scripts for some time now. In the past I used &lt;a href="http://snapper.io/"&gt;snapper&lt;/a&gt; but I didn't find it optimal for my use. I have therefore continued to search and found some valid alternatives, namely the combination of &lt;a href="https://github.com/nachoparker/btrfs-snp"&gt;btrfs-snp&lt;/a&gt; and &lt;a href="https://github.com/nachoparker/btrfs-sync"&gt;btrfs-sync&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;btrfs-snp&lt;/h2&gt;
&lt;p&gt;btrfs-snp is launched by specifying a path, a prefix for the snapshots taken, the maximum number of snapshots with that prefix to keep, and a minimum time between snapshots.&lt;/p&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;root@foobar:~# /usr/local/sbin/btrfs-snp / hourly 10 3600&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;It will take a snapshot of /, with the term &lt;em&gt;hourly&lt;/em&gt; as prefix (but it can be any prefix), keep a maximum of 10 snapshots and will not take any more (i.e. the program will exit without action) in less than 3600 seconds from the previous one.&lt;/p&gt;
&lt;p&gt;You can of course combine them in this way, to have more snapshots of different types:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;/usr/local/sbin/btrfs-snp / hourly 10 3600&lt;/p&gt;
&lt;p&gt;/usr/local/sbin/btrfs-snp / daily 3 86400&lt;/p&gt;
&lt;p&gt;/usr/local/sbin/btrfs-snp / weekly 1 604800&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;It can be launched using cron every hour (or every quarter of an hour, as it will not take a new snapshot if less than 3600 seconds have passed since the previous one).&lt;/p&gt;
&lt;h2&gt;btrfs-sync&lt;/h2&gt;
&lt;p&gt;At this point, we can copy all the snapshots to another file system (local or remote) thanks to btrfs-sync.&lt;/p&gt;
&lt;p&gt;The command can be of this type:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;root@foobar:~# /usr/local/sbin/btrfs-sync -d -v /.snapshots btrfs@host:/backups/host01/&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;(it's a one line command)&lt;/p&gt;
&lt;p&gt;Obviously the destination file system must be a BTRFS volume.&lt;/p&gt;
&lt;p&gt;I usually run the command through a script in cron.daily, to get a daily copy of the snapshots out.&lt;/p&gt;
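&lt;p&gt;A minimal sketch of such a wrapper - the script name is an assumption; the path and destination are the ones from the example above:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;#!/bin/bash
# hypothetical /etc/cron.daily/btrfs-sync-backups
/usr/local/sbin/btrfs-sync -d -v /.snapshots btrfs@host:/backups/host01/
&lt;/code&gt;&lt;/pre&gt;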
&lt;p&gt;This kind of setup has already got me out of trouble several times and seems to be quite efficient.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Sun, 28 Jun 2020 12:00:00 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2020/06/28/btrfs-automatic-snapshots-and-remote-backups/</guid><category>backup</category><category>btrfs</category><category>filesystems</category><category>linux</category><category>server</category></item></channel></rss>