<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"><channel xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><title>IT Notes - zfs</title><link>https://it-notes.dragas.net/categories/zfs/</link><description>Articles in category zfs</description><language>en</language><lastBuildDate>Wed, 28 Jan 2026 08:52:00 +0000</lastBuildDate><atom:link href="https://it-notes.dragas.net/categories/zfs/feed.xml" rel="self" type="application/rss+xml"></atom:link><item><title>Time Machine inside a FreeBSD jail</title><link>https://it-notes.dragas.net/2026/01/28/time-machine-freebsd-jail/</link><description>&lt;p&gt;&lt;img src="https://unsplash.com/photos/W32yvc0JJjw/download?force=true&amp;w=640" alt="Time Machine inside a FreeBSD jail"&gt;&lt;/p&gt;&lt;p&gt;Many of my clients do not use Microsoft systems on their desktops; they use Linux-based systems or, in some cases, FreeBSD. Many use Apple systems - macOS - and are generally satisfied with them.
While I wash my hands of it when it comes to Microsoft systems (telling them they have to manage their desktops autonomously), I am often able to lend a hand with macOS. And one of the main requests they make is to manage the backups of their individual workstations.&lt;/p&gt;
&lt;p&gt;macOS, thanks to its Unix base, offers good native tools. Time Machine is transparent and effective, allowing a certain freedom of management. APFS, Apple's current file system, supports snapshots, so backups are effectively taken from a snapshot. Time Machine also supports multiple destination devices, so you can even build some redundancy into the backups themselves.&lt;/p&gt;
&lt;p&gt;Since I manage many FreeBSD servers, I am often asked to use their resources and storage - to build, in practice, a Time Machine inside one of the servers. It is a simple and practical operation, quick and "painless". There are many guides, including the excellent one by &lt;a href="https://freebsdfoundation.org/our-work/journal/browser-based-edition/storage-and-filesystems/samba-based-time-machine-backups/"&gt;Benedict Reuschling&lt;/a&gt; from which I took inspiration for this one; below I describe the steps I usually follow to set it all up in just a few minutes.&lt;/p&gt;
&lt;p&gt;I usually use &lt;a href="https://bastillebsd.org"&gt;BastilleBSD&lt;/a&gt; to manage my jails, so the first step is to create a new jail dedicated to the purpose. Here you have to decide on the approach: I suggest either a VNET jail or an "inherit" jail - one that attaches to the host's network stack. The inherit approach is less secure, but, as often happens, the right choice depends on the complexity of the situation. If, for example, we are using a Raspberry Pi dedicated to the purpose, there is no reason to complicate things with bridges and the like; we can attach directly to the network card with a creation command like:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille create tmjail 15.0-RELEASE inherit igb0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Where &lt;code&gt;igb0&lt;/code&gt; is the network interface we want to attach to.&lt;/p&gt;
&lt;p&gt;If, instead, we want a VNET jail attached to that same interface (Bastille will create the bridge on it automatically), we should use this syntax:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille create -V tmjail 15.0-RELEASE 192.168.0.42/24 igb0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Or, if our server already has a bridge (in this case it's &lt;code&gt;bridge0&lt;/code&gt;, but yours might be named differently):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille create -B tmjail 15.0-RELEASE 192.168.0.42/24 bridge0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;At this point, you can choose: do we want to keep the backups inside the jail or in a separate dataset - which can even be on another pool? In some cases, this can be extremely useful: often I have jails running on fast disks (SSD or NVMe) but abundant storage on slower devices. In this example, therefore, I will create an external dataset for the backups (directly from the host) and mount it in the jail. You could also delegate the entire management of the dataset to the jail, which is a different approach.&lt;/p&gt;
&lt;p&gt;Let's create a space of 600 GB - already reserved - on the chosen pool. 600 GB is on the small side, but it's fine for an example:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs create -o quota=600G -o reservation=600G bigpool/tmdata
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;We can also create separate datasets inside for each user and assign a specific space:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs create -o refquota=500G -o refreservation=500G bigpool/tmdata/stefano
&lt;/code&gt;&lt;/pre&gt;
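
&lt;p&gt;You can verify the layout and the reserved space at any time from the host - a quick check, nothing more:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs list -o name,used,refquota,refreservation -r bigpool/tmdata
&lt;/code&gt;&lt;/pre&gt;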

&lt;p&gt;We can enter the jail and install what we need, remembering also to create the "mountpoint" for the dataset we just created:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille console tmjail 

pkg install -y samba419
mkdir /tmdata
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Exit the jail and instruct Bastille to mount the dataset inside the jail every time it is launched:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;exit
bastille mount tmjail /bigpool/tmdata /tmdata nullfs rw 0 0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Let's go back into the jail and start with the actual configuration. First, for each Time Machine user, we will create a system user. In my example, I will create the user "stefano", giving him &lt;code&gt;/var/empty&lt;/code&gt; as the home directory - this will give an error since we created a Bastille thin jail, but it's not a problem. It happens because in a thin jail some system paths are read-only or not manageable as they are on a full base system, but the user is only needed for ownership and Samba login.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-text"&gt;root@tmjail:~ # adduser
Username: stefano
Full name: Stefano
Uid (Leave empty for default):
Login group [stefano]:
Login group is stefano. Invite stefano into other groups? []:
Login class [default]:
Shell (sh csh tcsh nologin) [sh]: nologin
Home directory [/home/stefano]: /var/empty
Home directory permissions (Leave empty for default):
Use password-based authentication? [yes]: no
Lock out the account after creation? [no]:
Username    : stefano
Password    : &amp;lt;disabled&amp;gt;
Full Name   : Stefano
Uid         : 1001
Class       :
Groups      : stefano
Home        : /var/empty
Home Mode   :
Shell       : /usr/sbin/nologin
Locked      : no
OK? (yes/no) [yes]: yes
pw: chmod(var/empty): Operation not permitted
pw: chown(var/empty): Operation not permitted
adduser: INFO: Successfully added (stefano) to the user database.
Add another user? (yes/no) [no]: no
Goodbye!
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Give the correct permissions to the user:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# If you haven't created per-user datasets, create the home directories now
mkdir /tmdata/stefano
chown -R stefano /tmdata/stefano/
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now we configure Samba for Time Machine. The file to create/modify is &lt;code&gt;/usr/local/etc/smb4.conf&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-ini"&gt;[global]
workgroup = WORKGROUP
security = user
passdb backend = tdbsam
fruit:aapl = yes
fruit:model = MacSamba
fruit:advertise_fullsync = true
fruit:metadata = stream
fruit:veto_appledouble = no
fruit:nfs_aces = no
fruit:wipe_intentionally_left_blank_rfork = yes
fruit:delete_empty_adfiles = yes

[TimeMachine]
path = /tmdata/%U
valid users = %U
browseable = yes
writeable = yes
vfs objects = catia fruit streams_xattr zfsacl
fruit:time machine = yes
create mask = 0600
directory mask = 0700
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;We have configured Samba to support all the features macOS expects and to advertise the share as "TimeMachine". Having set &lt;code&gt;path = /tmdata/%U&lt;/code&gt;, each user will only see their own directory.&lt;/p&gt;
&lt;p&gt;At this point, we create the Samba user (meaning the one we will have to type on macOS when we configure the Time Machine):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;smbpasswd -a stefano
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;macOS finds the Time Machine because it announces itself via mDNS on the network. This job is performed by Avahi, which we are now going to set up. Although not strictly necessary (we can always reach the Time Machine by connecting directly to its IP, and macOS will remember everything), seeing it announced will help less experienced users - and ourselves - when we have to configure another Mac in the future.&lt;/p&gt;
&lt;p&gt;Recent Samba releases don't need any specific Avahi configuration, so we can skip that part.&lt;/p&gt;
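&lt;p&gt;If you are running an older Samba release that does not register the service itself, a small Avahi service file does the trick. This is a sketch based on commonly published Time Machine examples - the file name is arbitrary, and &lt;code&gt;adVN&lt;/code&gt; must match the share name from &lt;code&gt;smb4.conf&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;cat &amp;lt;&amp;lt; EOF &amp;gt; /usr/local/etc/avahi/services/samba.service
&amp;lt;?xml version=&amp;quot;1.0&amp;quot; standalone='no'?&amp;gt;
&amp;lt;!DOCTYPE service-group SYSTEM &amp;quot;avahi-service.dtd&amp;quot;&amp;gt;
&amp;lt;service-group&amp;gt;
  &amp;lt;name replace-wildcards=&amp;quot;yes&amp;quot;&amp;gt;%h&amp;lt;/name&amp;gt;
  &amp;lt;service&amp;gt;
    &amp;lt;type&amp;gt;_smb._tcp&amp;lt;/type&amp;gt;
    &amp;lt;port&amp;gt;445&amp;lt;/port&amp;gt;
  &amp;lt;/service&amp;gt;
  &amp;lt;service&amp;gt;
    &amp;lt;type&amp;gt;_device-info._tcp&amp;lt;/type&amp;gt;
    &amp;lt;port&amp;gt;0&amp;lt;/port&amp;gt;
    &amp;lt;txt-record&amp;gt;model=TimeCapsule8,119&amp;lt;/txt-record&amp;gt;
  &amp;lt;/service&amp;gt;
  &amp;lt;service&amp;gt;
    &amp;lt;type&amp;gt;_adisk._tcp&amp;lt;/type&amp;gt;
    &amp;lt;port&amp;gt;9&amp;lt;/port&amp;gt;
    &amp;lt;txt-record&amp;gt;dk0=adVN=TimeMachine,adVF=0x82&amp;lt;/txt-record&amp;gt;
    &amp;lt;txt-record&amp;gt;sys=waMa=0,adVF=0x100&amp;lt;/txt-record&amp;gt;
  &amp;lt;/service&amp;gt;
&amp;lt;/service-group&amp;gt;
EOF
&lt;/code&gt;&lt;/pre&gt;
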
&lt;p&gt;We are now ready to enable everything.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;service dbus enable
service dbus start
service avahi-daemon enable
service avahi-daemon start
service samba_server enable
service samba_server start
&lt;/code&gt;&lt;/pre&gt;
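
&lt;p&gt;Before heading over to the Mac, you can optionally sanity-check the setup from inside the jail - &lt;code&gt;testparm&lt;/code&gt; and &lt;code&gt;smbclient&lt;/code&gt; are installed along with Samba:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Check the Samba configuration for syntax errors
testparm -s
# List the shares as the new user (you'll be asked for the smb password)
smbclient -L localhost -U stefano
&lt;/code&gt;&lt;/pre&gt;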

&lt;p&gt;Et voilà. If everything went according to plan, the Time Machine will announce itself on your network (if you have different networks, remember to configure the mDNS proxy on your router) and you will be able to log in (with the smb user you created) and start your first backup.&lt;/p&gt;
&lt;p&gt;I suggest encrypting the backups for maximum security and observing, from time to time, your Mac as it silently makes its backups to your trusted FreeBSD server.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Wed, 28 Jan 2026 08:52:00 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2026/01/28/time-machine-freebsd-jail/</guid><category>freebsd</category><category>timemachine</category><category>apple</category><category>backup</category><category>data</category><category>zfs</category><category>server</category><category>tutorial</category><category>ownyourdata</category></item><item><title>Installing Void Linux on ZFS with Hibernation Support</title><link>https://it-notes.dragas.net/2025/12/22/void-linux-zfs-hibernation-guide/</link><description>&lt;p&gt;&lt;img src="https://upload.wikimedia.org/wikipedia/commons/thumb/0/02/Void_Linux_logo.svg/960px-Void_Linux_logo.svg.png" alt="Installing Void Linux on ZFS with Hibernation Support"&gt;&lt;/p&gt;&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;FreeBSD continues to make strides in desktop support, but Linux still holds an advantage in hardware compatibility. After running openSUSE Tumbleweed on my mini PC for several months, I decided it was time to switch to a solution I could control more closely. Not because Tumbleweed doesn't work well - it works great! - but I prefer having direct control over what happens on my machine. And I want native ZFS, because I prefer it over btrfs and it allows me to manage snapshots, backups, and rollbacks just as I do on FreeBSD, using the same tools and procedures.&lt;/p&gt;
&lt;p&gt;The choice of &lt;a href="https://voidlinux.org/"&gt;Void Linux&lt;/a&gt; comes from its BSD-like approach: modular and free of unnecessary complexity. This makes it an excellent solution for this type of setup.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://docs.zfsbootmenu.org/"&gt;ZFSBootMenu&lt;/a&gt; is an extremely powerful tool. It provides an experience similar to FreeBSD's boot loader and natively supports ZFS. I strongly recommend reading the documentation and exploring its features, as some of them - like the built-in SSH daemon - can be genuine lifesavers in recovery scenarios.&lt;/p&gt;
&lt;h2&gt;Prerequisites and Audience&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;This guide is not for absolute beginners.&lt;/strong&gt; If you're new to Linux or Unix-like operating systems, you'd be better served by a ready-to-use distribution like &lt;a href="https://www.opensuse.org/"&gt;openSUSE&lt;/a&gt; Leap (or Tumbleweed for a rolling distribution), &lt;a href="https://linuxmint.com/"&gt;Linux Mint&lt;/a&gt;, &lt;a href="https://www.debian.org/"&gt;Debian&lt;/a&gt;, &lt;a href="https://ubuntu.com/"&gt;Ubuntu&lt;/a&gt;, or &lt;a href="https://manjaro.org/"&gt;Manjaro&lt;/a&gt;. The purpose of this article is to demonstrate a stable, upgradeable, and reasonably secure base setup for users already comfortable with system administration. It uses the &lt;strong&gt;glibc&lt;/strong&gt; variant of Void Linux. The &lt;em&gt;&lt;a href="https://docs.voidlinux.org/installation/musl.html"&gt;musl&lt;/a&gt;&lt;/em&gt; version requires different commands, for example for locale generation.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Use at your own risk.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This guide synthesizes instructions from several sources:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.zfsbootmenu.org/en/latest/guides/void-linux/uefi.html"&gt;Void Linux (UEFI) from ZFSBootMenu&lt;/a&gt; - which doesn't address swap. Using a zvol for swap (not the best solution) prevents hibernation and resume. Our approach uses a separate encrypted swap partition that enables proper resume.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.voidlinux.org/installation/guides/fde.html"&gt;Void Linux Full Disk Encryption&lt;/a&gt; - excellent for btrfs or ext4, but we want ZFS. We'll borrow the swap configuration approach from here.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://compactbunker.org/p/install-void-linux/"&gt;Install Void Linux with a desktop environment + Flatpaks&lt;/a&gt; - for the desktop portion.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If your setup differs from what's described here (NVMe disk, UEFI boot, Secure Boot disabled), consult the linked guides for explanations and variations.&lt;/p&gt;
&lt;h3&gt;Installation Script (Optional)&lt;/h3&gt;
&lt;p&gt;If you want to reproduce this setup quickly, I maintain a script that automates the procedure described in this guide: disk partitioning, ZFS pool and dataset creation, encrypted swap for hibernation resume, dracut configuration, and ZFSBootMenu EFI setup. An optional KDE Plasma desktop installation is also supported.&lt;/p&gt;
&lt;p&gt;The script is interactive and will ask for the required parameters (target disk, timezone and keymap, passphrases, desktop options). &lt;a href="https://brew.bsd.cafe/stefano/void-zfs-hibernation"&gt;Requirements, usage instructions, and known limitations are documented in the repository README&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;That said, I still recommend going through the manual process at least once. Understanding each step is part of the value of this setup, especially when troubleshooting or adapting it to different hardware.&lt;/p&gt;
&lt;h2&gt;Boot Environment&lt;/h2&gt;
&lt;p&gt;Since ZFS isn't supported by the base Void Linux image, we'll use &lt;a href="https://github.com/leahneukirchen/hrmpf/releases"&gt;hrmpf&lt;/a&gt;, an excellent rescue system based on Void Linux that includes ZFS support out of the box.&lt;/p&gt;
&lt;p&gt;After booting, you can either proceed directly or SSH into the machine to continue remotely. I generally prefer SSH since it makes copy-paste operations much easier - especially when dealing with UUIDs and long commands. To enable SSH access, set a root password and allow root login:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;passwd
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Edit &lt;code&gt;/etc/ssh/sshd_config&lt;/code&gt; and enable:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;PermitRootLogin yes
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Restart the SSH daemon:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;sv restart sshd
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Find the machine's IP address:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;ip addr
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You can now connect via SSH from another device.&lt;/p&gt;
&lt;h2&gt;Initial Setup&lt;/h2&gt;
&lt;p&gt;Set up the environment variables and generate a host ID - we need it for ZFS:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;source /etc/os-release
export ID

zgenhostid -f 0x00bab10c
&lt;/code&gt;&lt;/pre&gt;
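
&lt;p&gt;A quick check that the hostid was written as expected:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;hostid
# should print: 00bab10c
&lt;/code&gt;&lt;/pre&gt;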

&lt;h2&gt;Disk Configuration&lt;/h2&gt;
&lt;p&gt;Identify your target disk and set up the partition variables. This approach keeps everything consistent and reduces errors:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Set the base disk - adjust this to match your system
export DISK=&amp;quot;/dev/nvme0n1&amp;quot;

# For NVMe disks, partitions are named like nvme0n1p1, nvme0n1p2, etc.
# For SATA/SAS disks (sda, sdb), partitions are named sda1, sda2, etc.
# Set the partition separator accordingly:
export PART_SEP=&amp;quot;p&amp;quot;  # Use &amp;quot;p&amp;quot; for NVMe, empty string &amp;quot;&amp;quot; for SATA/SAS

# Define partition numbers
export BOOT_PART=&amp;quot;1&amp;quot;
export SWAP_PART=&amp;quot;2&amp;quot;
export POOL_PART=&amp;quot;3&amp;quot;

# Build full device paths
export BOOT_DEVICE=&amp;quot;${DISK}${PART_SEP}${BOOT_PART}&amp;quot;
export SWAP_DEVICE=&amp;quot;${DISK}${PART_SEP}${SWAP_PART}&amp;quot;
export POOL_DEVICE=&amp;quot;${DISK}${PART_SEP}${POOL_PART}&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Verify your configuration before proceeding:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;echo &amp;quot;Boot device: $BOOT_DEVICE&amp;quot;
echo &amp;quot;Swap device: $SWAP_DEVICE&amp;quot;
echo &amp;quot;Pool device: $POOL_DEVICE&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Wipe the Disk&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Warning: This operation will irreversibly destroy all data on the selected disk. Double-check that you've selected the correct disk and be sure to have a complete backup of your system!&lt;/strong&gt;&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zpool labelclear -f &amp;quot;$DISK&amp;quot;

wipefs -a &amp;quot;$DISK&amp;quot;
sgdisk --zap-all &amp;quot;$DISK&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Create Partitions&lt;/h2&gt;
&lt;h3&gt;EFI System Partition&lt;/h3&gt;
&lt;p&gt;If you're not using UEFI boot, adapt this procedure following the appropriate guide linked at the beginning of this post:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;sgdisk -n &amp;quot;${BOOT_PART}:1m:+512m&amp;quot; -t &amp;quot;${BOOT_PART}:ef00&amp;quot; &amp;quot;$DISK&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;Swap Partition&lt;/h3&gt;
&lt;p&gt;The swap partition should be slightly larger than your RAM to support hibernation. When you hibernate, the entire contents of RAM are written to swap, so you need enough space to hold it all plus some overhead. In this example, I have 16 GB of RAM, so I'm creating an 18 GB swap partition:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;sgdisk -n &amp;quot;${SWAP_PART}:0:+18g&amp;quot; -t &amp;quot;${SWAP_PART}:8200&amp;quot; &amp;quot;$DISK&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;ZFS Pool Partition&lt;/h3&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;sgdisk -n &amp;quot;${POOL_PART}:0:-10m&amp;quot; -t &amp;quot;${POOL_PART}:bf00&amp;quot; &amp;quot;$DISK&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Set Up ZFS Encryption&lt;/h2&gt;
&lt;p&gt;Encrypting the disk is strongly recommended, especially for laptops. Replace &lt;code&gt;SomeKeyphrase&lt;/code&gt; with a strong passphrase. Keep in mind that during early boot the keyboard layout might default to US, so choose one that's easy to type on a US layout:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;echo 'SomeKeyphrase' &amp;gt; /etc/zfs/zroot.key
chmod 000 /etc/zfs/zroot.key
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Create the ZFS Pool&lt;/h2&gt;
&lt;p&gt;Create the pool with conservative, well-tested options:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zpool create -f -o ashift=12 \
 -O compression=lz4 \
 -O acltype=posixacl \
 -O xattr=sa \
 -O relatime=on \
 -O encryption=aes-256-gcm \
 -O keylocation=file:///etc/zfs/zroot.key \
 -O keyformat=passphrase \
 -o autotrim=on \
 -o compatibility=openzfs-2.2-linux \
 -m none zroot &amp;quot;$POOL_DEVICE&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
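
&lt;p&gt;It's worth verifying that the pool came up with the intended properties before going further:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zpool status zroot
zfs get encryption,keyformat,compression zroot
&lt;/code&gt;&lt;/pre&gt;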

&lt;h2&gt;Create ZFS Datasets&lt;/h2&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs create -o mountpoint=none zroot/ROOT
zfs create -o mountpoint=/ -o canmount=noauto zroot/ROOT/${ID}
zfs create -o mountpoint=/home zroot/home

zpool set bootfs=zroot/ROOT/${ID} zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Export and Reimport for Installation&lt;/h2&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zpool export zroot
zpool import -N -R /mnt zroot
zfs load-key -L prompt zroot

zfs mount zroot/ROOT/${ID}
zfs mount zroot/home

udevadm trigger
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Install the Base System&lt;/h2&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;XBPS_ARCH=x86_64 xbps-install \
  -S -R https://mirrors.servercentral.com/voidlinux/current \
  -r /mnt base-system
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Copy Host Configuration&lt;/h2&gt;
&lt;p&gt;Copy the files we generated earlier to the new system:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;cp /etc/hostid /mnt/etc
mkdir -p /mnt/etc/zfs
cp /etc/zfs/zroot.key /mnt/etc/zfs
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Configure Encrypted Swap&lt;/h2&gt;
&lt;p&gt;Now we'll set up the encrypted swap partition. This is where the hibernation magic happens - by using a separate LUKS-encrypted partition instead of a ZFS zvol, we can properly resume from hibernation.&lt;/p&gt;
&lt;p&gt;Format the swap partition with LUKS:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;cryptsetup luksFormat --type luks1 &amp;quot;$SWAP_DEVICE&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Open the encrypted partition, create the swap filesystem, and activate it:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;cryptsetup luksOpen &amp;quot;$SWAP_DEVICE&amp;quot; cryptswap
mkswap /dev/mapper/cryptswap
swapon /dev/mapper/cryptswap
&lt;/code&gt;&lt;/pre&gt;
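
&lt;p&gt;The encrypted mapping should now show up as active swap:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;swapon --show
&lt;/code&gt;&lt;/pre&gt;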

&lt;h2&gt;Preserve Variables for Chroot&lt;/h2&gt;
&lt;p&gt;Before entering the chroot, save the disk variables so they remain available inside the new environment:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;cat &amp;lt;&amp;lt; EOF &amp;gt; /mnt/root/disk-vars.sh
export DISK=&amp;quot;$DISK&amp;quot;
export PART_SEP=&amp;quot;$PART_SEP&amp;quot;
export BOOT_PART=&amp;quot;$BOOT_PART&amp;quot;
export SWAP_PART=&amp;quot;$SWAP_PART&amp;quot;
export POOL_PART=&amp;quot;$POOL_PART&amp;quot;
export BOOT_DEVICE=&amp;quot;$BOOT_DEVICE&amp;quot;
export SWAP_DEVICE=&amp;quot;$SWAP_DEVICE&amp;quot;
export POOL_DEVICE=&amp;quot;$POOL_DEVICE&amp;quot;
export ID=&amp;quot;$ID&amp;quot;
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Enter the Chroot Environment&lt;/h2&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;xchroot /mnt
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;From this point forward, all commands are executed inside the new system.&lt;/p&gt;
&lt;p&gt;First, load the saved variables:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;source /root/disk-vars.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Configure fstab&lt;/h2&gt;
&lt;p&gt;Add the swap entry to &lt;code&gt;/etc/fstab&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;/dev/mapper/cryptswap   none            swap            defaults        0 0
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Set Up Automatic Swap Unlock&lt;/h2&gt;
&lt;p&gt;To avoid entering the swap password separately after unlocking the ZFS pool, we'll create a keyfile stored on the encrypted ZFS dataset. This is secure because the keyfile only becomes accessible after the ZFS pool is unlocked.&lt;/p&gt;
&lt;p&gt;First, install cryptsetup in the new system:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;xbps-install -S cryptsetup
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Generate a random keyfile and add it to the LUKS partition:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;dd bs=1 count=64 if=/dev/urandom of=/boot/volume.key

cryptsetup luksAddKey &amp;quot;$SWAP_DEVICE&amp;quot; /boot/volume.key

chmod 000 /boot/volume.key
chmod -R g-rwx,o-rwx /boot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Add the keyfile to &lt;code&gt;/etc/crypttab&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;echo &amp;quot;cryptswap   $SWAP_DEVICE   /boot/volume.key   luks&amp;quot; &amp;gt;&amp;gt; /etc/crypttab
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Include the keyfile and crypttab in the initramfs. Create &lt;code&gt;/etc/dracut.conf.d/10-crypt.conf&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;install_items+=&amp;quot; /boot/volume.key /etc/crypttab &amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Basic System Configuration&lt;/h2&gt;
&lt;p&gt;Configure keyboard layout and hardware clock. Adjust the keymap and timezone to match your location:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; /etc/rc.conf
KEYMAP=&amp;quot;us&amp;quot;
HARDWARECLOCK=&amp;quot;UTC&amp;quot;
EOF

ln -sf /usr/share/zoneinfo/Europe/Rome /etc/localtime
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Configure locales:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; /etc/default/libc-locales
en_US.UTF-8 UTF-8
en_US ISO-8859-1
EOF

echo &amp;quot;LANG=en_US.UTF-8&amp;quot; &amp;gt; /etc/locale.conf

xbps-reconfigure -f glibc-locales
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Set the root password:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;passwd
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Configure ZFS Boot Support&lt;/h2&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;cat &amp;lt;&amp;lt; EOF &amp;gt; /etc/dracut.conf.d/zol.conf
nofsck=&amp;quot;yes&amp;quot;
add_dracutmodules+=&amp;quot; zfs &amp;quot;
omit_dracutmodules+=&amp;quot; btrfs &amp;quot;
install_items+=&amp;quot; /etc/zfs/zroot.key &amp;quot;
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Install ZFS:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;xbps-install -S zfs
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Configure ZFSBootMenu&lt;/h2&gt;
&lt;p&gt;Set the basic boot properties:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs set org.zfsbootmenu:commandline=&amp;quot;quiet&amp;quot; zroot/ROOT
zfs set org.zfsbootmenu:keysource=&amp;quot;zroot/ROOT/${ID}&amp;quot; zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;The Critical Step: Hibernation Support&lt;/h3&gt;
&lt;p&gt;Now we need to configure hibernation resume. This is the key insight that makes this setup work: normally, the encrypted ZFS root mounts first, and then it unlocks the swap partition. But when resuming from hibernation, the kernel needs to read the hibernation image from swap &lt;em&gt;before&lt;/em&gt; mounting the root filesystem - otherwise, the saved state would be lost.&lt;/p&gt;
&lt;p&gt;To solve this, we tell ZFSBootMenu to unlock the swap partition early, before mounting ZFS, by specifying its LUKS UUID.&lt;/p&gt;
&lt;p&gt;Get the UUID of your swap partition:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;blkid &amp;quot;$SWAP_DEVICE&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You'll see output like:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;/dev/...: UUID=&amp;quot;xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx&amp;quot; TYPE=&amp;quot;crypto_LUKS&amp;quot; PARTUUID=&amp;quot;...&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Store the UUID in a variable for the next step:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;SWAP_UUID=$(blkid -s UUID -o value &amp;quot;$SWAP_DEVICE&amp;quot;)
echo &amp;quot;Swap UUID: $SWAP_UUID&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now set the boot parameters using the captured UUID:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs set org.zfsbootmenu:commandline=&amp;quot;rd.luks.uuid=$SWAP_UUID resume=/dev/mapper/cryptswap&amp;quot; zroot/ROOT/${ID}
&lt;/code&gt;&lt;/pre&gt;
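
&lt;p&gt;Double-check the property before moving on - a wrong UUID here means resume will silently fail:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs get org.zfsbootmenu:commandline zroot/ROOT/${ID}
&lt;/code&gt;&lt;/pre&gt;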

&lt;h2&gt;Set Up EFI Boot&lt;/h2&gt;
&lt;p&gt;Create and mount the EFI partition:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;mkfs.vfat -F32 &amp;quot;$BOOT_DEVICE&amp;quot;

mkdir -p /boot/efi
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Add the EFI partition to &lt;code&gt;/etc/fstab&lt;/code&gt; using its UUID:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;BOOT_UUID=$(blkid -s UUID -o value &amp;quot;$BOOT_DEVICE&amp;quot;)
echo &amp;quot;UUID=$BOOT_UUID    /boot/efi    vfat    defaults    0 0&amp;quot; &amp;gt;&amp;gt; /etc/fstab
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Mount it:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;mount /boot/efi
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Install ZFSBootMenu&lt;/h2&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;xbps-install -S curl

mkdir -p /boot/efi/EFI/ZBM
curl -o /boot/efi/EFI/ZBM/VMLINUZ.EFI -L https://get.zfsbootmenu.org/efi
cp /boot/efi/EFI/ZBM/VMLINUZ.EFI /boot/efi/EFI/ZBM/VMLINUZ-BACKUP.EFI
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Configure the EFI boot entries:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;xbps-install -S efibootmgr

efibootmgr -c -d &amp;quot;$DISK&amp;quot; -p &amp;quot;$BOOT_PART&amp;quot; \
  -L &amp;quot;ZFSBootMenu (Backup)&amp;quot; \
  -l '\EFI\ZBM\VMLINUZ-BACKUP.EFI'

efibootmgr -c -d &amp;quot;$DISK&amp;quot; -p &amp;quot;$BOOT_PART&amp;quot; \
  -L &amp;quot;ZFSBootMenu&amp;quot; \
  -l '\EFI\ZBM\VMLINUZ.EFI'
&lt;/code&gt;&lt;/pre&gt;
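
&lt;p&gt;You can review the resulting entries and the boot order (the backup entry is created first so that the primary one ends up at the top):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;efibootmgr -v
&lt;/code&gt;&lt;/pre&gt;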

&lt;h3&gt;Microcode updates&lt;/h3&gt;
&lt;p&gt;Void Linux is modular, so you may need to install additional packages for your specific hardware. The Intel microcode, for example, requires the non-free repository:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# For Intel CPUs
xbps-install -S void-repo-nonfree 
xbps-install -S intel-ucode

# For AMD CPUs/GPUs
xbps-install -S linux-firmware-amd
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;After installing microcode updates, regenerate the boot images and exit:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;xbps-reconfigure -fa
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Desktop Installation (Optional)&lt;/h2&gt;
&lt;p&gt;If all you need is a minimal system or a server, you're done and ready to reboot. For a complete desktop environment, continue with the following steps.&lt;/p&gt;
&lt;h3&gt;Install Core Desktop Packages&lt;/h3&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;xbps-install -S vim nano dbus elogind polkit xorg xorg-fonts xorg-video-drivers xorg-input-drivers dejavu-fonts-ttf terminus-font NetworkManager pipewire alsa-pipewire wireplumber xdg-user-dirs unzip gzip xz 7zip
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;Install KDE Plasma&lt;/h3&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;xbps-install -S kde-plasma dolphin konsole firefox kdegraphics-thumbnailers ffmpegthumbs vlc ark kwrite discover kf6-purpose
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;Enable Services&lt;/h3&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;ln -s /etc/sv/NetworkManager /etc/runit/runsvdir/default/
ln -s /etc/sv/dbus /etc/runit/runsvdir/default/
ln -s /etc/sv/udevd /etc/runit/runsvdir/default/
ln -s /etc/sv/polkitd /etc/runit/runsvdir/default/
ln -s /etc/sv/sddm /etc/runit/runsvdir/default/
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;Configure PipeWire Audio&lt;/h3&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;mkdir -p /etc/xdg/autostart
ln -sf /usr/share/applications/pipewire.desktop /etc/xdg/autostart/

mkdir -p /etc/pipewire/pipewire.conf.d
ln -sf /usr/share/examples/wireplumber/10-wireplumber.conf /etc/pipewire/pipewire.conf.d/
ln -sf /usr/share/examples/pipewire/20-pipewire-pulse.conf /etc/pipewire/pipewire.conf.d/

mkdir -p /etc/alsa/conf.d
ln -sf /usr/share/alsa/alsa.conf.d/50-pipewire.conf /etc/alsa/conf.d
ln -sf /usr/share/alsa/alsa.conf.d/99-pipewire-default.conf /etc/alsa/conf.d
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;Enable Additional Repositories and Flatpak (Optional)&lt;/h3&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;xbps-install -S void-repo-nonfree void-repo-multilib void-repo-multilib-nonfree

xbps-install -S flatpak
flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;Create a Regular User and exit&lt;/h3&gt;
&lt;p&gt;For desktop use, create a non-root user with appropriate group memberships.
Replace &lt;code&gt;username&lt;/code&gt; with your desired username.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;useradd -m username
passwd username
usermod -G video,wheel,plugdev,kvm,audio,network username
exit
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;Fix for NetworkManager&lt;/h3&gt;
&lt;p&gt;xchroot bind-mounts /etc/resolv.conf and leaves an empty file behind, which NetworkManager won't like. Let's clean it up - note that these commands are run outside the chroot, after the &lt;code&gt;exit&lt;/code&gt; above:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;umount -l /mnt/etc/resolv.conf 2&amp;gt;/dev/null || true

rm -f /mnt/etc/resolv.conf
ln -s /run/NetworkManager/resolv.conf /mnt/etc/resolv.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Exit and Reboot&lt;/h2&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;umount -n -R /mnt
zpool export zroot
reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Post-Installation&lt;/h2&gt;
&lt;p&gt;If everything went well, after entering your ZFS encryption password, you'll be greeted by the SDDM login screen.&lt;/p&gt;
&lt;h2&gt;Testing Hibernation&lt;/h2&gt;
&lt;p&gt;To verify that hibernation works correctly, you can click the "Hibernate" button or run:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;loginctl hibernate
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The system should power off. When you turn it back on, ZFSBootMenu will prompt for the password, unlock the swap partition, detect the hibernation image, and resume your session exactly where you left off.&lt;/p&gt;
&lt;p&gt;If resume fails, check that (the commands after this list verify each point):
1. The LUKS UUID in the ZFS commandline property matches your swap partition
2. The swap partition is large enough for your RAM
3. The dracut configuration includes the crypttab and keyfile&lt;/p&gt;
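&lt;p&gt;A rough sketch of how to check each point from the running system (&lt;code&gt;lsinitrd&lt;/code&gt; ships with dracut; adjust device and dataset names to your setup):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# 1. The UUID in the ZBM commandline must match the swap partition's LUKS UUID
zfs get org.zfsbootmenu:commandline zroot/ROOT/void   # adjust if your ID differs
blkid -s UUID -o value /dev/nvme0n1p2                 # your swap partition

# 2. Swap must be at least as large as RAM
free -h
swapon --show

# 3. The initramfs must contain the crypttab and the keyfile
lsinitrd | grep -E 'crypttab|volume.key'
&lt;/code&gt;&lt;/pre&gt;
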
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;You now have a fully functional Void Linux system with native ZFS, full disk encryption, and working hibernation. The system is rolling, lightweight, and easy to maintain. Enjoy!&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Mon, 22 Dec 2025 08:43:02 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2025/12/22/void-linux-zfs-hibernation-guide/</guid><category>linux</category><category>desktop</category><category>zfs</category><category>server</category><category>tutorial</category><category>ownyourdata</category><category>voidlinux</category></item><item><title>Introducing the illumos Cafe: Another Cozy Corner for OS Diversity</title><link>https://it-notes.dragas.net/2025/08/18/introducing-the-illumos-cafe/</link><description>&lt;p&gt;&lt;img src="https://illumos.cafe/illumos_cafe.webp" alt="illumos Cafe logo - a coffee cup with an illumos logo"&gt;&lt;/p&gt;&lt;h3&gt;&lt;strong&gt;Introducing the illumos Cafe: Another Cozy Corner for OS Diversity&lt;/strong&gt;&lt;/h3&gt;
&lt;h4&gt;From the BSD Cafe to illumos Cafe&lt;/h4&gt;
&lt;p&gt;The idea for this new project was born from the success of the BSD Cafe, an initiative I introduced to the world in July 2023, which received an incredibly positive response. Far more than I ever anticipated. The BSD community already had its well-established hubs: in the Fediverse, places like &lt;a href="https://bsd.network"&gt;bsd.network&lt;/a&gt;, &lt;a href="https://exquisite.social"&gt;exquisite.social&lt;/a&gt;, and others were already thriving, not to mention all the forums, channels, and Reddit communities.&lt;/p&gt;
&lt;p&gt;But in my vision, something was still missing: a hub of services with a positive spirit, built exclusively with open-source tools, where people could come to share, learn, and experience technology with a positive mindset. The BSD Cafe is therefore not just an instance, but a true Cafe - &lt;a href="https://events.eurobsdcon.org/2025/talk/PJJLFV/"&gt;I’ll be speaking more about the BSD Cafe in detail at the next EuroBSDCon&lt;/a&gt;.&lt;/p&gt;
&lt;h4&gt;Why Another Cafe?&lt;/h4&gt;
&lt;p&gt;In a world increasingly dominated by centralized services under the control (or lack thereof) of the usual big players, it has become essential to create free, independent communities, devoid of the algorithmic and commercial controls that influence our overall experience. From day one, the BSD Cafe has embodied this spirit.&lt;/p&gt;
&lt;p&gt;Linux is a good kernel, and there are excellent distributions based on it (some using the GNU userland, others only partially, like Alpine Linux), but it cannot and should not become a monoculture. The alternatives are extremely capable, and for many use cases - in my opinion and experience - they are even more suitable. &lt;a href="https://it-notes.dragas.net/2024/10/03/i-solve-problems-eurobsdcon/"&gt;BSD systems have served me exceptionally well for over 20 years&lt;/a&gt;, providing stability and security. At the same time, many other operating systems are renowned for their robustness, reliability, and the quality of their design and implementation.&lt;/p&gt;
&lt;h4&gt;Why illumos?&lt;/h4&gt;
&lt;p&gt;&lt;a href="https://illumos.org"&gt;illumos&lt;/a&gt; is one of them. As the open-source descendant of OpenSolaris, it is an operating system known for its enterprise-grade stability and innovative technologies like ZFS, DTrace, and "zones". It was born from the solid foundations of Solaris and has evolved over time while remaining true to many of its core principles. I have always seen illumos and its distributions as kindred spirits to the BSDs, despite their differences. The philosophy is one of evolution without revolution, of guaranteeing long-term continuity and reliability rather than chasing the latest hype. This is precisely why, for some time now (and thanks in part to the &lt;a href="https://www.tumfatig.net/tags/illumos/"&gt;inspiring posts by Joel Carnat&lt;/a&gt;, which further sparked my curiosity), I have been running &lt;a href="https://omnios.org"&gt;OmniOS&lt;/a&gt; and &lt;a href="https://smartos.org"&gt;SmartOS&lt;/a&gt; alongside my BSD-based setups for certain workloads.&lt;/p&gt;
&lt;p&gt;However, there is very little information online about services running on them. So, a few months ago, I began to consider a new project: &lt;a href="http://illumos.cafe"&gt;the illumos Cafe&lt;/a&gt;.&lt;/p&gt;
&lt;h4&gt;The illumos Cafe Project&lt;/h4&gt;
&lt;p&gt;The illumos Cafe is a project similar to the &lt;a href="https://bsd.cafe"&gt;BSD Cafe&lt;/a&gt; (though perhaps less complex, at least initially). It shares the same spirit of positivity and inclusivity and aims to provide services running on illumos-based operating systems to demonstrate that there are no reasons not to use them. Just like with the BSD Cafe, diversifying the operating systems we use - even while using the same platforms - is fundamental to improving the reliability and resilience of the Internet. The Internet was born as a decentralized network, but for most people, it has sadly become just a tool to access the services of big players.&lt;/p&gt;
&lt;h4&gt;Community and Philosophy&lt;/h4&gt;
&lt;p&gt;But we want to connect. We want relationships with people, between people. We don't want algorithms. We don't want our data to be monetized by "us and our 65535 partners". We want a network that serves us, an OS that serves us - not an OS that just serves as a vehicle to store our data in "someone else's house". The illumos Cafe, therefore, aims to be a home for anyone interested in developing, using, or who is simply curious about illumos-based operating systems.&lt;/p&gt;
&lt;h4&gt;Technical Setup&lt;/h4&gt;
&lt;p&gt;&lt;a href="https://wiki.bsd.cafe/bsdcafe-technical-details"&gt;As with the BSD Cafe&lt;/a&gt;, the entire setup will be documented. For now, it is very simple: there is a VM (running on FreeBSD and bhyve, on hardware I manage) where I have installed SmartOS. The physical host also runs the reverse proxy (in a jail). Inside the SmartOS VM, there are a series of zones:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Zone 1: nginx&lt;/strong&gt; (Web Server) - Currently serving &lt;a href="https://illumos.cafe"&gt;the project's homepage&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Zone 2: Mastodon&lt;/strong&gt; (Social) - Hosting the Mastodon instance and its dependencies at &lt;a href="https://mastodon.illumos.cafe"&gt;https://mastodon.illumos.cafe&lt;/a&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Zone 3: PostgreSQL&lt;/strong&gt; (Database) - The Mastodon database, on a dedicated zone.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Zone 4: Redis&lt;/strong&gt; (Cache) - The Mastodon cache, on a dedicated zone.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Zone 5: snac&lt;/strong&gt; (LX Zone) - Currently in an LX zone (Alpine) as I ran into some issues getting it to work in a native zone. It will be moved to a native zone as soon as I resolve them. It's serving the snac instance at &lt;a href="https://snac.illumos.cafe"&gt;https://snac.illumos.cafe&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Media files are stored on an external physical server (running FreeBSD, the same one as the BSD Cafe, but in a dedicated jail) with &lt;a href="https://github.com/seaweedfs/seaweedfs"&gt;SeaweedFS&lt;/a&gt;. I was able to compile and run SeaweedFS on illumos without any problems, but at the moment, I don't have a host with enough storage space for the media.&lt;/p&gt;
&lt;h4&gt;Available Services&lt;/h4&gt;
&lt;p&gt;More services will arrive over time. For now, two gateways to the Fediverse are already available:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Mastodon&lt;/strong&gt; - &lt;a href="https://mastodon.illumos.cafe"&gt;https://mastodon.illumos.cafe&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;snac&lt;/strong&gt; - &lt;a href="https://snac.illumos.cafe"&gt;https://snac.illumos.cafe&lt;/a&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Both instances share the same rules as the BSD Cafe. Positivity. Supporters, not haters. I want them to be places of enjoyment, not venting. Of friendship, not hate.&lt;/p&gt;
&lt;h4&gt;Registrations and Logo&lt;/h4&gt;
&lt;p&gt;Registrations for the Mastodon instance are now open, and the available themes are the default ones plus &lt;a href="https://github.com/nileane/TangerineUI-for-Mastodon"&gt;the colorful TangerineUI&lt;/a&gt; - whose orange hue echoes the illumos logo.&lt;/p&gt;
&lt;p&gt;The project's logo was not generated by an AI. I made it myself by hastily sticking the illumos SVG onto a coffee cup. Basic, perhaps. But authentic.&lt;/p&gt;
&lt;h4&gt;Looking Ahead&lt;/h4&gt;
&lt;p&gt;The BSD Cafe will, of course, remain my primary home. But I want to bring illumos into the Fediverse and provide a home for anyone who wishes to share their interest in this excellent OS.&lt;/p&gt;
&lt;p&gt;I will document the entire process, just &lt;a href="https://it-notes.dragas.net/2022/11/23/installing-mastodon-on-a-freebsd-jail/"&gt;as I did with Mastodon on FreeBSD&lt;/a&gt;, as it is a bit more intricate. Because in my dreams, I see Fediverse statistics showing instances spread fairly evenly across the major open-source operating systems. Because relying on a single OS, even if it's open-source, and ceasing to support the others is also a single point of failure.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Mon, 18 Aug 2025 09:04:00 +0200</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2025/08/18/introducing-the-illumos-cafe/</guid><category>illumos</category><category>community</category><category>bhyve</category><category>fediverse</category><category>data</category><category>server</category><category>ownyourdata</category><category>social</category><category>web</category><category>zfs</category></item><item><title>Make Your Own Backup System – Part 2: Forging the FreeBSD Backup Stronghold</title><link>https://it-notes.dragas.net/2025/07/29/make-your-own-backup-system-part-2-forging-the-freebsd-backup-stronghold/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/hard_disk.webp" alt="A hard disk - ready to host our backups"&gt;&lt;/p&gt;&lt;p&gt;With the &lt;a href="https://it-notes.dragas.net/2025/07/18/make-your-own-backup-system-part-1-strategy-before-scripts/"&gt;primary backup strategies and methodologies introduced&lt;/a&gt;, we've reached the point where we can get specific: the Backup Server configuration.&lt;/p&gt;
&lt;p&gt;When choosing the type of backup server to use, I tend to favor specific setups: either I trust a professional backup service provider (like Colin Percival's &lt;a href="https://www.tarsnap.com/"&gt;Tarsnap&lt;/a&gt;), or I want full control over the disks where the backups will be hosted. In both cases, for the past twenty years, my operating system of choice for backup servers has been FreeBSD. With a few rare exceptions for clients with special requests, it covers all my needs. When I require Linux-based solutions, such as the &lt;a href="https://www.proxmox.com/en/products/proxmox-backup-server/overview"&gt;Proxmox Backup Server&lt;/a&gt;, I create a VM and manage it within.&lt;/p&gt;
&lt;p&gt;I typically use both IPv4 and IPv6. For IPv4, I "play" with NAT and port forwarding. For IPv6, I tend to assign a public IPv6 address to each jail or VM, which is then filtered by the physical server's firewall. Unfortunately, every provider, server, and setup has a different approach to IPv6, making it impossible to cover them all in this article. When a provider allows for routed setups, I use this approach: &lt;a href="https://it-notes.dragas.net/2023/09/23/make-your-own-vpn-freebsd-wireguard-ipv6-and-ad-blocking-included/"&gt;Make your own VPN: FreeBSD, WireGuard, IPv6, and ad-blocking included&lt;/a&gt; - assigning a /72 to the bridge for the jails and VMs.&lt;/p&gt;
&lt;p&gt;In my opinion, FreeBSD is a perfect all-rounder for backups, thanks to its ability to completely partition services. You can separate backup services (or specific servers/clients) into different jails or even VMs. Furthermore, using ZFS greatly enhances both flexibility and the range of tools you can use.&lt;/p&gt;
&lt;p&gt;The main distinction is usually between local backup servers (physically accessible, though not always attended, and in locations deemed secure) and remote ones, such as leased external servers. I personally use a combination of both. If the services I need to back up are external, in a datacenter, and need to be quickly restorable, I prefer to always have a copy on another server in a different datacenter with good outbound connectivity. This guarantees good bandwidth for restores, which isn't always available from a local connection to the outside world. However, an internal, nearby, and accessible backup server (even a Raspberry Pi or a mini PC) ensures physical access to the data. Whenever possible, I maintain both an external and an internal copy - and they are autonomous, meaning the internal copy is &lt;em&gt;not&lt;/em&gt; a replica of the external one, but an additional, independent backup. This ensures that if a problem occurs with the external backup, it won't automatically propagate to the internal one. In any case, the backup must always be in a different datacenter from the one containing the production data. When &lt;a href="https://www.reuters.com/article/idUSKBN2B20NT/"&gt;the fire at the OVH datacenter in Strasbourg&lt;/a&gt; caused the entire complex to shut down, many people found themselves in trouble because their backups were in the same, now unreachable, location. I had a copy with another provider, in a different datacenter and country, as well as a local copy.&lt;/p&gt;
&lt;p&gt;Despite it being "just" a backup server, I almost always use some form of disk redundancy. If I have two disks, I set up a mirror. With three or more, I use RaidZ1 or RaidZ2. This is because, in my view, backups are nearly as important as production data. The inability to recover data from a backup means it's lost forever. And it happens often, very often, that someone contacts me to recover a file (or a database, etc.) days or weeks after its accidental loss or deletion. Usually, pulling out a file from a two-month-old backup generates a mix of disbelief, admiration, but above all, a sense of security in the person requesting it. And that is what our work should instill in the people we collaborate with.&lt;/p&gt;
&lt;p&gt;The backup server should be hardened. If possible, it should be protected and unreachable from the outside. My best backup servers are those accessible only via VPN, capable of pulling the data on their own. If they are on a LAN, it's even better if they are completely disconnected from the Internet.&lt;/p&gt;
&lt;p&gt;For this very reason, &lt;strong&gt;backups must always be encrypted&lt;/strong&gt;. Having a backup means having full access to the data, and the backup server is the prime target for being breached or stolen if the goal is to get your hands on that data. I've seen healthcare facilities' backup servers being targeted (in a rather trivial way, to be honest) by journalists looking for health details of important figures. It is therefore critical that the backup server be as secure as possible.&lt;/p&gt;
&lt;p&gt;Based on the type of access, I use two types of encryption:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;If the server is local&lt;/strong&gt; (especially if the ZFS pool is on external disks), I usually install FreeBSD on UFS in read-only mode, as &lt;a href="https://it-notes.dragas.net/2024/05/31/freebsd-tips-and-tricks-native-ro-rootfs/"&gt;I've described in a previous article&lt;/a&gt;, and encrypt the backup disks with &lt;a href="https://man.freebsd.org/cgi/man.cgi?geli(8)"&gt;GELI&lt;/a&gt;. This ensures that in the event of a "dirty" shutdown (more likely in unattended environments), I can reconnect to the host and then reactivate the ZFS pool. This approach makes it nearly impossible to retrieve even the pool's metadata if the disks are stolen, as GELI performs a full-device encryption. For example, an employee of a company I work with stole one of the secondary backup disks (which was located at a different, unmonitored company site) to steal information. He got nothing but a criminal complaint. With this approach, it's also not necessary to further encrypt the datasets, which avoids some issues (which I'll discuss later, in a future post).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;If the server is remote&lt;/strong&gt;, in a datacenter, I usually use ZFS native encryption, encrypting the main backup dataset (and &lt;a href="https://bastillebsd.org/"&gt;BastilleBSD&lt;/a&gt;'s, if applicable). Consequently, all child datasets containing backups will also be encrypted. In this case as well, a password will be required after a reboot to unlock those datasets, ensuring that the data cannot be extracted if control of the disks is lost.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Here is an example of how to use GELI to encrypt an entire partition and then create a ZFS pool on it (in the example, the disk is &lt;code&gt;da1&lt;/code&gt; - do not follow these commands blindly, or you will erase all content on the &lt;code&gt;da1&lt;/code&gt; device!):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# WARNING: This destroys the existing partition table on disk da1
gpart destroy -F da1

# Create a new GPT partition table
gpart create -s gpt da1

# Add a freebsd-zfs partition that spans the entire disk
# The -a 1m flag ensures proper alignment
gpart add -t freebsd-zfs -a 1m da1

# Initialize GELI encryption on the new partition (da1p1)
# We use AES-XTS with 256-bit keys and a 4k sector size
# The -b flag means &amp;quot;boot,&amp;quot; prompting for the passphrase at boot time
geli init -b -l 256 -s 4096 da1p1
# You will be prompted for a passphrase: choose a strong one and save it!

# Attach the encrypted partition. A new device /dev/da1p1.eli will be created.
# You will be prompted for the passphrase you just set
geli attach da1p1

# (Optional) Check the status of the encrypted device
geli status da1p1

# Create the ZFS pool &amp;quot;bckpool&amp;quot; on the encrypted device
# We enable zstd compression (an excellent compromise) and disable atime
zpool create -O compression=zstd -O atime=off bckpool da1p1.eli
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In this setup, the reference pool for everything related to backups will be &lt;code&gt;bckpool&lt;/code&gt; - and you'll need to keep this in mind for the next steps. Additionally, after every server reboot, you'll need to "unlock" the disk and import the pool:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Enter the passphrase when prompted
geli attach da1p1

# Import the ZFS pool, which is now visible
zpool import bckpool
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;With this method, it's not necessary to encrypt the ZFS datasets, as the underlying disk (or, more precisely, the partition containing the ZFS pool) is already encrypted.&lt;/p&gt;
&lt;p&gt;If, instead, you choose to encrypt the ZFS dataset (for example, if you install FreeBSD on the same disks that will hold the data and don't want to use a multi-partition approach), you should create a base encrypted dataset. Inside it, you can create the various backup datasets, VMs, and the BastilleBSD mountpoint. Due to property inheritance, they will all be encrypted as well.&lt;/p&gt;
&lt;p&gt;To create an encrypted dataset, a command like this will suffice:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Creates a new dataset with encryption enabled.
# keylocation=prompt will ask for a passphrase every time it's mounted.
# keyformat=passphrase specifies the key type.
zfs create -o encryption=on -o keylocation=prompt -o keyformat=passphrase zfspool/dataset
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In this case, after every reboot, you will need to load the key and mount the dataset:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs load-key zfspool/dataset
zfs mount zfspool/dataset
&lt;/code&gt;&lt;/pre&gt;
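
&lt;p&gt;To confirm that the key is loaded and the dataset is mounted:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs get encryption,keystatus,mounted zfspool/dataset
&lt;/code&gt;&lt;/pre&gt;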

&lt;p&gt;Keep in mind the setup you choose, as many of the subsequent choices and commands will depend on it.&lt;/p&gt;
&lt;h2&gt;Base System Setup&lt;/h2&gt;
&lt;p&gt;I'll install BastilleBSD - a useful tool for separating services into jails. It will be helpful for isolating our backup services:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;pkg install -y bastille
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If you used ZFS for the root filesystem, you can proceed directly with the setup. Otherwise (i.e., ZFS on other disks), you'll need to edit the &lt;code&gt;/usr/local/etc/bastille/bastille.conf&lt;/code&gt; file and specify the correct dataset on which to install the jails. Then run:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille setup
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Once the automatic setup is complete, check the &lt;code&gt;/etc/pf.conf&lt;/code&gt; file - it will be automatically configured to only accept SSH connections. Ensure the network interface is set correctly. When you activate &lt;code&gt;pf&lt;/code&gt;, you will be kicked out of the server, but you can then reconnect.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;service pf start
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Let's bootstrap a FreeBSD release for the jails - this will be useful later.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille bootstrap 14.3-RELEASE update
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, we create a local bridge. Jails and VMs can be attached to it, making them fully autonomous. Using VNET jails, for example, will allow the creation of VPNs or &lt;code&gt;tun&lt;/code&gt; interfaces inside them, simplifying potential future setups (and increasing security by using a dedicated network stack).&lt;/p&gt;
&lt;p&gt;Modify the &lt;code&gt;/etc/rc.conf&lt;/code&gt; file and add:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Add lo1 and bridge0 to the list of cloned interfaces
cloned_interfaces=&amp;quot;lo1 bridge0&amp;quot;
# Assign an IP address and netmask to the bridge
ifconfig_bridge0=&amp;quot;inet 192.168.0.1 netmask 255.255.255.0&amp;quot;
# Enable gateway functionality for routing
gateway_enable=&amp;quot;yes&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
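
&lt;p&gt;If you'd rather apply these settings without waiting for the reboot suggested below, you can clone and configure the interfaces by hand and enable forwarding immediately - a convenience only, the reboot remains the real test:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Create the interfaces listed in cloned_interfaces
service netif cloneup
# Apply the ifconfig_bridge0 settings
service netif start bridge0
# Enable IP forwarding now (gateway_enable covers the next boot)
sysctl net.inet.ip.forwarding=1
&lt;/code&gt;&lt;/pre&gt;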

&lt;p&gt;Let's also modify &lt;code&gt;/etc/pf.conf&lt;/code&gt; to allow the &lt;code&gt;192.168.0.0/24&lt;/code&gt; subnet to access the Internet via NAT. We will skip packet filtering on &lt;code&gt;bridge0&lt;/code&gt; and enable NAT. The snippet assumes &lt;code&gt;$ext_if&lt;/code&gt; is already defined at the top of the file (e.g. &lt;code&gt;ext_if=&amp;quot;igb0&amp;quot;&lt;/code&gt;). This isn't the most secure setup, but it's sufficient to get started:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;#...
# Skip PF processing on the internal bridge interface
set skip on bridge0
#...
# NAT traffic from our internal network to the outside world
nat on $ext_if from 192.168.0.0/24 to any -&amp;gt; ($ext_if:0)
#...
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To ensure the new settings are correct, it's a good idea to test with a reboot.&lt;/p&gt;
&lt;p&gt;Since I often configure &lt;a href="https://github.com/freebsd/vm-bhyve"&gt;vm-bhyve&lt;/a&gt; in my setups, I prefer to install it right away, creating the dataset that will contain the VMs and installation templates. Remember that &lt;code&gt;zroot&lt;/code&gt; is only valid if you installed the entire system on ZFS; otherwise, you'll need to change it to your own dataset:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Install required packages
pkg install vm-bhyve grub2-bhyve bhyve-firmware
# Create a dataset to store VMs
zfs create zroot/VMs
# Enable the vm service at boot
sysrc vm_enable=&amp;quot;YES&amp;quot;
# Set the directory for VMs, using the ZFS dataset
sysrc vm_dir=&amp;quot;zfs:zroot/VMs&amp;quot;
# Initialize vm-bhyve
vm init
# Copy the example templates
cp /usr/local/share/examples/vm-bhyve/* /zroot/VMs/.templates/
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;At this point, I usually enable the console via &lt;code&gt;tmux&lt;/code&gt;. This means that when a VM is launched, it won't open a VNC port by default; instead, a &lt;code&gt;tmux&lt;/code&gt; session will be attached to the VM's serial port. Let's install and configure &lt;code&gt;tmux&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;pkg install -y tmux
vm set console=tmux
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Let's also attach the switch we created (&lt;code&gt;bridge0&lt;/code&gt;) to &lt;code&gt;vm-bhyve&lt;/code&gt; so we can use it:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;vm switch create -t manual -b bridge0 public
&lt;/code&gt;&lt;/pre&gt;
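
&lt;p&gt;To double-check, list the configured switches - &lt;code&gt;public&lt;/code&gt; should appear, bound to &lt;code&gt;bridge0&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;vm switch list
&lt;/code&gt;&lt;/pre&gt;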

&lt;p&gt;Now, &lt;code&gt;vm-bhyve&lt;/code&gt; is ready.&lt;/p&gt;
&lt;p&gt;The basic infrastructure is complete. We now have:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;ZFS&lt;/strong&gt; to ensure data integrity, which will also handle redundancy, etc.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;BastilleBSD&lt;/strong&gt; to manage jails, useful for backing up Linux, NetBSD, OpenBSD, and non-ZFS FreeBSD machines.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;vm-bhyve&lt;/strong&gt; to install specific systems (like Proxmox Backup Server).&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Backup Strategies&lt;/h2&gt;
&lt;p&gt;I use various backup tools, too many to list in this article. So I'll make a broad distinction, describing how to use this server to achieve our goal: securing data.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;FreeBSD servers with ZFS&lt;/strong&gt; (hosts, VMs, jails, hypervisors, and their respective VMs), I use an extremely useful, efficient, and reliable tool: &lt;a href="https://github.com/psy0rz/zfs_autobackup"&gt;zfs-autobackup&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Linux servers (without ZFS), NetBSD, OpenBSD&lt;/strong&gt;, etc. (any non-ZFS OS), I usually use &lt;a href="https://www.borgbackup.org/"&gt;BorgBackup&lt;/a&gt;. There are other fantastic tools like &lt;a href="https://restic.net/"&gt;restic&lt;/a&gt;, &lt;a href="https://kopia.io/"&gt;Kopia&lt;/a&gt;, etc., but BorgBackup has never let me down and has served me well even on low-power devices and after incredibly complex disasters.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;Proxmox servers&lt;/strong&gt; (a solution I've used with satisfaction in production since 2013, although I've recently been migrating to FreeBSD/bhyve where possible), I use two approaches, often both at the same time: if the storage is ZFS, I use the &lt;code&gt;zfs-autobackup&lt;/code&gt; approach; in any case, the most practical solution is the Proxmox Backup Server. And the Proxmox Backup Server is one of the reasons I proposed installing &lt;code&gt;vm-bhyve&lt;/code&gt;: running it in a VM and storing the data on the FreeBSD host gives you the best of both worlds. Some time ago, I tried running it in a FreeBSD jail (via Linuxulator), but it didn't work.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;Backups using zfs-autobackup&lt;/h3&gt;
&lt;p&gt;&lt;code&gt;zfs-autobackup&lt;/code&gt; is an extremely useful and effective tool. It allows for "pull" type backups, as well as having an intermediary host that connects to both the source and destination, which is useful if you don't want direct contact between the source and destination. I won't describe the latter setup, but the documentation is clear, and I have several of them in production, ensuring that the production server and its backup server cannot communicate with each other.&lt;/p&gt;
&lt;p&gt;I usually create a dataset for each server and instruct &lt;code&gt;zfs-autobackup&lt;/code&gt; to keep that server's backups in that dataset. The snapshots taken and transferred will all be from the same instant, so as not to create a time skew (some tools of this kind snapshot a dataset, then transfer it, which can result in minutes of difference between two different datasets from the same server).&lt;/p&gt;
&lt;p&gt;I've described in detail how I perform this type of backup in a &lt;a href="https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/"&gt;previous post&lt;/a&gt;, so I suggest reading that post for reference.&lt;/p&gt;
&lt;p&gt;Let's install zfs-autobackup on the FreeBSD server:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;pkg install py311-zfs-autobackup mbuffer
&lt;/code&gt;&lt;/pre&gt;
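
&lt;p&gt;As a minimal sketch of the "pull" workflow (dataset names and hostname are placeholders - see the linked post for the real setup): tag the datasets on the source server, then run the tool on the backup server:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# On the server to be backed up: mark the datasets to include
zfs set autobackup:servera=true zroot

# On the backup server: pull the snapshots via SSH into a per-server dataset
zfs-autobackup --verbose --ssh-source root@servera.example.org servera bckpool/servera
&lt;/code&gt;&lt;/pre&gt;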

&lt;h3&gt;Backups for other servers using BorgBackup&lt;/h3&gt;
&lt;p&gt;When I don't have ZFS available or need to perform a file-based backup (all or partial), I use a different technique. &lt;code&gt;BorgBackup&lt;/code&gt; backups are primarily "push" based, meaning the client will connect to the backup server. This is not optimal or the most secure approach, as the backup server should, in theory, be hardened. Even when protecting everything via VPN, the risk remains that a compromised server could connect to its backup server and alter or delete the backups. I have seen this happen in ransomware cases (especially in the Microsoft world), and so I try to be careful to minimize this type of problem, mainly through snapshots of the backup server (an operation that will be described later).&lt;/p&gt;
&lt;p&gt;To ensure the highest possible security, I create a FreeBSD jail on the backup server for each server I need to back up. The advantage of this approach is the complete separation of all servers from each other. By using a regular user inside a jail, a compromised server that connects to its backup server would only be able to reach its own backups, as it would be confined to a user account and, even if it managed to escalate privileges, still be inside a jail.&lt;/p&gt;
&lt;p&gt;Let's say, for example, we want to back up a server called "ServerA" (great imagination, I know). We create a dedicated jail on the backup server:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Create a new VNET jail named &amp;quot;servera&amp;quot; attached to our bridge
bastille create -B servera 14.3-RELEASE 192.168.0.101/24 bridge0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;BastilleBSD will automatically set the host's gateway for the jail. In our case, this is incorrect, so we need to modify it and set the jail's gateway to &lt;code&gt;192.168.0.1&lt;/code&gt; in the &lt;code&gt;/usr/local/bastille/jails/servera/root/etc/rc.conf&lt;/code&gt; file:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# ...
defaultrouter=&amp;quot;192.168.0.1&amp;quot;
# ...
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Restart the jail and connect to it:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille restart servera
bastille console servera
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, inside the jail, we install &lt;code&gt;borgbackup&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;pkg install py311-borgbackup
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;BorgBackup doesn't run a daemon; it's launched by the remote server (which sends its data to the backup server), so it's important that the installed version is compatible with the one on the remote host.&lt;/p&gt;
&lt;p&gt;Since we'll be using SSH, let's enable it:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;service sshd enable
service sshd start
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And create a non-privileged user for this purpose:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# The 'adduser' utility provides an interactive way to create a user.
root@servera:~ # adduser
Username: servera
Full name: Server A
Uid (Leave empty for default): 
Login group [servera]: 
Login group is servera. Invite servera into other groups? []: 
Login class [default]: 
Shell (sh csh tcsh nologin) [sh]: 
Home directory [/home/servera]: 
Home directory permissions (Leave empty for default): 
Use password-based authentication? [yes]: 
Use an empty password? (yes/no) [no]: 
Use a random password? (yes/no) [no]: yes
Lock out the account after creation? [no]: 
Username    : servera
Password    : &amp;lt;random&amp;gt;
Full Name   : Server A
Uid         : 1001
Class       : 
Groups      : servera 
Home        : /home/servera
Home Mode   : 
Shell       : /bin/sh
Locked      : no
OK? (yes/no) [yes]: yes
adduser: INFO: Successfully added (servera) to the user database.
adduser: INFO: Password for (servera) is: JIkdq8Ex
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The user is created and can receive SSH connections. After setting everything up, I suggest disabling password-based login in the jail's SSH configuration, using only public key authentication.&lt;/p&gt;
&lt;p&gt;As mentioned, the biggest risk of a "push" backup is that a compromised client could access the backup server and delete or encrypt the backup history, rendering it useless.&lt;/p&gt;
&lt;p&gt;To drastically mitigate this risk, we can configure SSH to force the client to operate in a special Borg mode called &lt;strong&gt;append-only&lt;/strong&gt;. In this mode, the SSH key used by the client will only have permission to create new archives, not to read or delete old ones. However, this approach could complicate some client-side operations (like &lt;code&gt;mount&lt;/code&gt;, &lt;code&gt;prune&lt;/code&gt;, etc.), forcing them to be done on the server. For this reason, I won't describe it in this setup, "limiting" myself to taking snapshots of the repositories. It can be a very good practice, so I recommend considering it.&lt;/p&gt;
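&lt;p&gt;For completeness, here is a sketch of what the forced command could look like in the jail user's &lt;code&gt;~/.ssh/authorized_keys&lt;/code&gt; - the key itself is a placeholder:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Restrict this key to an append-only borg serve, confined to the repo path
command=&amp;quot;borg serve --append-only --restrict-to-path /home/servera/servera&amp;quot;,restrict ssh-ed25519 AAAA... backup@servera
&lt;/code&gt;&lt;/pre&gt;
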
&lt;p&gt;Let's initialize the BorgBackup repository. In this example, for simplicity, I won't set up repository encryption. If the jails are on an encrypted dataset or GELI-encrypted disks, there will still be data encryption on the disks, but there will be no protection against someone who could physically access the server while the disks are mounted. As usual, security is like an onion: every layer helps. Personally, I suggest enabling and using it ALWAYS.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Switch to the new user
su -l servera
# Initialize a new Borg repo named &amp;quot;servera&amp;quot; with no encryption (for this example)
borg init -e none servera
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The jail is ready, but it's unreachable from the outside. There are two ways to make it accessible:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Install a VPN system inside the jail itself.&lt;/strong&gt; Using tools like Zerotier or Tailscale (which don't need to expose ports) will immediately create the conditions to connect to the jail, which will remain inaccessible from the outside. As the jail is a VNET jail, we're free to choose any of the supported VPN technologies.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Expose a port on the backup server&lt;/strong&gt;, i.e., on the host, to allow external connections. Many advise against this path as they consider it less secure. It is, but sometimes we don't have the luxury of installing whatever we want on the server we're backing up.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;To expose the port, go back to the host and modify the &lt;code&gt;/etc/pf.conf&lt;/code&gt; file, creating the &lt;code&gt;rdr&lt;/code&gt; and &lt;code&gt;pass&lt;/code&gt; rules to let packets in:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# ...
# Redirect incoming traffic on port 1122 to the jail's SSH port (22)
rdr on $ext_if inet proto tcp from any to any port = 1122 -&amp;gt; 192.168.0.101 port 22
# ...
# Allow the redirected traffic. Note that rdr is applied before filtering,
# so the filter rules see the translated packet (the jail's IP, port 22)
pass in inet proto tcp from any to 192.168.0.101 port 22 flags S/SA keep state
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Reload the &lt;code&gt;pf&lt;/code&gt; configuration:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;service pf reload
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The jail will now be reachable on the server's public IP, on port 1122. Obviously, this port number is for illustrative purposes, and I used &lt;code&gt;from any&lt;/code&gt;, but for better security, you should replace &lt;code&gt;any&lt;/code&gt; with the IP address of the server that will be connecting to perform the backup.&lt;/p&gt;
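&lt;p&gt;From the client's side (ServerA, in this example), a backup run against this setup could look like the following sketch - the repository URL uses the forwarded port, and the paths to back up are placeholders:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# On ServerA: create a new archive named after the host and timestamp
borg create --stats --compression zstd \
    ssh://servera@backupServerIP:1122/~/servera::'{hostname}-{now}' \
    /etc /usr/local/etc /var/db
&lt;/code&gt;&lt;/pre&gt;
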
&lt;p&gt;By repeating this process for each server and creating a separate jail for each, you can have isolated jails in separate datasets with their backups, potentially setting space limits using ZFS quotas.&lt;/p&gt;
&lt;p&gt;It's important to remember that backing up a live filesystem (i.e., without a snapshot or dumps) carries a high risk of producing a backup that cannot be restored completely. Databases hate this approach: their files change while being copied and tend to end up corrupted. Of course, it depends on the nature of the data (a backup of a static website will have no issues, but a WordPress database probably will), but it's crucial to plan a technique to snapshot the filesystem before proceeding. For example, I have already written about how to create snapshots on FreeBSD with UFS in a previous article: &lt;a href="https://it-notes.dragas.net/2024/06/04/freebsd-tips-and-tricks-creating-snapshots-with-ufs/"&gt;FreeBSD tips and tricks: creating snapshots with UFS&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I will cover other operating systems in a future, dedicated post.&lt;/p&gt;
&lt;h3&gt;Proxmox Backup Server in a Dedicated VM&lt;/h3&gt;
&lt;p&gt;Starting with version 4.0 (which is still in beta at the time of this writing), Proxmox Backup Server (PBS) supports storing its data in an S3 bucket. This is excellent news as it decouples the server from the data. There are great open-source S3 implementations, like &lt;a href="https://min.io/"&gt;Minio&lt;/a&gt; or &lt;a href="https://github.com/seaweedfs/seaweedfs"&gt;SeaweedFS&lt;/a&gt;, which allow for clustering, replication, etc. In this "simple" case, we will install Proxmox Backup Server in a small VM, while for the data, we'll install Minio in a native FreeBSD jail. The advantage is undeniable: the VM will only serve as an "intermediary", but the data will rest directly on the FreeBSD host's dataset, natively. It will also be possible to specify other external endpoints, other repositories, etc.&lt;/p&gt;
&lt;p&gt;As a philosophy, I tend not to use external providers unless for specific needs, so installing Minio in a jail is a perfect solution to manage this situation.&lt;/p&gt;
&lt;p&gt;Let's install PBS by downloading the ISO from their website (https://enterprise.proxmox.com/iso/) - at this moment, the version that supports this setup is 4.0 Beta.&lt;/p&gt;
&lt;p&gt;The directory to download to is the &lt;code&gt;vm-bhyve&lt;/code&gt; ISOs directory. It's not strictly necessary, but it's useful for not "losing" it somewhere. So, go to the directory and download it:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;cd /zroot/VMs/.iso
fetch https://enterprise.proxmox.com/iso/proxmox-backup-server_4.0-BETA-1.iso
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now let's create a VM with &lt;code&gt;vm-bhyve&lt;/code&gt;. We can start from the Debian template, but we'll make some modifications to optimize performance. In this example, I'm giving it 30 GB of disk space, 2 GB of RAM, and 2 cores.&lt;/p&gt;
&lt;p&gt;If you want to store all backups inside the VM, you'll need to size the virtual disk correctly (or create and attach another one). In this case, I will focus on the "clean" VM that will store its data on a dedicated jail with Minio.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;vm create -t debian -s 30G -m 2G -c 2 pbs
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Once the empty VM is created, let's modify its options:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;vm configure pbs
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;We will modify the VM to be UEFI and to use the NVME disk driver - bhyve &lt;a href="https://it-notes.dragas.net/2024/06/10/proxmox-vs-freebsd-which-virtualization-host-performs-better/"&gt;performs significantly better on NVME than virtio, as previously tested&lt;/a&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;loader=&amp;quot;uefi&amp;quot;
cpu=&amp;quot;2&amp;quot;
memory=&amp;quot;2G&amp;quot;
network0_type=&amp;quot;virtio-net&amp;quot;
network0_switch=&amp;quot;public&amp;quot;
disk0_type=&amp;quot;nvme&amp;quot;
disk0_name=&amp;quot;disk0.img&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Fortunately, the Proxmox team has provided for the installation of the Backup Server on devices without a graphical interface, so the boot menu will allow installation via serial console. Let's launch the installation and connect to the virtual serial console:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;cd /zroot/VMs/.iso
vm install pbs proxmox-backup-server_4.0-BETA-1.iso
vm console pbs
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Select the installation via &lt;strong&gt;Terminal UI (serial console)&lt;/strong&gt; and proceed normally as if it were a physical host, assigning an IPv4 address from the &lt;code&gt;192.168.0.x&lt;/code&gt; range (in this example, I'll use &lt;code&gt;192.168.0.3&lt;/code&gt;).&lt;/p&gt;
&lt;p&gt;This way, the Proxmox Backup Server will run in a VM, with the ability to take snapshots before updates, etc., without any worries.&lt;/p&gt;
&lt;p&gt;Once the installation is complete, PBS will reboot and listen on port 8007 of its IP. Again, as with the jails, we have two options: install a VPN system within the VM itself (thus exposing it automatically only on that VPN - generally a more secure operation) or expose port 8007 on the server's public IP.&lt;/p&gt;
&lt;p&gt;In the latter case, add the relevant lines to the &lt;code&gt;/etc/pf.conf&lt;/code&gt; file on the FreeBSD backup server:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# ...
# Redirect incoming traffic on port 8007 to the PBS VM's web interface
rdr on $ext_if inet proto tcp from any to any port = 8007 -&amp;gt; 192.168.0.3 port 8007
# ...
# Allow that traffic to pass
pass in inet proto tcp from any to any port 8007 flags S/SA keep state
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Reload the &lt;code&gt;pf&lt;/code&gt; configuration:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;service pf reload
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The PBS VM configuration is complete. If you chose to use the PBS's internal disk as a repository, no further operations are necessary (other than the normal repository creation, etc., within PBS).&lt;/p&gt;
&lt;p&gt;In this case, however, we will use a different approach.&lt;/p&gt;
&lt;h4&gt;Creating a Minio Jail as a Data Repository for PBS&lt;/h4&gt;
&lt;p&gt;This approach, in my opinion, has a number of important advantages. The first is that Minio will run in a dedicated jail on the host, at full performance, and will store the data directly on the physical ZFS datapool, thus removing any other layer in between. This jail could potentially be moved to other hosts (by connecting PBS and the jail via VPN or public IP), made redundant thanks to all of Minio's features, etc. Another solution I am successfully testing (in other setups) is SeaweedFS.&lt;/p&gt;
&lt;p&gt;Let's create a dedicated jail with Minio and put it on the bridge, so that PBS can access it on the LAN.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille create -B minio 14.3-RELEASE 192.168.0.11/24 bridge0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If not configured directly, BastilleBSD will use the host's gateway for the jail, which is incorrect in this case. So let's go modify it and restart the jail. Enter the jail with:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille console minio
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And modify the &lt;code&gt;/etc/rc.conf&lt;/code&gt; file to have the correct gateway (following the example addresses):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# ...
ifconfig_vnet0=&amp;quot;inet 192.168.0.11/24&amp;quot;
defaultrouter=&amp;quot;192.168.0.1&amp;quot;
# ...
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Exit the jail and restart it:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille restart minio
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Enter the jail and install Minio:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;bastille console minio
pkg install -y minio
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Minio is already able to start, but PBS, even on the LAN, wants an encrypted connection. Fortunately, there's a handy tool that can generate the certificates for us:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Download the certgen tool
fetch https://github.com/minio/certgen/releases/latest/download/certgen-freebsd-amd64

# Make it executable and run it for our jail's IP
chmod a+rx certgen-freebsd-amd64
./certgen-freebsd-amd64  -host &amp;quot;192.168.0.11&amp;quot;

# Create the necessary directories and set permissions
mkdir -p /usr/local/etc/minio/certs
cp private.key public.crt /usr/local/etc/minio/certs/
chown -R minio:minio /usr/local/etc/minio/certs/
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Let's view the certificate's fingerprint. Since it's self-signed, we'll need it for PBS later. For security reasons, PBS will ask for the fingerprint of non-directly verifiable certificates. Run the following command and take note of the result:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;openssl x509 -in /usr/local/etc/minio/certs/public.crt -noout -fingerprint -sha256
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;At this point, enable and configure Minio in &lt;code&gt;/etc/rc.conf&lt;/code&gt;. 
&lt;strong&gt;WARNING&lt;/strong&gt;: The username and password (access key and secret) used in this example are insecure and for testing purposes only. It is strongly recommended to use different values:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Enable Minio service
minio_enable=&amp;quot;YES&amp;quot;
# Set the address for the Minio console
minio_console_address=&amp;quot;:8751&amp;quot;
# Set the root user and password as environment variables
minio_env=&amp;quot;MINIO_ROOT_USER=testaccess MINIO_ROOT_PASSWORD=testsecret&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Start Minio:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;service minio start
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If everything went correctly, Minio is now running (with its certificates) and ready to receive connections.&lt;/p&gt;
&lt;p&gt;It's now time to create the bucket(s) that PBS will use. There are several ways to do this, but to test that everything is working and to configure PBS, I suggest connecting via an SSH tunnel.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Create an SSH tunnel from your local machine to the backup server
# Port 8007 is forwarded to the PBS web UI
# Port 8751 is forwarded to the Minio console
ssh user@backupServerIP -L8007:192.168.0.3:8007 -L8751:192.168.0.11:8751
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This way, we'll create a tunnel between the FreeBSD backup server and our workstation, mapping &lt;code&gt;127.0.0.1:8007&lt;/code&gt; to &lt;code&gt;192.168.0.3:8007&lt;/code&gt; (the PBS web interface) and &lt;code&gt;127.0.0.1:8751&lt;/code&gt; to &lt;code&gt;192.168.0.11:8751&lt;/code&gt; (the Minio console port).&lt;/p&gt;
&lt;p&gt;Now, connect to &lt;code&gt;https://127.0.0.1:8751&lt;/code&gt;, enter the credentials specified in &lt;code&gt;/etc/rc.conf&lt;/code&gt;, and create a bucket.&lt;/p&gt;
&lt;p&gt;Once the bucket is created, you can configure PBS to use it. Connect to PBS via &lt;code&gt;https://127.0.0.1:8007&lt;/code&gt; and go to &lt;strong&gt;S3 Endpoints&lt;/strong&gt;. Set a name, use &lt;code&gt;192.168.0.11&lt;/code&gt; as the IP and &lt;code&gt;9000&lt;/code&gt; as the port, enter the access and secret keys, and the certificate fingerprint we generated earlier. &lt;strong&gt;Enable "Path Style"&lt;/strong&gt; or it will not work.&lt;/p&gt;
&lt;p&gt;Then go to &lt;strong&gt;Datastores&lt;/strong&gt; and add it, as you would for any other S3 datastore, by specifying the created bucket and a local directory where the system will keep its cache.&lt;/p&gt;
&lt;p&gt;If everything was set up correctly, PBS will create its structure in the bucket, and from that moment on, you can use it. Always keep in mind that this is still a "technology preview", so there may be issues, but from my tests, it is sufficiently reliable.&lt;/p&gt;
&lt;h3&gt;Taking Local Snapshots of Backups&lt;/h3&gt;
&lt;p&gt;One of the most common techniques used in ransomware attacks is to also delete or encrypt backups. They often use automated methods, relying on the fact that many (too many!) consider a "backup" to be a simple copy of files to a network share. However, it's not impossible that, in specific cases, they might compromise the machine and connect to the backup server. This is nearly impossible with a "pull" type backup (like the one managed by &lt;code&gt;zfs-autobackup&lt;/code&gt;) but is still possible with the "push" approach, which involves using BorgBackup or similar tools.&lt;/p&gt;
&lt;p&gt;This happened to one of my clients once - in that case, the problem originated internally, from an employee who wanted to cover up his mistake, inadvertently creating a disaster - but that will be material for another post.&lt;/p&gt;
&lt;p&gt;Fortunately, the client had a system that solved the problem: thanks to ZFS, we can have local snapshots on the backup server, which are invisible and inaccessible to the production server. Since we have already installed &lt;code&gt;zfs-autobackup&lt;/code&gt;, it's easy to use it for this purpose as well. I've already talked about this in a &lt;a href="https://it-notes.dragas.net/2024/08/21/automating-zfs-snapshots-for-peace-of-mind/"&gt;previous article&lt;/a&gt; and won't rewrite the steps here. Just consult that article, keeping in mind that in this case, it's not advisable to snapshot all the datasets on the backup server (the space would grow exponentially), but only those at risk. In the cases analyzed in this post, this applies only to the &lt;code&gt;push&lt;/code&gt; part, as PBS will also be accessible only from the Proxmox servers and not from the VMs they contain. If, in this case too, you don't trust those who manage the Proxmox servers, just set up snapshots for the Minio jail as well.&lt;/p&gt;
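&lt;p&gt;As a sketch of that idea (names are placeholders): when &lt;code&gt;zfs-autobackup&lt;/code&gt; is invoked without a target dataset, it only creates and thins local snapshots, which is exactly what we want here:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Tag only the at-risk datasets (e.g. the Borg jail repositories)
zfs set autobackup:bcksnap=true bckpool/bastille/jails/servera

# Snapshot-only run (no target given): take local snapshots and thin old ones
zfs-autobackup --verbose --keep-source=24,1d2w bcksnap
&lt;/code&gt;&lt;/pre&gt;
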
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;This long post aims to analyze, in a general way, how I believe one can manage reasonably secure backups of their data. Obviously, there are many variables, additional precautions, possible optimizations, hardening, etc., that must be studied on a case-by-case basis. There are old rules, new rules, old and new philosophies. Recently, many people who have embraced the cloud have often stopped thinking about backups, only to realize it when something happens and the data has, indeed, vanished... into the clouds.&lt;/p&gt;
&lt;p&gt;In this post, I have covered the setup of the backup server in general terms, which demonstrates how FreeBSD, thanks to its features, can be considered an ideal platform for this type of task.&lt;/p&gt;
&lt;p&gt;In the next articles in this series, I will examine the client side, i.e., how to structure them for a sufficiently reliable backup, and how to monitor everything - because I've seen this too: people resting easy because the backup was supposedly running every night, but in fact, the backup had been failing every night for more than 4 years.&lt;/p&gt;
&lt;p&gt;Stay Tuned and stay...backupped!&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Tue, 29 Jul 2025 08:00:00 +0200</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2025/07/29/make-your-own-backup-system-part-2-forging-the-freebsd-backup-stronghold/</guid><category>backup</category><category>freebsd</category><category>jail</category><category>bhyve</category><category>borg</category><category>data</category><category>server</category><category>vps</category><category>filesystems</category><category>proxmox</category><category>snapshots</category><category>sysadmin</category><category>virtualization</category><category>ownyourdata</category><category>zfs</category><category>series</category><category>tutorial</category></item><item><title>OSDay 2025 - Why Choose to Use the BSDs in 2025</title><link>https://it-notes.dragas.net/2025/03/23/osday-2025-why-choose-bsd-in-2025/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/nana_bianca.avif" alt="Photo: Nana Bianca - Firenze"&gt;&lt;/p&gt;&lt;p&gt;&lt;em&gt;This is the text underlying my presentation at &lt;a href="https://osday.dev"&gt;OSDay 2025&lt;/a&gt;, held on 21 March 2025 in Florence, Italy. There was limited time, so I couldn't go into much detail and had to keep things more general and structured than usual. You can watch &lt;a href="https://www.youtube.com/live/_IdH5YTBAGs?t=24936s"&gt;the video of my talk on YouTube&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The slides can be downloaded &lt;a href="/slides/osday_2025.pdf"&gt;here&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Happy reading!&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;OSDay Florence - 21 March 2025 - &lt;a href="https://osday.dev/schedule/9688a15e-e9ed-4803-8ac9-114400446bf4"&gt;Why Choose to Use the BSDs in 2025&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;"I'm Stefano Marinelli, &lt;a href="https://it-notes.dragas.net/2024/10/03/i-solve-problems-eurobsdcon/"&gt;I solve problems&lt;/a&gt;."&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I'm the founder and Barista of the &lt;a href="https://bsd.cafe"&gt;BSD Cafe&lt;/a&gt;, a community of *BSD enthusiasts.&lt;/p&gt;
&lt;p&gt;I work in my company, called &lt;a href="https://prodottoinrete.it"&gt;Prodottoinrete&lt;/a&gt; - a container of ideas and solutions.&lt;/p&gt;
&lt;p&gt;I'm passionate about technology and computing, and I've made my passion my profession. Every morning, when I sit in front of the computer, a new world opens up for me to explore.&lt;/p&gt;
&lt;p&gt;I've been a Linux user since 1996, before I turned 17. Back then, I used Fidonet and would read about alternative operating systems. I experimented with Linux distributions from CDs, and by 1997, Linux became my everyday system. It was only in 2002 that I began exploring BSD systems, largely thanks to FreeBSD's fantastic handbook.&lt;/p&gt;
&lt;p&gt;The relationship we had with Open Source 20-30 years ago was fundamentally different than today. Back then, embracing Open Source meant thinking differently. It meant embracing freedom. We chose Linux and the BSDs when Windows and commercial Unix systems dominated the market. Not because they were simple or free (as in free beer), but for freedom from impositions - both technological and ideological.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;I solve problems.&lt;/strong&gt; And to solve problems effectively, we need to recognize when the landscape has changed.&lt;/p&gt;
&lt;p&gt;The reality today is that while we won that war - Open Source is everywhere - we're facing a new challenge. The "mainstream" Open Source world is creating monocultures. The focus has shifted from technologies to specific tools. We're seeing innovation for novelty's sake, not problem-solving.&lt;/p&gt;
&lt;p&gt;This shift has profound implications. In a world dominated by cyber threats, where everything is connected and we completely depend on technology, the value of stability has been lost. By stability, I don't just mean that a system doesn't crash. I mean continuity over time, upgradeability, and system visibility.&lt;/p&gt;
&lt;p&gt;Instead, the industry seems obsessed with the hype cycle. "New" is prioritized over secure and stable. The mantra has become:
- "It will be fixed in the next version"
- "We need automatic restarts when it crashes"
- "Do we need software that crashes less? We have systemd and Kubernetes to restart crashed workloads!"
- "We need moooarrr powaaaaaaar!!!!"&lt;/p&gt;
&lt;p&gt;Let me give you a concrete example. A program written in Rust should be memory safe - that's one of the main selling points of the language. But if that program uses unsafe functions and segfaults, what advantage does it offer over a mature C implementation? Stability matters more than the implementation language.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;I solve problems.&lt;/strong&gt; And creating a monoculture does not solve problems - it creates new ones.&lt;/p&gt;
&lt;p&gt;Yes, Linux, Docker, and Kubernetes are better than closed source solutions. But when everyone uses the same tools, freedom dies. We use them because "everyone does" rather than because they're the best tool for our specific needs.&lt;/p&gt;
&lt;p&gt;If we had only used what everyone else used, we wouldn't have Linux or the BSDs today. There would be no LibreOffice, no Nextcloud. We'd just have Windows variations and expensive Unix systems. We'd be bound by licenses and vendors, stuck with closed solutions.&lt;/p&gt;
&lt;p&gt;This is where the BSDs offer a compelling alternative: "Be free and evaluate alternatives. Always."&lt;/p&gt;
&lt;p&gt;For those who don't know, the original BSD started in the 1970s (before Linux was conceived). Minix was created as an educational OS because it was believed that BSD, mature and professional, would be the Open Source OS that would dominate the market. A legal case stalled development and scared adopters, but in 1993, NetBSD and FreeBSD emerged. OpenBSD forked from NetBSD later, then DragonflyBSD from FreeBSD.&lt;/p&gt;
&lt;p&gt;As Linus Torvalds said in 1993, "If 386BSD had been available when I started on Linux, Linux would probably never had happened."&lt;/p&gt;
&lt;p&gt;What makes the BSDs special is their philosophy:
- Kernel and userland developed by same teams
- Consistency in tools and updates
- Excellent documentation - especially OpenBSD, where insufficient docs are considered a bug
- Man pages contain virtually everything
- Evolution, not Revolution&lt;/p&gt;
&lt;p&gt;Let me briefly introduce the main BSD variants that I work with daily:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;FreeBSD&lt;/strong&gt; is a generalist system. It focuses on stability and performance - with HardenedBSD as a security-enhanced fork. It has native ZFS, Boot Environments, and complete separation between OS and packages. It's had container support via jails since 2000 - which predates Linux cgroups by a decade! It offers bhyve virtualization (more efficient than KVM). OPNsense and pfSense are based on FreeBSD, as pf is a powerful firewall. It's used by Netflix for streaming video delivery and forms the foundation for PlayStation consoles. MacOS and iOS also contain some FreeBSD code.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;OpenBSD&lt;/strong&gt; focuses on security and code correctness. Its code is constantly audited and simplified - less is more. The team believes "The more complex the code, the less maintainable." It has security mechanisms like pledge() and unveil(). OpenSSH (and many other nice things) originated and are developed here. Development is driven by team priorities, not user requests. It's ideal for routers, firewalls, and security-critical systems.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;NetBSD&lt;/strong&gt; lives by the motto "Of course it runs NetBSD!" Its focus is on correctness, portability, and proper implementation. It supports 50+ architectures. Development centers on compatibility, which necessitates code quality. It must function on decades-old hardware. It's ideal for systems that require stability without the need for continuous updates, like embedded devices.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;I solve problems.&lt;/strong&gt; And in my experience, the BSDs have consistently proven to be excellent problem-solvers. Here are some real-world benefits I've experienced:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Better stability and security&lt;/li&gt;
&lt;li&gt;Simplified administration - upgrades won't destroy your system&lt;/li&gt;
&lt;li&gt;&lt;a href="https://it-notes.dragas.net/2024/07/04/from-cloud-chaos-to-freebsd-efficiency/"&gt;Less vulnerability to common attacks&lt;/a&gt; - "We don't need this patch, you're running OpenBSD and it's been fixed 20 years ago"&lt;/li&gt;
&lt;li&gt;Network interfaces maintain consistent names - ix0 will remain ix0, not renaming from enx3e3300c9e14e to enp10s0f0np0&lt;/li&gt;
&lt;li&gt;FreeBSD shows lower system load compared to Linux&lt;/li&gt;
&lt;li&gt;FreeBSD handles I/O pressure better - &lt;a href="https://it-notes.dragas.net/2024/06/10/proxmox-vs-freebsd-which-virtualization-host-performs-better/"&gt;on the same hardware, I've seen 70% time reduction&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;FreeBSD delivers improved end-user experience/responsiveness&lt;/li&gt;
&lt;li&gt;NetBSD provides the comfort of "Don't worry - your platform will be supported for the foreseeable future"&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;So why choose BSD in 2025? I believe there are several compelling reasons:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Security in an increasingly hostile environment&lt;/li&gt;
&lt;li&gt;Stability in a world obsessed with novelty&lt;/li&gt;
&lt;li&gt;Performance without unnecessary complexity&lt;/li&gt;
&lt;li&gt;Freedom from the mainstream monoculture&lt;/li&gt;
&lt;li&gt;Systems designed with coherent philosophy&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Don't be afraid to try BSD systems - despite the Beastie mascot, they don't hurt and you'll appreciate them!&lt;/p&gt;
&lt;p&gt;See you at &lt;a href="https://bsd.cafe"&gt;BSD Cafe&lt;/a&gt;!&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Sun, 23 Mar 2025 10:30:00 +0100</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2025/03/23/osday-2025-why-choose-bsd-in-2025/</guid><category>osday</category><category>freebsd</category><category>netbsd</category><category>openbsd</category><category>zfs</category><category>server</category><category>ownyourdata</category></item><item><title>Managing ZFS Full Pool Issues with Reserved Space</title><link>https://it-notes.dragas.net/2024/11/28/managing-zfs-full-pool-issues-with-reserved-space/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/hard_disk.webp" alt="Managing ZFS Full Pool Issues with Reserved Space"&gt;&lt;/p&gt;&lt;p&gt;Yesterday morning, I received a panicked call from a developer:&lt;br /&gt;
"I accidentally filled up the storage, and now I can't perform any operations! My ZFS pool is full!"&lt;/p&gt;
&lt;p&gt;I immediately reassured them because I had anticipated this kind of issue. One of the things I almost always do when managing ZFS file systems is to reserve space in a specially created dataset.&lt;/p&gt;
&lt;p&gt;This is because ZFS, like all CoW (Copy-on-Write) file systems, can find itself unable to free up space when completely full. By using reserved space, I can always free it up and delete other data, restoring the system to normal operations.&lt;/p&gt;
&lt;p&gt;To reserve space, simply create a dataset and assign it a reserved size. Of course, this dataset should not be used for anything else; otherwise, the entire purpose would be defeated.&lt;/p&gt;
&lt;p&gt;To create it and reserve space, you only need two simple commands. For example:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs create zroot/reserved
zfs set reservation=5G zroot/reserved
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This creates the dataset and assigns it 5 GB of reserved space.&lt;/p&gt;
&lt;p&gt;Here’s the situation before the operation:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs list zroot
NAME    USED  AVAIL  REFER  MOUNTPOINT
zroot  3.02G   109G    96K  /zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And here’s the situation after:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs list zroot
NAME    USED  AVAIL  REFER  MOUNTPOINT
zroot  8.02G   104G    96K  /zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;As you can see, the 5 GB are removed from the available space and marked as used, but they are actually empty.&lt;/p&gt;
&lt;p&gt;In case of a full file system, you can delete this dataset (or reduce its size) to return to normal file system operation.&lt;/p&gt;
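&lt;p&gt;In practice, recovery is a single command - either shrink the reservation or remove the safety net entirely:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Shrink the reservation to free space immediately...
zfs set reservation=1G zroot/reserved
# ...or remove the reserved dataset altogether
zfs destroy zroot/reserved
&lt;/code&gt;&lt;/pre&gt;
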
&lt;p&gt;Even with this technique, I still recommend not filling ZFS pools beyond 80% of their capacity, as performance degrades significantly past that point.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Thu, 28 Nov 2024 20:25:00 +0100</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2024/11/28/managing-zfs-full-pool-issues-with-reserved-space/</guid><category>zfs</category><category>freebsd</category><category>linux</category><category>data</category><category>filesystems</category><category>recovery</category><category>tipsandtricks</category><category>tutorial</category><category>series</category></item><item><title>Migrating Windows VMs from Proxmox BIOS/KVM to FreeBSD UEFI/bhyve</title><link>https://it-notes.dragas.net/2024/11/15/migrating-windows-vms-from-bios-kvm-to-uefi-bhyve/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/datacenter.webp" alt="Migrating Windows VMs from Proxmox BIOS/KVM to FreeBSD UEFI/bhyve"&gt;&lt;/p&gt;&lt;p&gt;A client of mine has several Windows Server VMs, which I had not migrated to FreeBSD/bhyve until a few weeks ago. These VMs were originally installed with the traditional BIOS boot mode, not UEFI, on Proxmox. Fortunately, their virtual disks are on ZFS, which allowed me to test and achieve the final result in just a few steps.&lt;/p&gt;
&lt;p&gt;This is because Windows VMs (server or otherwise) often installed on KVM (Proxmox, etc.), especially older ones, are non-UEFI, using the traditional BIOS boot mode. bhyve doesn’t support this setup, but Windows allows changing the boot mode, and I could perform the migration directly on the target FreeBSD server.&lt;/p&gt;
&lt;h2&gt;Setting Up the Network&lt;/h2&gt;
&lt;p&gt;Before starting, as usual in my setups, I manually created the bridge and configured a few pf rules for outbound NAT, which these servers need. Let’s begin with the modifications to &lt;code&gt;/etc/rc.conf&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;cloned_interfaces=&amp;quot;bridge0&amp;quot;
ifconfig_bridge0=&amp;quot;inet 192.168.33.1 netmask 255.255.255.0&amp;quot;
gateway_enable=&amp;quot;YES&amp;quot;
pf_enable=&amp;quot;YES&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Next, for the &lt;code&gt;pf.conf&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;ext_if=&amp;quot;igb0&amp;quot;

set block-policy return
set skip on bridge0
set skip on lo0

# Allow the VMs to connect to the outside world
nat on $ext_if from {192.168.33.0/24} to any -&amp;gt; ($ext_if)

block in all
antispoof for $ext_if inet
pass in inet proto icmp
pass out quick keep state
pass in inet proto tcp from any to any port ssh flags S/SA keep state
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To ensure everything is working as expected, I recommend rebooting the server. It’s not strictly necessary since you could reload network configurations and start pf without a reboot, but I prefer a few extra reboots in these cases. Why? Because system administrators are often afraid to reboot production servers, unsure if they will come back up smoothly. A configuration error could prevent a successful reboot, and this would be a significant issue if the server is already in production. I’ve faced this situation myself—many times! To mitigate this, I reboot frequently, especially after changing network configurations, to ensure everything comes up correctly on reboot.&lt;/p&gt;
&lt;h2&gt;Installing and Configuring vm-bhyve&lt;/h2&gt;
&lt;p&gt;Next, I installed &lt;code&gt;vm-bhyve&lt;/code&gt; on the target FreeBSD host and configured it:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;pkg install vm-bhyve-devel bhyve-firmware
zfs create zroot/VMs
sysrc vm_enable=&amp;quot;YES&amp;quot;
sysrc vm_dir=&amp;quot;zfs:zroot/VMs&amp;quot;
vm init
cp /usr/local/share/examples/vm-bhyve/* /zroot/VMs/.templates/
vm switch create -t manual -b bridge0 public
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Usually, I enable the serial console on tmux, but in this case, I’ll skip that since Windows VMs need a graphical console. If the FreeBSD server is not on your local network, I suggest connecting via SSH with port forwarding for the necessary VNC ports (e.g., &lt;code&gt;-L5900:127.0.0.1:5900&lt;/code&gt;) to connect to bhyve via SSH tunnel. Never expose the VNC port directly!&lt;/p&gt;
&lt;h2&gt;Creating the VM Template&lt;/h2&gt;
&lt;p&gt;Now, I created an “empty” VM using the “Windows” template with &lt;code&gt;vm-bhyve&lt;/code&gt; and adjusted the configuration afterward:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;vm create -t windows -s 1G -m 16G -c 4 vm115
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I created a virtual disk of 1 GB because it will be replaced by the dataset sent from the current production server, so it’s a fictitious size. At that point, I deleted the &lt;code&gt;disk0&lt;/code&gt; image file:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;rm /zroot/VMs/vm115/disk0.img
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Modifying the VM Configuration&lt;/h2&gt;
&lt;p&gt;Next, I modified the VM configuration as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Changed the disk configuration to use a zvol, simplifying the send-receive operation from the original Proxmox host.&lt;/li&gt;
&lt;li&gt;In some cases, older Windows installations lack an &lt;code&gt;nvme&lt;/code&gt; driver, so after the conversion Windows may fail to boot and display a BSOD. If this happens, use &lt;code&gt;ahci-hd&lt;/code&gt; instead.&lt;/li&gt;
&lt;li&gt;Changed the network adapter driver to &lt;code&gt;virtio-net&lt;/code&gt; (since it was already installed on the Proxmox VM).&lt;/li&gt;
&lt;li&gt;Set the network adapter MAC address to match the original VM.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;After running &lt;code&gt;vm configure vm115&lt;/code&gt;, the configuration should look something like this:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;loader=&amp;quot;uefi&amp;quot;
graphics=&amp;quot;yes&amp;quot;
xhci_mouse=&amp;quot;yes&amp;quot;
cpu=&amp;quot;4&amp;quot;
memory=&amp;quot;16G&amp;quot;

ahci_device_limit=&amp;quot;8&amp;quot;

network0_type=&amp;quot;virtio-net&amp;quot;
network0_switch=&amp;quot;public&amp;quot;

disk0_type=&amp;quot;nvme&amp;quot;
disk0_name=&amp;quot;disk0&amp;quot;
disk0_dev=&amp;quot;sparse-zvol&amp;quot;

utctime=&amp;quot;no&amp;quot;
uuid=&amp;quot;myUUID&amp;quot;
network0_mac=&amp;quot;theProxmoxVirtualMachineMacAddress&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Copying the Virtual Disk&lt;/h2&gt;
&lt;p&gt;I took a snapshot of the virtual disks of the VMs on the original server and copied them to the target server:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;zfs snapshot vmpool/vm-115-disk-0@toSend01
zfs send -Rcv vmpool/vm-115-disk-0@toSend01 | mbuffer -m 128M | ssh user@FreebsdHostIP &amp;quot;zfs receive -F zroot/VMs/vm115/disk0&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The VM’s disk will be transferred to the target host.&lt;/p&gt;
&lt;h2&gt;Converting BIOS to UEFI&lt;/h2&gt;
&lt;p&gt;At this point, bhyve expects a UEFI VM, but the VM we just copied is BIOS-based. In other words, it won’t boot as is.&lt;/p&gt;
&lt;p&gt;Luckily, Microsoft provides a tool to convert partitions and change the server from BIOS to UEFI. Of course, I would only use such a tool on a snapshot, and while I could do it directly on the source server, I prefer minimizing downtime and avoiding touching the original server. I want to perform the conversion directly on the FreeBSD host, which also makes disaster recovery easier, allowing me to manage everything on FreeBSD without first restoring the VM on a Proxmox server.&lt;/p&gt;
&lt;p&gt;The first step is to download a Windows installation ISO (in my case, Windows Server). I downloaded the ISO from here: &lt;a href="https://www.microsoft.com/en-us/evalcenter/download-windows-server-2022"&gt;Windows Server Evaluation Center&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Once ready, launch the ISO as if you were installing Windows on the VM:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;vm install vm115 SERVER_EVAL_x64FRE_en-us.iso
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Running the Windows Repair Tool&lt;/h2&gt;
&lt;p&gt;Now the VM will boot. Connect via VNC (if using VNC tunneling, connect to &lt;code&gt;127.0.0.1:5900&lt;/code&gt;) to proceed. The ISO will prompt you to press a key to boot from the CDROM—go ahead and do that.&lt;/p&gt;
&lt;p&gt;Proceed as if you were repairing your installation ("Repair your computer"), then select Troubleshoot -&amp;gt; Command Prompt.&lt;/p&gt;
&lt;p&gt;Once at the prompt, run a validation check first (to avoid making changes if the disk has any unusual configurations):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;mbr2gpt /validate /disk:0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If the validation is successful, proceed with the actual conversion:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;mbr2gpt /convert /disk:0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Windows will convert the partition and indicate success at the end.&lt;/p&gt;
&lt;h2&gt;Booting the VM&lt;/h2&gt;
&lt;p&gt;Close the command prompt and shut down the VM. Everything should be ready now—easy peasy.&lt;/p&gt;
&lt;p&gt;Boot the VM with:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;vm start vm115
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Connecting via VNC to the VM will bring up the Windows boot screen. In case of a BSOD, change the &lt;code&gt;nvme&lt;/code&gt; driver to &lt;code&gt;ahci-hd&lt;/code&gt; and it should work. If the snapshot was taken while the VM was powered on, it may perform a disk check. If everything went as planned, you should see the Windows login screen.&lt;/p&gt;
&lt;p&gt;Additionally, Windows will likely require reactivation of the VM, as it will detect hardware changes.&lt;/p&gt;
&lt;h2&gt;Final Considerations&lt;/h2&gt;
&lt;p&gt;At this point, you need to decide whether to continue with this VM or resynchronize it with the original (e.g., if the contents have changed). In the first case, no further action is needed. In the second case, the proper procedure would be to power off the original VM, create a new snapshot, roll back the snapshot on the FreeBSD host, transfer it incrementally—&lt;a href="https://it-notes.dragas.net/2024/10/21/from-proxmox-to-freebsd-story-of-a-migration/"&gt;a process I’ve described in a previous post&lt;/a&gt;—and then redo the conversion.&lt;/p&gt;
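&lt;p&gt;In short, that incremental resync could look like this sketch (snapshot names follow the example above; &lt;code&gt;-F&lt;/code&gt; on the receiving side rolls back the changes made during the conversion test):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# On the Proxmox host, after powering off the original VM
zfs snapshot vmpool/vm-115-disk-0@toSend02
# Send only the delta between the two snapshots
zfs send -Rcv -i @toSend01 vmpool/vm-115-disk-0@toSend02 | mbuffer -m 128M | ssh user@FreebsdHostIP &amp;quot;zfs receive -F zroot/VMs/vm115/disk0&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
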
&lt;p&gt;To configure the VM to boot automatically when the FreeBSD host boots, add the necessary settings to &lt;code&gt;/etc/rc.conf&lt;/code&gt;, &lt;a href="https://it-notes.dragas.net/2024/10/21/from-proxmox-to-freebsd-story-of-a-migration/#configuring-automatic-vm-startup"&gt;as described in a previous article&lt;/a&gt;.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Fri, 15 Nov 2024 10:57:00 +0100</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2024/11/15/migrating-windows-vms-from-bios-kvm-to-uefi-bhyve/</guid><category>freebsd</category><category>proxmox</category><category>ownyourdata</category><category>bhyve</category><category>virtualization</category><category>server</category><category>filesystems</category><category>windows</category><category>snapshots</category><category>zfs</category></item><item><title>From Proxmox to FreeBSD - Story of a Migration</title><link>https://it-notes.dragas.net/2024/10/21/from-proxmox-to-freebsd-story-of-a-migration/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/server_rack.webp" alt="From Proxmox to FreeBSD - Story of a Migration"&gt;&lt;/p&gt;&lt;p&gt;I have been managing a client's server for several years. I inherited a setup (actually, partly done by me at the time, but following the directions of their internal administrator) based on Proxmox. Originally, the VM disks were &lt;code&gt;qcow2&lt;/code&gt; files, but over time and with Proxmox updates, I managed to create a ZFS pool and move them onto it. For backups, I continued to use Proxmox Backup Server (even though virtualized on bhyve) - a solution we've been using for several years.&lt;/p&gt;
&lt;p&gt;The existing VMs/containers were:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;An LXC container based on Debian&lt;/strong&gt;, with Apache as a reverse proxy. The choice of Apache was mainly tied to specific configurations from the company that produces the underlying software. It has always done an excellent job.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;A VM running OPNsense&lt;/strong&gt; — the choice was related to direct use by the client, as it allows easy creation/modification of users for the internal VPN. Some users can access via VPN (OpenVPN) and perform specific operations, such as SSH connections. It has always worked well and is up to date.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Two VMs based on Rocky Linux 9&lt;/strong&gt; — both with components of the management software in Java, with the reverse proxy directing requests based on the requested site.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Two VMs based on Rocky Linux 8&lt;/strong&gt; — these also have parts of the management software and the databases. These machines are used both by providing specific URLs and by the other two as databases.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The load is not particularly high, and the machines have good performance. Suddenly, however, I received a notification: one of the NVMe drives died abruptly, and the server rebooted. ZFS did its job, and everything remained sufficiently secure, but since it's a leased server and already several years old, I spoke with the client and proposed getting more recent hardware and redoing the setup &lt;a href="https://it-notes.dragas.net/2022/01/24/why-were-migrating-many-of-our-servers-from-linux-to-freebsd/"&gt;based&lt;/a&gt; on a &lt;a href="https://it-notes.dragas.net/2024/10/03/i-solve-problems-eurobsdcon/"&gt;FreeBSD&lt;/a&gt; host.&lt;/p&gt;
&lt;p&gt;Unfortunately, I cannot change the setup of the VMs. In my opinion, nothing would prevent it, since the databases are PostgreSQL and the management applications are Java applications running on Tomcat. However, the software house only certifies that type of setup, and considering they are the ones performing updates and fixes, it's appropriate to maintain the setup they suggest. It's a technically sound choice in my opinion, so I can't say anything negative.&lt;/p&gt;
&lt;h2&gt;Acquiring New Hardware&lt;/h2&gt;
&lt;p&gt;The first step was, of course, to get the new server. It took a few days because I requested ECC RAM (for a small price difference), which extended the delivery time. As soon as the physical server was ready, I got to work.&lt;/p&gt;
&lt;p&gt;I installed &lt;strong&gt;FreeBSD 14.1-RELEASE&lt;/strong&gt;, root on ZFS, creating a mirror setup between the NVMe drives. The I/O is not particularly heavy in this setup, but I don't want to waste potential: even if I can reduce the length of an operation by a few seconds, I see no reason not to do it.&lt;/p&gt;
&lt;p&gt;As the first step, I decided to manually create the bridge to which I will connect both the VMs and the reverse proxy, which will be installed inside a FreeBSD jail.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;vm-bhyve&lt;/code&gt;, the tool I use to manage VMs, allows creating bridges, but I prefer to manage it manually and maintain more complete control over everything. I will also enable &lt;code&gt;pf&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;I decided to use &lt;code&gt;zstd&lt;/code&gt; as the compression algorithm and disable &lt;code&gt;atime&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zfs set compression=zstd zroot
zfs set atime=off zroot
&lt;/code&gt;&lt;/pre&gt;
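
&lt;p&gt;A quick check confirms the settings; since both properties are set on the pool root, every child dataset inherits them:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;# verify the values on the pool root (children inherit them)
zfs get compression,atime zroot
&lt;/code&gt;&lt;/pre&gt;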

&lt;p&gt;I then modified the &lt;code&gt;/etc/rc.conf&lt;/code&gt; file as follows:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;cloned_interfaces=&amp;quot;bridge0 lo1&amp;quot;
ifconfig_lo1_name=&amp;quot;bastille0&amp;quot;
ifconfig_bridge0=&amp;quot;inet 192.168.33.1 netmask 255.255.255.0&amp;quot;
gateway_enable=&amp;quot;YES&amp;quot;
pf_enable=&amp;quot;YES&amp;quot;
bastille_enable=&amp;quot;YES&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I then updated FreeBSD by running the usual command:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;freebsd-update fetch install
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Configuring the Firewall&lt;/h2&gt;
&lt;p&gt;I configured the firewall with a basic setup:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-pf"&gt;ext_if=&amp;quot;igb0&amp;quot;

set block-policy return
set skip on bridge0

table &amp;lt;jails&amp;gt; persist
nat on $ext_if from {192.168.33.0/24} to any -&amp;gt; ($ext_if)
nat on $ext_if from &amp;lt;jails&amp;gt; to any -&amp;gt; ($ext_if:0)

# nginx-proxy - to the jail
rdr on $ext_if inet proto tcp from any to PUBLIC_IP port = 80 -&amp;gt; 192.168.33.254 port 80
rdr on $ext_if inet proto tcp from any to PUBLIC_IP port = 443 -&amp;gt; 192.168.33.254 port 443

# opnsense
rdr on $ext_if inet proto tcp from any to PUBLIC_IP port = 1194 -&amp;gt; 192.168.33.253 port 1194
rdr on $ext_if inet proto udp from any to PUBLIC_IP port = 1194 -&amp;gt; 192.168.33.253 port 1194

rdr-anchor &amp;quot;rdr/*&amp;quot;

block in all
antispoof for $ext_if inet
pass in inet proto icmp
pass out quick keep state
pass in inet proto tcp from any to any port {http,https} flags S/SA keep state
pass in inet proto {tcp,udp} from any to any port 1194 flags S/SA keep state
# This will be closed at the end of the setup and will be allowed only via VPN
pass in inet proto tcp from any to any port ssh flags S/SA keep state
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Once finished, I rebooted everything to load the new kernel. No problems.&lt;/p&gt;
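&lt;p&gt;As a side note, the ruleset can be syntax-checked and reloaded at any time without rebooting:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;# parse the ruleset without loading it (syntax check only)
pfctl -nf /etc/pf.conf
# load it for real, then display the active NAT and filter rules
pfctl -f /etc/pf.conf
pfctl -sn
pfctl -sr
&lt;/code&gt;&lt;/pre&gt;
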
&lt;h2&gt;Installing Necessary Tools&lt;/h2&gt;
&lt;p&gt;I then installed some useful packages:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;pkg install tmux py311-zfs-autobackup mbuffer rsync vm-bhyve-devel edk2-bhyve grub2-bhyve bastille
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I created the datasets for the jails and VMs:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zfs create zroot/bastille
zfs create zroot/VMs
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I then started configuring everything. First, &lt;strong&gt;BastilleBSD&lt;/strong&gt;: I modified &lt;code&gt;/usr/local/etc/bastille/bastille.conf&lt;/code&gt; by adding:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;## ZFS options
bastille_zfs_enable=&amp;quot;YES&amp;quot;
bastille_zfs_zpool=&amp;quot;zroot&amp;quot;
bastille_zfs_prefix=&amp;quot;bastille&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then I enabled and configured &lt;code&gt;vm-bhyve&lt;/code&gt;, enabling the serial console on tmux:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;sysrc vm_enable=&amp;quot;YES&amp;quot;
sysrc vm_dir=&amp;quot;zfs:zroot/VMs&amp;quot;
vm init
cp /usr/local/share/examples/vm-bhyve/* /zroot/VMs/.templates/
vm switch create -t manual -b bridge0 public
vm set console=tmux
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Next, I bootstrapped FreeBSD 14.1-RELEASE with BastilleBSD:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;bastille bootstrap 14.1-RELEASE update
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Setting Up the Reverse Proxy Jail&lt;/h2&gt;
&lt;p&gt;I then started by creating the jail for the reverse proxy:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;bastille create -B apache 14.1-RELEASE 192.168.33.254/24 bridge0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Once created, I had to modify the jail's default gateway because, by default, it is set to that of the host. It was enough to set &lt;code&gt;defaultrouter="192.168.33.1"&lt;/code&gt; in &lt;code&gt;/usr/local/bastille/jails/apache/root/etc/rc.conf&lt;/code&gt;.&lt;/p&gt;
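&lt;p&gt;For reference, here is a minimal sketch of the network-related lines in the jail's &lt;code&gt;rc.conf&lt;/code&gt; - assuming Bastille's default &lt;code&gt;vnet0&lt;/code&gt; interface name inside the jail:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;ifconfig_vnet0=&amp;quot;inet 192.168.33.254/24&amp;quot;
defaultrouter=&amp;quot;192.168.33.1&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
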
&lt;p&gt;The configuration of Apache will not be described here as it is closely dependent on the setup, but this jail can reach (in bridge mode) all the VMs that will be placed on the same bridge.&lt;/p&gt;
&lt;h2&gt;Migrating the VMs&lt;/h2&gt;
&lt;p&gt;For migrating the VMs, I decided to proceed as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Updating the operating systems&lt;/strong&gt; of the original VMs - to have the latest version of the kernel and all userland. The VMs are not UEFI, so it will be necessary to have the exact names of the kernel and &lt;code&gt;initrd&lt;/code&gt; as they will need to be specified in the configuration file of &lt;code&gt;vm-bhyve&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Creating templates&lt;/strong&gt; of the VMs with &lt;code&gt;vm-bhyve&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Snapshotting the VM disks&lt;/strong&gt; and initial transfer to the new server with the VMs still running.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Shutting down all the source VMs&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Creating a new snapshot&lt;/strong&gt; and performing an incremental transfer.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Adjusting configurations&lt;/strong&gt; and changing DNS.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I updated everything using &lt;code&gt;dnf&lt;/code&gt;, and rebooted when there was also a kernel update. I took note of the versions to use, namely &lt;code&gt;vmlinuz-4.18.0-553.22.1.el8_10.x86_64&lt;/code&gt; and &lt;code&gt;initramfs-4.18.0-553.22.1.el8_10.x86_64.img&lt;/code&gt; for Rocky Linux 8.10, and &lt;code&gt;vmlinuz-5.14.0-427.37.1.el9_4.x86_64&lt;/code&gt; and &lt;code&gt;initramfs-5.14.0-427.37.1.el9_4.x86_64.img&lt;/code&gt; for Rocky Linux 9.4.&lt;/p&gt;
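&lt;p&gt;To find the exact file names, it's enough to list the VM's &lt;code&gt;/boot&lt;/code&gt; directory, or ask GRUB directly - a quick sketch, run inside each source VM:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;# list the installed kernels and initramfs images
ls -1 /boot/vmlinuz-* /boot/initramfs-*
# or query the default kernel directly
grubby --default-kernel
&lt;/code&gt;&lt;/pre&gt;
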
&lt;h3&gt;Migrating Rocky Linux 8.10 VMs&lt;/h3&gt;
&lt;p&gt;On the destination server, I created the VMs. I used the &lt;code&gt;linux-zvol&lt;/code&gt; template, then modified the configurations:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;vm create -t linux-zvol -s 1G -m 8G -c 4 vm100
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I created a 1 GB virtual disk because it will be replaced by the dataset sent from the current production server, so the size is just a placeholder. At that point, I deleted the &lt;code&gt;zvol&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zfs destroy zroot/VMs/vm100/disk0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I performed an initial snapshot of the source VMs:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zfs snapshot zfspool/vm-100-disk-0@Move01
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And then I copied this first snapshot to the destination machine/VM:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zfs send -R zfspool/vm-100-disk-0@Move01 | mbuffer -s 128k -m 512M | ssh root@destMachine &amp;quot;zfs receive -F zroot/VMs/vm100/disk0&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Once the copy completes, you can test booting the VM. For the Rocky Linux 8.10 VMs, I had no problems because the &lt;code&gt;/boot&lt;/code&gt; partition is &lt;code&gt;ext4&lt;/code&gt;. I then configured the VM as follows, running &lt;code&gt;vm configure vm100&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;loader=&amp;quot;grub&amp;quot;
cpu=&amp;quot;8&amp;quot;
memory=&amp;quot;12G&amp;quot;
network0_type=&amp;quot;virtio-net&amp;quot;
network0_switch=&amp;quot;public&amp;quot;
disk0_type=&amp;quot;virtio-blk&amp;quot;
disk0_name=&amp;quot;disk0&amp;quot;
disk0_dev=&amp;quot;sparse-zvol&amp;quot;
uuid=&amp;quot;the-uuid&amp;quot;
network0_mac=&amp;quot;the-mac&amp;quot;
grub_run0=&amp;quot;linux /vmlinuz-4.18.0-553.22.1.el8_10.x86_64 root=/dev/vda3&amp;quot;
grub_run1=&amp;quot;initrd /initramfs-4.18.0-553.22.1.el8_10.x86_64.img&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;All I needed to do was pass GRUB the names of the kernel and the &lt;code&gt;initramfs&lt;/code&gt;. It is now possible to start the machine, connect to its console, and test:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;vm start vm100
vm console vm100
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In theory, the machine should boot correctly. It will probably be necessary to adjust the network interface configuration - in my case, even keeping the &lt;code&gt;virtio&lt;/code&gt; driver and the MAC address, the interface name changed from &lt;code&gt;ens18&lt;/code&gt; to &lt;code&gt;enp0s5&lt;/code&gt; - but everything else should be fine.&lt;/p&gt;
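&lt;p&gt;If the VM uses NetworkManager, one way to handle the rename (a sketch - &lt;code&gt;ens18&lt;/code&gt; and &lt;code&gt;enp0s5&lt;/code&gt; are the names from my setup; yours may differ) is to re-bind the existing connection profile to the new interface name:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;# list the connection profiles and the interfaces they are bound to
nmcli connection show
# bind the old profile to the new interface name, then bring it up
nmcli connection modify ens18 connection.interface-name enp0s5
nmcli connection up ens18
&lt;/code&gt;&lt;/pre&gt;
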
&lt;p&gt;At this point, if the source VM was turned off/unreachable or there is simply nothing new to synchronize, the migration is complete. Otherwise, it is necessary to shut down the source VM, create a new snapshot, and transfer it incrementally. Shut down both the source VM and the destination one (&lt;code&gt;vm stop vm100&lt;/code&gt;), then roll back the destination dataset to the transferred snapshot:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zfs rollback zroot/VMs/vm100/disk0@Move01
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;On the source server:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zfs snapshot zfspool/vm-100-disk-0@Move02
zfs send -R -i zfspool/vm-100-disk-0@Move01 zfspool/vm-100-disk-0@Move02 | mbuffer -s 128k -m 512M | ssh root@destMachine &amp;quot;zfs receive -F zroot/VMs/vm100/disk0&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In this way, we have updated the destination VM and transferred only the differences, without altering the configuration of &lt;code&gt;vm-bhyve&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Start the machine - if all goes well, the migration of this VM is finished.&lt;/p&gt;
&lt;h3&gt;Migrating Rocky Linux 9.4 VMs&lt;/h3&gt;
&lt;p&gt;For the Rocky Linux 9.4 VMs, the situation was more complex: the partitions were all - even &lt;code&gt;/boot&lt;/code&gt; - formatted as &lt;strong&gt;XFS&lt;/strong&gt;. And, for some reason, the GRUB launched by bhyve cannot read XFS. Therefore, I had to proceed differently.&lt;/p&gt;
&lt;p&gt;I copied, using &lt;code&gt;rsync&lt;/code&gt;, the &lt;code&gt;/boot&lt;/code&gt; of the original VPS (in XFS) to a directory on the FreeBSD server (specifically, on the VPS I ran the command: &lt;code&gt;rsync -avhHPx --numeric-ids /boot root@FreeBSDHOST:/tmp/&lt;/code&gt;). In this way, I kept the files available.&lt;/p&gt;
&lt;p&gt;I shut down the machine on the original server and copied its &lt;code&gt;zvol&lt;/code&gt; to the new FreeBSD host using the same method as the others. At that point, I recreated the &lt;code&gt;/boot&lt;/code&gt; partition of my VPS in &lt;code&gt;ext3&lt;/code&gt; - directly from the FreeBSD host:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;# install ext2/3/4 FUSE support and load the kernel module
pkg install fusefs-ext2
kldload fusefs
# recreate the boot partition of the zvol as ext3
mkfs.ext3 /dev/zvol/zroot/VMs/vm104/disk0s1
# mount it read-write and copy back the saved /boot contents
fuse-ext2 -o force /dev/zvol/zroot/VMs/vm104/disk0s1 /mnt/
cd /mnt
rsync -avhHPx /tmp/boot/. .
umount /mnt
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In this way, GRUB will be able to access the &lt;code&gt;/boot&lt;/code&gt; partition to load the kernel and &lt;code&gt;initramfs&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Here is the &lt;code&gt;vm-bhyve&lt;/code&gt; configuration for this VM:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;loader=&amp;quot;grub&amp;quot;
cpu=&amp;quot;8&amp;quot;
memory=&amp;quot;12G&amp;quot;
network0_type=&amp;quot;virtio-net&amp;quot;
network0_switch=&amp;quot;public&amp;quot;
disk0_type=&amp;quot;virtio-blk&amp;quot;
disk0_name=&amp;quot;disk0&amp;quot;
disk0_dev=&amp;quot;sparse-zvol&amp;quot;
uuid=&amp;quot;the-uuid&amp;quot;
network0_mac=&amp;quot;the-mac&amp;quot;
grub_run0=&amp;quot;linux /vmlinuz-5.14.0-427.37.1.el9_4.x86_64 root=/dev/vda3&amp;quot;
grub_run1=&amp;quot;initrd /initramfs-5.14.0-427.37.1.el9_4.x86_64.img&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;When launching the machine, however, it will hang during boot and, after a timeout, ask for administrator credentials to enter a recovery shell. This is because the UUID of the &lt;code&gt;/boot&lt;/code&gt; partition has changed, and the VM's &lt;code&gt;fstab&lt;/code&gt; still references the old XFS partition. From the recovery shell, I ran the &lt;code&gt;blkid&lt;/code&gt; command in the VM and copied the UUID of the new partition, then edited the VM's &lt;code&gt;/etc/fstab&lt;/code&gt;, inserting the new UUID and changing "xfs" to "ext3". After a reboot, the system should start without problems, again with a network card to reconfigure.&lt;/p&gt;
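&lt;p&gt;The resulting &lt;code&gt;/etc/fstab&lt;/code&gt; line will look something like this - the UUID below is a placeholder; use the one reported by &lt;code&gt;blkid&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-conf"&gt;# /boot: new UUID from blkid, filesystem changed from xfs to ext3
UUID=1234abcd-5678-90ef-1234-567890abcdef /boot ext3 defaults 0 0
&lt;/code&gt;&lt;/pre&gt;
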
&lt;p&gt;This procedure worked correctly for all VMs. It leaves the storage on &lt;code&gt;virtio-blk&lt;/code&gt;, although it would be optimal to switch the driver to &lt;code&gt;nvme&lt;/code&gt;. To make this change, enter each VM and create a file called &lt;code&gt;/etc/dracut.conf.d/00-custom.conf&lt;/code&gt; containing:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-conf"&gt;add_drivers+=&amp;quot; nvme &amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And regenerate the &lt;code&gt;initramfs&lt;/code&gt; - in this way, the &lt;code&gt;nvme&lt;/code&gt; driver will be supported at boot:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;dracut --regenerate-all --force
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;It will now be enough to change the configurations of &lt;code&gt;vm-bhyve&lt;/code&gt;—&lt;code&gt;virtio-blk&lt;/code&gt; becomes &lt;code&gt;nvme&lt;/code&gt;, and &lt;code&gt;/dev/vda3&lt;/code&gt; becomes &lt;code&gt;/dev/nvme0n1p3&lt;/code&gt;, for example:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;loader=&amp;quot;grub&amp;quot;
cpu=&amp;quot;8&amp;quot;
memory=&amp;quot;12G&amp;quot;
network0_type=&amp;quot;virtio-net&amp;quot;
network0_switch=&amp;quot;public&amp;quot;
disk0_type=&amp;quot;nvme&amp;quot;
disk0_name=&amp;quot;disk0&amp;quot;
disk0_dev=&amp;quot;sparse-zvol&amp;quot;
uuid=&amp;quot;the-uuid&amp;quot;
network0_mac=&amp;quot;the-mac&amp;quot;
grub_run0=&amp;quot;linux /vmlinuz-5.14.0-427.37.1.el9_4.x86_64 root=/dev/nvme0n1p3&amp;quot;
grub_run1=&amp;quot;initrd /initramfs-5.14.0-427.37.1.el9_4.x86_64.img&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;Migrating the OPNsense VM&lt;/h3&gt;
&lt;p&gt;For the VM with OPNsense, the procedure was even simpler. It was enough to create a VM of type &lt;code&gt;freebsd-zvol&lt;/code&gt; and copy the disk as done for the others. In this case, I replicated the MAC address of the original virtualized network interface (on the Proxmox server) to ensure that OPNsense's underlying FreeBSD recognizes it as the same interface. FreeBSD is less picky about these things.&lt;/p&gt;
&lt;p&gt;The final &lt;code&gt;vm-bhyve&lt;/code&gt; configuration file will be:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;loader=&amp;quot;bhyveload&amp;quot;
cpu=&amp;quot;2&amp;quot;
memory=&amp;quot;1G&amp;quot;
network0_type=&amp;quot;virtio-net&amp;quot;
network0_switch=&amp;quot;public&amp;quot;
disk0_type=&amp;quot;virtio-blk&amp;quot;
disk0_name=&amp;quot;disk0&amp;quot;
disk0_dev=&amp;quot;sparse-zvol&amp;quot;
uuid=&amp;quot;my-uuid&amp;quot;
network0_mac=&amp;quot;my-mac&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Configuring Automatic VM Startup&lt;/h2&gt;
&lt;p&gt;To ensure that the VMs all start at boot, just add them to &lt;code&gt;/etc/rc.conf&lt;/code&gt; as follows, giving a 15-second delay between one VM and another:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;[...]
vm_enable=&amp;quot;YES&amp;quot;
vm_dir=&amp;quot;zfs:zroot/VMs&amp;quot;
vm_list=&amp;quot;vm100 vm101 [...] opnsense&amp;quot;
vm_delay=&amp;quot;15&amp;quot;
[...]
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Setting Up Backups&lt;/h2&gt;
&lt;p&gt;At this point, I configured hourly external backups, &lt;a href="https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/"&gt;as I described in a previous article&lt;/a&gt;. I also added a local snapshot every 15 minutes, again using &lt;code&gt;zfs-autobackup&lt;/code&gt;. To do this, I created a new tag:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zfs set autobackup:localsnap=true zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then I modified the &lt;code&gt;/etc/crontab&lt;/code&gt; file, adding this line:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-cron"&gt;*/15    *       *       *       *       root    /usr/local/bin/zfs-autobackup localsnap --keep-source 15min3h,1h1d &amp;gt; /dev/null 2&amp;gt;&amp;amp;1
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;That is, it will take a snapshot every 15 minutes, keeping them for 3 hours, then keep one per hour for a day. This way, if a quick recovery is needed after a problem or mistake, I won't have to transfer the entire dataset/VM back from the backup.&lt;/p&gt;
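&lt;p&gt;With these local snapshots in place, single files can be recovered directly from the hidden &lt;code&gt;.zfs&lt;/code&gt; directory of each dataset - a sketch with hypothetical snapshot and jail names:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;# list the snapshots created by the localsnap tag
zfs list -t snapshot | grep localsnap
# copy a single file back from a snapshot of the jail's root dataset
cp /usr/local/bastille/jails/apache/root/.zfs/snapshot/localsnap-20241021120000/etc/rc.conf /tmp/rc.conf.recovered
&lt;/code&gt;&lt;/pre&gt;
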
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;The migration is complete: after changing the DNS, the client performed some tests, and everything works properly. The setup has been active for over a week, and there have been no issues. Performance is excellent; I did not run comparisons against the previous setup, since the new FreeBSD host's hardware is more powerful and modern, so a comparison wouldn't make sense.&lt;/p&gt;
&lt;p&gt;Apart from the manager, no user was informed of the change, and in the last week, no reports have been received. The VMs are stable, and the host's load is very low.&lt;/p&gt;
&lt;p&gt;The operation actually took less than two hours in total - most of the time was consumed by the send/receive operations - and gave excellent results. The alternative would have been to install Proxmox on a new host and move the VMs - I probably would have saved a few minutes, but now I can use bhyve, as well as the simplicity and power of the underlying FreeBSD.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Mon, 21 Oct 2024 07:33:00 +0200</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2024/10/21/from-proxmox-to-freebsd-story-of-a-migration/</guid><category>freebsd</category><category>proxmox</category><category>ownyourdata</category><category>bhyve</category><category>virtualization</category><category>server</category><category>filesystems</category><category>linux</category><category>snapshots</category><category>zfs</category></item><item><title>I Solve Problems</title><link>https://it-notes.dragas.net/2024/10/03/i-solve-problems-eurobsdcon/</link><description>&lt;p&gt;&lt;img src="https://2024.eurobsdcon.org/images/banner-2024.jpg" alt="I Solve Problems"&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;Addendum:&lt;/strong&gt; during the event, I was interviewed by &lt;a href="https://soc.feditime.com/users/Tubsta"&gt;Jason Tubnor&lt;/a&gt; for the &lt;a href="https://www.bsdnow.tv/"&gt;BSD Now&lt;/a&gt; Podcast, where I provided further information about the talk and the BSD Cafe project. Here is &lt;a href="https://www.bsdnow.tv/579"&gt;the link to the episode&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;This is the text underlying my presentation at EuroBSDCon 2024, on 21 September 2024, in Dublin, Ireland and BSDCan 2025, 13 June 2025, Ottawa, Canada.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The slides can be downloaded &lt;a href="https://it-notes.dragas.net/slides/EuroBSDCon2024_Marinelli.pdf"&gt;here&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The BSDCan 2025 video can &lt;a href="https://www.youtube.com/watch?v=UnVp25-6Qao"&gt;be viewed here&lt;/a&gt;. This is more up-to-date than the EuroBSDCon one.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;The EuroBSDCon 2024 video, not yet separated from the live stream, can &lt;a href="https://www.youtube.com/watch?t=19285&amp;amp;v=u_bdSqqHm58"&gt;be viewed here&lt;/a&gt; - At first, I was a bit tense, then I relaxed.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Happy reading!&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;EuroBSDCon Dublin - 21 September 2024 - &lt;a href="https://events.eurobsdcon.org/2024/talk/LNMLZX/"&gt;Why (and how) we're migrating many of our servers from Linux to the BSDs&lt;/a&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;"I'm Stefano Marinelli, I solve problems."&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I’m the &lt;a href="https://bsd.cafe"&gt;founder and Barista of the BSD Cafe&lt;/a&gt;, a community of *BSD enthusiasts.&lt;/p&gt;
&lt;p&gt;I work in my company, called Prodottoinrete - a container of ideas and solutions.&lt;/p&gt;
&lt;p&gt;I’m passionate about technology and computing, and I’ve made my passion my profession. Every morning, when I sit in front of the computer, a new world opens up for me to explore, and I try to share this passion with my clients. Sometimes, I succeed.&lt;/p&gt;
&lt;p&gt;I've been a Linux user since 1996, before I turned 17. Back then, I used Fidonet and would read about alternative operating systems. Curiosity got the best of me, and I bought a set of CDs with various Linux distributions at the first opportunity. I tried it, I liked it, but at the time, I didn’t find it advantageous, so I continued using it for secondary tasks while Windows remained my daily driver.&lt;/p&gt;
&lt;p&gt;Things changed in late 1997. I decided to go deeper into Linux and, with the purchase of a new, more powerful computer, I realized that aside from gaming, Linux could be my everyday system. This came in handy when I started university in 1998, where the computer science department was very oriented towards Open Source solutions. I was one of the few students who already understood the concepts of Open Source and knew how to use Linux. I was also one of the few who wasn’t baffled when we found Solaris machines in the lab. After all, there were similarities.&lt;/p&gt;
&lt;p&gt;Over the years, I became one of the administrators of that lab, which was almost entirely Linux-based.&lt;/p&gt;
&lt;p&gt;In 2000, I was fortunate to have &lt;a href="https://en.wikipedia.org/wiki/%C3%96zalp_Babao%C4%9Flu"&gt;Professor Özalp Babaoğlu&lt;/a&gt; as my lecturer, which pushed me to explore other operating systems like the BSDs. However, all I had at the time was an old Compaq laptop (486/25 MHz and 4 MB RAM) and no fast internet connection. So, apart from some theoretical studies, I postponed hands-on experiments until I had better resources. By 2002, thanks to a broadband connection and a new computer, I began exploring BSD systems. I started with FreeBSD, largely thanks to its fantastic handbook. I asked my parents to buy a laser printer so I could "print academic material" - but really, I wanted to print all the documentation I could find, starting from the FreeBSD handbook. And it was incredibly helpful.&lt;/p&gt;
&lt;p&gt;Before long, FreeBSD became my daily driver - entire nights spent compiling KDE while I slept a meter away from my laptop’s wildly spinning fans - but FreeBSD ran much, much better on that machine than Linux did. Unfortunately, OpenBSD was too slow to be usable with any graphical interface.&lt;/p&gt;
&lt;p&gt;In 2003, my final thesis focused on virtualization on Open Source systems; NetBSD/Xen was one of the best and most efficient solutions I tested.&lt;/p&gt;
&lt;p&gt;Shortly after, I found a job at a company that was just beginning to offer Linux-based solutions, mainly for web and mail servers, but I was criticized because I delivered solutions that didn’t require constant interventions - which hurt the company’s billing. That’s when I set my future guidelines:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;I would work for myself, following my own philosophy.&lt;/li&gt;
&lt;li&gt;I would not be tied to any particular vendor. I love exploring and learning, so I would always study solutions in depth and recommend the one I thought best suited for the client.&lt;/li&gt;
&lt;li&gt;I solve problems - I don’t sell boxes.&lt;/li&gt;
&lt;li&gt;I would use and promote Open Source solutions whenever possible.&lt;/li&gt;
&lt;li&gt;I would use a BSD whenever feasible and Linux where a BSD was not suitable.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In every technological decision, my priority is solving my clients’ specific problems, not selling a predefined solution. &lt;strong&gt;I solve problems.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I was often told that Open Source systems were “toys for universities” and that the real world ran on something else (mostly meaning Windows). I pressed on. In some cases, I offered to be paid only if the results were achieved, showing how much they’d save compared to traditional licensing costs. It worked, but...&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;They accepted Linux, albeit reluctantly, but rejected BSDs because they didn’t know them. In some cases, I managed to convince them. In others, sadly, I didn’t.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Result: I decided to use the BSDs where clients didn’t have direct access - like email servers or web hosting - while using Linux where they specifically requested it.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;NetBSD/Xen as the virtualization base, &lt;a href="https://it-notes.dragas.net/2023/08/27/that-old-netbsd-server-running-since-2010/"&gt;with excellent longevity and reliability&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;OpenBSD as network/firewall entry points.&lt;/li&gt;
&lt;li&gt;FreeBSD (especially with the introduction of ZFS) for various services, including backup servers.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It worked. What surprised clients most was the stability and the reduced need for maintenance. They saw more uptime and were happy.&lt;/p&gt;
&lt;p&gt;Problem: &lt;strong&gt;"If nothing is working, what am I paying you for? If everything’s working, what am I paying you for?"&lt;/strong&gt; So, I selected clients and situations that understood that if everything works, it’s because of the work behind the scenes. It’s better to pay for everything to work than to pay to fix problems. The problem to solve, in this case, is not stopping the client’s work. &lt;strong&gt;And I solve problems.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;I managed about 65% *BSD machines and 35% Linux. But as Linux’s popularity grew, so did client demand. It became necessary on multiple occasions to replace or implement Linux instead of a BSD, for specific requests.&lt;/p&gt;
&lt;p&gt;At a certain point, Linux virtualization solutions matured, and many of my hosts transitioned from NetBSD/Xen to OpenNebula (often with MooseFS) and then to Proxmox (often with Ceph). This happened because my clients needed to manage their VM lifecycles autonomously, change configurations, migrate hosts, etc.&lt;/p&gt;
&lt;p&gt;Proxmox showed excellent reliability and stability. The VMs were often Linux-based (especially if the client needed direct management) or FreeBSD, but in smaller quantities. My own infrastructure, however, remained primarily BSD-based.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Over time, I gradually started to reflect and 'take stock'.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;No *BSD has ever caused me to lose data to the point of having to restore from backup. On Linux, I’ve lost data with ext4, XFS, and btrfs. The most catastrophic case was with XFS - just a few files, backed up and restored, but the client was highly demanding and didn’t take it well. The largest failure was with btrfs - after a reboot, a 50 TB filesystem (in mirror, for backups) simply stopped working. No more mounting possible. Data was lost, but I had further backups. The client was informed and understood the situation. Within a few days, the server was rebuilt from scratch on FreeBSD with ZFS - since then, I haven’t lost a single bit.&lt;/p&gt;
&lt;p&gt;In 2018, I started introducing Docker and Podman to developers with specific needs, especially to help them with their development. Many developers began incessantly asking for Docker; unfortunately, some of them only wanted to bypass the limitations that traditional setups imposed. In many cases, those "limitations" were just bad practices - running outdated versions of software and libraries, or not wanting to bother with keeping the stack updated. There are similar problems with other solutions, but at least I can block them. Containers are great for quick deployment and increased security, but they aren’t always the best choice. Plus, they’re not the only option, despite what many people think these days.&lt;/p&gt;
&lt;p&gt;Linux has seen major development over the past years, but it has shifted towards specific players’ interests (mainly cloud providers) rather than technical needs. Maybe not in the kernel, but many Linux distributions and communities now seem focused on a constant push to replace "the old with the new" - with solid theoretical reasons, but sometimes seemingly without practical benefit.&lt;/p&gt;
&lt;p&gt;For me, computing should solve problems and provide opportunities to those who use it. &lt;strong&gt;I solve problems.&lt;/strong&gt; Every change or variation will solve one problem but create new ones. It’s crucial to be mindful not to create worse problems than the ones you’re solving, and today’s enterprise world often seems to overlook that.&lt;/p&gt;
&lt;p&gt;It’s common to see software distributed solely via Docker Compose files. Sometimes I use it as an installation tutorial, but I realize they’re just specific pieces precariously glued together (patched files, specific dependency versions, or nothing works).&lt;/p&gt;
&lt;p&gt;The trend is to rush, to simplify deployments as much as possible, sweeping structural problems under the rug. The goal is to "innovate", not necessarily improve - just as long as it’s "new" or "how everyone does it, nowadays".&lt;/p&gt;
&lt;p&gt;A massive business has grown around Linux: certifications, training, pentesting, certified platforms - the community is losing decision-making power. This is a far cry from the early days.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;I solve problems.&lt;/strong&gt; And these fast-changing technologies risk creating more problems than they solve, at least in certain situations. The workloads I manage often stay up for years, requiring a more stable, upgradable, and consistent approach.&lt;/p&gt;
&lt;p&gt;If a client’s problem is to have an e-commerce site, they don’t really care if it runs on Docker, Podman, a FreeBSD jail, or a Raspberry Pi cluster - as long as their problem is solved. In fact, clients are happy when their solution is stable, upgradable, and secure.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;There isn’t a single solution to all problems, but many solutions for each problem.&lt;/strong&gt; My job is to give clients the best solution to solve their specific problem, not the most fashionable one.&lt;/p&gt;
&lt;p&gt;That’s why I decided, several years ago, &lt;em&gt;to reverse the proportion and implement the BSDs for all possible workloads.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Each *BSD has its characteristics and target audience. Sometimes these targets overlap, but it’s generally not difficult to choose the most appropriate solution.&lt;/p&gt;
&lt;p&gt;The goal of the migration was to create stable, coherent, upgradable, and secure systems.&lt;/p&gt;
&lt;p&gt;By implementing an OpenBSD system, I often don’t need to install any additional packages. When it’s time to upgrade, it’s simple and secure. When a vulnerability emerges, I’m often fortunate to read "OpenBSD is excluded from this issue because it eliminated this risk X years ago..."&lt;/p&gt;
&lt;p&gt;By implementing a NetBSD system, I know I’m installing a system that isn’t in a rush to release new versions and will likely run for years with only a few package updates and security patches, when necessary. And it’s quite rare for a patch to be required.&lt;/p&gt;
&lt;p&gt;By implementing a FreeBSD system, I know I’ll have ZFS at my disposal, a fast and efficient hypervisor like bhyve, and a native, mature jail system that will ensure service and setup separation coherently and precisely. Not as “add-ons” but built into the system itself.&lt;/p&gt;
&lt;p&gt;Moving to the BSDs means, in my experience, reaching systems that are more stable, more easily upgradable, and more consistent in their parts. They don’t chase the hype of the moment, much like the early days of Linux. In the case of FreeBSD, this also means moving to native ZFS and boot environments, giving me greater peace of mind when it comes to upgrades.&lt;/p&gt;
&lt;p&gt;The initial strategy was to migrate what would soon need updates anyway, what would be moved (thus requiring a new setup), and what was causing problems and deserved a deeper dive.&lt;/p&gt;
&lt;p&gt;First, I decided to migrate to FreeBSD the hypervisors not directly accessed by clients, especially on leased servers. The approach was to create a twin machine (to have an objective performance and stability comparison - it wouldn’t make sense to compare it to a different machine), install FreeBSD, bridge it with the production server, install vm-bhyve, and start copying VMs from Proxmox, reconfiguring the main parameters in the configuration file. In some cases, I already used ZFS on Proxmox, so the first part of the migration was a simple zfs-send/receive. In other cases (e.g., when using Ceph), I made an intermediate move by live-migrating the storage to ZFS and then proceeding as in the first case. The first noticeable effect was a reduction in resources used by the host to handle VM traffic - as expected, only FreeBSD’s basic processes and bhyve were running - but at the same time, there was a significant increase in I/O performance, further enhanced by switching the virtual disk driver from virtio to NVMe. This allowed for in-depth testing and revealed that FreeBSD suffers less under heavy I/O (the VMs on Linux tended to block their I/O, an effect I didn’t notice on FreeBSD) and showed significantly lower loads. In short, it handled the load better.&lt;/p&gt;
&lt;p&gt;As an experiment, I decided to migrate two hosts (each with about 10 VMs) of a client - where I had full control—without telling them, over a weekend. By Tuesday, they called me, concerned: they had noticed a massive performance boost and were worried I had upgraded their hardware without approval, thinking that, "given the performance boost", it would cost them a lot. After explaining what I had done, they asked me to run further tests and progressively continue migrating everything. 20 hosts, all based on (sometimes slightly older) versions of Proxmox. And so I did, but taking advantage of their open-mindedness, I went a step further.&lt;/p&gt;
&lt;p&gt;Many of their VMs handled simple workloads - PHP websites, or Java-based management systems (running on Tomcat), device monitoring software for industrial machinery, etc. The decision to use VMs had been made by their highly competent internal IT staff to separate environments and dependencies. It was a perfect use case for jails. I decided to try, VM by VM, to replicate the setups inside FreeBSD jails. In some cases, for convenience, I managed to run everything directly in Linux jails (with &lt;a href="https://wiki.freebsd.org/Linuxulator"&gt;Linuxulator&lt;/a&gt;); in others, it was impossible, so I recreated the setups in individual jails. They immediately noticed an effect: faster operations. It didn’t surprise me. We avoided double buffering (VM and OS), so all the saved RAM could be used by ZFS for its cache and by the host to run other services. Some VMs remained as VMs (e.g., Zimbra). By the end of the operation, I had drastically reduced the number of VMs, replaced by jails, and consequently, the number of hosts. From 20 down to 11 - with a significant monthly cost saving.&lt;/p&gt;
&lt;p&gt;The road was now clear, and I progressively continued down this path. One of the most interesting anecdotes: a client told me that they used to start an operation before taking a coffee break, around 15 minutes, to find the task almost done by the time they returned. After the migration, they shared that they launched the process, grabbed their things, and the task was already complete. An estimated reduction from about 18 minutes to 6 minutes on average. I didn’t investigate too much, but I suspect a combination of factors, with the predominant one &lt;a href="https://it-notes.dragas.net/2024/06/10/proxmox-vs-freebsd-which-virtualization-host-performs-better/"&gt;being bhyve’s NVMe driver&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The main challenge I often face is ideological. Some people are used to thinking that the ideal solution is &lt;strong&gt;X&lt;/strong&gt; - and believe that &lt;strong&gt;X&lt;/strong&gt; is the only solution for their problems. Often, &lt;strong&gt;X&lt;/strong&gt; is the hype of the moment (a few years ago, I fought to convince people that VMware wasn’t necessary and that Proxmox would be a great solution; today, Proxmox is on everyone’s lips - but it’s not the only solution). Often, &lt;strong&gt;X&lt;/strong&gt; is a "cloud" cluster with Kubernetes - running WordPress on it. Even for hosting a law firm’s website, which will be updated every five years.&lt;/p&gt;
&lt;p&gt;When I ask, "Okay, but why? Who will manage it? Where will your data really be, and who will safeguard it?", I get blank faces. They hadn’t considered these questions. No one had even mentioned them. "But everyone I spoke to proposed this type of solution...". It’s like at the beginning with Windows, when I proposed *BSD or Linux. Or later with VMware, when I proposed Proxmox. Or now with Kubernetes, when I propose the BSDs. &lt;/p&gt;
&lt;p&gt;But the simplest solutions are the easiest to maintain and manage over time. My experience has taught me that setting something up is often the easiest part. The hardest part is returning to it after 1, 5, 10 years. Keeping it running, updating it, stabilizing it. For many, IT isn’t their business, but a tool to achieve their goals. A Kubernetes cluster is fantastic, but it requires maintenance. Or it’s external - so it’s no longer ours. We’ve lost control of the data. For many, it’s unnecessary to complicate things. And with every additional layer, we’re creating more problems.&lt;/p&gt;
&lt;p&gt;No *BSD system has ever surprised me during an update or a simple reboot. I’ve never encountered, for example, a network interface renaming from &lt;code&gt;enx3e3300c9e14e&lt;/code&gt; to &lt;code&gt;enp10s0f0np0&lt;/code&gt; on Linux, effectively locking me out of a server. &lt;code&gt;ix0&lt;/code&gt; will remain &lt;code&gt;ix0&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;On FreeBSD, I’ve never had to recompile ZFS only to find that the module wouldn’t load, leaving the filesystem unable to mount after a reboot.&lt;/p&gt;
&lt;p&gt;Many of the developers I work with have embraced the challenge. Most of them are passionate about technology, and learning a new operating method has been very interesting. Almost all of them, after experiments, were positive and, in fact, began explicitly requesting "jails" instead of Docker hosts. They started using &lt;a href="https://bastillebsd.org/"&gt;BastilleBSD&lt;/a&gt; to clone "template" jails and deploy them. They learned to access ZFS automatic snapshots and recover lost files. They learned to manage the resources at their disposal without repeating the usual mantra of "we need mooooar powaaaar!" every time there’s a problem, a slowdown, or a storage overload. They’ve returned to trying to understand what’s happening, rather than just assembling pieces, libraries, and containers without considering the effects.&lt;/p&gt;
&lt;p&gt;Others, however, struggled, but remained positive nonetheless. And that’s okay. &lt;strong&gt;I solve problems&lt;/strong&gt; and can’t force my solutions on everyone.&lt;/p&gt;
&lt;p&gt;In other cases, the challenge wasn’t technical but "commercial". Often, decision-makers have little to no technical knowledge. Linux sells well. "Cloud" sells even better. A "NetBSD-based solution", unfortunately, has less commercial appeal today. So, they want what they can sell, without focusing too much on the advantages of alternative solutions.&lt;/p&gt;
&lt;p&gt;Linux today is subject to many compliance requirements - I’m often asked which version of OpenSSH I’m running - and they complain when the version (the latest from OpenBSD) isn’t considered "secure" because it doesn’t match their procedures (e.g., OpenSSH_9.2p1 Debian-2+deb12u3). They don’t understand when I explain that the "Debian" part refers to the Linux distribution, not a release. Those who prepare these documents are often, sadly, unaware of what they’re asking for. They just have a checklist.&lt;/p&gt;
&lt;p&gt;The transition is ongoing, and as I see opportunities, I’m migrating from Linux to the BSDs whenever possible. Today, I can say that &lt;strong&gt;78% of the hypervisors I manage run on FreeBSD, and 66% of the workloads (VPS, jails, hosts, etc.) are running on one of the BSDs&lt;/strong&gt; - including solutions like OPNsense. None of the clients have experienced major issues. No one has complained about performance or reliability. All feedback has been positive. Many appreciate moving away from IT monocultures, especially when problems arise - following the recent severe SSH vulnerability, many clients contacted me, worried. It was nice to tell some of them that the exposed SSH service was running OpenBSD’s version, and that OpenBSD wasn’t affected as they had developed a secure mechanism back in 2001. They appreciated it. They want more setups based on OpenBSD.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;I'm Stefano Marinelli, I solve problems. And I love solving problems using BSD systems.&lt;/strong&gt;&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Thu, 03 Oct 2024 08:53:00 +0200</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2024/10/03/i-solve-problems-eurobsdcon/</guid><category>eurobsdcon</category><category>bsdcan</category><category>freebsd</category><category>netbsd</category><category>openbsd</category><category>zfs</category><category>server</category><category>ownyourdata</category></item><item><title>Moving an entire FreeBSD installation to a new host or VM in a few easy steps</title><link>https://it-notes.dragas.net/2024/09/16/moving-freebsd-installation-new-host-vm/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/hard_disk.webp" alt="Moving an entire FreeBSD installation to a new host or VM in a few easy steps"&gt;&lt;/p&gt;&lt;p&gt;FreeBSD, especially when installed on ZFS, is incredibly simple to manage, &lt;a href="https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/"&gt;back up&lt;/a&gt;, and move to a different system.&lt;/p&gt;
&lt;p&gt;I often find myself having to move an entire system from one host to another, from a physical host to a VM, or vice versa, from a VM to a physical host.&lt;/p&gt;
&lt;p&gt;By following the approach of keeping the operating system clean and running all services within jails, this operation is usually quite simple: install the operating system on the new host, perform the few necessary configurations (like pf and network interfaces), transfer the jail datasets, restart the jails on the new host, and update the DNS.&lt;/p&gt;
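&lt;p&gt;For example, moving a single jail boils down to a handful of commands - a sketch, assuming a Bastille ZFS layout and a hypothetical jail named &lt;code&gt;myjail&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# on the old host: stop the jail and snapshot its datasets recursively
bastille stop myjail
zfs snapshot -r zroot/bastille/jails/myjail@move
# send everything to the new host, which uses the same Bastille ZFS layout
zfs send -R zroot/bastille/jails/myjail@move | ssh root@newhost &amp;quot;zfs receive zroot/bastille/jails/myjail&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
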
&lt;p&gt;However, sometimes I need to move the entire host, including the original operating system. This article will describe this operation when the FreeBSD system uses ZFS as its root file system. This process isn't much more difficult, but there are a few details to keep in mind. Below is a breakdown of the two bootloader scenarios:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;UEFI Bootloader&lt;/li&gt;
&lt;li&gt;BIOS Bootloader (non-UEFI)&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;UEFI Bootloader&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Start from an ISO or image of &lt;a href="https://mfsbsd.vx.sk/"&gt;mfsbsd&lt;/a&gt;. Partition the disk according to the layout of the source system. If the source system is a standard FreeBSD (ZFS) installation, partition the destination disk (assumed to be &lt;code&gt;da0&lt;/code&gt;) as follows. &lt;strong&gt;WARNING:&lt;/strong&gt; The first command will destroy any existing partition table on the disk.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# wipe any existing partition table (DESTRUCTIVE) and create a new GPT scheme
gpart destroy -F /dev/da0
gpart create -s gpt da0
# create and format the EFI system partition
gpart add -t efi -s 200M -l efiboot0 da0
newfs_msdos -F 32 -c 1 /dev/da0p1
# copy the FreeBSD UEFI loader into the default boot path
mount -t msdosfs /dev/da0p1 /mnt
mkdir -p /mnt/EFI/BOOT
cp /boot/loader.efi /mnt/EFI/BOOT/BOOTX64.efi
umount /mnt
&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
&lt;li&gt;Create additional partitions for swap and ZFS:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;gpart add -a 1m -t freebsd-swap -s 2g -l swap0 da0
gpart add -a 1m -t freebsd-zfs -l zfs0 da0
&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
&lt;li&gt;Create the ZFS pool:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zpool create -O compression=zstd -O atime=off zroot /dev/gpt/zfs0
&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
&lt;li&gt;On the source system, create a snapshot of all datasets:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs snapshot -r zroot@snap01
&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; You will need to connect to the source system using a user with privileges over the entire filesystem. For simplicity, I'll use &lt;code&gt;root&lt;/code&gt;, but it's not recommended to leave &lt;code&gt;root&lt;/code&gt; accessible via SSH. Limit access to SSH keys only. To allow root login over SSH, edit &lt;code&gt;/etc/ssh/sshd_config&lt;/code&gt; and set &lt;code&gt;PermitRootLogin yes&lt;/code&gt;. After moving the server, &lt;strong&gt;remember to disable root access&lt;/strong&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If &lt;a href="https://www.maier-komor.de/mbuffer.html"&gt;mbuffer&lt;/a&gt; is installed on the source system, it can help improve the performance of the transfer. If not, remove it from the next pipe command, which should be run from the destination system. The &lt;em&gt;-s&lt;/em&gt; option sets the block size, while &lt;em&gt;-m&lt;/em&gt; sets the buffer size (128 MB here):&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;ssh root@sourceip &amp;quot;zfs send -RLv zroot@snap01 | mbuffer -s 128k -m 128M&amp;quot; | zfs receive -F zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You have two options now:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If the source server was idle and no further synchronization is required, proceed to the next step.&lt;/li&gt;
&lt;li&gt;If the transfer took a long time and services on the source server need to be stopped (e.g., databases), stop what you can, take another snapshot (e.g., &lt;code&gt;zfs snapshot -r zroot@snap02&lt;/code&gt;), and synchronize the differences by running another incremental send-receive from the destination server:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;ssh root@sourceip &amp;quot;zfs send -RLv -i zroot@snap01 zroot@snap02 | mbuffer -s 128k -m 128M&amp;quot; | zfs receive -F zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
&lt;li&gt;Set the boot filesystem:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zpool set bootfs=zroot/ROOT/default zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;BIOS Bootloader&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;Start from an ISO or image of &lt;a href="https://mfsbsd.vx.sk/"&gt;mfsbsd&lt;/a&gt;. Partition the disk according to the layout of the source system. If the source system is a standard FreeBSD (ZFS) installation, partition the destination disk (assumed to be &lt;code&gt;da0&lt;/code&gt;) as follows. &lt;strong&gt;WARNING:&lt;/strong&gt; The first command will destroy any existing partition table on the disk.&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;gpart destroy -F /dev/da0
gpart create -s gpt da0
&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
&lt;li&gt;Create the necessary partitions:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;gpart add -a 1m -t freebsd-boot -s 512k -l boot da0
gpart add -a 1m -t freebsd-swap -s 2g -l swap0 da0
gpart add -a 1m -t freebsd-zfs -l zfs0 da0
&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
&lt;li&gt;Create the ZFS pool:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zpool create -O compression=zstd -O atime=off zroot /dev/gpt/zfs0
&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
&lt;li&gt;On the source system, create a snapshot of all datasets:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs snapshot -r zroot@snap01
&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Important:&lt;/strong&gt; You will need to connect to the source system using a user with privileges over the entire filesystem. For simplicity, I'll use &lt;code&gt;root&lt;/code&gt;, but it's not recommended to leave &lt;code&gt;root&lt;/code&gt; accessible via SSH. Limit access to SSH keys only. To allow root login over SSH, edit &lt;code&gt;/etc/ssh/sshd_config&lt;/code&gt; and set &lt;code&gt;PermitRootLogin yes&lt;/code&gt;.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If &lt;code&gt;mbuffer&lt;/code&gt; is installed on the source system, it can help improve the performance of the transfer. If not, remove it from the next pipe command, which should be run from the destination system:&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;ssh root@sourceip &amp;quot;zfs send -RLv zroot@snap01 | mbuffer -s 128k -m 128M&amp;quot; | zfs receive -F zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You have two options now:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If the source server was idle and no further synchronization is required, proceed to the next step.&lt;/li&gt;
&lt;li&gt;If the transfer took a long time and services on the source server need to be stopped (e.g., databases), stop what you can, take another snapshot (e.g., &lt;code&gt;zfs snapshot -r zroot@snap02&lt;/code&gt;), and synchronize the differences by running another incremental send-receive from the destination server:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;ssh root@sourceip &amp;quot;zfs send -RLv -i zroot@snap01 zroot@snap02 | mbuffer -s 128k -m 128M&amp;quot; | zfs receive -F zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
&lt;li&gt;Install the bootloader on the destination server’s disk:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
&lt;/code&gt;&lt;/pre&gt;

&lt;ul&gt;
&lt;li&gt;Set the boot filesystem:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zpool set bootfs=zroot/ROOT/default zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;At this point, everything should be ready for boot. Reboot and verify that the various mountpoints (e.g., swap, etc.) and network interface settings are correct. If everything works fine, the server will be an exact copy of the original.&lt;/p&gt;
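&lt;p&gt;A few quick checks after the first boot help confirm that everything landed where expected:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# pool health and dataset mountpoints
zpool status zroot
zfs list -o name,mountpoint
# swap in use and network settings for the new hardware
swapinfo
grep -E &amp;quot;ifconfig|defaultrouter&amp;quot; /etc/rc.conf
&lt;/code&gt;&lt;/pre&gt;
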
&lt;p&gt;Moving or duplicating an entire FreeBSD system is simple, fast, and reliable. The ability to use ZFS incremental replication is an excellent method for minimizing downtime, especially for large systems. The initial copy will happen "live" while the source system continues to run without issues. Subsequent incrementals will be much faster, making the switch-over process seamless and efficient.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Mon, 16 Sep 2024 09:41:00 +0200</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2024/09/16/moving-freebsd-installation-new-host-vm/</guid><category>freebsd</category><category>server</category><category>zfs</category><category>tutorial</category><category>hosting</category><category>ownyourdata</category><category>series</category><category>tipsandtricks</category></item><item><title>Automating ZFS Snapshots for Peace of Mind</title><link>https://it-notes.dragas.net/2024/08/21/automating-zfs-snapshots-for-peace-of-mind/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/hard_disk.webp" alt="Automating ZFS Snapshots for Peace of Mind"&gt;&lt;/p&gt;&lt;p&gt;One feature I couldn't live without anymore is snapshots. As system administrators, we often find ourselves in situations where we've made a mistake, need to revert to a previous state, or need access to a log that has been rotated and disappeared. Since I started using ZFS, all of this has become incredibly simple, and I feel much more at ease when making any modifications.&lt;/p&gt;
&lt;p&gt;However, since I don't always remember to create a manual snapshot before starting to work, I use an automatic snapshot system. For this type of snapshot, I use the &lt;a href="https://github.com/psy0rz/zfs_autobackup"&gt;excellent &lt;code&gt;zfs-autobackup&lt;/code&gt; tool&lt;/a&gt; - which I also &lt;a href="https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/"&gt;use for backups&lt;/a&gt;. The goal is to have a single, flexible, and configurable tool without having to learn different syntaxes.&lt;/p&gt;
&lt;h2&gt;Why zfs-autobackup?&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;zfs-autobackup&lt;/code&gt; has several advantages that make it perfect (or nearly so) for my purpose:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;It operates based on "tags" set on individual datasets. I don't have to specify the dataset; I just assign a specific tag (of my choice) to the datasets, and &lt;code&gt;zfs-autobackup&lt;/code&gt; will operate on those datasets, transparently with respect to others. This ensures it will work even on datasets in different zpools.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;It's extremely flexible in management. For example, by setting the correct tag on "zroot", it will automatically manage all underlying datasets. However, it's possible to exclude some datasets for more granular snapshot management, as shown in the sketch after this list.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;It works well on both FreeBSD and Linux - I use it with satisfaction on both platforms.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Different tags allow for different levels of data retention and operation. For example, the "mylocalsnap" tag will be for local snapshots, while "backup_offsite" will be for backups that will be copied off-site. The two tags (and related snapshots) will be independent, even though they operate on the same datasets.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Installation&lt;/h2&gt;
&lt;p&gt;On FreeBSD, installation is straightforward, as there's a ready-made package:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;pkg install py311-zfs-autobackup
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;On Linux, installation depends on the specific distribution. Since the tool is written in Python, it can always be &lt;a href="https://github.com/psy0rz/zfs_autobackup/wiki"&gt;installed using pip&lt;/a&gt;.&lt;/p&gt;
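&lt;p&gt;A minimal sketch of a pip-based install (assuming the package name on PyPI matches the project, as the wiki suggests; adjust for virtual environments or your distribution's policy):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;pip install --upgrade zfs-autobackup
&lt;/code&gt;&lt;/pre&gt;
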
&lt;h2&gt;Configuration&lt;/h2&gt;
&lt;p&gt;Once installed, you just need to assign the tag to the dataset. For example:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs set autobackup:mylocalsnap=true zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This will set a tag called "mylocalsnap" on zroot and underlying datasets, i.e., on the entire main file system of FreeBSD.&lt;/p&gt;
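&lt;p&gt;As mentioned earlier, individual datasets can be excluded for more granular control. In zfs-autobackup this is done by setting the same property to false on the dataset to skip - the dataset below is just an example:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs set autobackup:mylocalsnap=false zroot/tmp
&lt;/code&gt;&lt;/pre&gt;
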
&lt;h2&gt;Usage&lt;/h2&gt;
&lt;p&gt;Now, you just need to run &lt;code&gt;zfs-autobackup&lt;/code&gt;, specifying both the tag and the snapshot retention criteria:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;/usr/local/bin/zfs-autobackup mylocalsnap --keep-source 5min1h,1h1d
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In this case, it will take a (recursive) snapshot of the datasets that have the "mylocalsnap" tag set to true, keeping one snapshot every 5 minutes for an hour, and one every hour for a day.&lt;/p&gt;
&lt;p&gt;On subsequent executions of &lt;code&gt;zfs-autobackup&lt;/code&gt;, snapshots that no longer fall within the retention criteria will be deleted.&lt;/p&gt;
&lt;p&gt;After running this command, here's the result:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;root@fbsnap:~ # zfs list -t snapshot
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
zroot@mylocalsnap-20240820150115                  0B      -    96K  -
zroot/ROOT@mylocalsnap-20240820150115             0B      -    96K  -
zroot/ROOT/default@mylocalsnap-20240820150115     0B      -  1.00G  -
zroot/home@mylocalsnap-20240820150115             0B      -    96K  -
zroot/tmp@mylocalsnap-20240820150115              0B      -   104K  -
zroot/usr@mylocalsnap-20240820150115              0B      -    96K  -
zroot/usr/ports@mylocalsnap-20240820150115        0B      -    96K  -
zroot/usr/src@mylocalsnap-20240820150115          0B      -    96K  -
zroot/var@mylocalsnap-20240820150115              0B      -    96K  -
zroot/var/audit@mylocalsnap-20240820150115        0B      -    96K  -
zroot/var/crash@mylocalsnap-20240820150115        0B      -    96K  -
zroot/var/log@mylocalsnap-20240820150115          0B      -   144K  -
zroot/var/mail@mylocalsnap-20240820150115         0B      -    96K  -
zroot/var/tmp@mylocalsnap-20240820150115          0B      -    96K  -
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;As you can see, snapshots have been taken of all datasets with the tag set.&lt;/p&gt;
&lt;h2&gt;Automation&lt;/h2&gt;
&lt;p&gt;To automate the process, simply modify the &lt;code&gt;/etc/crontab&lt;/code&gt; file and add a line like this:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;*/5     *       *       *       *       root    /usr/local/bin/zfs-autobackup mylocalsnap --keep-source 5min1h,1h1d
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, wait a few minutes and check again:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;root@fbsnap:~ # zfs list -t snapshot
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
zroot@mylocalsnap-20240820150115                  0B      -    96K  -
zroot/ROOT@mylocalsnap-20240820150115             0B      -    96K  -
zroot/ROOT/default@mylocalsnap-20240820150115   212K      -  1.00G  -
zroot/ROOT/default@mylocalsnap-20240820151000   128K      -  1.00G  -
zroot/home@mylocalsnap-20240820150115             0B      -    96K  -
zroot/tmp@mylocalsnap-20240820150115             72K      -   104K  -
zroot/tmp@mylocalsnap-20240820151000              0B      -   104K  -
zroot/usr@mylocalsnap-20240820150115              0B      -    96K  -
zroot/usr/ports@mylocalsnap-20240820150115        0B      -    96K  -
zroot/usr/src@mylocalsnap-20240820150115          0B      -    96K  -
zroot/var@mylocalsnap-20240820150115              0B      -    96K  -
zroot/var/audit@mylocalsnap-20240820150115        0B      -    96K  -
zroot/var/crash@mylocalsnap-20240820150115        0B      -    96K  -
zroot/var/log@mylocalsnap-20240820150115         64K      -   144K  -
zroot/var/log@mylocalsnap-20240820151000         60K      -   144K  -
zroot/var/mail@mylocalsnap-20240820150115         0B      -    96K  -
zroot/var/tmp@mylocalsnap-20240820150115         64K      -    96K  -
zroot/var/tmp@mylocalsnap-20240820151000          0B      -    96K  -
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If everything went as it should, you'll notice that unmodified datasets won't have new snapshots, while those that have been modified since the previous manual execution (e.g., zroot/var/log) will contain both the previous snapshot and the automatic one.&lt;/p&gt;
&lt;h2&gt;Recovering Files from Snapshots&lt;/h2&gt;
&lt;p&gt;There are various ways to recover a file from a previous snapshot. One option is to roll back the entire dataset to the snapshot, but this is rarely the best choice, as it reverts every file in the dataset, not just the one you need.&lt;/p&gt;
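&lt;p&gt;For completeness, a full rollback would look like this - note that &lt;code&gt;zfs rollback&lt;/code&gt; only targets the most recent snapshot unless you add &lt;code&gt;-r&lt;/code&gt;, which destroys any newer snapshots:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs rollback zroot/var/log@mylocalsnap-20240820151000
&lt;/code&gt;&lt;/pre&gt;
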
&lt;p&gt;Another alternative is to go to the hidden snapshot directory. For example:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;root@fbsnap:~ # cd /var/log/.zfs/snapshot/mylocalsnap-20240820151000/
root@fbsnap:/var/log/.zfs/snapshot/mylocalsnap-20240820151000 # ls -l
total 39
-rw-------  1 root wheel     872 Aug 20 15:06 auth.log
-rw-r--r--  1 root wheel   79079 Aug 20 13:19 bsdinstall_log
-rw-------  1 root wheel    5401 Aug 20 15:10 cron
-rw-r--r--  1 root wheel      63 Aug 20 13:20 daemon.log
-rw-------  1 root wheel      63 Aug 20 13:20 debug.log
-rw-r--r--  1 root wheel      63 Aug 20 13:20 devd.log
-rw-r--r--  1 root wheel      63 Aug 20 13:20 lpd-errs
-rw-r-----  1 root wheel      63 Aug 20 13:20 maillog
-rw-r--r--  1 root wheel   17043 Aug 20 15:00 messages
-rw-r-----  1 root network    63 Aug 20 13:20 ppp.log
-rw-------  1 root wheel      63 Aug 20 13:20 security
-rw-r--r--  1 root wheel     197 Aug 20 15:00 utx.lastlogin
-rw-r--r--  1 root wheel     187 Aug 20 15:00 utx.log
-rw-------  1 root wheel      63 Aug 20 13:20 xferlog
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;From here, you can read and recover any file present in the individual snapshot. The snapshots are read-only, so you won't be able to write to them.&lt;/p&gt;
&lt;h2&gt;Creating a Writable Copy of a Snapshot&lt;/h2&gt;
&lt;p&gt;Should you need a read-write copy of a specific snapshot, you can use the &lt;code&gt;zfs clone&lt;/code&gt; command:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;root@fbsnap:~ # zfs clone zroot/var/log@mylocalsnap-20240820151000 zroot/recover
root@fbsnap:~ # ls -l /zroot/recover/
total 39
-rw-------  1 root wheel     872 Aug 20 15:06 auth.log
-rw-r--r--  1 root wheel   79079 Aug 20 13:19 bsdinstall_log
-rw-------  1 root wheel    5401 Aug 20 15:10 cron
-rw-r--r--  1 root wheel      63 Aug 20 13:20 daemon.log
-rw-------  1 root wheel      63 Aug 20 13:20 debug.log
-rw-r--r--  1 root wheel      63 Aug 20 13:20 devd.log
-rw-r--r--  1 root wheel      63 Aug 20 13:20 lpd-errs
-rw-r-----  1 root wheel      63 Aug 20 13:20 maillog
-rw-r--r--  1 root wheel   17043 Aug 20 15:00 messages
-rw-r-----  1 root network    63 Aug 20 13:20 ppp.log
-rw-------  1 root wheel      63 Aug 20 13:20 security
-rw-r--r--  1 root wheel     197 Aug 20 15:00 utx.lastlogin
-rw-r--r--  1 root wheel     187 Aug 20 15:00 utx.log
-rw-------  1 root wheel      63 Aug 20 13:20 xferlog
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This creates a new dataset zroot/recover that is a writable copy of the snapshot. You can now modify these files as needed, without affecting the original snapshot or the live filesystem.&lt;/p&gt;
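&lt;p&gt;Once you have recovered what you need, the clone can be removed like any other dataset:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs destroy zroot/recover
&lt;/code&gt;&lt;/pre&gt;
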
&lt;h2&gt;Cleaning Up&lt;/h2&gt;
&lt;p&gt;Sometimes you may want to delete all snapshots generated by &lt;code&gt;zfs-autobackup&lt;/code&gt;. There are various ways, but the quickest is often a simple pipe:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs list -t snapshot -o name | grep -i mylocalsnap | xargs -n 1 zfs destroy -vr
&lt;/code&gt;&lt;/pre&gt;
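
&lt;p&gt;If you'd rather verify what will be removed before actually doing it, &lt;code&gt;zfs destroy&lt;/code&gt; supports a dry run. A more cautious variant of the same pipe could be (&lt;code&gt;-H&lt;/code&gt; suppresses the header line, &lt;code&gt;-n&lt;/code&gt; makes it a dry run):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs list -H -t snapshot -o name | grep -i mylocalsnap | xargs -n 1 zfs destroy -nvr
&lt;/code&gt;&lt;/pre&gt;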

&lt;p&gt;By implementing automatic ZFS snapshots, you can work with peace of mind, knowing that you can always revert changes or recover lost files. This setup provides an excellent balance between data protection and system performance.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Wed, 21 Aug 2024 08:41:00 +0200</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2024/08/21/automating-zfs-snapshots-for-peace-of-mind/</guid><category>zfs</category><category>freebsd</category><category>linux</category><category>backup</category><category>data</category><category>filesystems</category><category>snapshots</category><category>recovery</category></item><item><title>From Cloud Chaos to FreeBSD Efficiency</title><link>https://it-notes.dragas.net/2024/07/04/from-cloud-chaos-to-freebsd-efficiency/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/datacenter.webp" alt="From Cloud Chaos to FreeBSD Efficiency"&gt;&lt;/p&gt;&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;A few months ago, a client asked me to take care of their Kubernetes cluster (hosted on AWS and GCP). In their opinion, the costs were exorbitantly high for relatively simple and lean websites. Sure, they had many visits, but nothing too excessive development-wise.&lt;/p&gt;
&lt;p&gt;I kindly declined. Unfortunately, their situation is all too common these days: they hired developers accustomed to working that way, convinced that a system administrator is now unnecessary because "the cloud has infinite potential." They were used to considering optimization as secondary because "we have infinite power" (and this is already a spoiler for the ending).&lt;/p&gt;
&lt;p&gt;Being open to dialogue and new experiences, they asked for my opinion on the matter. We talked for a while, and I explained that, in my view, for the type of setup they had (standard, with various replicas and variants, but primarily based on two platforms), it didn't make sense. I saw it as complicating things. An over-engineering of something simple. Like taking a cruise ship to cross a river.&lt;/p&gt;
&lt;p&gt;They then asked me to create something simple that would serve as a development server and for backups, to understand what kind of solution I had in mind.&lt;/p&gt;
&lt;h2&gt;The Solution&lt;/h2&gt;
&lt;p&gt;So, I started building everything. I began with FreeBSD 13.2-RELEASE, but in the meantime, 14.0-RELEASE came out, so that’s the version I delivered.&lt;/p&gt;
&lt;p&gt;I installed the operating system on a physical server leased from one of the main European providers. Thanks to one of their auctions (good deals can be found on weekends), they got a sufficiently powerful machine - 128GB of RAM, two 1TB NVMe drives, and two 2TB spinning disks - for less than 100 euros per month. They also took another, less powerful machine, for additional backups and to back up the first one.&lt;/p&gt;
&lt;h2&gt;Implementation&lt;/h2&gt;
&lt;p&gt;I decided to keep the host as clean as possible and concentrated the services in jails (managed by BastilleBSD) and VMs. The machine was divided as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A series of bridges - to be used for different projects. Jails of the same project and/or type use the same bridge and can communicate with each other, sharing some resources (MariaDB, etc.).&lt;/li&gt;
&lt;li&gt;A bhyve VM with &lt;a href="https://alpinelinux.org/"&gt;Alpine Linux&lt;/a&gt; - in my opinion, the best distribution for running Docker containers. Do we really need systemd just to launch Docker? They mainly use it as a pre-production test bench, connected via VPN to their company LAN. It is the core of their "online" development, i.e., outside their computers. It has 32GB of RAM, 200GB of disk (obviously bhyve is configured with NVMe drivers), and 4 cores assigned.&lt;/li&gt;
&lt;li&gt;A VNET jail with a reverse proxy (nginx) - they know how to modify virtual hosts and generate certificates with certbot, pointing to the underlying jails.&lt;/li&gt;
&lt;li&gt;A series of "empty" VNET jails, to be cloned, for each type of setup (they mainly have CMS based on WordPress and Laravel, so with all dependencies inside - nginx, php, redis, etc. except the databases).&lt;/li&gt;
&lt;li&gt;A VNET jail with MariaDB installed, to be cloned, to be attached to different projects as needed.&lt;/li&gt;
&lt;li&gt;zfs-autobackup performs local snapshots, keeping one every 15 minutes for 3 hours, one per hour for 24 hours, and one per day for 3 days - see the example command after this list.&lt;/li&gt;
&lt;/ul&gt;
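&lt;p&gt;As a sketch, that retention policy maps to a zfs-autobackup invocation like the following (the tag name is illustrative):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;/usr/local/bin/zfs-autobackup localsnap --keep-source 15min3h,1h1d,1d3d
&lt;/code&gt;&lt;/pre&gt;
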
&lt;p&gt;Backups &lt;a href="https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/"&gt;are also performed using zfs-autobackup&lt;/a&gt; and, to allow rapid disaster recovery, a zfs send (with a corresponding zfs receive) runs every 10 minutes towards another machine (the other, smaller one, also taken at auction), with the same bridges, firewall rules, BastilleBSD, and bhyve installed - ready to start in case of disaster. Since this is a test server, we didn't consider implementing proper HA - at the moment, it wouldn't make sense.&lt;/p&gt;
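&lt;p&gt;A minimal sketch of such a periodic replication - snapshot names and the destination host are illustrative, and in practice zfs-autobackup takes care of snapshot naming and rotation:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Replicate the changes since the previous snapshot to the standby machine
zfs snapshot -r zroot@repl-new
zfs send -R -i zroot@repl-prev zroot@repl-new | ssh backuphost zfs receive -F zroot-copy
&lt;/code&gt;&lt;/pre&gt;
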
&lt;p&gt;They also have another job with zfs-autobackup that performs an additional backup on a server (Debian in their offices). &lt;a href="https://my-notes.dragas.net/posts/2024/who-is-the-real-owner-of-your-data/"&gt;Safe data, in my opinion, are those in storage under your b...ench&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I delivered everything to them and gave the more experienced devs a brief course on how to manage things. I didn't cover the Alpine Linux VM, but I showed them the jails: how to clone, configure, and manage them.&lt;/p&gt;
&lt;h2&gt;Real-world Testing&lt;/h2&gt;
&lt;p&gt;I didn't hear from them for a while. After a few weeks, one of the devs contacted me urgently because a junior had unfortunately made a mistake and deleted an entire project from one of the jails. I explained that the local snapshots could be restored with a single command, and he was thrilled. He restored both the development jail and the database jail from the snapshots taken two minutes before the "mishap", and they were back up and running immediately.&lt;/p&gt;
&lt;p&gt;I realized that this event would change some of their procedures and criteria.&lt;/p&gt;
&lt;p&gt;I then didn't hear from anyone for months. This morning, I received a call from their manager, whom I hadn't spoken to since the beginning, and he told me how things had been going over these past months.&lt;/p&gt;
&lt;h2&gt;Lessons Learned&lt;/h2&gt;
&lt;p&gt;First, this person has good communication and commercial skills but little technical background. He is open-minded and tends to study carefully what is proposed to him, and he doesn't discard any solution a priori, without first weighing its pros and cons.&lt;/p&gt;
&lt;p&gt;They had leased servers with cPanel and were putting their content on them. The devs who arrived a few years ago suggested a technological transition, eliminating these "obsolete" servers and "outdated" methodologies, pushing everything to the cloud and containerizing everything. When we first talked, he told me they had been "lucky to make that transition, because the load had increased enormously and the old servers probably wouldn't have handled it" - in his view, autoscaling had saved them. I had some reservations about autoscaling without proper controls, but clearly I cannot impose my choices on others.&lt;/p&gt;
&lt;p&gt;To cut a long story short: seeing what happened with that junior dev's mistake (and how simple it was to get running again immediately), they decided to increase their use of FreeBSD jails and reduce, at least for secondary loads, the use of their Kubernetes-managed cloud.&lt;/p&gt;
&lt;p&gt;As they transitioned to jails, however, they noticed some slowdowns, which worsened day by day. According to the devs, the answer was to go back to autoscaling ("we need moar powaaaaar!!!") but, fortunately, their boss decided to investigate carefully. They discovered that these workloads (based on &lt;a href="https://laravel.com/"&gt;Laravel&lt;/a&gt;) were storing sessions on files. Over time, these millions of files (several gigabytes per day) slowed everything down because, for certain operations, Laravel scanned the entire session directory. In other words, on the "cloud" they needed much more power than necessary (and much more disk space, though that was cheaper) to carry a load that was, in fact, unnecessary.&lt;/p&gt;
&lt;p&gt;After realizing this, they moved the sessions to Redis. Needless to say, everything became dramatically faster, even compared to the previous setup on Kubernetes with autoscaling.&lt;/p&gt;
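&lt;p&gt;For reference, this kind of fix in Laravel is usually just a configuration change. A hedged sketch of the relevant &lt;code&gt;.env&lt;/code&gt; entries (exact keys depend on the Laravel version and the Redis setup):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# .env - move session storage from the filesystem to Redis (illustrative values)
SESSION_DRIVER=redis
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
&lt;/code&gt;&lt;/pre&gt;
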
&lt;p&gt;At that point, it was clear that one of the problems with their setup was (as often happens) poor optimization. Today, there's a tendency to rush, to "throw in" functions, features, libraries, plugins, etc. without considering the interactions and consequences. If it works, it's fine - even if it increases computational complexity exponentially just to, say, change the color of an icon (an absurd example, but it gives the idea).&lt;/p&gt;
&lt;p&gt;They then started moving the main Laravel workloads as well (thanks to the optimizations implemented), and began moving some of the WordPress sites, even though they were quite concerned about these. In the cluster, every day, at fairly irregular intervals, the load would rise and everything would slow down until autoscaling scaled up to the imposed limits. The CPU sat at 100% on all containers, and the devs noticed that the load came from a series of "php" processes. Recreating the containers helped for a few minutes but did not solve the problem.&lt;/p&gt;
&lt;p&gt;To their great surprise, all this did not happen on the FreeBSD jails. The load was significantly lower, without any of these spikes. Satisfied, they decided to use this as their final setup. One of the devs, however, wanted to get to the bottom of it and decided to run a test: he moved some of these WordPress sites to the Alpine VM, on Docker. At that point, the spikes resumed, saturating the CPU of the Alpine machine.&lt;/p&gt;
&lt;p&gt;Without going into details, they eventually realized that there was a vulnerability in one (or more) of the many plugins installed on the WordPress sites, which was being exploited to inject a process, probably a cryptominer. The name given to the process was "php" - so the devs, not being system experts, did not worry about understanding better whether it was really php or another process pretending to be it. On FreeBSD, all this did not happen because the injected executable could not run - there was no &lt;a href="https://docs.freebsd.org/en/books/handbook/linuxemu/"&gt;Linux compatibility&lt;/a&gt; activated on the server.&lt;/p&gt;
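&lt;p&gt;As a quick check (a minimal sketch; module names can vary by release), you can verify on a FreeBSD host whether the Linuxulator is loaded or enabled at boot:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# kldstat shows any loaded Linux compatibility modules; sysrc shows the boot setting
kldstat | grep -i linux
sysrc -n linux_enable
&lt;/code&gt;&lt;/pre&gt;
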
&lt;p&gt;Until then, they had considered these (expensive) spikes organic traffic and did not worry too much about them - paying, in effect, for their friendly intruders to mine.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;They asked me to help, as much as possible, to &lt;a href="https://it-notes.dragas.net/2022/02/05/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-1-system-and-jails-setup/"&gt;move other services to FreeBSD&lt;/a&gt;. It won't be easy - we will probably need to rely on bhyve a lot - but they have decided that this is the platform they want to focus on in the coming years.&lt;/p&gt;
&lt;p&gt;Undoubtedly, this is a success story of FreeBSD and, indirectly, of correct and careful management of one's resources. Too often today, there is the superficial belief that the cloud, with its "infinite" resources, is the solution to all problems. And that Kubernetes is the best solution for everything. I, on the other hand, have always believed that there is the right tool for everything. You can hammer a nail with a screwdriver, but it's not the most suitable and efficient tool.&lt;/p&gt;
&lt;p&gt;Today they spend about 1/10 of what they used to spend, and they have more control over their data and the tools they use. Undoubtedly, part of the problem was poor optimization and oversight by those managing the infrastructure, but the question is: how often do people decide that, in the end, it is okay to spend more (especially if it is someone else's money) rather than spend hours digging into such a situation? Having defined and limited resources (even generous ones) poses a different kind of problem - one of optimization. And in an age of energy and resource savings, it might be wise to give optimization more importance.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Abundance led to waste&lt;/em&gt;.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Thu, 04 Jul 2024 08:41:00 +0200</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2024/07/04/from-cloud-chaos-to-freebsd-efficiency/</guid><category>freebsd</category><category>zfs</category><category>backup</category><category>data</category><category>filesystems</category><category>snapshots</category><category>recovery</category><category>networking</category><category>security</category><category>server</category><category>hosting</category><category>linux</category><category>ownyourdata</category><category>jail</category><category>virtualization</category><category>alpine</category><category>bhyve</category><category>docker</category></item><item><title>Enhancing FreeBSD Stability with ZFS Pool Checkpoints</title><link>https://it-notes.dragas.net/2024/07/01/enhancing-freebsd-stability-with-zfs-pool-checkpoints/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/hard_disk.webp" alt="Enhancing FreeBSD Stability with ZFS Pool Checkpoints"&gt;&lt;/p&gt;&lt;p&gt;ZFS offers many interesting features, and one of the most widely used is the ability to create and transfer snapshots of entire datasets, even recursively. This approach is useful for &lt;a href="https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/"&gt;backups or maintaining a specific “point in time” for datasets&lt;/a&gt;. For example, on FreeBSD, automatic snapshots of the dataset containing the root file system have been taken with each system upgrade for several releases. This way, thanks to Boot Environments, if there are any problems, it is possible to reboot from a previous clone.&lt;/p&gt;
&lt;p&gt;However, sometimes we might need something more. Local snapshots do not protect against the deletion of entire datasets or the activation of new features that could potentially cause problems or incompatibilities.&lt;/p&gt;
&lt;p&gt;A very useful tool that I have successfully used for some time is the pool checkpoint feature. This feature, imported from Illumos to FreeBSD in 2018, allows creating a sort of snapshot of the entire pool, including features, metadata, etc.&lt;/p&gt;
&lt;p&gt;The checkpoint is different from snapshots of individual datasets. It is not possible to have more than one checkpoint, and some operations like &lt;code&gt;remove&lt;/code&gt;, &lt;code&gt;attach&lt;/code&gt;, &lt;code&gt;detach&lt;/code&gt;, &lt;code&gt;split&lt;/code&gt;, and &lt;code&gt;reguid&lt;/code&gt; will be impossible when a checkpoint exists. This also has a side effect: if there is a checkpoint, deleting a dataset will not release free space because the data will still be physically present in the storage thanks to the checkpoint.&lt;/p&gt;
&lt;p&gt;Additionally, checkpoints are detected by the FreeBSD boot loader. When booting the system, the boot loader will offer the option to perform a "Rewind ZFS checkpoint" and boot from that point, effectively discarding everything that occurred after the checkpoint. This option can be particularly useful in emergencies or when you need to quickly undo recent changes.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://man.freebsd.org/cgi/man.cgi?zpool-checkpoint"&gt;Creating a checkpoint is very simple&lt;/a&gt;. Just use the command:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zpool checkpoint &amp;lt;pool&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The operation is usually quick. When a checkpoint is present, the command &lt;code&gt;zpool status&lt;/code&gt; will show its details. For example:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;pool: zroot
state: ONLINE
scan: scrub repaired 0B in 00:00:12 with 0 errors on Fri May 17 13:27:14 2024
checkpoint: created Sun Jun 30 12:30:51 2024, consumes 1.34M
config:

    NAME        STATE     READ WRITE CKSUM
    zroot       ONLINE       0     0     0
      ada1p4    ONLINE       0     0     0

errors: No known data errors
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To delete the checkpoint, you can use the command:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zpool checkpoint -d &amp;lt;pool&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To roll back the pool to the checkpoint state and discard the checkpoint, export the pool and re-import it with:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zpool import --rewind-to-checkpoint &amp;lt;pool&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;To import the pool read-only at the checkpoint state, so you can inspect it without actually rewinding the data:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zpool import --read-only=on --rewind-to-checkpoint &amp;lt;pool&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;It is therefore possible to generate a checkpoint automatically via cron or manually when necessary, for example, before an operating system upgrade.&lt;/p&gt;
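&lt;p&gt;A minimal sketch of such an automated refresh (the pool name is illustrative). Since only one checkpoint can exist per pool, the previous one must be discarded first:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;#!/bin/sh
# Refresh the pool checkpoint: discard the old one (ignoring the error if none
# exists), then create a new one. Suitable for cron or a pre-upgrade hook.
zpool checkpoint -d zroot || true
zpool checkpoint zroot
&lt;/code&gt;&lt;/pre&gt;
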
&lt;p&gt;For more technical details, I suggest reading &lt;a href="https://freebsdfoundation.org/wp-content/uploads/2019/01/ZPool-Checkpoint.pdf"&gt;this excellent article by Serapheim Dimitropoulos&lt;/a&gt;, published in the FreeBSD Journal in January 2019.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Mon, 01 Jul 2024 09:41:00 +0200</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2024/07/01/enhancing-freebsd-stability-with-zfs-pool-checkpoints/</guid><category>freebsd</category><category>zfs</category><category>backup</category><category>data</category><category>filesystems</category><category>snapshots</category><category>recovery</category></item><item><title>Proxmox vs FreeBSD: Which Virtualization Host Performs Better?</title><link>https://it-notes.dragas.net/2024/06/10/proxmox-vs-freebsd-which-virtualization-host-performs-better/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/server_rack.webp" alt="Proxmox vs FreeBSD: Which Virtualization Host Performs Better?"&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;: Skip to the &lt;a href="/2024/06/10/proxmox-vs-freebsd-which-virtualization-host-performs-better/#heading-11"&gt;Conclusion&lt;/a&gt; for a summary. &lt;/p&gt;
&lt;h3&gt;Preamble&lt;/h3&gt;
&lt;p&gt;I have always been passionate about virtualization and have consistently used it. &lt;/p&gt;
&lt;p&gt;The first solution I installed on my infrastructures (and those of clients) was &lt;a href="https://it-notes.dragas.net/2023/08/27/that-old-netbsd-server-running-since-2010/"&gt;Xen on NetBSD&lt;/a&gt;, with great success. I then used Xen on Linux and, since 2012, OpenNebula, followed by Proxmox in 2013. &lt;a href="https://it-notes.dragas.net/categories/proxmox/"&gt;Proxmox&lt;/a&gt; has always given me great satisfaction, and even today I consider it a valuable platform that I install gladly. I have also used other hypervisors like &lt;a href="https://xcp-ng.org/"&gt;XCP-ng&lt;/a&gt; but less frequently, and in recent years, I have started to make extensive use of bhyve. &lt;/p&gt;
&lt;p&gt;About two and a half years ago, &lt;a href="https://it-notes.dragas.net/2022/01/24/why-were-migrating-many-of-our-servers-from-linux-to-freebsd/"&gt;we began a progressive process of migrating our servers (and those of our clients) from Linux to FreeBSD&lt;/a&gt;, &lt;a href="https://it-notes.dragas.net/2022/02/05/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-1-system-and-jails-setup/"&gt;using jails&lt;/a&gt; (when possible) or VMs on bhyve. In some cases, &lt;a href="https://it-notes.dragas.net/2023/03/14/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-3/"&gt;migrating setups from Proxmox to FreeBSD&lt;/a&gt; resulted in performance improvements, even with the same hardware. In some instances, 
I migrated VMs without notifying clients, and they contacted me a few days later to inquire if we had new hardware because they noticed better performance. &lt;/p&gt;
&lt;p&gt;After years, I decided to conduct a test to determine if this was just a perception or if there was a technical basis behind it. Of course, &lt;em&gt;this test has no scientific validity&lt;/em&gt;, and the results were obtained on specific hardware and at a specific time, so on different hardware, workload, and situations, the results could be entirely opposite. 
However, I tried to have as scientific and objective an approach as possible since I am comparing two solutions that I care about and use daily. &lt;/p&gt;
&lt;h3&gt;Hardware and Test Conditions&lt;/h3&gt;
&lt;p&gt;I often see comparative tests done on VMs from various providers. In my opinion, this comparison makes no sense because a VM from any provider shares its hardware with many other VMs, so the results will vary depending on the load of the "neighbors" and will never be reliable. &lt;/p&gt;
&lt;p&gt;For this test, I decided to take a physical server with the following characteristics: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Intel Core i7-6700 &lt;/li&gt;
&lt;li&gt;2x SSD M.2 NVMe 512 GB &lt;/li&gt;
&lt;li&gt;4x RAM 16384 MB DDR4 &lt;/li&gt;
&lt;li&gt;NIC 1 Gbit Intel I219-LM &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The hardware is not recent, but still very widespread. On more recent hardware, the results might differ, but the test will be based on this configuration. &lt;/p&gt;
&lt;p&gt;I installed &lt;a href="https://www.proxmox.com/en/"&gt;Proxmox 8.2.2&lt;/a&gt; starting from the Debian template of the provider and &lt;a href="https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm"&gt;manually installed it following the instructions&lt;/a&gt;. I created a partition for Proxmox and left one partition free on each of the two NVMe drives, used (at different times) to create the ZFS pool (in mirror) and the LVM on top of Linux software RAID. &lt;/p&gt;
&lt;p&gt;After all the tests, I installed &lt;a href="https://www.freebsd.org/"&gt;FreeBSD&lt;/a&gt; 14.1-RELEASE on ZFS on the same host, using &lt;em&gt;bsdinstall&lt;/em&gt; from an &lt;a href="https://mfsbsd.vx.sk/"&gt;mfsbsd image&lt;/a&gt; since the provider does not directly support installing FreeBSD from its panel or rescue mode. &lt;/p&gt;
&lt;p&gt;In both installations, I always trimmed the NVMe drives before starting the tests, and in the case of ZFS, I set (both on Proxmox and FreeBSD) compression to zstd and atime to off. 
No other changes were made compared to the standard installation. &lt;/p&gt;
&lt;p&gt;On FreeBSD, the VM was created and managed with &lt;a href="https://github.com/churchers/vm-bhyve"&gt;vm-bhyve (devel)&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;On Proxmox, I tested the physical host on ZFS and ext4 and the VM on ZFS and LVM as LVM is the standard and most common setup in Proxmox. &lt;/p&gt;
&lt;p&gt;On FreeBSD, I tested the host on ZFS and the VM with both virtio and nvme drivers, on zvol, and as an image file within a ZFS dataset. &lt;/p&gt;
&lt;p&gt;I used &lt;a href="https://github.com/akopytov/sysbench"&gt;sysbench&lt;/a&gt; installed from the official Debian repository (on Proxmox and VM) and from the FreeBSD package on the respective host. &lt;/p&gt;
&lt;p&gt;The VMs, both on Proxmox and FreeBSD, have nearly identical characteristics and default configuration (apart from the nvme drivers set on bhyve, and for that reason, I also tested virtio).&lt;/p&gt;
&lt;p&gt;For those who want to reproduce my tests, here are the detailed configurations of the VMs used in bhyve:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;FreeBSD bhyve VM Configuration with NVMe Driver&lt;/strong&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;loader=&amp;quot;uefi&amp;quot;
cpu=4
memory=4096M
network0_type=&amp;quot;virtio-net&amp;quot;
network0_switch=&amp;quot;public&amp;quot;
disk0_type=&amp;quot;nvme&amp;quot;
disk0_name=&amp;quot;disk0.img&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;FreeBSD bhyve VM Configuration with virtio Driver&lt;/strong&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;loader=&amp;quot;uefi&amp;quot;
cpu=4
memory=4096M
network0_type=&amp;quot;virtio-net&amp;quot;
network0_switch=&amp;quot;public&amp;quot;
disk0_type=&amp;quot;virtio-blk&amp;quot;
disk0_name=&amp;quot;disk0.img&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Proxmox VM Configuration&lt;/strong&gt;:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Component&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Details&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Memory&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;4.00 GiB [balloon=0]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Processors&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;4 (1 sockets, 4 cores) [x86-64-v2-AES]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;BIOS&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Default (SeaBIOS)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Display&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Default&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Machine&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Default (i440fx)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;SCSI Controller&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;VirtIO SCSI single&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;CD/DVD Drive (ide2)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;local:iso/debian-12.5.0-amd64-netinst.iso,media=cdrom,size=629M&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Hard Disk (scsi0)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;zfspool:vm-100-disk-0,cache=writeback,discard=on,iothread=1,size=50G,ssd=1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Network Device (net0)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;virtio=BC:24:11:22:3D:F0,bridge=vmbr0&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;In all the configurations, I used Debian 12 as the VM operating system, with the file system on ext4.&lt;/p&gt;
&lt;p&gt;I chose Debian 12 as it is a stable, widespread, and modern Linux distribution. I did not test a FreeBSD VM because, in my setups, &lt;a href="https://it-notes.dragas.net/2023/11/27/migrating-from-vm-to-hierarchical-jails-freebsd/"&gt;I tend not to virtualize FreeBSD on FreeBSD but to use nested jails&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;All tests were performed multiple times, and I took the median results. CPU and RAM were tested only on the first VM (on Proxmox (ZFS) and FreeBSD (ZFS and NVMe)) since they do not depend on the underlying storage. Storage performance, on the other hand, was tested on all configurations.&lt;/p&gt;
&lt;h3&gt;CPU and RAM Tests on VMs&lt;/h3&gt;
&lt;p&gt;On both VMs:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;sysbench --test=cpu --cpu-max-prime=20000 run
sysbench --test=memory run
&lt;/code&gt;&lt;/pre&gt;

&lt;h4&gt;Comparative Results&lt;/h4&gt;
&lt;h5&gt;CPU Test&lt;/h5&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Configuration&lt;/th&gt;
&lt;th&gt;Events per Second&lt;/th&gt;
&lt;th&gt;Total Time (s)&lt;/th&gt;
&lt;th&gt;Latency (avg) (ms)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Proxmox&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;498.08&lt;/td&gt;
&lt;td&gt;10.0010&lt;/td&gt;
&lt;td&gt;2.01&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;FreeBSD&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;473.65&lt;/td&gt;
&lt;td&gt;10.0019&lt;/td&gt;
&lt;td&gt;2.11&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h5&gt;CPU Percentage Analysis&lt;/h5&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Difference in Events per Second&lt;/strong&gt;: (473.65 - 498.08) / 498.08 ≈ -4.91%&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Difference in Total Time&lt;/strong&gt;: (10.0019 - 10.0010) / 10.0010 ≈ +0.009%&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Difference in Latency (avg)&lt;/strong&gt;: (2.11 - 2.01) / 2.01 ≈ +4.98%&lt;/li&gt;
&lt;/ul&gt;
&lt;h5&gt;RAM Test&lt;/h5&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Configuration&lt;/th&gt;
&lt;th&gt;Total Operations&lt;/th&gt;
&lt;th&gt;Operations per Second&lt;/th&gt;
&lt;th&gt;Total MiB Transferred&lt;/th&gt;
&lt;th&gt;MiB/sec&lt;/th&gt;
&lt;th&gt;Latency (avg) (ms)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Proxmox&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;64777227&lt;/td&gt;
&lt;td&gt;6476757.59&lt;/td&gt;
&lt;td&gt;63259.01&lt;/td&gt;
&lt;td&gt;6324.96&lt;/td&gt;
&lt;td&gt;0.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;FreeBSD&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;68621063&lt;/td&gt;
&lt;td&gt;6861139.06&lt;/td&gt;
&lt;td&gt;67012.76&lt;/td&gt;
&lt;td&gt;6700.33&lt;/td&gt;
&lt;td&gt;0.00&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h5&gt;RAM Percentage Analysis&lt;/h5&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Difference in Total Operations&lt;/strong&gt;: (68621063 - 64777227) / 64777227 ≈ +5.94%&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Difference in Operations per Second&lt;/strong&gt;: (6861139.06 - 6476757.59) / 6476757.59 ≈ +5.94%&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Difference in Total MiB Transferred&lt;/strong&gt;: (67012.76 - 63259.01) / 63259.01 ≈ +5.93%&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Difference in MiB/sec&lt;/strong&gt;: (6700.33 - 6324.96) / 6324.96 ≈ +5.93%&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;CPU and RAM Comparative Results Table&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Test&lt;/th&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Proxmox (KVM)&lt;/th&gt;
&lt;th&gt;FreeBSD (bhyve)&lt;/th&gt;
&lt;th&gt;Difference (%)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CPU&lt;/td&gt;
&lt;td&gt;Events/s&lt;/td&gt;
&lt;td&gt;498.08&lt;/td&gt;
&lt;td&gt;473.65&lt;/td&gt;
&lt;td&gt;-4.91&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Time (s)&lt;/td&gt;
&lt;td&gt;10.0010&lt;/td&gt;
&lt;td&gt;10.0019&lt;/td&gt;
&lt;td&gt;+0.009&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Latency&lt;/td&gt;
&lt;td&gt;2.01&lt;/td&gt;
&lt;td&gt;2.11&lt;/td&gt;
&lt;td&gt;+4.98&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RAM&lt;/td&gt;
&lt;td&gt;Ops&lt;/td&gt;
&lt;td&gt;64777227&lt;/td&gt;
&lt;td&gt;68621063&lt;/td&gt;
&lt;td&gt;+5.94&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Ops/s&lt;/td&gt;
&lt;td&gt;6476757.59&lt;/td&gt;
&lt;td&gt;6861139.06&lt;/td&gt;
&lt;td&gt;+5.94&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;MiB&lt;/td&gt;
&lt;td&gt;63259.01&lt;/td&gt;
&lt;td&gt;67012.76&lt;/td&gt;
&lt;td&gt;+5.93&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;MiB/s&lt;/td&gt;
&lt;td&gt;6324.96&lt;/td&gt;
&lt;td&gt;6700.33&lt;/td&gt;
&lt;td&gt;+5.93&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Latency&lt;/td&gt;
&lt;td&gt;0.00&lt;/td&gt;
&lt;td&gt;0.00&lt;/td&gt;
&lt;td&gt;0.00&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;Interpretation of CPU and RAM Results&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;CPU Performance&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;The VM on FreeBSD has slightly lower CPU performance compared to Proxmox (-4.91% in events per second).&lt;/li&gt;
&lt;li&gt;The total execution time is nearly identical, with a negligible difference.&lt;/li&gt;
&lt;li&gt;The average latency is slightly higher on FreeBSD (+4.98%).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;RAM Performance&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;The VM on FreeBSD has better RAM performance compared to Proxmox (+5.94% in operations and MiB/sec).&lt;/li&gt;
&lt;li&gt;The average latency is identical in both configurations.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In summary, while Proxmox achieves slightly higher CPU throughput, FreeBSD demonstrates superior memory performance. The choice between Proxmox and FreeBSD may depend on the specific workload and on whether CPU throughput or memory throughput matters more.&lt;/p&gt;
&lt;h3&gt;I/O Performance Tests&lt;/h3&gt;
&lt;p&gt;The test has been conducted using sysbench, with this command line:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;sysbench --test=fileio --file-total-size=30G prepare
sysbench --test=fileio --file-total-size=30G --file-test-mode=rndrw  --max-time=300 --max-requests=0 run
&lt;/code&gt;&lt;/pre&gt;
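
&lt;p&gt;After each run, the 30G of test files left behind by the prepare step can be removed with the matching cleanup command:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;sysbench --test=fileio --file-total-size=30G cleanup
&lt;/code&gt;&lt;/pre&gt;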

&lt;h4&gt;I/O Comparative Performance Data with Percentage Differences&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;VM on Proxmox (ZFS)&lt;/th&gt;
&lt;th&gt;VM on Proxmox (LVM)&lt;/th&gt;
&lt;th&gt;VM on FreeBSD (ZFS, NVMe)&lt;/th&gt;
&lt;th&gt;VM on FreeBSD (ZFS, Virtio)&lt;/th&gt;
&lt;th&gt;VM on FreeBSD (zvol)&lt;/th&gt;
&lt;th&gt;Host FreeBSD (ZFS)&lt;/th&gt;
&lt;th&gt;Host Proxmox (ZFS)&lt;/th&gt;
&lt;th&gt;Host Proxmox (ext4)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;File creation speed (MiB/s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;407.82&lt;/td&gt;
&lt;td&gt;461.52&lt;/td&gt;
&lt;td&gt;1467.83&lt;/td&gt;
&lt;td&gt;1398.81&lt;/td&gt;
&lt;td&gt;1333.64&lt;/td&gt;
&lt;td&gt;1625.67&lt;/td&gt;
&lt;td&gt;968.64&lt;/td&gt;
&lt;td&gt;633.13&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Reads per second&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;650.09&lt;/td&gt;
&lt;td&gt;504.80&lt;/td&gt;
&lt;td&gt;11183.44&lt;/td&gt;
&lt;td&gt;806.93&lt;/td&gt;
&lt;td&gt;11834.53&lt;/td&gt;
&lt;td&gt;1234.62&lt;/td&gt;
&lt;td&gt;920.95&lt;/td&gt;
&lt;td&gt;498.37&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Writes per second&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;433.40&lt;/td&gt;
&lt;td&gt;336.54&lt;/td&gt;
&lt;td&gt;7455.62&lt;/td&gt;
&lt;td&gt;537.95&lt;/td&gt;
&lt;td&gt;7889.69&lt;/td&gt;
&lt;td&gt;823.08&lt;/td&gt;
&lt;td&gt;613.96&lt;/td&gt;
&lt;td&gt;332.25&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;fsyncs per second&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1387.08&lt;/td&gt;
&lt;td&gt;1076.97&lt;/td&gt;
&lt;td&gt;23858.08&lt;/td&gt;
&lt;td&gt;1721.79&lt;/td&gt;
&lt;td&gt;25247.36&lt;/td&gt;
&lt;td&gt;2634.01&lt;/td&gt;
&lt;td&gt;1964.96&lt;/td&gt;
&lt;td&gt;1063.19&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Read throughput (MiB/s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;10.16&lt;/td&gt;
&lt;td&gt;7.89&lt;/td&gt;
&lt;td&gt;174.74&lt;/td&gt;
&lt;td&gt;12.61&lt;/td&gt;
&lt;td&gt;184.91&lt;/td&gt;
&lt;td&gt;19.29&lt;/td&gt;
&lt;td&gt;14.39&lt;/td&gt;
&lt;td&gt;7.79&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Write throughput (MiB/s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;6.77&lt;/td&gt;
&lt;td&gt;5.26&lt;/td&gt;
&lt;td&gt;116.49&lt;/td&gt;
&lt;td&gt;8.41&lt;/td&gt;
&lt;td&gt;123.28&lt;/td&gt;
&lt;td&gt;12.86&lt;/td&gt;
&lt;td&gt;9.59&lt;/td&gt;
&lt;td&gt;5.19&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total events&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;741163&lt;/td&gt;
&lt;td&gt;575588&lt;/td&gt;
&lt;td&gt;12749157&lt;/td&gt;
&lt;td&gt;919952&lt;/td&gt;
&lt;td&gt;13491459&lt;/td&gt;
&lt;td&gt;1407592&lt;/td&gt;
&lt;td&gt;1049894&lt;/td&gt;
&lt;td&gt;568277&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Average latency (ms)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;0.40&lt;/td&gt;
&lt;td&gt;0.52&lt;/td&gt;
&lt;td&gt;0.02&lt;/td&gt;
&lt;td&gt;0.33&lt;/td&gt;
&lt;td&gt;0.02&lt;/td&gt;
&lt;td&gt;0.21&lt;/td&gt;
&lt;td&gt;0.29&lt;/td&gt;
&lt;td&gt;0.53&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;95th percentile latency (ms)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;2.30&lt;/td&gt;
&lt;td&gt;3.25&lt;/td&gt;
&lt;td&gt;0.06&lt;/td&gt;
&lt;td&gt;1.58&lt;/td&gt;
&lt;td&gt;0.05&lt;/td&gt;
&lt;td&gt;1.32&lt;/td&gt;
&lt;td&gt;1.79&lt;/td&gt;
&lt;td&gt;2.71&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Max latency (ms)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;22.65&lt;/td&gt;
&lt;td&gt;32.30&lt;/td&gt;
&lt;td&gt;35.49&lt;/td&gt;
&lt;td&gt;13.60&lt;/td&gt;
&lt;td&gt;77.53&lt;/td&gt;
&lt;td&gt;9.03&lt;/td&gt;
&lt;td&gt;9.47&lt;/td&gt;
&lt;td&gt;17.39&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total test time (s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;300.0475&lt;/td&gt;
&lt;td&gt;300.1147&lt;/td&gt;
&lt;td&gt;300.0020&lt;/td&gt;
&lt;td&gt;300.0226&lt;/td&gt;
&lt;td&gt;300.0012&lt;/td&gt;
&lt;td&gt;300.0416&lt;/td&gt;
&lt;td&gt;300.0159&lt;/td&gt;
&lt;td&gt;300.1381&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h4&gt;Percentage Differences Compared to VM on Proxmox (ZFS)&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;VM on Proxmox (LVM)&lt;/th&gt;
&lt;th&gt;VM on FreeBSD (ZFS, NVMe)&lt;/th&gt;
&lt;th&gt;VM on FreeBSD (ZFS, Virtio)&lt;/th&gt;
&lt;th&gt;VM on FreeBSD (zvol)&lt;/th&gt;
&lt;th&gt;Host FreeBSD (ZFS)&lt;/th&gt;
&lt;th&gt;Host Proxmox (ZFS)&lt;/th&gt;
&lt;th&gt;Host Proxmox (ext4)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;File creation speed (MiB/s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+13.18%&lt;/td&gt;
&lt;td&gt;+259.77%&lt;/td&gt;
&lt;td&gt;+242.99%&lt;/td&gt;
&lt;td&gt;+227.02%&lt;/td&gt;
&lt;td&gt;+298.62%&lt;/td&gt;
&lt;td&gt;+137.52%&lt;/td&gt;
&lt;td&gt;+55.25%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Reads per second&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-22.34%&lt;/td&gt;
&lt;td&gt;+1619.98%&lt;/td&gt;
&lt;td&gt;+24.13%&lt;/td&gt;
&lt;td&gt;+1720.45%&lt;/td&gt;
&lt;td&gt;+89.92%&lt;/td&gt;
&lt;td&gt;+41.67%&lt;/td&gt;
&lt;td&gt;-23.34%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Writes per second&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-22.35%&lt;/td&gt;
&lt;td&gt;+1620.26%&lt;/td&gt;
&lt;td&gt;+24.12%&lt;/td&gt;
&lt;td&gt;+1720.42%&lt;/td&gt;
&lt;td&gt;+89.91%&lt;/td&gt;
&lt;td&gt;+41.66%&lt;/td&gt;
&lt;td&gt;-23.34%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;fsyncs per second&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-22.36%&lt;/td&gt;
&lt;td&gt;+1620.02%&lt;/td&gt;
&lt;td&gt;+24.13%&lt;/td&gt;
&lt;td&gt;+1720.18%&lt;/td&gt;
&lt;td&gt;+89.90%&lt;/td&gt;
&lt;td&gt;+41.66%&lt;/td&gt;
&lt;td&gt;-23.35%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Read throughput (MiB/s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-22.34%&lt;/td&gt;
&lt;td&gt;+1619.88%&lt;/td&gt;
&lt;td&gt;+24.11%&lt;/td&gt;
&lt;td&gt;+1719.98%&lt;/td&gt;
&lt;td&gt;+89.86%&lt;/td&gt;
&lt;td&gt;+41.63%&lt;/td&gt;
&lt;td&gt;-23.33%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Write throughput (MiB/s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-22.30%&lt;/td&gt;
&lt;td&gt;+1620.68%&lt;/td&gt;
&lt;td&gt;+24.22%&lt;/td&gt;
&lt;td&gt;+1720.97%&lt;/td&gt;
&lt;td&gt;+89.96%&lt;/td&gt;
&lt;td&gt;+41.65%&lt;/td&gt;
&lt;td&gt;-23.33%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total events&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-22.34%&lt;/td&gt;
&lt;td&gt;+1620.16%&lt;/td&gt;
&lt;td&gt;+24.12%&lt;/td&gt;
&lt;td&gt;+1720.31%&lt;/td&gt;
&lt;td&gt;+89.92%&lt;/td&gt;
&lt;td&gt;+41.65%&lt;/td&gt;
&lt;td&gt;-23.31%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Average latency (ms)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+30.00%&lt;/td&gt;
&lt;td&gt;-95.00%&lt;/td&gt;
&lt;td&gt;-17.50%&lt;/td&gt;
&lt;td&gt;-95.00%&lt;/td&gt;
&lt;td&gt;-47.50%&lt;/td&gt;
&lt;td&gt;-27.50%&lt;/td&gt;
&lt;td&gt;+32.50%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;95th percentile latency (ms)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+41.30%&lt;/td&gt;
&lt;td&gt;-97.39%&lt;/td&gt;
&lt;td&gt;-31.30%&lt;/td&gt;
&lt;td&gt;-97.83%&lt;/td&gt;
&lt;td&gt;-42.61%&lt;/td&gt;
&lt;td&gt;-22.17%&lt;/td&gt;
&lt;td&gt;+17.83%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Max latency (ms)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+42.60%&lt;/td&gt;
&lt;td&gt;+56.69%&lt;/td&gt;
&lt;td&gt;-39.96%&lt;/td&gt;
&lt;td&gt;+242.30%&lt;/td&gt;
&lt;td&gt;-60.13%&lt;/td&gt;
&lt;td&gt;-58.19%&lt;/td&gt;
&lt;td&gt;-23.22%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total test time (s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+0.02%&lt;/td&gt;
&lt;td&gt;-0.02%&lt;/td&gt;
&lt;td&gt;-0.01%&lt;/td&gt;
&lt;td&gt;-0.02%&lt;/td&gt;
&lt;td&gt;-0.02%&lt;/td&gt;
&lt;td&gt;-0.01%&lt;/td&gt;
&lt;td&gt;+0.01%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h4&gt;Percentage Differences Compared to VM on Proxmox (LVM), the Standard Proxmox Setup&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;VM on Proxmox (ZFS)&lt;/th&gt;
&lt;th&gt;VM on FreeBSD (ZFS, NVMe)&lt;/th&gt;
&lt;th&gt;VM on FreeBSD (ZFS, Virtio)&lt;/th&gt;
&lt;th&gt;VM on FreeBSD (zvol)&lt;/th&gt;
&lt;th&gt;Host FreeBSD (ZFS)&lt;/th&gt;
&lt;th&gt;Host Proxmox (ZFS)&lt;/th&gt;
&lt;th&gt;Host Proxmox (ext4)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;File creation speed (MiB/s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-11.64%&lt;/td&gt;
&lt;td&gt;+218.04%&lt;/td&gt;
&lt;td&gt;+203.09%&lt;/td&gt;
&lt;td&gt;+188.97%&lt;/td&gt;
&lt;td&gt;+252.24%&lt;/td&gt;
&lt;td&gt;+109.88%&lt;/td&gt;
&lt;td&gt;+37.18%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Reads per second&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+28.78%&lt;/td&gt;
&lt;td&gt;+2115.42%&lt;/td&gt;
&lt;td&gt;+59.85%&lt;/td&gt;
&lt;td&gt;+2244.40%&lt;/td&gt;
&lt;td&gt;+144.58%&lt;/td&gt;
&lt;td&gt;+82.44%&lt;/td&gt;
&lt;td&gt;-1.27%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Writes per second&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+28.78%&lt;/td&gt;
&lt;td&gt;+2115.37%&lt;/td&gt;
&lt;td&gt;+59.85%&lt;/td&gt;
&lt;td&gt;+2244.35%&lt;/td&gt;
&lt;td&gt;+144.57%&lt;/td&gt;
&lt;td&gt;+82.43%&lt;/td&gt;
&lt;td&gt;-1.27%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;fsyncs per second&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+28.79%&lt;/td&gt;
&lt;td&gt;+2115.30%&lt;/td&gt;
&lt;td&gt;+59.87%&lt;/td&gt;
&lt;td&gt;+2244.30%&lt;/td&gt;
&lt;td&gt;+144.58%&lt;/td&gt;
&lt;td&gt;+82.45%&lt;/td&gt;
&lt;td&gt;-1.28%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Read throughput (MiB/s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+28.77%&lt;/td&gt;
&lt;td&gt;+2114.70%&lt;/td&gt;
&lt;td&gt;+59.82%&lt;/td&gt;
&lt;td&gt;+2243.60%&lt;/td&gt;
&lt;td&gt;+144.49%&lt;/td&gt;
&lt;td&gt;+82.38%&lt;/td&gt;
&lt;td&gt;-1.27%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Write throughput (MiB/s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+28.71%&lt;/td&gt;
&lt;td&gt;+2114.64%&lt;/td&gt;
&lt;td&gt;+59.89%&lt;/td&gt;
&lt;td&gt;+2243.73%&lt;/td&gt;
&lt;td&gt;+144.49%&lt;/td&gt;
&lt;td&gt;+82.32%&lt;/td&gt;
&lt;td&gt;-1.33%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total events&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+28.77%&lt;/td&gt;
&lt;td&gt;+2114.98%&lt;/td&gt;
&lt;td&gt;+59.83%&lt;/td&gt;
&lt;td&gt;+2243.94%&lt;/td&gt;
&lt;td&gt;+144.55%&lt;/td&gt;
&lt;td&gt;+82.40%&lt;/td&gt;
&lt;td&gt;-1.27%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Average latency (ms)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-23.08%&lt;/td&gt;
&lt;td&gt;-96.15%&lt;/td&gt;
&lt;td&gt;-36.54%&lt;/td&gt;
&lt;td&gt;-96.15%&lt;/td&gt;
&lt;td&gt;-59.62%&lt;/td&gt;
&lt;td&gt;-44.23%&lt;/td&gt;
&lt;td&gt;+1.92%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;95th percentile latency (ms)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-29.23%&lt;/td&gt;
&lt;td&gt;-98.15%&lt;/td&gt;
&lt;td&gt;-51.38%&lt;/td&gt;
&lt;td&gt;-98.46%&lt;/td&gt;
&lt;td&gt;-59.38%&lt;/td&gt;
&lt;td&gt;-44.92%&lt;/td&gt;
&lt;td&gt;-16.62%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Max latency (ms)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-29.88%&lt;/td&gt;
&lt;td&gt;+9.88%&lt;/td&gt;
&lt;td&gt;-57.89%&lt;/td&gt;
&lt;td&gt;+140.03%&lt;/td&gt;
&lt;td&gt;-72.04%&lt;/td&gt;
&lt;td&gt;-70.68%&lt;/td&gt;
&lt;td&gt;-46.16%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total test time (s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-0.02%&lt;/td&gt;
&lt;td&gt;-0.04%&lt;/td&gt;
&lt;td&gt;-0.03%&lt;/td&gt;
&lt;td&gt;-0.04%&lt;/td&gt;
&lt;td&gt;-0.02%&lt;/td&gt;
&lt;td&gt;-0.03%&lt;/td&gt;
&lt;td&gt;+0.01%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h4&gt;Analysis of Performance Data&lt;/h4&gt;
&lt;p&gt;The performance data collected from various configurations of Proxmox and FreeBSD provides a comprehensive view of the I/O capabilities and highlights some significant differences. Here is an analysis of the key findings:&lt;/p&gt;
&lt;h5&gt;Comparative Analysis&lt;/h5&gt;
&lt;h6&gt;Hypothesis on NVMe Performance and fsync&lt;/h6&gt;
&lt;p&gt;An important observation from my tests is that VMs with the bhyve NVMe driver show significantly higher performance compared to the same VMs with the virtio driver or compared to the physical host system. This difference initially led me to hypothesize that the bhyve NVMe driver might not correctly respect fsync operations, returning a positive result before the underlying file system has confirmed the final write. However, this was just a theory based on benchmark results and is not supported by concrete data. Furthermore, some developers have reviewed the code and found no evidence to suggest this behavior, and I personally have never encountered any potential issues that would indicate such a problem.&lt;/p&gt;
&lt;p&gt;Specifically, I observed that:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The VM with the virtio driver has performance comparable to Proxmox.&lt;/li&gt;
&lt;li&gt;The VM with the NVMe driver, whether on a ZFS dataset or zvol, shows performance superior to the physical FreeBSD host.&lt;/li&gt;
&lt;/ul&gt;
&lt;h5&gt;Host Physical Systems and Filesystems&lt;/h5&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;File Creation Speed&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Host FreeBSD (ZFS)&lt;/strong&gt; shows the highest file creation speed at 1625.67 MiB/s, which is +68.03% compared to Host Proxmox (ZFS) and +156.72% compared to Host Proxmox (ext4).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Host Proxmox (ext4)&lt;/strong&gt; has a file creation speed of 633.13 MiB/s, which is -34.62% compared to Host Proxmox (ZFS).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Read and Write Operations per Second&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Host FreeBSD (ZFS)&lt;/strong&gt; demonstrates the highest read and write operations per second with 1234.62 reads/s and 823.08 writes/s: +34.06% and +34.04% compared to Host Proxmox (ZFS), and +147.80% and +147.61% compared to Host Proxmox (ext4).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Host Proxmox (ZFS)&lt;/strong&gt; has 920.95 reads/s and 613.96 writes/s.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Host Proxmox (ext4)&lt;/strong&gt; shows the lowest performance with 498.37 reads/s and 332.25 writes/s.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;fsync Operations per Second&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Host FreeBSD (ZFS)&lt;/strong&gt; achieves the highest rate at 2634.01 fsyncs/s, which is +34.02% compared to Host Proxmox (ZFS) and +147.73% compared to Host Proxmox (ext4).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Host Proxmox (ZFS)&lt;/strong&gt; achieves 1964.96 fsyncs/s.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Host Proxmox (ext4)&lt;/strong&gt; has the lowest rate at 1063.19 fsyncs/s.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Throughput&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Host FreeBSD (ZFS)&lt;/strong&gt; again leads with 19.29 MiB/s read and 12.86 MiB/s write: +34.03% and +34.08% compared to Host Proxmox (ZFS), and +147.53% and +147.79% compared to Host Proxmox (ext4).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Host Proxmox (ZFS)&lt;/strong&gt; has 14.39 MiB/s read and 9.59 MiB/s write.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Host Proxmox (ext4)&lt;/strong&gt; has the lowest throughput with 7.79 MiB/s read and 5.19 MiB/s write.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Latency&lt;/strong&gt;:
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Host FreeBSD (ZFS)&lt;/strong&gt; shows the lowest average latency at 0.21 ms and 95th percentile latency at 1.32 ms: -27.59% and -26.27% compared to Host Proxmox (ZFS), and -60.38% and -51.29% compared to Host Proxmox (ext4).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Host Proxmox (ZFS)&lt;/strong&gt; has an average latency of 0.29 ms and a 95th percentile latency of 1.79 ms.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Host Proxmox (ext4)&lt;/strong&gt; has the highest average latency at 0.53 ms and 95th percentile latency at 2.71 ms.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h5&gt;VMs vs Physical Hosts&lt;/h5&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;File Creation Speed&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (ZFS, NVMe)&lt;/strong&gt; demonstrates an outstanding file creation speed at 1467.83 MiB/s (+218.04% compared to VM on Proxmox (LVM) and +259.77% compared to VM on Proxmox (ZFS)).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (zvol)&lt;/strong&gt; achieves 1333.64 MiB/s, also significantly higher than VM on Proxmox (LVM) and VM on Proxmox (ZFS).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Read and Write Operations per Second&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (ZFS, NVMe)&lt;/strong&gt; shows exceptional performance with 11183.44 reads/s and 7455.62 writes/s.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (zvol)&lt;/strong&gt; performs even slightly better with 11834.53 reads/s and 7889.69 writes/s.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;fsync Operations per Second&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (ZFS, NVMe)&lt;/strong&gt; achieves 23858.08 fsyncs/s, and &lt;strong&gt;VM on FreeBSD (zvol)&lt;/strong&gt; achieves 25247.36 fsyncs/s, both significantly higher than any other configuration.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Throughput&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (ZFS, NVMe)&lt;/strong&gt; achieves a very high throughput with 174.74 MiB/s read and 116.49 MiB/s write.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (zvol)&lt;/strong&gt; is slightly higher still at 184.91 MiB/s read and 123.28 MiB/s write.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Latency&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (ZFS, NVMe)&lt;/strong&gt; shows very low average latency at 0.02 ms and 95th percentile latency at 0.06 ms.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (zvol)&lt;/strong&gt; has similarly low latencies, indicating fast response times for I/O operations.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h5&gt;VM Configurations Comparison&lt;/h5&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;File Creation Speed&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Among VMs, &lt;strong&gt;VM on FreeBSD (ZFS, NVMe)&lt;/strong&gt; leads, followed by &lt;strong&gt;VM on FreeBSD (zvol)&lt;/strong&gt;, and then &lt;strong&gt;VM on FreeBSD (ZFS, Virtio)&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Read and Write Operations per Second&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (ZFS, NVMe)&lt;/strong&gt; and &lt;strong&gt;VM on FreeBSD (zvol)&lt;/strong&gt; both significantly outperform the &lt;strong&gt;VM on Proxmox (ZFS)&lt;/strong&gt; and &lt;strong&gt;VM on Proxmox (LVM)&lt;/strong&gt; configurations.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VM on Proxmox (ZFS)&lt;/strong&gt; outperforms &lt;strong&gt;VM on Proxmox (LVM)&lt;/strong&gt; in read and write operations.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;fsync Operations per Second&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (ZFS, NVMe)&lt;/strong&gt; and &lt;strong&gt;VM on FreeBSD (zvol)&lt;/strong&gt; sustain significantly more fsync operations than &lt;strong&gt;VM on Proxmox (ZFS)&lt;/strong&gt; and &lt;strong&gt;VM on Proxmox (LVM)&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Throughput&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (ZFS, NVMe)&lt;/strong&gt; and &lt;strong&gt;VM on FreeBSD (zvol)&lt;/strong&gt; have the highest throughput, followed by &lt;strong&gt;VM on Proxmox (ZFS)&lt;/strong&gt; and then &lt;strong&gt;VM on Proxmox (LVM)&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Latency&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (ZFS, NVMe)&lt;/strong&gt; and &lt;strong&gt;VM on FreeBSD (zvol)&lt;/strong&gt; show the lowest latencies among the VMs, indicating faster response times.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VM on Proxmox (ZFS)&lt;/strong&gt; shows lower latencies than &lt;strong&gt;VM on Proxmox (LVM)&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h5&gt;Cache Settings and Performance Influence&lt;/h5&gt;
&lt;p&gt;Cache settings can significantly influence the performance of virtualization systems. In my setup, I did not modify the cache settings for the NVMe and virtio drivers, keeping the default settings. It is possible that the observed performance differences are also due to how different operating systems manage the caches of NVMe devices. I encourage other system administrators to explore the cache settings of their systems to see if changes in this area can influence benchmark results.&lt;/p&gt;
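&lt;p&gt;As a hypothetical starting point (the VM id, storage and disk names are assumptions), the relevant knobs live here:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# Proxmox: change the cache mode of a VM disk (none, writeback, ...)
qm set 100 --virtio0 local-zfs:vm-100-disk-0,cache=writeback
# FreeBSD: inspect the current ZFS ARC size and its configured ceiling
sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max
&lt;/code&gt;&lt;/pre&gt;
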
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;Regarding RAM and CPU, the performance of the VMs is comparable. There are slight differences in favor of Proxmox for CPU and FreeBSD for RAM, but in my opinion, these differences are so negligible that they wouldn't sway the decision towards one solution or the other.&lt;/p&gt;
&lt;p&gt;The I/O performance data clearly indicates that VM on FreeBSD with NVMe and ZFS outperforms all other configurations by a significant margin. This is evident in the file creation speed, read/write operations per second, fsync operations per second, throughput, and latency metrics. &lt;/p&gt;
&lt;p&gt;When comparing physical hosts, Host FreeBSD (ZFS) demonstrates excellent performance, particularly in comparison to Host Proxmox (ZFS) and Host Proxmox (ext4). &lt;/p&gt;
&lt;p&gt;When comparing VMs, VM on FreeBSD (ZFS, NVMe) and VM on FreeBSD (zvol) configurations stand out as the top performers. &lt;/p&gt;
&lt;p&gt;The VM using virtio on FreeBSD also shows strong performance, albeit not as high as the NVMe configuration. It significantly outperforms Proxmox configurations in terms of file creation speed, read/write operations per second, and throughput, while maintaining competitive latencies.&lt;/p&gt;
&lt;p&gt;The virtio driver provides a stable and reliable alternative and is a suitable choice for environments where the NVMe driver cannot be used. This makes FreeBSD with virtio a balanced platform for virtualization, offering both high performance and reliability.&lt;/p&gt;
&lt;p&gt;By examining these performance metrics, users can make informed decisions about their virtualization and storage configurations to optimize their systems for specific workloads and performance requirements.&lt;/p&gt;
&lt;p&gt;In light of these tests and experiments, I can therefore confirm my impression (shared by many users) that VMs feel "snappier" on FreeBSD. Certainly, Proxmox is a stable solution, rich in features, battle-tested, and has many other strong points, but FreeBSD, especially with the NVMe driver, demonstrates very high performance and very low installation and operational overhead.&lt;/p&gt;
&lt;p&gt;I will continue to use both solutions with great satisfaction, but I will be even more encouraged to implement virtualization servers based on FreeBSD and bhyve.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Mon, 10 Jun 2024 05:53:45 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2024/06/10/proxmox-vs-freebsd-which-virtualization-host-performs-better/</guid><category>freebsd</category><category>linux</category><category>proxmox</category><category>kvm</category><category>bhyve</category><category>hosting</category><category>filesystems</category><category>virtualization</category><category>zfs</category><category>debian</category><category>server</category></item><item><title>How we are migrating (many of) our servers from Linux to FreeBSD - Part 3 - Proxmox to FreeBSD</title><link>https://it-notes.dragas.net/2023/03/14/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-3/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/server_rack.webp" alt="How we are migrating (many of) our servers from Linux to FreeBSD - Part 3 - Proxmox to FreeBSD"&gt;&lt;/p&gt;&lt;p&gt;In recent years, &lt;a href="https://it-notes.dragas.net/2022/01/24/why-were-migrating-many-of-our-servers-from-linux-to-freebsd/"&gt;we've been migrating many of our servers from Linux to FreeBSD&lt;/a&gt; as part of our consolidation and optimization efforts. Specifically, we've been &lt;a href="https://it-notes.dragas.net/2022/02/05/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-1-system-and-jails-setup/"&gt;moving services that were previously deployed using Docker onto FreeBSD&lt;/a&gt;, and it has proven to be a great choice for handling workloads efficiently.&lt;/p&gt;
&lt;p&gt;To this end, we've also been migrating many of our virtual machines (VMs) to FreeBSD, deploying services within FreeBSD jails. In some cases, these jails have even replaced entire VMs and run on bare metal. Although we prefer to move to native FreeBSD whenever possible, sometimes it's not the best option for all the services we offer. As a result, one of our most critical physical servers had been left behind for years.&lt;/p&gt;
&lt;div class="hc-toc"&gt;&lt;/div&gt;

&lt;p&gt;This server was a Proxmox server that we installed many years ago and updated to version 6.4. It hosted some critical services, but upgrading to Proxmox 7.x posed some challenges. In particular, &lt;a href="https://forum.proxmox.com/threads/unified-cgroup-v2-layout-upgrade-warning-pve-6-4-to-7-0/"&gt;some of the LXC containers required tweaks&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Unfortunately, this server was quite old, with only four physical disks and 64 GB of RAM. It was located in an OVH data center and had been running well until one of the disks started to malfunction once a week, on Sundays. This would trigger a RAID reconstruction that kept the system busy for about two days.&lt;/p&gt;
&lt;p&gt;Despite my preference for simple setups, this server had been deployed gradually over many years, and everything was tied together. As a result, unraveling the system to resolve the issues was not a simple task. &lt;em&gt;Sometimes the combination of simple things can make everything complex&lt;/em&gt;.&lt;/p&gt;
&lt;h3&gt;The Proxmox Server&lt;/h3&gt;
&lt;p&gt;The &lt;a href="https://www.proxmox.com/en/"&gt;Proxmox&lt;/a&gt; server was configured as the central hub for various services, including primary DNS, web hosting, VOIP, and more. It featured several bridges, each with its own specific purpose, and was connected to a virtual machine running &lt;a href="https://mikrotik.com"&gt;MikroTik CHR&lt;/a&gt;. This machine was responsible for consolidating all incoming VPNs from the MikroTik devices we managed, both ours and those belonging to our clients. Additionally, it provided a series of bridges to manage these devices and all server management VPNs and other services. The Proxmox server also housed several virtual machines running Linux, FreeBSD, OpenBSD, and NetBSD, as well as LXC containers.&lt;/p&gt;
&lt;p&gt;Over the last two years, we've been migrating most of these virtual machines and containers to FreeBSD-based VMs, which feature their own specific jails. Consequently, most of the VMs we've had to move were BSD-based, while only five Linux VMs remained. The LXC containers hosted a range of services, including servers managed by &lt;a href="https://www.virtualmin.com"&gt;Virtualmin&lt;/a&gt;, a large installation of &lt;a href="https://www.zimbra.com"&gt;Zimbra&lt;/a&gt; (which was hosted within an LXC container running CentOS 7), as well as some minor Alpine Linux-based machines. We located all these virtual machines and containers in a LAN created and managed by CHR. All public IPs were managed by CHR, which relied on NAT mappings to establish communication between them. CHR had thus become the heart of our system, and if it experienced any issues, it could potentially take down the entire system. Fortunately, it remained stable for years.&lt;/p&gt;
&lt;h3&gt;Migration - first steps&lt;/h3&gt;
&lt;p&gt;The first step I took was to install FreeBSD on the new server. Easy peasy. The next step was to find a way for the CHR to migrate to the new server (under &lt;a href="https://bhyve.org"&gt;bhyve&lt;/a&gt;) and continue to manage all the public IPs of the original server. The problem is that OVH, with its failover IPs, &lt;a href="https://it-notes.dragas.net/2022/01/14/freebsd-assign-ovh-failover-ips-to-freebsd-jails/"&gt;ties a specific MAC address to each individual IP address&lt;/a&gt;. Therefore, the only way was to create a bridge on the FreeBSD server (on the Proxmox server, I already had a bridge on the physical network card) and create an L2 tunnel between the two servers - I used OpenVPN with tap interfaces, inserted directly into the bridges. I could have used other methods and techniques, but I wanted to experiment with a setup that would allow me, if necessary, to bridge a larger number of physical and virtual servers even if the IPs are all mapped to a single server. In fact, OVH does not allow splitting an IP block, so the entire block must be moved at once, not a single IP address.&lt;/p&gt;
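&lt;p&gt;As a sketch, joining the OpenVPN tap device to the bridge on the FreeBSD side looks like this - the interface names are assumptions:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# One-off: add the tunnel's tap interface to the existing bridge
ifconfig bridge0 addm tap0 up
# Persistent equivalent via /etc/rc.conf
sysrc cloned_interfaces+=&amp;quot;bridge0 tap0&amp;quot;
sysrc ifconfig_bridge0=&amp;quot;addm igb0 addm tap0 up&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
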
&lt;p&gt;Initially, MikroTik CHR 7 did not boot on bhyve. In the end, &lt;a href="https://it-notes.dragas.net/2023/03/21/creating-a-mikrotik-chr-routeros-7-bhyve-vm-in-freebsd-2/"&gt;I managed to make it work&lt;/a&gt;, but I had other problems, probably related to the MTU of the interfaces. So I took the opportunity to unbind the LXC containers and VMs from CHR and remove MikroTik from the setup. With RouterOS version 7, in fact, WireGuard-based VPNs are also supported, so within a few days, it was possible to update the few routers still on 6.x and recreate some VPNs using WireGuard. I mapped both the VMs and LXC containers directly to their respective public IPs, greatly simplifying the steps. Everything worked perfectly.&lt;/p&gt;
&lt;p&gt;The next step was to test the first migrations, starting from the VMs already on FreeBSD. For simplicity, I created a new FreeBSD VM in bhyve and copied (via zfs-send and zfs-receive) the datasets related to &lt;a href="https://bastillebsd.org"&gt;BastilleBSD&lt;/a&gt;. All services are installed in jails managed by Bastille, so this was enough to have, in a short time, a new operating server equivalent to the previous one. At that point, I shut down the original server, connected the VM to the bridge linked to the tunnel (after modifying its MAC address), turned on the new FreeBSD VM (on bhyve), and everything started to work correctly - but from the new physical server.&lt;/p&gt;
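&lt;p&gt;A minimal sketch of that copy, assuming the datasets live under zroot/bastille and the new VM is reachable via ssh:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# On the old server: recursive snapshot, then full replication stream
zfs snapshot -r zroot/bastille@migration
zfs send -R zroot/bastille@migration | ssh root@newvm &amp;quot;zfs receive -F zroot/bastille&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
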
&lt;p&gt;One by one, I moved all the FreeBSD VMs. For Linux, NetBSD, and OpenBSD, I simply copied the images and pointed bhyve to them. A few small vm-bhyve-specific configuration tweaks and everything started to work correctly. &lt;a href="https://it-notes.dragas.net/2024/06/10/proxmox-vs-freebsd-which-virtualization-host-performs-better/"&gt;Where possible&lt;/a&gt;, I replaced “virtio” with “nvme” as &lt;a href="https://klarasystems.com/articles/virtualization-showdown-freebsd-bhyve-linux-kvm/"&gt;it performs much better on bhyve&lt;/a&gt;.&lt;/p&gt;
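&lt;p&gt;In vm-bhyve terms, this swap is a one-line change in each VM's configuration file (a sketch; the disk name is an assumption):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;# vm configure myvm
disk0_type=&amp;quot;nvme&amp;quot;        # was: disk0_type=&amp;quot;virtio-blk&amp;quot;
disk0_name=&amp;quot;disk0.img&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
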
&lt;h3&gt;Migration - LXC containers to Virtual Machines&lt;/h3&gt;
&lt;p&gt;For LXC containers, I initially thought of creating an Alpine Linux virtual machine, installing LXD, and copying each individual container. It worked for some of them, but for others, I started to encounter strange issues, similar to those that would have required manual intervention to upgrade from Proxmox 6.x to 7.x. As is often the case with Linux-based solutions, compatibility is not always preserved between updates, so I would have had to fine-tune all the containers, which I didn't feel like doing. The containers had been created (at the time) to optimize RAM usage on the Proxmox machine, but to date, they have caused more problems than benefits. In some cases, certain processes got "stuck," making it impossible to "reboot" the LXC container, requiring the entire physical node to be rebooted. If they had been virtual machines, I could have given a "kill" command from the virtualizer (to the respective KVM process, in that case) and restarted it.&lt;/p&gt;
&lt;p&gt;For greater compatibility and ease of future management, I decided to convert the LXC containers into actual VMs on bhyve. The process was simple (a command-level sketch follows the list):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Creating an empty VM with vm-bhyve and booting the VM with SystemRescueCD.&lt;/li&gt;
&lt;li&gt;Creating destination partitions and file systems in the VM, then doing a complete rsync of the original LXC container.&lt;/li&gt;
&lt;li&gt;Adjusting the fstab file, installing the kernel on the destination VM, and creating the initrd (some containers were already copies of VMs, so the kernel remained installed and updated, even though it wasn't being used. The initrd, on the other hand, did not include the &lt;em&gt;nvme&lt;/em&gt; or &lt;em&gt;virtio&lt;/em&gt; drivers, so I had to regenerate it anyway.)&lt;/li&gt;
&lt;li&gt;Adjusting the bhyve vm configuration file, doing one last rsync after shutting down the services, shutting down the original LXC container, and starting the bhyve VM.&lt;/li&gt;
&lt;/ul&gt;
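&lt;p&gt;As a hypothetical command-level sketch of those steps - device names, container paths and the Debian-style initrd tooling are all assumptions:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# Inside the new VM, booted from SystemRescueCD (bootloader steps omitted)
mkfs.ext4 /dev/nvme0n1p1
mount /dev/nvme0n1p1 /mnt
# Full copy of the original LXC container's root file system
rsync -aHAX --numeric-ids root@proxmox:/var/lib/lxc/ct100/rootfs/ /mnt/
# Chroot in to fix fstab, install the kernel and rebuild the initrd
mount --bind /dev /mnt/dev; mount --bind /proc /mnt/proc; mount --bind /sys /mnt/sys
chroot /mnt /bin/bash                            # then, inside the chroot:
echo nvme &amp;gt;&amp;gt; /etc/initramfs-tools/modules   # make sure the nvme driver is included
update-initramfs -u -k all                       # Debian/Ubuntu guests
&lt;/code&gt;&lt;/pre&gt;
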
&lt;p&gt;Everything worked correctly, so one by one, I moved all the containers. The largest one ended up on another physical node (also FreeBSD with bhyve) temporarily because the space on the new server was not sufficient to contain it. It didn't need to be on this server, so no problem.&lt;/p&gt;
&lt;p&gt;One by one, the LXC containers started on the new server. Apart from some minor adjustments to the destination VMs (different network interface names, etc.), I didn't encounter any particular problems even after several days. Everything works perfectly.&lt;/p&gt;
&lt;p&gt;At the very end, I re-created the MikroTik CHR VM. I’ll keep this setup separate for now, as it is strictly tied to EoIP interfaces. This was the main reason why I hadn’t performed the migration before: things were too tied together, and I had to untangle everything, step by step.&lt;/p&gt;
&lt;h3&gt;…and then one of the Linux VMs started to freeze&lt;/h3&gt;
&lt;p&gt;Several Linux VMs are just the basis on which Docker runs. One of them (not even among the busiest) started, every 12-15 hours, to freeze completely. It stopped responding to ping, and it was impossible to issue any command from the console. In a word: &lt;em&gt;stuck&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Searching the web, I found some references to this problem and, observing the errors of an ssh session that was left connected (stuck, but still showing the last error), I found it to be a problem &lt;a href="https://forums.freebsd.org/threads/bhyve-debian-with-docker-unstable.87956/"&gt;similar to the one described in this post&lt;/a&gt;, namely:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;&amp;quot;watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [khugepaged:67]&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I tried various solutions such as changing the storage driver, the number of cores, the distribution (from Alpine to Debian), etc., but none of these solved the issue. I also noticed that the warning appears on all Linux VMs, but only those with a recent kernel (&amp;gt; 5.10.x) actually freeze, while the others continue to work. The problem does not occur, however, with the *BSDs.&lt;/p&gt;
&lt;p&gt;In the end, I:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Reduced the number of cores to 1 for the VMs that did not have a high load (some kept multiple cores), hypothesising a scheduling problem when allocating already-busy cores.&lt;/li&gt;
&lt;li&gt;Gave the command "&lt;em&gt;/usr/bin/echo 60 &amp;gt; /proc/sys/kernel/watchdog_thresh&lt;/em&gt;" to the VM (made persistent in the sketch after this list).&lt;/li&gt;
&lt;/ul&gt;
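&lt;p&gt;Written that way, the setting lasts only until the next reboot. A sketch of the persistent equivalent on the Linux guest (the file name is an assumption):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# Raise the soft-lockup watchdog threshold permanently
echo &amp;quot;kernel.watchdog_thresh = 60&amp;quot; &amp;gt; /etc/sysctl.d/99-watchdog.conf
sysctl --system    # reload all sysctl configuration files
&lt;/code&gt;&lt;/pre&gt;
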
&lt;p&gt;The VM became stable, and I have not seen that error/warning on any other machine since. I will investigate further, but I believe it is a problem in the Linux kernel, which, for some reason, panics under particular CPU concurrency conditions.&lt;/p&gt;
&lt;h3&gt;The End…and a nice OOM!&lt;/h3&gt;
&lt;p&gt;After moving everything, I was finally able to migrate the entire class of OVH IPs from one physical server to another. The operation was quite quick, but in order to avoid problems, I notified all users and performed the operation on a Sunday and during off-peak hours. The whole process took about 10 minutes and there were no hitches of any kind.&lt;/p&gt;
&lt;p&gt;For safety reasons, I kept the Proxmox machine active for a few more days, but there was no need to use it. However, after a couple of days, I encountered a problem: the largest VM, in some cases, was being "killed" because FreeBSD generated an OOM. I had never seen, from FreeBSD 13.0 onwards, any OOM related to "abuse" of RAM usage by ZFS, but in this case, it actually happened.&lt;/p&gt;
&lt;p&gt;In the end, I understood that ZFS, on FreeBSD, is able to release memory, but not quickly enough to manage any "spikes" in individual VMs. In fact, the VMs do not know the state of the physical host's RAM, so they will tend to occupy all the space allotted to them (even if only for caching). A sudden spike (e.g. when you create and launch a new VM) could cause a sudden increase in RAM usage by the bhyve process, and FreeBSD could be forced to kill it, even if part of the RAM is only ARC cache. While Proxmox supports HA (i.e., control over whether the VM is running), vm-bhyve only launches the VM (bhyve process). I should manage it with tools like &lt;em&gt;&lt;a href="https://mmonit.com/monit/"&gt;monit&lt;/a&gt;&lt;/em&gt;, but for now, I preferred to simply set a limit on ZFS RAM usage using "vfs.zfs.arc_max", and there have been no more problems.&lt;/p&gt;
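&lt;p&gt;For reference, a sketch of how such a cap can be set - the value is an assumption to adapt to your RAM and the memory assigned to the VMs:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# Cap the ZFS ARC at 16 GiB (value in bytes); applied at boot
echo 'vfs.zfs.arc_max=&amp;quot;17179869184&amp;quot;' &amp;gt;&amp;gt; /boot/loader.conf
# On recent FreeBSD releases the limit can also be changed at runtime
sysctl vfs.zfs.arc_max=17179869184
&lt;/code&gt;&lt;/pre&gt;
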
&lt;h3&gt;Final considerations&lt;/h3&gt;
&lt;p&gt;The operation was long but linear. The most complex part was unraveling all the configurations related to MikroTik CHR and the VPNs linked to each individual LXC machine/container. Once everything was implemented on a dedicated VM, the operation was fairly straightforward.&lt;/p&gt;
&lt;p&gt;The hardware specifications of the destination physical server are slightly better than the starting one, but the final performance of the setup has greatly improved. The VMs are very responsive (even those that were previously LXC containers running directly on bare metal) and, thanks to ZFS, I can make local snapshots every 5 minutes. In addition, every 10 minutes, I can copy (using the excellent zfs-autobackup) all the VMs and jails to other nodes &lt;a href="https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/"&gt;both as a backup and as an immediate restart in case of disaster&lt;/a&gt;. I just need to map the IPs, and everything will start working very quickly. Proxmox also allows you to perform this type of operation with ZFS, but you still need to have Proxmox (in a compatible version) on the target machine. With the current setup, I only need any FreeBSD node that supports bhyve.&lt;/p&gt;
&lt;p&gt;Proxmox is an excellent tool, well-developed, open-source, efficient, and stable. We manage many installations, including complex ones (&lt;a href="https://it-notes.dragas.net/2020/06/29/create-automatic-snapshots-on-cephfs/"&gt;ceph clusters&lt;/a&gt;, etc.), and it has never let us down. However, not all tools are ideal for all situations, and for setups like the one described, the new configuration based on FreeBSD has shown significantly interesting performance and greater management and maintenance granularity.&lt;/p&gt;
&lt;p&gt;Virtualizing on vm-bhyve is not complex, but it is certainly not comparable, at the current state, to the simplicity of using a clean and complete interface like Proxmox's. A complete HA system is still missing (sure, it's achievable manually, but...), as well as complete management web interface. However, for knowledgeable users, it is undoubtedly a powerful tool that allows you to have excellent FreeBSD as a base. I'm totally satisfied with my migration and the result is far better than I expected.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Tue, 14 Mar 2023 13:00:00 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2023/03/14/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-3/</guid><category>freebsd</category><category>alpine</category><category>data</category><category>bhyve</category><category>filesystems</category><category>docker</category><category>ha</category><category>hardware</category><category>hosting</category><category>linux</category><category>lxc</category><category>networking</category><category>ovh</category><category>proxmox</category><category>recovery</category><category>restore</category><category>server</category><category>snapshots</category><category>virtualization</category><category>web</category><category>zfs</category><category>backup</category><category>jail</category><category>container</category><category>mikrotik</category><category>ownyourdata</category><category>series</category></item><item><title>Creating an Alpine Linux VM on bhyve - with root on ZFS (optionally encrypted)</title><link>https://it-notes.dragas.net/2022/11/01/creating-an-alpine-vm-on-bhyve-with-root-on-zfs-optionally-encrypted/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/alps.webp" alt="Creating an Alpine Linux VM on bhyve - with root on ZFS (optionally encrypted)"&gt;&lt;/p&gt;&lt;p&gt;Bhyve is great - and we’re using it with a lot of guest operating systems.&lt;/p&gt;
&lt;p&gt;One of my favourite Linux distributions is &lt;a href="https://alpinelinux.org"&gt;Alpine Linux&lt;/a&gt; - it’s great as a docker or lxc/lxd host, is light, stable and easily manageable.&lt;/p&gt;
&lt;p&gt;On FreeBSD, &lt;a href="https://github.com/churchers/vm-bhyve"&gt;vm-bhyve&lt;/a&gt; already provides a good template for Alpine Linux, but it’s based on the plain standard image with boot on ext4.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;So we need the &lt;a href="https://alpinelinux.org/downloads/"&gt;alpine-extended iso&lt;/a&gt; -&lt;/strong&gt; with zfs module.&lt;/p&gt;
&lt;p&gt;Let’s create a new Alpine Linux bhyve VM:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;vm create -t alpine -s 50G -m 4G -c 2 alpinevm
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now let’s configure it:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;vm configure alpinevm
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Let’s change some options: &lt;em&gt;vmlinuz-vanilla&lt;/em&gt; and &lt;em&gt;initramfs-vanilla&lt;/em&gt; should be changed to &lt;em&gt;vmlinuz-lts&lt;/em&gt; and &lt;em&gt;initramfs-lts&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Moreover, we should instruct grub to boot from ZFS. The configuration should look similar to this:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;[…]
grub_install0=&amp;quot;linux /boot/vmlinuz-lts initrd=/boot/initramfs-lts alpine_dev=cdrom:iso9660 modules=loop,squashfs,sd-mod,usb-storage,sr-mod&amp;quot;
grub_install1=&amp;quot;initrd /boot/initramfs-lts&amp;quot;
grub_run0=&amp;quot;linux /boot/vmlinuz-lts root=rpool/ROOT/alpine rootfstype=zfs modules=ext4,zfs&amp;quot;
grub_run1=&amp;quot;initrd /boot/initramfs-lts&amp;quot;
[…]
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;It’s now time to start with the installation:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;vm install alpinevm alpine-extended.iso
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, follow &lt;a href="https://wiki.alpinelinux.org/wiki/Root_on_ZFS_with_native_encryption"&gt;this Alpine Linux Wiki guide&lt;/a&gt;. Remember that we’re dealing with “&lt;em&gt;vda&lt;/em&gt;” devices, not &lt;em&gt;“sda”&lt;/em&gt;, so change them accordingly. If you don’t want an encrypted rootfs dataset, just omit the encryption options in the zpool create command.&lt;/p&gt;
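&lt;p&gt;Condensing the pool-creation step from that guide into a sketch (the pool name and partition follow the wiki; adapt to your layout):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# Encrypted root pool on the bhyve disk - note vda, not sda
zpool create -o ashift=12 \
    -O encryption=aes-256-gcm -O keyformat=passphrase -O keylocation=prompt \
    -O compression=lz4 -O mountpoint=none rpool /dev/vda2
# For an unencrypted pool, simply drop the encryption/keyformat/keylocation options
&lt;/code&gt;&lt;/pre&gt;
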
&lt;p&gt;At the end of the installation procedure, reboot and enjoy your new Alpine Linux bhyve VM.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Tue, 01 Nov 2022 15:47:35 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2022/11/01/creating-an-alpine-vm-on-bhyve-with-root-on-zfs-optionally-encrypted/</guid><category>alpine</category><category>linux</category><category>server</category><category>tutorial</category><category>zfs</category><category>freebsd</category><category>hosting</category><category>filesystems</category><category>bhyve</category><category>virtualization</category></item><item><title>How we are migrating (many of) our servers from Linux to FreeBSD - Part 2 - Backups and Disaster Recovery</title><link>https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/hard_disk.webp" alt="How we are migrating (many of) our servers from Linux to FreeBSD - Part 2 - Backups and Disaster Recovery"&gt;&lt;/p&gt;&lt;p&gt;After &lt;a href="https://it-notes.dragas.net/2022/01/24/why-were-migrating-many-of-our-servers-from-linux-to-freebsd/"&gt;my post on why we’re migrating (most of) our servers from Linux to FreeBSD&lt;/a&gt;, I’ve started to &lt;a href="https://it-notes.dragas.net/2022/02/05/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-1-system-and-jails-setup/"&gt;write about how we’re doing it&lt;/a&gt;. After covering a basic installation (we’re doing a massive use of jails), I’m going now to describe how we’re performing backups.&lt;/p&gt;
&lt;p&gt;Backup is not a tool. Backup is not a software you can buy.  &lt;a href="https://it-notes.dragas.net/2020/08/05/searching-for-a-perfect-backup-solution/"&gt;Backup is a &lt;em&gt;strategy&lt;/em&gt; you need to study and implement&lt;/a&gt; to be able to solve your specific problems. You need to understand what you’re doing, otherwise you’ll always have a &lt;strong&gt;Schrödinger’s Backup&lt;/strong&gt;  - it may work or not and if you don’t test it well enough (i.e. restore) you’ll find out when it’s too late.&lt;/p&gt;
&lt;p&gt;We’re performing backups in many different ways but, for our physical and virtual FreeBSD servers, we have a dual approach. We need both a “ready to use” backup (that will be described here, useful for a fast disaster recovery or prompt restore of specific jails) and a “colder”, more space efficient backup that can be kept for months (or years), &lt;a href="https://it-notes.dragas.net/tags/borg/"&gt;more similar to the borg approach on previous posts&lt;/a&gt;. Generally speaking, we store our OS (and jails) on ZFS, so I’ll describe this kind of approach here.&lt;/p&gt;
&lt;h2&gt;Disaster recovery backup - ZFS send/receive&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://bastillebsd.org"&gt;BastilleBSD&lt;/a&gt; creates its datasets and mounts them on /usr/local/bastille . There are no databases, all the jails’ configurations are inside that mountpoint so it’s quite easy to backup and restore all the jails or any single jail in one go. More, the “everything-in-a-jail” approach simplifies the restore process as you don’t need to restore the &lt;em&gt;entire&lt;/em&gt; host OS, just install an empty FreeBSD server, install BastilleBSD and restore the jails. Or add the jails to already existing FreeBSD systems.&lt;/p&gt;
&lt;p&gt;We normally use FreeBSD (or Linux with ZFS) backup servers, well protected and encrypted at rest. For the ZFS send/receive approach, our servers are &lt;strong&gt;NOT&lt;/strong&gt; reachable from the outside. We can ssh into them only using a VPN - they’re too precious to be exposed on the World &lt;em&gt;Wild&lt;/em&gt; Web - or, if strictly needed, we expose ssh only using keys, no passwords. We perform the backups using a &lt;em&gt;pull&lt;/em&gt; strategy: the backup server connects to the production servers, gets the data, disconnects. The production servers have NO ACCESS to the main backup server. Should they ever be seriously compromised, the backup is safe.&lt;/p&gt;
&lt;p&gt;There are many tools that can help to set up this kind of configuration. I’ve tried many of them and found that they all have some good and bad points. The one I decided to use for our servers is &lt;a href="https://github.com/psy0rz/zfs_autobackup"&gt;zfs-autobackup&lt;/a&gt;. It’s easy to use, everything can be set via command line and has a good cron (or Jenkins) output, useful to understand if everything is right.&lt;/p&gt;
&lt;p&gt;Let’s consider two servers, one is called “ProdA” and the other is called “Bck” - we obviously want to backup the ProdA into Bck.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://it-notes.dragas.net/2022/02/05/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-1-system-and-jails-setup/"&gt;Installing ProdA has been covered on a previous post&lt;/a&gt;, Bck is quite simple and outside the scope of this post. We just need a protected zfs FreeBSD (or Linux) server. That’s all. Let’s assume that ProdA has a BastilleBSD zfs dataset (and children datasets), with jails and everything needed, as configured in the last post. We now need to install the needed software. On Bck:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;pkg install py311-zfs-autobackup mbuffer
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;On ProdA:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;pkg install mbuffer
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;mbuffer will be used as a RAM buffer to avoid read/write spikes (or slowdowns) while sending/receiving the snapshots.&lt;/p&gt;
&lt;p&gt;It’s time to prepare the destination dataset. Assuming that Bck has a zroot base dataset, we’ll create (as root) a zroot/backups/ProdA dataset:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;zfs create -p zroot/backups/ProdA
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Ok, let’s now go to ProdA&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;We want to create an unprivileged user that will send the data. &lt;strong&gt;We don’t want to allow Bck to connect as root&lt;/strong&gt;, even if it’s trusted and secure. Let’s create a user called “backupper”. Then, we need to give backupper the right permissions:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;zfs allow -u backupper send,snapshot,hold,mount,destroy zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;Note: if you want Bck to be able to delete the snapshots on ProdA, backupper needs the destroy permission. That means this user can destroy the whole system, as it can ALSO destroy zroot (or any source dataset you decide). If you’re afraid of this, different approaches must be used (i.e.: local root performing snapshots/cleanups and Bck only transferring them - not hard to achieve with zfs-autobackup). Considering that Bck is safe, secure and protected, we can tolerate this weakness. Just be sure nobody can break the “backupper” user. Do not use passwords, use ssh keys, and treat this user with the same care you'd use with root.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Now, as root on ProdA:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;zfs set autobackup:bck_server=true zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;We’re setting a custom property, called “autobackup:bck_server”, allowing the zroot dataset (and its children) to be backed up by zfs-autobackup. zfs-autobackup will search for all datasets with that property set to “true” (also on different pools) and will back them up. If there’s a specific dataset you don’t want to back up, just set it to “false”. Or if you don’t want to back up the entire zroot but, for example, only “zroot/bastille” (and children), just set autobackup:bck_server=true for that dataset.&lt;/p&gt;
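&lt;p&gt;For example (dataset names are assumptions):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# Select everything under zroot, but skip a scratch dataset
zfs set autobackup:bck_server=true zroot
zfs set autobackup:bck_server=false zroot/scratch
# Check what is selected and where each value comes from
zfs get -r -s local,inherited autobackup:bck_server zroot
&lt;/code&gt;&lt;/pre&gt;
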
&lt;h3&gt;ssh config&lt;/h3&gt;
&lt;p&gt;zfs-autobackup will connect via ssh and, by default, will try to connect as root. Moreover, even after exchanging the ssh key, Bck will connect many times to ProdA to send its zfs commands (one connection per command). Ssh session initiation is quite slow, so there will be some latency. In order to (greatly) speed this up, follow the advice from the zfs-autobackup wiki:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;“You can make your ssh connections persistent and greatly speed up zfs-autobackup:&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;On the server that initiates the backup add this to your ~/.ssh/config:&lt;/em&gt;&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;Host ProdA
User backupper
ControlPath ~/.ssh/control-master-%r@%h:%p
ControlMaster auto
ControlPersist 3600
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;em&gt;(Taken from &lt;a href="https://github.com/psy0rz/zfs_autobackup/wiki/Performance"&gt;https://github.com/psy0rz/zfs_autobackup/wiki/Performance&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;It's now time to go back to Bck and issue a command like this (one line):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;/usr/local/bin/zfs-autobackup --ssh-source ProdA bck_server zroot/backups/ProdA --zfs-compressed --no-progress --verbose --buffer 32M --keep-source 0 --no-holds  --set-properties readonly=on --clear-refreservation --keep-target 1d1w,1w1m,1m6m  --destroy-missing 30d --clear-mountpoint
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Bck will connect to ProdA, perform the snapshots and start transferring. The most interesting options I used here are:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt; --keep-source 0 (only the last snapshot will be kept on ProdA)
 --set-properties readonly=on (be sure the Bck clone is read only, so we will be able to perform an incremental/differential backup next time)
 --keep-target 1d1w,1w1m,1m6m (keep one backup per day for one week, one per week for one month, one per month for six months)
 --destroy-missing 30d (if we've deleted a dataset, keep it for 30 days before removing it from Bck)
 --clear-mountpoint (do not mount the dataset in Bck, as it will cause problems sooner or later)
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The first copy will be slow as it'll need to send all the data. The second will be quite fast as only the differences will be transferred.&lt;/p&gt;
&lt;h2&gt;How to perform a disaster recovery&lt;/h2&gt;
&lt;p&gt;Ok, your dataset (or datasets) is gone. You need to replace it with the latest external backup. You have to transfer the copy back into ProdA (or another FreeBSD host, it makes no difference). Connect to Bck and search for the snapshot you want to restore (&lt;em&gt;zfs list -t snapshot&lt;/em&gt; will help). Once identified (one line):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;zfs send -R zroot/backups/ProdA/zroot/bastille/bastille/jails/t1@bck_server@20220528005830 | mbuffer -4 -s 128k -m 32M | ssh root@ProdA &amp;quot;zfs receive -F -x canmount -x readonly zroot/bastille/bastille/jails/t1&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Note the &lt;em&gt;-x canmount -x readonly&lt;/em&gt;  flags. Remember that we altered the canmount and readonly properties of the transferred datasets during the backup, so we must restore them into a normal state.&lt;/p&gt;
&lt;p&gt;Once finished, ProdA (or the other, restored host) will show t1 as an available jail and you'll be able to start it.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Mon, 30 May 2022 03:03:52 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/</guid><category>freebsd</category><category>data</category><category>filesystems</category><category>jail</category><category>linux</category><category>backup</category><category>restore</category><category>borg</category><category>recovery</category><category>snapshots</category><category>tutorial</category><category>container</category><category>server</category><category>security</category><category>zfs</category><category>ownyourdata</category><category>series</category></item><item><title>How we are migrating (many of) our servers from Linux to FreeBSD - Part 1 - System and jails setup</title><link>https://it-notes.dragas.net/2022/02/05/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-1-system-and-jails-setup/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/datacenter.webp" alt="How we are migrating (many of) our servers from Linux to FreeBSD - Part 1 - System and jails setup"&gt;&lt;/p&gt;&lt;p&gt;&lt;a href="https://it-notes.dragas.net/2022/01/24/why-were-migrating-many-of-our-servers-from-linux-to-freebsd/"&gt;After my post on why we’re migrating (many of) our servers to FreeBSD&lt;/a&gt;, I’ve received a lot of feedback. Many questions, many comments. Many e-mails from Linux users asking &lt;em&gt;how&lt;/em&gt; we’re migrating, how jails can replace lxc or (in part) Docker, and how we’re monitoring and &lt;a href="https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/"&gt;performing backups/restores&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I’ll write some posts on how we’re doing it. Of course, it won’t cover all the use cases, but it surely will work for the most common ones.&lt;/p&gt;
&lt;p&gt;Let’s start with a general idea of the setup I’m going to describe. One of the things I’ve always tried to do is to leave the host Operating System (VM Hypervisors like XCP-NG or &lt;a href="https://it-notes.dragas.net/tags/proxmox/"&gt;Proxmox Server&lt;/a&gt;, FreeBSD with jails and/or Bhyve, &lt;a href="https://it-notes.dragas.net/2021/11/03/alpine-linux-and-lxd-perfect-setup-part-1-btrfs-file-system/"&gt;Alpine Linux lxc host&lt;/a&gt;, Debian with &lt;a href="https://it-notes.dragas.net/tags/docker/"&gt;docker&lt;/a&gt; etc.) as simple and empty as possible.&lt;/p&gt;
&lt;p&gt;That’s why we’re keeping the same setup here, with a FreeBSD host system with as few packages (or ports) as possible. This will ensure easier upgrades, &lt;a href="https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/"&gt;easier backup/restore procedures&lt;/a&gt; and a smaller attack surface. Unused services or executables can be problematic, so let’s keep them out of our setup.&lt;/p&gt;
&lt;p&gt;I’m going to describe our “basic” FreeBSD host and some jails that will contain the services. There are many jail management systems out there like &lt;a href="https://bastillebsd.org"&gt;BastilleBSD&lt;/a&gt;, &lt;a href="https://cbsd.io"&gt;cbsd&lt;/a&gt;, &lt;a href="https://iocage.readthedocs.io/en/latest/"&gt;iocage&lt;/a&gt;, the old &lt;a href="https://www.freebsd.org/cgi/man.cgi?query=ezjail"&gt;ezjail&lt;/a&gt;, etc. I love simple (but powerful) solutions, without any database of configured/running jails as it may be a problem in case of backup/recovery. I love the BastilleBSD approach, so that’s the one we’ve been using for our servers. BastilleBSD is a collection of shell scripts, is small, doesn’t need any database, is actively developed, doesn’t need any kind of dependency and works both on ZFS and UFS. iocage, for example, needs ZFS so UFS servers are out and doesn’t seem to be actively developed anymore. Moreover, BastilleBSD doesn’t interfere with other jail systems, so you can mix and match whatever you like. Still, I recommend to choose a jail management system and stick with that. We've done it, and we’re using BastilleBSD.&lt;/p&gt;
&lt;p&gt;I won’t describe the basic FreeBSD installation. It’s straightforward, easy, fast and &lt;a href="https://docs.freebsd.org/en/books/handbook/bsdinstall/"&gt;there’s plenty of good documentation out there&lt;/a&gt;. The most critical decision is the choice of the file system you’re going to use. We’re using ZFS when possible, as it gives us a lot of good opportunities. For more resource-constrained systems (or specific situations where ZFS isn’t recommended or applicable), we just stick with UFS. It has snapshots as well, so backups and restores are still quite easy.&lt;/p&gt;
&lt;p&gt;Once you’ve installed FreeBSD, you have a fully functional host system. From now on, I’ll assume you’ve installed on ZFS.&lt;/p&gt;
&lt;p&gt;First of all, we upgrade our host systems to the latest security patches. It’s just a matter of:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;freebsd-update fetch
freebsd-update install
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;After this, it’d be better to reboot to be sure that the kernel has been upgraded if needed. As it’s an empty, freshly installed system, we usually don’t use Boot Environments for now. Reinstalling is just a matter of minutes and we’re not doing a release upgrade.&lt;/p&gt;
&lt;p&gt;Now: ports or packages? &lt;em&gt;We usually use packages&lt;/em&gt;: the host system will be simple and empty and we don’t need customised build options. So quarterly packages are more than enough.&lt;/p&gt;
&lt;p&gt;First of all, let’s install the “pkg” package management system.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;pkg install
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;BastilleBSD&lt;/h2&gt;
&lt;p&gt;Now let’s use pkg to install BastilleBSD:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;pkg install bastille
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The first step is to configure BastilleBSD. All we usually do is set the ZFS options - &lt;em&gt;bastille_zfs_enable=“YES”&lt;/em&gt; and &lt;em&gt;bastille_zfs_pool=“zroot/bastille”&lt;/em&gt; - and create the dataset:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;zfs create zroot/bastille
&lt;/code&gt;&lt;/pre&gt;
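
&lt;p&gt;Concretely, those two options live in Bastille's configuration file - a sketch, using the path from the package's default layout:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;# /usr/local/etc/bastille/bastille.conf (relevant lines)
bastille_zfs_enable=&amp;quot;YES&amp;quot;
bastille_zfs_pool=&amp;quot;zroot/bastille&amp;quot;
&lt;/code&gt;&lt;/pre&gt;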

&lt;p&gt;This way, BastilleBSD will create its base dataset in &lt;em&gt;zroot/bastille/bastille&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Why the nested bastille/bastille dataset?&lt;/em&gt; Because we sometimes use zroot/bastille to store some information about the underlying jails, so we prefer to keep a nested dataset. BastilleBSD will create its datasets there and will mount the needed datasets in /usr/local/bastille so you will find everything there.&lt;/p&gt;
&lt;p&gt;Let’s complete the Bastille installation &lt;a href="https://bastillebsd.org/getting-started/"&gt;as described in the official documentation&lt;/a&gt;. There are many approaches: loopback, vnet with local interface, vnet with existing bridge. The loopback approach is the easiest and most portable. Generally speaking, we tend to use it as it’s easier to deal with when we have to perform an emergency restore to another host. I’ll write more about it in the next posts.&lt;/p&gt;
&lt;p&gt;Now let’s bootstrap the FreeBSD 14.1-RELEASE so Bastille will download the needed files and create its base system, then it will apply the security patches.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;bastille bootstrap 14.1-RELEASE update
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;NOTE: BastilleBSD supports other jailed operating systems and, recently, some jailed Linux distributions.&lt;/p&gt;
&lt;p&gt;After a while, the system will be ready for its first jail. Now, we generally install a reverse proxy, in order to expose it to web traffic. The reverse proxy will be able to connect to other jails and forward the traffic. It's a good example, so let's do it:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;bastille create reverseproxy 14.1-RELEASE 192.168.0.1 bastille0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;So we’re creating a jail called reverseproxy; 14.1-RELEASE is the FreeBSD version of the jail (it must be the same as, or older than, the host OS), followed by the jail IP and the loopback interface used to assign this IP to the jail. The jail will have a dataset (in jails/reverseproxy) with the jail configuration, redirect configuration, fstab, etc. and another child dataset (jails/reverseproxy/root) with its root file system. Jails can be thin or thick: &lt;a href="https://bastille.readthedocs.io/en/latest/"&gt;BastilleBSD documentation is good, so you can go deeper here.&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;At this point, a “&lt;em&gt;bastille list -a&lt;/em&gt;” should show the jail and it should be running. Now we can enter this jail:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;bastille console reverseproxy
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;and install Nginx (and certbot, if needed):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;pkg install py311-certbot-nginx nginx
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Configure your nginx (and certbot). As you might guess, I won’t describe it here.&lt;/p&gt;
&lt;p&gt;Let’s ensure Nginx will be started at jail launch, so:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;service nginx enable
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;and let’s start it:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;service nginx start
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Let’s go back to the host and let’s ensure that all the connections to the host ports 80 (http) and 443 (https) will be redirected to the reverseproxy jail:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;bastille rdr reverseproxy tcp 80 80
bastille rdr reverseproxy tcp 443 443
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;No output should be shown, but you can check:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;bastille rdr ALL list
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Congratulations, you’ve exposed your first jail.&lt;/p&gt;
&lt;p&gt;You can create all the jails you want. They will be created and stored in child datasets of zroot/bastille/bastille - this is great for backup purposes, but I’ll describe more about it in another article. They'll also be able to communicate using their private ip addresses. If you've used vnet, you'll need to perform some deeper network configurations (and you can use pf inside the jail!), if using the default loopback device you'll be sharing the host network stack.&lt;/p&gt;
&lt;h2&gt;Blacklistd&lt;/h2&gt;
&lt;p&gt;The FreeBSD base system ships with some interesting tools, installed automatically. One of those is &lt;a href="https://www.freebsd.org/cgi/man.cgi?query=blacklistd&amp;amp;sektion=8&amp;amp;format=html"&gt;blacklistd&lt;/a&gt;. If you’ve used &lt;a href="https://www.fail2ban.org/wiki/index.php/Main_Page"&gt;fail2ban&lt;/a&gt;, &lt;a href="http://denyhosts.sourceforge.net"&gt;denyhosts&lt;/a&gt; or similar tools, you know what it’s useful for. But it’s integrated and light. Fail2ban, for example, tends to become heavy, as it constantly reads from log files. Blacklistd, instead, gets notified directly by the daemon it’s protecting, so the load is lower. To enable blacklistd, add to /etc/rc.conf:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;blacklistd_enable=&amp;quot;YES&amp;quot;
blacklistd_flags=&amp;quot;-r&amp;quot; 
pflog_enable=&amp;quot;YES&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;(I’m assuming you’ve already added &lt;em&gt;pf_enable=“YES”&lt;/em&gt; when you installed BastilleBSD. Otherwise, you should add this, too, if you’re using pf.)&lt;/p&gt;
&lt;p&gt;Now you should add this to pf.conf:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;anchor &amp;quot;blacklistd/*&amp;quot; in on $ext_if
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Where $ext_if is your external interface.&lt;/p&gt;
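&lt;p&gt;Putting the two together, the top of pf.conf might look like this (the interface name is an assumption):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;# /etc/pf.conf (fragment)
ext_if=&amp;quot;igb0&amp;quot;
anchor &amp;quot;blacklistd/*&amp;quot; in on $ext_if
&lt;/code&gt;&lt;/pre&gt;
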
&lt;p&gt;As a last step, let’s open /etc/ssh/sshd_config and enable blacklistd by uncommenting:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;UseBlacklist yes
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now reload/start pf, sshd, start pflog and blacklistd and wait. After some time, &lt;em&gt;blacklistctl dump -r&lt;/em&gt; will show you some data.&lt;/p&gt;
&lt;p&gt;Of course there are many more steps to do, the host should be hardened, network should be configured and firewalled, etc. but it's a basic idea of how we're keeping our host as standard as possible and, then, create the services inside the jails.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Sat, 05 Feb 2022 10:10:47 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2022/02/05/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-1-system-and-jails-setup/</guid><category>freebsd</category><category>networking</category><category>security</category><category>server</category><category>tutorial</category><category>zfs</category><category>container</category><category>jail</category><category>hosting</category><category>ownyourdata</category><category>series</category></item><item><title>Why we're migrating (many of) our servers from Linux to FreeBSD</title><link>https://it-notes.dragas.net/2022/01/24/why-were-migrating-many-of-our-servers-from-linux-to-freebsd/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/checkmate.webp" alt="Why we&amp;#x27;re migrating (many of) our servers from Linux to FreeBSD"&gt;&lt;/p&gt;&lt;p&gt;&lt;em&gt;More about this &lt;a href="https://it-notes.dragas.net/2024/10/03/i-solve-problems-eurobsdcon/"&gt;in the article I wrote to accompany my talk at EuroBSDCon 2024&lt;/a&gt;&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Please note: This article was (unexpectedly) quite popular and many interesting opinions came out from the Hacker News thread that it generated. If you're interested, have a look here:&lt;/em&gt; &lt;a href="https://news.ycombinator.com/item?id=30057549"&gt;https://news.ycombinator.com/item?id=30057549&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;There's an &lt;a href="https://www.dragas.net/posts/perche-migrare-i-server-da-linux-a-freebsd/"&gt;Italian version of this article here&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;I've been a Linux (or GNU/Linux, for the purists) user since 1996. I've been a FreeBSD user since 2002. I have always successfully used both operating systems, each for specific purposes. I have found, on average, BSD systems to be more stable than their Linux equivalents. By stability, I don't mean uptime (too much uptime means too few kernel security updates, which is wrong). I mean that things work as they should, that they don't "break" from one update to the next, and that you don't have to revise everything because of a missing or modified basic command.&lt;/p&gt;
&lt;p&gt;I've always been for development and innovation as long as it doesn't (necessarily, automatically and unreasonably) break everything that is already in place. And the road that the various Linux distributions are taking seems to be that of modifying things that work just for the sake of it, or to follow the diktats of the kernel and those who manage it (among others).&lt;/p&gt;
&lt;p&gt;Some time ago we started a complex, continuous and not always linear operation, that is to migrate, where possible, most of the servers (ours and of our customers) from Linux to FreeBSD.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Why FreeBSD?&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;There are many alternative operating systems to Linux and the *BSD family is varied and complete. &lt;a href="https://www.freebsd.org/"&gt;FreeBSD&lt;/a&gt;, in my opinion, today is the "all rounder" system par excellence, i.e. well refined and suitable both for use on large servers and small embedded systems. The other BSDs have strengths that, in some fields, make them particularly suitable but FreeBSD, in my humble opinion, is suitable (almost) for every purpose.&lt;/p&gt;
&lt;p&gt;So, back to the main topic of this article: why am I migrating many of the servers we manage to FreeBSD? The reasons are many; I will list some of them, with explanations.&lt;/p&gt;
&lt;h2&gt;The system is consistent - kernel and userland are created and managed by the same team&lt;/h2&gt;
&lt;p&gt;One of the fundamental problems with Linux is that (let's remember) it is a kernel; everything else is created by different people/companies. On more than one occasion Linus Torvalds, as well as other leading Linux kernel developers, have remarked that they care about the development of the kernel itself, not how users will use it. Their technical decisions, therefore, don't take into account how the systems are actually used: the kernel goes its own path. This is a good thing, as the development of the Linux kernel is not "held back" by the struggle between distributions and software solutions, but at the same time it is also a disadvantage. Many Linux distributions had to "deprecate" ifconfig in favor of ip because ifconfig could not keep up with new kernel networking features without breaking compatibility with previous kernel versions, or without having functions on the same network interface managed by different tools. In FreeBSD, each release of the operating system updates both kernel and userland, so these changes are consistently incorporated and documented, and the tools stay compatible with their kernel-side counterparts.&lt;/p&gt;
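&lt;p&gt;As a small illustration (interface names and addresses here are just examples - yours will differ), the same everyday task on a modern Linux distribution and on FreeBSD:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Linux: ifconfig has been deprecated in favor of the ip tool
ip addr add 192.168.0.10/24 dev eth0

# FreeBSD: ifconfig still does the job, with a syntax that has stayed consistent
ifconfig igb0 inet 192.168.0.10/24
&lt;/code&gt;&lt;/pre&gt;
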
&lt;p&gt;In other words, in FreeBSD there is no need to "revolutionise" everything every few years; changes come primarily in the form of additions that enrich (and do not break) each update. If a kernel change altered the way network devices are handled, ifconfig would be updated to take advantage of it while remaining compatible with the "old" syntax. In the long term, this kind of approach is definitely appreciated by system administrators, who find themselves with a linear, consistent, and always well-documented update path.&lt;/p&gt;
&lt;h2&gt;FreeBSD development is (still) driven by technical interests, not strictly commercial ones.&lt;/h2&gt;
&lt;p&gt;Linux and related distributions now receive contributions from many companies, many of which (e.g. Red Hat) push (justifiably) in the direction of what is convenient for them, their products, and their services. As big contributors to the project they carry a lot of clout, so their solutions often become de facto standards. Consider systemd - was there really a need for such a system? While it brought some advantages, it added complexity to an otherwise extremely simple and functional system. &lt;a href="https://www.howtogeek.com/675569/why-linuxs-systemd-is-still-divisive-after-all-these-years/"&gt;It remains divisive to this day&lt;/a&gt;, with many asking: "Was it really necessary? Did the advantages balance the disadvantages?" Seventy binaries just for initialising and logging, and a million and a half lines of code just for that? But Red Hat cast the stone... and many followed along, because sometimes it's nice to follow the trend, the hype of a specific solution.&lt;/p&gt;
&lt;p&gt;FreeBSD, too, has big companies behind it, collaborating more or less directly. The license is more permissive, so not everyone who uses it commercially contributes back, but knowing that FreeBSD is &lt;a href="https://papers.freebsd.org/2019/fosdem/looney-netflix_and_freebsd/"&gt;at the base of Netflix's CDN&lt;/a&gt;, &lt;a href="https://news.ycombinator.com/item?id=22028689"&gt;Whatsapp servers&lt;/a&gt; (while we wait for Meta to replace them, for internal-coherence reasons, with Linux servers), &lt;a href="https://en.wikipedia.org/wiki/PlayStation_4_system_software"&gt;Sony Playstations&lt;/a&gt; and, in part, &lt;a href="https://developer.apple.com/library/archive/documentation/Darwin/Conceptual/KernelProgramming/BSD/BSD.html"&gt;macOS, iOS, iPadOS, etc.&lt;/a&gt; certainly inspires confidence in its quality. These companies, however, do not have enough clout to dictate the direction of the core team.&lt;/p&gt;
&lt;h2&gt;Linux has Docker, Podman, lxc, lxd, etc. but... FreeBSD has jails!&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://docs.freebsd.org/en/books/handbook/jails/"&gt;FreeBSD jails&lt;/a&gt; are very powerful tools for jailing and separating services. There is controversy about Docker not running on FreeBSD but I believe (like many others) that FreeBSD has a more powerful tool. Jails are older and more mature - and by far - than any containerization solution on Linux. Jails are efficient and are well integrated throughout the operating system. All major commands (ps, kill, top, etc.) are able to display jail information as well. There are many management tools but, in fact, they all do the same thing: they interact with FreeBSD base and create custom configuration files. Personally I'm very comfortable with &lt;a href="https://bastillebsd.org/"&gt;BastilleBSD&lt;/a&gt; but there are a lot of very good tools as well as a sufficiently simple manual management. When I need Docker I launch a Linux machine - often &lt;a href="https://www.alpinelinux.org/"&gt;Alpine&lt;/a&gt;, which I think is a great minimalist distribution, or &lt;a href="https://www.debian.org/"&gt;Debian&lt;/a&gt;. But I'm moving a lot of services from Docker to a dedicated jail on FreeBSD. &lt;a href="https://www.dragas.net/posts/docker-e-la-nuova-separazione-dei-servizi/"&gt;Docker containers&lt;/a&gt; are a great tool for rapid (and consistent) software deployment, but it's not all fun and games. Containers, for example, rely on images that sometimes age and are no longer updated. &lt;a href="https://www.infoq.com/news/2020/12/dockerhub-image-vulnerabilities/"&gt;This is a security issue&lt;/a&gt; that should not be overlooked.&lt;/p&gt;
&lt;h2&gt;Linux has ext4, xfs, btrfs... (and zfs, but with some manual intervention). FreeBSD has UFS2 and ZFS&lt;/h2&gt;
&lt;p&gt;UFS2 is still a very good and efficient file system and, when configured to use soft updates, is &lt;a href="https://it-notes.dragas.net/2024/06/04/freebsd-tips-and-tricks-creating-snapshots-with-ufs/"&gt;capable of performing live snapshots of the file system&lt;/a&gt;. This is great for backups. Ext4 and XFS do not support snapshots &lt;a href="https://it-notes.dragas.net/2020/06/30/searching-for-a-perfect-backup-solution-borg-and-restic/"&gt;except through external tools&lt;/a&gt; (like DattoBD, or snapshots through the volume manager). This works, of course, but it is not native. Btrfs is great in its intentions but still not as stable as it should be after all these years of development. FreeBSD supports ZFS natively in the base system, and this brings many advantages: separate datasets for jails, as well as &lt;a href="https://vermaden.files.wordpress.com/2018/11/nluug-zfs-boot-environments-reloaded-2018-11-15.pdf"&gt;Boot Environments&lt;/a&gt;, which let you snapshot the system before upgrades/changes and even boot (from the bootloader) into a different BE.&lt;/p&gt;
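&lt;p&gt;For example, with bectl (in the base system) a Boot Environment can protect an upgrade - the BE name here is just an example:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# snapshot the current system into a new Boot Environment
bectl create pre-upgrade

# list the available BEs (they can also be selected from the bootloader menu)
bectl list

# if the upgrade goes wrong, activate the old BE and reboot into it
bectl activate pre-upgrade
&lt;/code&gt;&lt;/pre&gt;
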
&lt;h2&gt;The FreeBSD boot procedure is cleaner and simpler.&lt;/h2&gt;
&lt;p&gt;Linux has always used excellent tools such as grub, lilo (now outdated), etc. FreeBSD has always used &lt;a href="https://docs.freebsd.org/en/books/handbook/boot/"&gt;a very linear and consistent boot system&lt;/a&gt;, with its own bootloader and dedicated boot partition. Whether on MBR, GPT, etc., things are very similar and consistent. I've never had a problem getting a FreeBSD system to boot after a move or a recovery from backup. On Linux, however, grub has sometimes given me problems, even after a simple kernel security update.&lt;/p&gt;
&lt;h2&gt;FreeBSD's network stack is (still) superior to Linux's - and, often, so is its performance.&lt;/h2&gt;
&lt;p&gt;Meta has been &lt;a href="https://bsd.slashdot.org/story/14/08/06/1731218/facebook-seeks-devs-to-make-linux-network-stack-as-good-as-freebsds"&gt;trying to bring the performance of the Linux network stack up to the level of FreeBSD's&lt;/a&gt; for years. Many will ask why, then, they don't move their services to FreeBSD. Large companies with huge datacenters can't change solutions overnight, and their engineers, at every level, are Linux experts. They have invested heavily in btrfs, in Linux, in their specific stack. Understandably, upon acquiring Whatsapp, they preferred to migrate the "few" Whatsapp servers to Linux and move them to their datacenters. As for real system performance (i.e. disregarding benchmarks, which are useful only up to a point), FreeBSD shines, especially under high load. Where Linux starts to gasp (e.g. waiting for I/O) at 100% CPU, FreeBSD keeps a lower processor load and headroom for more work. In the real world (of my servers and load types), I have sometimes experienced severe system slowdowns on Linux due to high I/O, even when the data being processed was not read/write dependent. On FreeBSD this does not happen, and if something blocks, it blocks THAT operation, not the rest of the system. When performing backups or other important operations, this factor becomes extremely important to ensure proper (and stable) system performance.&lt;/p&gt;
&lt;h2&gt;Straightforward system performance analysis&lt;/h2&gt;
&lt;p&gt;FreeBSD, in the base system, has all the tools needed to analyze problems and system load. "vmstat", in a single line, tells me whether the machine is struggling for CPU, I/O, or RAM. "gstat -a" shows me, disk by disk and partition by partition, how busy the storage is, including as a percentage of its capacity. "top" can also show, process by process, how much I/O is being used (the "m" option). On Linux, to get the same results, you have to install specific applications, which differ from distribution to distribution.&lt;/p&gt;
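&lt;p&gt;All of this with base-system commands only:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# one line per second: CPU, memory and I/O pressure at a glance
vmstat 1

# per-device storage activity, including a %busy column
gstat -a

# top in I/O mode: per-process read/write activity
top -m io
&lt;/code&gt;&lt;/pre&gt;
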
&lt;h2&gt;Bhyve is less complete (but more efficient) than KVM&lt;/h2&gt;
&lt;p&gt;For my purposes, bhyve is a great virtualization tool. KVM is definitely more complete, but since I don't have any special or specific needs not covered by bhyve on FreeBSD, I have found, on average, &lt;a href="https://it-notes.dragas.net/2024/06/10/proxmox-vs-freebsd-which-virtualization-host-performs-better/"&gt;better performance with this combination&lt;/a&gt;. FreeBSD, however, lacks an equivalent of &lt;a href="https://www.kernel.org/doc/html/latest/admin-guide/mm/ksm.html"&gt;KSM&lt;/a&gt;, which can be very useful in some cases.&lt;/p&gt;
&lt;p&gt;Will I abandon Linux for FreeBSD? Obviously not, just as I haven't for the last 20 years. Both have their uses, their space, their strengths. But where until now I have run 80% Linux and 20% FreeBSD, &lt;a href="https://it-notes.dragas.net/2022/02/05/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-1-system-and-jails-setup/"&gt;the plan is to invert those percentages and, where possible, implement solutions directly based on FreeBSD&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;NOTE: this article has been translated from its &lt;a href="https://www.dragas.net/posts/perche-migrare-i-server-da-linux-a-freebsd/"&gt;Italian original version&lt;/a&gt;. Even if it's been reviewed and adapted, there might be some errors.&lt;/em&gt;&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Mon, 24 Jan 2022 06:55:00 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2022/01/24/why-were-migrating-many-of-our-servers-from-linux-to-freebsd/</guid><category>freebsd</category><category>linux</category><category>filesystems</category><category>hosting</category><category>jail</category><category>lxc</category><category>server</category><category>snapshots</category><category>zfs</category><category>btrfs</category><category>container</category><category>alpine</category><category>ownyourdata</category><category>series</category></item><item><title>Efficient backup of lxc containers in Proxmox - ZFS</title><link>https://it-notes.dragas.net/2022/01/20/efficient-backup-of-lxc-containers-in-proxmox-zfs/</link><description>&lt;p&gt;&lt;img src="https://images.unsplash.com/photo-1592946879272-bc79c290b1e5?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDI3fHxjb250YWluZXJ8ZW58MHx8fHwxNjQyMjMzMzI4&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Efficient backup of lxc containers in Proxmox - ZFS"&gt;&lt;/p&gt;&lt;p&gt;I've already written about some of my backup strategies in Proxmox. Proxmox Backup Server is an option, but it's not always the best option, especially if you're using lxc containers.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://it-notes.dragas.net/2020/10/06/efficient-backup-of-lxc-containers-in-proxmox/"&gt;LVM and Ceph RBD baked containers have already been covered&lt;/a&gt; in another post, but one of the (many) great options, if you use Proxmox, is ZFS. I extensively use ZFS both on FreeBSD and Linux (&lt;a href="https://it-notes.dragas.net/2020/06/28/btrfs-automatic-snapshots-and-remote-backups/"&gt;and always wished that BTRFS could reach the same level of reliability&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;When I don't need a networked file system (Ceph) and don't want to use LVM, I tend to install Proxmox VMs and lxc containers on ZFS. Let's now focus on backing up lxc containers.&lt;/p&gt;
&lt;p&gt;Proxmox uses ZFS datasets for lxc containers' storage, so you'll find all your files under &lt;em&gt;/poolname/subvol-x-disk-y&lt;/em&gt;. We can back up just as in my previous article; we only need a different method for taking snapshots of all those datasets.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;ZFS datasets have a hidden &lt;em&gt;.zfs&lt;/em&gt; directory that contains all the snapshots currently existing for that specific dataset&lt;/strong&gt;. &lt;em&gt;ls&lt;/em&gt; won't show it, but you can &lt;em&gt;cd&lt;/em&gt; into it and it works.&lt;/p&gt;
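&lt;p&gt;A quick way to see it in action (the dataset name follows Proxmox's &lt;em&gt;subvol-x-disk-y&lt;/em&gt; convention and is just an example):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;# snapshot a container dataset, then peek at the hidden directory
zfs snapshot proxzfs/subvol-100-disk-0@test
ls /proxzfs/subvol-100-disk-0/.zfs/snapshot/test/

# optionally, make the .zfs directory visible to ls
zfs set snapdir=visible proxzfs/subvol-100-disk-0
&lt;/code&gt;&lt;/pre&gt;
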
&lt;p&gt;Of course we could use native zfs send/receive or a tool like &lt;a href="https://github.com/psy0rz/zfs_autobackup"&gt;&lt;em&gt;zfs-autobackup&lt;/em&gt;&lt;/a&gt;, which I use daily for local snapshots and remote replication. But we want to save the files, not the zfs dataset, so that we can back up to a different file system. Any file system. So we will use &lt;a href="https://it-notes.dragas.net/2020/06/30/searching-for-a-perfect-backup-solution-borg-and-restic/"&gt;borg&lt;/a&gt;.&lt;/p&gt;
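&lt;p&gt;For reference, the native replication approach we are &lt;em&gt;not&lt;/em&gt; taking here would look something like this (the remote host and pool names are placeholders) - it replicates datasets, not plain files:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;# replicate the whole pool, snapshots included, to another ZFS machine
zfs snapshot -r proxzfs@replica
zfs send -R proxzfs@replica | ssh backuphost zfs receive -d backuppool
&lt;/code&gt;&lt;/pre&gt;
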
&lt;p&gt;Let's suppose our ZFS pool is named "&lt;em&gt;proxzfs&lt;/em&gt;". Here is a suggested script. Of course, &lt;em&gt;this is my script, it works for me and I'm not responsible if it doesn't work for you/destroys all your data/eats your server/etc.&lt;/em&gt;&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;#!/bin/bash

/usr/sbin/zfs snapshot -r proxzfs@forborg

REPOSITORY=yourpath/server/whatever:borgrepository/
TAG=mytag
borg create -v --stats --compression lz4 --progress    \
   $REPOSITORY::$TAG'-{now:%Y-%m-%dT%H:%M:%S}'          \
   /proxzfs/*/.zfs/snapshot/forborg/  \
   --exclude '*subvolYouMayWantToExclude-disk-0*'

/usr/sbin/zfs destroy -vrR proxzfs@forborg

borg prune -v $REPOSITORY --stats --prefix $TAG'-' \
   --keep-daily=31 --keep-weekly=4 --keep-monthly=12
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This small script creates a &lt;em&gt;@forborg&lt;/em&gt; snapshot for every dataset it finds under "&lt;em&gt;proxzfs&lt;/em&gt;", then fires up borg and has it traverse the &lt;em&gt;forborg&lt;/em&gt; snapshots automatically mounted inside each dataset's &lt;em&gt;.zfs&lt;/em&gt; directory.&lt;/p&gt;
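&lt;p&gt;To check that the archives are actually landing where expected, borg itself can list and extract them (the archive name below is just an example):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;# list the archives currently stored in the repository
borg list $REPOSITORY

# test a restore: extract one archive into the current directory
borg extract $REPOSITORY::mytag-2022-01-20T07:00:00
&lt;/code&gt;&lt;/pre&gt;
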
&lt;p&gt;The script will then destroy the '&lt;em&gt;forborg&lt;/em&gt;' snapshots and execute a borg prune, which deletes the old backups according to the policy you have established. This step could be skipped here, but I prefer to perform it right after a backup so my repository is always consistent with my policy.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Thu, 20 Jan 2022 07:05:00 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2022/01/20/efficient-backup-of-lxc-containers-in-proxmox-zfs/</guid><category>proxmox</category><category>borg</category><category>container</category><category>linux</category><category>snapshots</category><category>backup</category><category>tutorial</category><category>lxc</category><category>zfs</category></item></channel></rss>