<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"><channel xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><title>IT Notes - linux</title><link>https://it-notes.dragas.net/categories/linux/</link><description>Articles in category linux</description><language>en</language><lastBuildDate>Mon, 22 Dec 2025 08:43:02 +0000</lastBuildDate><atom:link href="https://it-notes.dragas.net/categories/linux/feed.xml" rel="self" type="application/rss+xml"></atom:link><item><title>Installing Void Linux on ZFS with Hibernation Support</title><link>https://it-notes.dragas.net/2025/12/22/void-linux-zfs-hibernation-guide/</link><description>&lt;p&gt;&lt;img src="https://upload.wikimedia.org/wikipedia/commons/thumb/0/02/Void_Linux_logo.svg/960px-Void_Linux_logo.svg.png" alt="Installing Void Linux on ZFS with Hibernation Support"&gt;&lt;/p&gt;&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;FreeBSD continues to make strides in desktop support, but Linux still holds an advantage in hardware compatibility. After running openSUSE Tumbleweed on my mini PC for several months, I decided it was time to switch to a solution I could control more closely. Not because Tumbleweed doesn't work well - it works great! - but because I prefer having direct control over what happens on my machine. And I want native ZFS, because I prefer it over btrfs and it allows me to manage snapshots, backups, and rollbacks just as I do on FreeBSD, using the same tools and procedures.&lt;/p&gt;
&lt;p&gt;The choice of &lt;a href="https://voidlinux.org/"&gt;Void Linux&lt;/a&gt; comes from its BSD-like approach: modular and free of unnecessary complexity. This makes it an excellent solution for this type of setup.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://docs.zfsbootmenu.org/"&gt;ZFSBootMenu&lt;/a&gt; is an extremely powerful tool. It provides an experience similar to FreeBSD's boot loader and natively supports ZFS. I strongly recommend reading the documentation and exploring its features, as some of them - like the built-in SSH daemon - can be genuine lifesavers in recovery scenarios.&lt;/p&gt;
&lt;h2&gt;Prerequisites and Audience&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;This guide is not for absolute beginners.&lt;/strong&gt; If you're new to Linux or Unix-like operating systems, you'd be better served by a ready-to-use distribution like &lt;a href="https://www.opensuse.org/"&gt;openSUSE&lt;/a&gt; Leap (or Tumbleweed for a rolling distribution), &lt;a href="https://linuxmint.com/"&gt;Linux Mint&lt;/a&gt;, &lt;a href="https://www.debian.org/"&gt;Debian&lt;/a&gt;, &lt;a href="https://ubuntu.com/"&gt;Ubuntu&lt;/a&gt;, or &lt;a href="https://manjaro.org/"&gt;Manjaro&lt;/a&gt;. The purpose of this article is to demonstrate a stable, upgradeable, and reasonably secure base setup for users already comfortable with system administration. It uses the &lt;strong&gt;glibc&lt;/strong&gt; variant of Void Linux. The &lt;em&gt;&lt;a href="https://docs.voidlinux.org/installation/musl.html"&gt;musl&lt;/a&gt;&lt;/em&gt; version requires different commands, for example for locale generation.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Use at your own risk.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This guide synthesizes instructions from several sources:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.zfsbootmenu.org/en/latest/guides/void-linux/uefi.html"&gt;Void Linux (UEFI) from ZFSBootMenu&lt;/a&gt; - which doesn't address swap. Using a zvol for swap (not the best solution) prevents hibernation and resume. Our approach uses a separate encrypted swap partition that enables proper resume.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://docs.voidlinux.org/installation/guides/fde.html"&gt;Void Linux Full Disk Encryption&lt;/a&gt; - excellent for btrfs or ext4, but we want ZFS. We'll borrow the swap configuration approach from here.&lt;/li&gt;
&lt;li&gt;&lt;a href="https://compactbunker.org/p/install-void-linux/"&gt;Install Void Linux with a desktop environment + Flatpaks&lt;/a&gt; - for the desktop portion.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;If your setup differs from what's described here (NVMe disk, UEFI boot, Secure Boot disabled), consult the linked guides for explanations and variations.&lt;/p&gt;
&lt;h3&gt;Installation Script (Optional)&lt;/h3&gt;
&lt;p&gt;If you want to reproduce this setup quickly, I maintain a script that automates the procedure described in this guide: disk partitioning, ZFS pool and dataset creation, encrypted swap for hibernation resume, dracut configuration, and ZFSBootMenu EFI setup. An optional KDE Plasma desktop installation is also supported.&lt;/p&gt;
&lt;p&gt;The script is interactive and will ask for the required parameters (target disk, timezone and keymap, passphrases, desktop options). &lt;a href="https://brew.bsd.cafe/stefano/void-zfs-hibernation"&gt;Requirements, usage instructions, and known limitations are documented in the repository README&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;That said, I still recommend going through the manual process at least once. Understanding each step is part of the value of this setup, especially when troubleshooting or adapting it to different hardware.&lt;/p&gt;
&lt;h2&gt;Boot Environment&lt;/h2&gt;
&lt;p&gt;Since ZFS isn't supported by the base Void Linux image, we'll use &lt;a href="https://github.com/leahneukirchen/hrmpf/releases"&gt;hrmpf&lt;/a&gt;, an excellent rescue system based on Void Linux that includes ZFS support out of the box.&lt;/p&gt;
&lt;p&gt;After booting, you can either proceed directly or SSH into the machine to continue remotely. I generally prefer SSH since it makes copy-paste operations much easier - especially when dealing with UUIDs and long commands. To enable SSH access, set a root password and allow root login:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;passwd
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Edit &lt;code&gt;/etc/ssh/sshd_config&lt;/code&gt; and enable:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;PermitRootLogin yes
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Restart the SSH daemon:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;sv restart sshd
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Find the machine's IP address:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;ip addr
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You can now connect via SSH from another device.&lt;/p&gt;
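&lt;p&gt;For example, assuming the machine reported 192.168.1.50 (a placeholder - use the address shown by &lt;code&gt;ip addr&lt;/code&gt;):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;ssh root@192.168.1.50
&lt;/code&gt;&lt;/pre&gt;
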
&lt;h2&gt;Initial Setup&lt;/h2&gt;
&lt;p&gt;Set up the environment variables and generate a host ID - we need it for ZFS:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;source /etc/os-release
export ID

zgenhostid -f 0x00bab10c
&lt;/code&gt;&lt;/pre&gt;
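
&lt;p&gt;If you want to confirm the value was written (an optional check, not part of the original procedure):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;hostid
&lt;/code&gt;&lt;/pre&gt;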

&lt;h2&gt;Disk Configuration&lt;/h2&gt;
&lt;p&gt;Identify your target disk and set up the partition variables. This approach keeps everything consistent and reduces errors:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Set the base disk - adjust this to match your system
export DISK=&amp;quot;/dev/nvme0n1&amp;quot;

# For NVMe disks, partitions are named like nvme0n1p1, nvme0n1p2, etc.
# For SATA/SAS disks (sda, sdb), partitions are named sda1, sda2, etc.
# Set the partition separator accordingly:
export PART_SEP=&amp;quot;p&amp;quot;  # Use &amp;quot;p&amp;quot; for NVMe, empty string &amp;quot;&amp;quot; for SATA/SAS

# Define partition numbers
export BOOT_PART=&amp;quot;1&amp;quot;
export SWAP_PART=&amp;quot;2&amp;quot;
export POOL_PART=&amp;quot;3&amp;quot;

# Build full device paths
export BOOT_DEVICE=&amp;quot;${DISK}${PART_SEP}${BOOT_PART}&amp;quot;
export SWAP_DEVICE=&amp;quot;${DISK}${PART_SEP}${SWAP_PART}&amp;quot;
export POOL_DEVICE=&amp;quot;${DISK}${PART_SEP}${POOL_PART}&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Verify your configuration before proceeding:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;echo &amp;quot;Boot device: $BOOT_DEVICE&amp;quot;
echo &amp;quot;Swap device: $SWAP_DEVICE&amp;quot;
echo &amp;quot;Pool device: $POOL_DEVICE&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Wipe the Disk&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Warning: This operation will irreversibly destroy all data on the selected disk. Double-check that you've selected the correct disk and be sure to have a complete backup of your system!&lt;/strong&gt;&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zpool labelclear -f &amp;quot;$DISK&amp;quot;

wipefs -a &amp;quot;$DISK&amp;quot;
sgdisk --zap-all &amp;quot;$DISK&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Create Partitions&lt;/h2&gt;
&lt;h3&gt;EFI System Partition&lt;/h3&gt;
&lt;p&gt;If you're not using UEFI boot, adapt this procedure following the appropriate guide linked at the beginning of this post:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;sgdisk -n &amp;quot;${BOOT_PART}:1m:+512m&amp;quot; -t &amp;quot;${BOOT_PART}:ef00&amp;quot; &amp;quot;$DISK&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;Swap Partition&lt;/h3&gt;
&lt;p&gt;The swap partition should be slightly larger than your RAM to support hibernation. When you hibernate, the entire contents of RAM are written to swap, so you need enough space to hold it all plus some overhead. In this example, I have 16 GB of RAM, so I'm creating an 18 GB swap partition:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;sgdisk -n &amp;quot;${SWAP_PART}:0:+18g&amp;quot; -t &amp;quot;${SWAP_PART}:8200&amp;quot; &amp;quot;$DISK&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
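
&lt;p&gt;If you're not sure how much RAM the machine has, you can check before picking a size (a quick helper, not part of the original steps):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;grep MemTotal /proc/meminfo
&lt;/code&gt;&lt;/pre&gt;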

&lt;h3&gt;ZFS Pool Partition&lt;/h3&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;sgdisk -n &amp;quot;${POOL_PART}:0:-10m&amp;quot; -t &amp;quot;${POOL_PART}:bf00&amp;quot; &amp;quot;$DISK&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Set Up ZFS Encryption&lt;/h2&gt;
&lt;p&gt;Encrypting the disk is strongly recommended, especially for laptops. Replace &lt;code&gt;SomeKeyphrase&lt;/code&gt; with a strong passphrase. Keep in mind that during early boot the keyboard layout might default to US, so choose one that's easy to type on a US layout:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;echo 'SomeKeyphrase' &amp;gt; /etc/zfs/zroot.key
chmod 000 /etc/zfs/zroot.key
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Create the ZFS Pool&lt;/h2&gt;
&lt;p&gt;Create the pool with conservative, well-tested options:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zpool create -f -o ashift=12 \
 -O compression=lz4 \
 -O acltype=posixacl \
 -O xattr=sa \
 -O relatime=on \
 -O encryption=aes-256-gcm \
 -O keylocation=file:///etc/zfs/zroot.key \
 -O keyformat=passphrase \
 -o autotrim=on \
 -o compatibility=openzfs-2.2-linux \
 -m none zroot &amp;quot;$POOL_DEVICE&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Create ZFS Datasets&lt;/h2&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs create -o mountpoint=none zroot/ROOT
zfs create -o mountpoint=/ -o canmount=noauto zroot/ROOT/${ID}
zfs create -o mountpoint=/home zroot/home

zpool set bootfs=zroot/ROOT/${ID} zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Export and Reimport for Installation&lt;/h2&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zpool export zroot
zpool import -N -R /mnt zroot
zfs load-key -L prompt zroot

zfs mount zroot/ROOT/${ID}
zfs mount zroot/home

udevadm trigger
&lt;/code&gt;&lt;/pre&gt;
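
&lt;p&gt;Before installing, it's worth confirming that the datasets are mounted under &lt;code&gt;/mnt&lt;/code&gt; (an optional sanity check):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs list -o name,mountpoint,mounted
&lt;/code&gt;&lt;/pre&gt;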

&lt;h2&gt;Install the Base System&lt;/h2&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;XBPS_ARCH=x86_64 xbps-install \
  -S -R https://mirrors.servercentral.com/voidlinux/current \
  -r /mnt base-system
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Copy Host Configuration&lt;/h2&gt;
&lt;p&gt;Copy the files we generated earlier to the new system:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;cp /etc/hostid /mnt/etc
mkdir -p /mnt/etc/zfs
cp /etc/zfs/zroot.key /mnt/etc/zfs
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Configure Encrypted Swap&lt;/h2&gt;
&lt;p&gt;Now we'll set up the encrypted swap partition. This is where the hibernation magic happens - by using a separate LUKS-encrypted partition instead of a ZFS zvol, we can properly resume from hibernation.&lt;/p&gt;
&lt;p&gt;Format the swap partition with LUKS:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;cryptsetup luksFormat --type luks1 &amp;quot;$SWAP_DEVICE&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Open the encrypted partition, create the swap filesystem, and activate it:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;cryptsetup luksOpen &amp;quot;$SWAP_DEVICE&amp;quot; cryptswap
mkswap /dev/mapper/cryptswap
swapon /dev/mapper/cryptswap
&lt;/code&gt;&lt;/pre&gt;
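
&lt;p&gt;Optionally, verify that the encrypted swap is active:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;swapon --show
&lt;/code&gt;&lt;/pre&gt;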

&lt;h2&gt;Preserve Variables for Chroot&lt;/h2&gt;
&lt;p&gt;Before entering the chroot, save the disk variables so they remain available inside the new environment:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;cat &amp;lt;&amp;lt; EOF &amp;gt; /mnt/root/disk-vars.sh
export DISK=&amp;quot;$DISK&amp;quot;
export PART_SEP=&amp;quot;$PART_SEP&amp;quot;
export BOOT_PART=&amp;quot;$BOOT_PART&amp;quot;
export SWAP_PART=&amp;quot;$SWAP_PART&amp;quot;
export POOL_PART=&amp;quot;$POOL_PART&amp;quot;
export BOOT_DEVICE=&amp;quot;$BOOT_DEVICE&amp;quot;
export SWAP_DEVICE=&amp;quot;$SWAP_DEVICE&amp;quot;
export POOL_DEVICE=&amp;quot;$POOL_DEVICE&amp;quot;
export ID=&amp;quot;$ID&amp;quot;
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Enter the Chroot Environment&lt;/h2&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;xchroot /mnt
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;From this point forward, all commands are executed inside the new system.&lt;/p&gt;
&lt;p&gt;First, load the saved variables:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;source /root/disk-vars.sh
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Configure fstab&lt;/h2&gt;
&lt;p&gt;Add the swap entry to &lt;code&gt;/etc/fstab&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;/dev/mapper/cryptswap   none            swap            defaults        0 0
&lt;/code&gt;&lt;/pre&gt;
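
&lt;p&gt;One way to append it from the shell (same line as above):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;echo &amp;quot;/dev/mapper/cryptswap   none   swap   defaults   0 0&amp;quot; &amp;gt;&amp;gt; /etc/fstab
&lt;/code&gt;&lt;/pre&gt;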

&lt;h2&gt;Set Up Automatic Swap Unlock&lt;/h2&gt;
&lt;p&gt;To avoid entering the swap password separately after unlocking the ZFS pool, we'll create a keyfile stored on the encrypted ZFS dataset. This is secure because the keyfile only becomes accessible after the ZFS pool is unlocked.&lt;/p&gt;
&lt;p&gt;First, install cryptsetup in the new system:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;xbps-install -S cryptsetup
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Generate a random keyfile and add it to the LUKS partition:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;dd bs=1 count=64 if=/dev/urandom of=/boot/volume.key

cryptsetup luksAddKey &amp;quot;$SWAP_DEVICE&amp;quot; /boot/volume.key

chmod 000 /boot/volume.key
chmod -R g-rwx,o-rwx /boot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Add the keyfile to &lt;code&gt;/etc/crypttab&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;echo &amp;quot;cryptswap   $SWAP_DEVICE   /boot/volume.key   luks&amp;quot; &amp;gt;&amp;gt; /etc/crypttab
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Include the keyfile and crypttab in the initramfs. Create &lt;code&gt;/etc/dracut.conf.d/10-crypt.conf&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;install_items+=&amp;quot; /boot/volume.key /etc/crypttab &amp;quot;
&lt;/code&gt;&lt;/pre&gt;
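
&lt;p&gt;If you prefer creating the file in a single step from the shell (same content as above):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;cat &amp;lt;&amp;lt; EOF &amp;gt; /etc/dracut.conf.d/10-crypt.conf
install_items+=&amp;quot; /boot/volume.key /etc/crypttab &amp;quot;
EOF
&lt;/code&gt;&lt;/pre&gt;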

&lt;h2&gt;Basic System Configuration&lt;/h2&gt;
&lt;p&gt;Configure keyboard layout and hardware clock. Adjust the keymap and timezone to match your location:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; /etc/rc.conf
KEYMAP=&amp;quot;us&amp;quot;
HARDWARECLOCK=&amp;quot;UTC&amp;quot;
EOF

ln -sf /usr/share/zoneinfo/Europe/Rome /etc/localtime
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Configure locales:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;cat &amp;lt;&amp;lt; EOF &amp;gt;&amp;gt; /etc/default/libc-locales
en_US.UTF-8 UTF-8
en_US ISO-8859-1
EOF

echo &amp;quot;LANG=en_US.UTF-8&amp;quot; &amp;gt; /etc/locale.conf

xbps-reconfigure -f glibc-locales
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Set the root password:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;passwd
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Configure ZFS Boot Support&lt;/h2&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;cat &amp;lt;&amp;lt; EOF &amp;gt; /etc/dracut.conf.d/zol.conf
nofsck=&amp;quot;yes&amp;quot;
add_dracutmodules+=&amp;quot; zfs &amp;quot;
omit_dracutmodules+=&amp;quot; btrfs &amp;quot;
install_items+=&amp;quot; /etc/zfs/zroot.key &amp;quot;
EOF
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Install ZFS:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;xbps-install -S zfs
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Configure ZFSBootMenu&lt;/h2&gt;
&lt;p&gt;Set the basic boot properties:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs set org.zfsbootmenu:commandline=&amp;quot;quiet&amp;quot; zroot/ROOT
zfs set org.zfsbootmenu:keysource=&amp;quot;zroot/ROOT/${ID}&amp;quot; zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;The Critical Step: Hibernation Support&lt;/h3&gt;
&lt;p&gt;Now we need to configure hibernation resume. This is the key insight that makes this setup work: normally, the encrypted ZFS root is mounted first, and the swap partition is unlocked afterwards. But when resuming from hibernation, the kernel needs to read the hibernation image from swap &lt;em&gt;before&lt;/em&gt; mounting the root filesystem - otherwise, the saved state would be lost.&lt;/p&gt;
&lt;p&gt;To solve this, we tell ZFSBootMenu to unlock the swap partition early, before mounting ZFS, by specifying its LUKS UUID.&lt;/p&gt;
&lt;p&gt;Get the UUID of your swap partition:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;blkid &amp;quot;$SWAP_DEVICE&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You'll see output like:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;/dev/...: UUID=&amp;quot;xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx&amp;quot; TYPE=&amp;quot;crypto_LUKS&amp;quot; PARTUUID=&amp;quot;...&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Store the UUID in a variable for the next step:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;SWAP_UUID=$(blkid -s UUID -o value &amp;quot;$SWAP_DEVICE&amp;quot;)
echo &amp;quot;Swap UUID: $SWAP_UUID&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now set the boot parameters using the captured UUID:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs set org.zfsbootmenu:commandline=&amp;quot;rd.luks.uuid=$SWAP_UUID resume=/dev/mapper/cryptswap&amp;quot; zroot/ROOT/${ID}
&lt;/code&gt;&lt;/pre&gt;
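
&lt;p&gt;To double-check that the property was stored correctly (optional):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs get org.zfsbootmenu:commandline zroot/ROOT/${ID}
&lt;/code&gt;&lt;/pre&gt;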

&lt;h2&gt;Set Up EFI Boot&lt;/h2&gt;
&lt;p&gt;Format the EFI partition and create its mount point:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;mkfs.vfat -F32 &amp;quot;$BOOT_DEVICE&amp;quot;

mkdir -p /boot/efi
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Add the EFI partition to &lt;code&gt;/etc/fstab&lt;/code&gt; using its UUID:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;BOOT_UUID=$(blkid -s UUID -o value &amp;quot;$BOOT_DEVICE&amp;quot;)
echo &amp;quot;UUID=$BOOT_UUID    /boot/efi    vfat    defaults    0 0&amp;quot; &amp;gt;&amp;gt; /etc/fstab
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Mount it:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;mount /boot/efi
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Install ZFSBootMenu&lt;/h2&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;xbps-install -S curl

mkdir -p /boot/efi/EFI/ZBM
curl -o /boot/efi/EFI/ZBM/VMLINUZ.EFI -L https://get.zfsbootmenu.org/efi
cp /boot/efi/EFI/ZBM/VMLINUZ.EFI /boot/efi/EFI/ZBM/VMLINUZ-BACKUP.EFI
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Configure the EFI boot entries:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;xbps-install -S efibootmgr

efibootmgr -c -d &amp;quot;$DISK&amp;quot; -p &amp;quot;$BOOT_PART&amp;quot; \
  -L &amp;quot;ZFSBootMenu (Backup)&amp;quot; \
  -l '\EFI\ZBM\VMLINUZ-BACKUP.EFI'

efibootmgr -c -d &amp;quot;$DISK&amp;quot; -p &amp;quot;$BOOT_PART&amp;quot; \
  -L &amp;quot;ZFSBootMenu&amp;quot; \
  -l '\EFI\ZBM\VMLINUZ.EFI'
&lt;/code&gt;&lt;/pre&gt;
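
&lt;p&gt;Running &lt;code&gt;efibootmgr&lt;/code&gt; without arguments lists the entries and the boot order, which is a quick way to confirm both were created:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;efibootmgr
&lt;/code&gt;&lt;/pre&gt;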

&lt;h3&gt;Microcode updates&lt;/h3&gt;
&lt;p&gt;Void Linux is modular, so you may need to install additional packages for your specific hardware. The Intel microcode, for example, requires the non-free repo:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# For Intel CPUs
xbps-install -S void-repo-nonfree 
xbps-install -S intel-ucode

# For AMD CPUs/GPUs
xbps-install -S linux-firmware-amd
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;After installing microcode updates, regenerate the boot images; if you're stopping at a minimal system, you can exit the chroot afterwards:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;xbps-reconfigure -fa
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Desktop Installation (Optional)&lt;/h2&gt;
&lt;p&gt;If all you need is a minimal system or a server, you're done and ready to reboot. For a complete desktop environment, continue with the following steps.&lt;/p&gt;
&lt;h3&gt;Install Core Desktop Packages&lt;/h3&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;xbps-install -S vim nano dbus elogind polkit xorg xorg-fonts xorg-video-drivers xorg-input-drivers dejavu-fonts-ttf terminus-font NetworkManager pipewire alsa-pipewire wireplumber xdg-user-dirs unzip gzip xz 7zip
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;Install KDE Plasma&lt;/h3&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;xbps-install -S kde-plasma dolphin konsole firefox kdegraphics-thumbnailers ffmpegthumbs vlc ark kwrite discover kf6-purpose
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;Enable Services&lt;/h3&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;ln -s /etc/sv/NetworkManager /etc/runit/runsvdir/default/
ln -s /etc/sv/dbus /etc/runit/runsvdir/default/
ln -s /etc/sv/udevd /etc/runit/runsvdir/default/
ln -s /etc/sv/polkitd /etc/runit/runsvdir/default/
ln -s /etc/sv/sddm /etc/runit/runsvdir/default/
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;Configure PipeWire Audio&lt;/h3&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;mkdir -p /etc/xdg/autostart
ln -sf /usr/share/applications/pipewire.desktop /etc/xdg/autostart/

mkdir -p /etc/pipewire/pipewire.conf.d
ln -sf /usr/share/examples/wireplumber/10-wireplumber.conf /etc/pipewire/pipewire.conf.d/
ln -sf /usr/share/examples/pipewire/20-pipewire-pulse.conf /etc/pipewire/pipewire.conf.d/

mkdir -p /etc/alsa/conf.d
ln -sf /usr/share/alsa/alsa.conf.d/50-pipewire.conf /etc/alsa/conf.d
ln -sf /usr/share/alsa/alsa.conf.d/99-pipewire-default.conf /etc/alsa/conf.d
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;Enable Additional Repositories and Flatpak (Optional)&lt;/h3&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;xbps-install -S void-repo-nonfree void-repo-multilib void-repo-multilib-nonfree

xbps-install -S flatpak
flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;Create a Regular User and exit&lt;/h3&gt;
&lt;p&gt;For desktop use, create a non-root user with appropriate group memberships.
Replace &lt;code&gt;username&lt;/code&gt; with your desired username.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;useradd -m username
passwd username
usermod -G video,wheel,plugdev,kvm,audio,network username
exit
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;Fix for NetworkManager&lt;/h3&gt;
&lt;p&gt;xchroot bind mounts /etc/resolv.conf and, once you exit the chroot, leaves an empty file behind. NetworkManager won't like that, so let's clean it up:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;umount -l /mnt/etc/resolv.conf 2&amp;gt;/dev/null || true

rm -f /mnt/etc/resolv.conf
ln -s /run/NetworkManager/resolv.conf /mnt/etc/resolv.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Exit and Reboot&lt;/h2&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;umount -n -R /mnt
zpool export zroot
reboot
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Post-Installation&lt;/h2&gt;
&lt;p&gt;If everything went well, after entering your ZFS encryption password, you'll be greeted by the SDDM login screen.&lt;/p&gt;
&lt;h2&gt;Testing Hibernation&lt;/h2&gt;
&lt;p&gt;To verify that hibernation works correctly, you can click the "Hibernate" button or run:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;loginctl hibernate
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The system should power off. When you turn it back on, ZFSBootMenu will prompt for the password, unlock the swap partition, detect the hibernation image, and resume your session exactly where you left off.&lt;/p&gt;
&lt;p&gt;If resume fails, check that (the commands below can help verify each point):&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;The LUKS UUID in the ZFS commandline property matches your swap partition&lt;/li&gt;
&lt;li&gt;The swap partition is large enough for your RAM&lt;/li&gt;
&lt;li&gt;The dracut configuration includes the crypttab and keyfile&lt;/li&gt;
&lt;/ol&gt;
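&lt;p&gt;A few example commands to verify each point from the installed system (a rough sketch - adjust the dataset and device names to your setup):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# 1. Compare the UUID in the boot commandline with the swap partition's LUKS UUID
#    (dataset name assumes ID=void, as set earlier in this guide)
zfs get -H -o value org.zfsbootmenu:commandline zroot/ROOT/void
blkid -s UUID -o value /dev/nvme0n1p2

# 2. Confirm the swap area covers your RAM
free -h
swapon --show

# 3. Make sure crypttab and the keyfile ended up in the initramfs
lsinitrd | grep -E 'crypttab|volume.key'
&lt;/code&gt;&lt;/pre&gt;
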
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;You now have a fully functional Void Linux system with native ZFS, full disk encryption, and working hibernation. The system is rolling, lightweight, and easy to maintain. Enjoy!&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Mon, 22 Dec 2025 08:43:02 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2025/12/22/void-linux-zfs-hibernation-guide/</guid><category>linux</category><category>desktop</category><category>zfs</category><category>server</category><category>tutorial</category><category>ownyourdata</category><category>voidlinux</category></item><item><title>Why I (still) love Linux</title><link>https://it-notes.dragas.net/2025/11/24/why-i-still-love-linux/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/terminal_htop.webp" alt="A screen showing htop"&gt;&lt;/p&gt;&lt;p&gt;I know, this title might come as a surprise to many. Or perhaps, for those who truly know me, it won’t. I am not a fanboy. The BSDs and the illumos distributions generally follow an approach to design and development that aligns more closely with the way I think, not to mention the wonderful communities around them, but that does not mean I do not use and appreciate other solutions.
I usually publish articles about how much I love the BSDs or illumos distributions, but today I want to talk about Linux (or, better, GNU/Linux) and why, despite everything, it still holds a place in my heart. This will be the first in a series of articles where I’ll discuss other operating systems.&lt;/p&gt;
&lt;h2&gt;Where It All Began&lt;/h2&gt;
&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=UnVp25-6Qao"&gt;I started right here&lt;/a&gt;, with GNU/Linux, back in 1996. It was my first real prompt after the Commodore 64 and DOS. It was my first step toward Unix systems, and it was love at first shell. I felt a sense of freedom - a freedom that the operating systems I had known up to that point (few, to be honest) had never given me. It was like a “blank sheet” (or rather, a black one) with a prompt on it. I understood immediately that this prompt, thanks to command chaining, pipes, and all the marvels of Unix and Unix-like systems, would allow me to do anything. And that sense of freedom is what makes me love Unix systems to this day.&lt;/p&gt;
&lt;p&gt;I was young, but my intuition was correct. And even though I couldn't afford to keep a full Linux installation on that computer long-term due to hardware limitations, I realized that this would be my future. A year later, a new computer arrived, allowing me to use Linux daily, for everything. And successfully, without missing Windows at all (except for a small partition, strictly for gaming).&lt;/p&gt;
&lt;p&gt;When I arrived at university, in 1998, I was one of the few who knew it. One of the few who appreciated it. One of the few who hoped to see a flourishing future for it. Everywhere. Widespread. A dream come true. I was a speaker at Linux Days, I actively participated in translation projects, and I wrote articles for Italian magazines. I was a purist regarding the "GNU/Linux" nomenclature because I felt it was wrong to ignore the GNU part - it was fundamental. Because &lt;a href="https://my-notes.dragas.net/2023/04/19/the-year-of-linux-freebsd-on-desktops-may-never-come/"&gt;perhaps the "Year of the Linux Desktop" never arrived&lt;/a&gt;, but Linux is now everywhere. On my desktop, without a doubt. But also on my smartphone (Android) and on those of hundreds of millions of people. Just as it is in my car. And in countless devices surrounding us - even if we don’t know it. And this is the true success. Let’s not focus too much on the complaint that "it’s not compatible with my device X". It is your device that is not compatible with Linux, not the other way around. Just like when, many years ago, people complained that their WinModems (modems that offloaded all processing to obscure, closed-source Windows drivers) didn't work on Linux. For "early adopters" like me, this concept has always been present, even though, fortunately, things have improved exponentially.&lt;/p&gt;
&lt;p&gt;Linux was what companies accepted most willingly (not totally, but still...): the ongoing lawsuits against the BSDs hampered their spread, and Linux seemed like that "breath of fresh air" the world needed.&lt;/p&gt;
&lt;p&gt;Linux and its distributions (especially those untethered from corporations, like Debian, Gentoo, Arch, etc.) allowed us to replicate expensive "commercial" setups at a fraction of the cost. Reliability was good, updating was simple, and there was a certain consistency. Not as marked as that of the BSDs, but sufficient.&lt;/p&gt;
&lt;p&gt;The world was ready to accept it, albeit reluctantly. Linus Torvalds, despite his sometimes harsh and undiplomatic tone, carried forward the kernel development with continuity and coherence, making difficult decisions but always in line with the project. The "move fast and break things" model was almost necessary because there was still so much to build. I also remember the era when Linux - speaking of the kernel - was designed almost exclusively for x86. The other architectures, to simplify, worked thanks to a series of adaptations that brought most behavior back to what was expected for x86.&lt;/p&gt;
&lt;p&gt;And the distributions, especially the more "arduous" ones to install, taught me a lot. The distro-hopping of the early 2000s made me truly understand partitioning, the boot procedure (LILO first, then GRUB, etc.), and for this, I must mainly thank Gentoo and Arch (and the FreeBSD handbook - but this is for another article). I learned the importance of backups the hard way, and I keep this lesson well in mind today. My Linux desktops ran mainly with Debian (initially), then Gentoo, Arch, and openSUSE (which, at the time, was still called "SUSE Linux"), Manjaro, etc. My old 486sx at 25 MHz with 4 MB (yes, MB) of RAM, powered by Debian, allowed me to download emails (mutt and fetchmail), news (inn + suck), program in C, and create shell scripts - at the end of the 90s.&lt;/p&gt;
&lt;h2&gt;When Linux Conquered the World&lt;/h2&gt;
&lt;p&gt;Then the first Ubuntu was launched, and many things changed. I don't know if it was thanks to Ubuntu or simply because the time was ripe, but attention shifted to Linux on the desktop as well (albeit mainly on the computers of us enthusiasts), and many companies began to contribute actively to the system or distributions.&lt;/p&gt;
&lt;p&gt;I am not against the participation of large companies in Open Source. Their contributions can be valuable for the development of Open Source itself, and if companies make money from it, good for them. If this ultimately leads to a more complete and valid Open Source product, then I welcome it! It is precisely thanks to mass adoption that Linux cleared the path for the acceptance of Open Source at all levels. I still remember when, just after graduating, I was told that Linux (and Open Source systems like the BSDs) were "toys for universities". I dare anyone to say that today!&lt;/p&gt;
&lt;p&gt;But this must be done correctly: without spoiling the original idea of the project and without hijacking (voluntarily or not) development toward a different model. Toward a different evolution. The use of Open Source must not become a vehicle for a business model that tends to close, trap, or cage the user. Or harm anyone. And if it is oriented toward worsening the product solely for one's own gain, I can only be against it.&lt;/p&gt;
&lt;h2&gt;What Changed Along the Way&lt;/h2&gt;
&lt;p&gt;And this is where, unfortunately, I believe things have changed in the Linux world (if not in the kernel itself, at least in many distributions). Innovation used to be disruptive out of necessity. Today, in many cases, disruption happens without purpose, and stability is often sacrificed for changes that do not solve real problems. Sometimes, in the name of improved security or stability, a new, immature, and unstable product is created - effectively worsening the status quo.&lt;/p&gt;
&lt;p&gt;To give an example, I am not against systemd on principle, but I consider it a tool distant from the original Unix principles - do one thing and do it well - full of features and functions that, frankly, I often do not need. I don't want systemd managing my containerization. For restarting stopped services? There are monit and supervisor - efficient, effective, and optional. And, I might add: services shouldn't crash; they should handle problems in a non-destructive way. My Raspberry Pi A+ doesn't need systemd, which occupies a huge amount of RAM (and precious clock cycles) for features that will never be useful or necessary on that platform.&lt;/p&gt;
&lt;p&gt;But "move fast and break things" has arrived everywhere, and software is often written by gluing together unstable libraries or those laden with system vulnerabilities. Not to mention so-called "vibe coding" - which might give acceptable results at certain levels, but should not be used when security and confidentiality become primary necessities or, at least, without an understanding of what has been written.&lt;/p&gt;
&lt;p&gt;We are losing much of the Unix philosophy, and many Linux distributions are now taking the path of distancing themselves from a concept of cross-compatibility ("if it works on Linux, I don't care about other operating systems"), of minimalism, of "do one thing and do it well". And, in my opinion, we are therefore losing many of the hallmarks that have distinguished its behavior over the years.&lt;/p&gt;
&lt;p&gt;In my view, this depends on two factors: a development model linked to a concept of "disposable" electronics, applied even to software, and the pressure from some companies to push development where they want, not where the project should go. Therefore, in certain cases, the GPL becomes a double-edged sword: on one hand, it protects the software and ensures that contributions remain available. On the other, it risks creating a situation where the most "influential" player can totally direct development because - unable to close their product - they have an interest in the entire project going in the direction they have predisposed. In these cases, perhaps, BSD licenses actually protect the software itself more effectively. Because companies can take and use without an obligation to contribute. If they do, it is because they want to, as in the virtuous case of Netflix with FreeBSD. And this, while it may remove (sometimes precious) contributions to the operating system, guarantees that the steering wheel remains firmly in the hands of those in charge - whether foundations, groups, or individuals.&lt;/p&gt;
&lt;h2&gt;And Why I Still Care&lt;/h2&gt;
&lt;p&gt;And so yes, despite all this, I (still) love Linux.&lt;/p&gt;
&lt;p&gt;Because it was the first Open Source project I truly believed in (and which truly succeeded), because it works, and because the entire world has developed around it. Because it is a platform on which tons of distributions have been built (and some, like Alpine Linux, still maintain that sense of minimalism that I consider correct for an operating system). Because it has distributions like openSUSE (and many others) that work immediately and without problems on my laptop (suspension and hibernation included) and on my miniPC, a fantastic tool I use daily. Because hardware support has improved immensely, and it is now rare to find incompatible hardware.&lt;/p&gt;
&lt;p&gt;Because it has been my life companion for 30 years and has contributed significantly to putting food on the table and letting me sleep soundly. Because it allowed me to study without spending insane amounts on licenses or manuals. Because it taught me, first, to think outside the box. To be free.&lt;/p&gt;
&lt;p&gt;So thank you, GNU/Linux.&lt;/p&gt;
&lt;p&gt;Even if your btrfs, after almost 18 years, still eats data in spectacular fashion.
Even if you rename my network interfaces after a reboot. 
Even though, at times, I get the feeling that you’re slowly turning into what you once wanted to defeat.&lt;/p&gt;
&lt;p&gt;Even if you are not my first choice for many workloads, I foresee spending a lot of time with you for at least the next 30 years.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Mon, 24 Nov 2025 08:52:00 +0100</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2025/11/24/why-i-still-love-linux/</guid><category>linux</category><category>server</category><category>sysadmin</category><category>ownyourdata</category></item><item><title>Static Web Hosting on the Intel N150: FreeBSD, SmartOS, NetBSD, OpenBSD and Linux Compared  </title><link>https://it-notes.dragas.net/2025/11/19/static-web-hosting-intel-n150-freebsd-smartos-netbsd-openbsd-linux/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/server_rack.webp" alt="A server rack with some servers and cables"&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Update&lt;/strong&gt;: This post has been updated to include &lt;strong&gt;Docker&lt;/strong&gt; benchmarks and a comparison of container overhead versus FreeBSD Jails and illumos Zones.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Some operating systems (FreeBSD and Linux) support kernel TLS (kTLS) and the related SSL_sendfile path in nginx, which can improve HTTPS performance for static files. Since this feature is not available on all the systems included in the comparison (for example NetBSD, OpenBSD and illumos), the benchmarks were run with a common baseline configuration that does not rely on kTLS. The goal is to compare the systems under similar conditions rather than to measure OS specific optimizations.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I often get very specific infrastructure requests from clients. Most of the time it is some form of hosting. My job is usually to suggest and implement the setup that fits their goals, skills and long term plans.  &lt;/p&gt;
&lt;p&gt;If there are competent technicians on the other side, and they are willing to learn or already comfortable with Unix style systems, my first choices are usually one of the BSDs or an illumos distribution. If they need a control panel, or they already have a lot of experience with a particular stack that will clearly help them, I will happily use Linux and it usually delivers solid, reliable results.  &lt;/p&gt;
&lt;p&gt;Every now and then someone asks the question I like the least:  &lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“But how does it &lt;em&gt;perform&lt;/em&gt; compared to X or Y?”  &lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I have never been a big fan of benchmarks. At best they capture a very specific workload on a very specific setup. They are almost never a perfect reflection of what will happen in the real world.  &lt;/p&gt;
&lt;p&gt;For example, I discovered that idle bhyve VMs seem to use fewer resources when the host is illumos than when the host is FreeBSD. It looks strange at first sight, but the illumos people are clearly working very hard on this, and the result is a very capable and efficient platform.  &lt;/p&gt;
&lt;p&gt;Despite my skepticism, from time to time I enjoy running some comparative tests. I already did it with &lt;a href="https://it-notes.dragas.net/2024/06/10/proxmox-vs-freebsd-which-virtualization-host-performs-better/"&gt;Proxmox KVM versus FreeBSD bhyve&lt;/a&gt;, and I also &lt;a href="https://it-notes.dragas.net/2025/09/19/freebsd-vs-smartos-whos-faster-for-jails-zones-bhyve/"&gt;compared Jails, Zones, bhyve and KVM&lt;/a&gt; on the same Intel N150 box. That led to the FreeBSD vs SmartOS article where I focused on CPU and memory performance on this small mini PC.  &lt;/p&gt;
&lt;p&gt;This time I wanted to do something simpler, but also closer to what I see every day: &lt;strong&gt;static web hosting.&lt;/strong&gt;  &lt;/p&gt;
&lt;p&gt;Instead of synthetic CPU or I/O tests, I wanted to measure how different operating systems behave when they serve a small static site with nginx, both over HTTP and HTTPS.  &lt;/p&gt;
&lt;p&gt;This is &lt;strong&gt;not&lt;/strong&gt; meant to be a super rigorous benchmark. I used the default nginx packages, almost default configuration, and did not tune any OS specific kernel settings. In my experience, careful tuning of kernel and network parameters can easily move numbers by several tens of percentage points. The problem is that very few people actually spend time chasing such optimizations. Much more often, once a limit is reached, someone yells “we need mooooar powaaaar” while the real fix would be to tune the existing stack a bit.&lt;/p&gt;
&lt;p&gt;So the question I want to answer here is more modest and more practical:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;With default nginx and a small static site, how much does the choice of host OS really matter on this Intel N150 mini PC?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;em&gt;Spoiler&lt;/em&gt;: less than people think, at least for plain HTTP. Things get more interesting once TLS enters the picture.&lt;/p&gt;
&lt;hr /&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt;&lt;br /&gt;
These benchmarks are a snapshot of my specific hardware, network and configuration. They are useful to compare &lt;em&gt;relative&lt;/em&gt; behavior on this setup. They are not a universal ranking of operating systems. Different CPUs, NICs, crypto extensions, kernel versions or nginx builds can completely change the picture.  &lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr /&gt;
&lt;h2&gt;Test setup&lt;/h2&gt;
&lt;p&gt;The hardware is the same Intel N150 mini PC I used in my previous tests: a small, low power box that still has enough cores to be interesting for lab and small production workloads.  &lt;/p&gt;
&lt;p&gt;On it, I installed several operating systems and environments, always on the bare metal, not nested inside each other. On each OS I installed nginx from the official packages.  &lt;/p&gt;
&lt;h3&gt;Software under test&lt;/h3&gt;
&lt;p&gt;On the host:  &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;SmartOS&lt;/strong&gt;, with:&lt;br /&gt;
- a Debian 12 LX zone&lt;br /&gt;
- an Alpine Linux 3.22 LX zone&lt;br /&gt;
- a native SmartOS zone  &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;FreeBSD&lt;/strong&gt; 14.3-RELEASE:&lt;br /&gt;
- nginx running inside a native jail  &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;OpenBSD&lt;/strong&gt; 7.8:&lt;br /&gt;
- nginx on the host  &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;NetBSD&lt;/strong&gt; 10.1:&lt;br /&gt;
- nginx on the host  &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Debian&lt;/strong&gt; 13.2:&lt;br /&gt;
- nginx on the host &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Alpine Linux&lt;/strong&gt; 3.22:&lt;br /&gt;
- nginx on the host&lt;br /&gt;
- Docker: Debian 13 container running on the Alpine host (ports mapped)&lt;/p&gt;
&lt;p&gt;I also tried to include &lt;strong&gt;DragonFlyBSD&lt;/strong&gt;, but the NIC in this box is not supported. Using a different NIC just for one OS would have made the comparison meaningless, so I excluded it.  &lt;/p&gt;
&lt;h3&gt;nginx configuration&lt;/h3&gt;
&lt;p&gt;In all environments:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;nginx was installed from the system packages  &lt;/li&gt;
&lt;li&gt;&lt;code&gt;worker_processes&lt;/code&gt; was set to &lt;code&gt;auto&lt;/code&gt;  &lt;/li&gt;
&lt;li&gt;the web root contained the same static content  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The important part is that I used &lt;strong&gt;exactly the same &lt;code&gt;nginx.conf&lt;/code&gt; file for all operating systems and all combinations in this article&lt;/strong&gt;. I copied the same configuration file verbatim to every host, jail and zone. The only changes were the IP address and file paths where needed, for example for the TLS certificate and key.  &lt;/p&gt;
&lt;p&gt;The static content was a default build of the example site generated by &lt;a href="https://bssg.dragas.net/"&gt;&lt;strong&gt;BSSG&lt;/strong&gt;, my Bash static site generator&lt;/a&gt;. The web root was the same logical structure on every OS and container type.  &lt;/p&gt;
&lt;p&gt;There is no OS specific tuning in the configuration and no kernel level tweaks. This is very close to a “package install plus minimal config” situation.  &lt;/p&gt;
&lt;h3&gt;TLS configuration&lt;/h3&gt;
&lt;p&gt;For HTTPS I used a very simple configuration, identical on every host.  &lt;/p&gt;
&lt;p&gt;Self signed certificate created with:  &lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;openssl req -x509 -newkey rsa:4096 -nodes -keyout server.key -out server.crt -days 365 -subj &amp;quot;/CN=localhost&amp;quot;  
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Example nginx &lt;code&gt;server&lt;/code&gt; block for HTTPS (simplified):  &lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-nginx"&gt;server {  
listen 443 ssl http2;  
listen [::]:443 ssl http2;  

server_name _;  

ssl_certificate /etc/nginx/ssl/server.crt;  
ssl_certificate_key /etc/nginx/ssl/server.key;  

root /var/www/html;  
index index.html index.htm;  

location / {  
try_files $uri $uri/ =404;  
}  
}  
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The HTTP virtual host is also the same everywhere, with the root pointing to the BSSG example site.  &lt;/p&gt;
&lt;h3&gt;Load generator&lt;/h3&gt;
&lt;p&gt;The tests were run from my workstation on the same LAN:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;client host: a mini PC connected at 2.5 Gbit/s  &lt;/li&gt;
&lt;li&gt;switch: 2.5 Gbit/s  &lt;/li&gt;
&lt;li&gt;test tool: &lt;code&gt;wrk&lt;/code&gt;  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For each target host I ran:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;wrk -t4 -c50 -d10s http://IP&lt;/code&gt;  &lt;/li&gt;
&lt;li&gt;&lt;code&gt;wrk -t4 -c10 -d10s http://IP&lt;/code&gt;  &lt;/li&gt;
&lt;li&gt;&lt;code&gt;wrk -t4 -c50 -d10s https://IP&lt;/code&gt;  &lt;/li&gt;
&lt;li&gt;&lt;code&gt;wrk -t4 -c10 -d10s https://IP&lt;/code&gt;  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each scenario was executed multiple times to reduce noise; the numbers below are medians (or very close to them) from the runs.&lt;/p&gt;
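&lt;p&gt;As a rough illustration (not the exact harness used for this article), repeated runs can be scripted and the median extracted along these lines:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;#!/bin/sh
# Run wrk five times against a URL and print the median requests/sec.
URL=&amp;quot;$1&amp;quot;
for i in 1 2 3 4 5; do
    wrk -t4 -c50 -d10s &amp;quot;$URL&amp;quot; | awk '/Requests\/sec/ {print $2}'
done | sort -n | awk '{v[NR]=$1} END {print &amp;quot;median:&amp;quot;, v[(NR+1)/2]}'
&lt;/code&gt;&lt;/pre&gt;
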
&lt;h2&gt;The contenders&lt;/h2&gt;
&lt;p&gt;To keep things readable, I will refer to each setup as follows:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;SmartOS Debian LX&lt;/strong&gt; → SmartOS host, Debian 12 LX zone  &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SmartOS Alpine LX&lt;/strong&gt; → SmartOS host, Alpine 3.22 LX zone  &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SmartOS Native&lt;/strong&gt; → SmartOS host, native zone  &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;FreeBSD Jail&lt;/strong&gt; → FreeBSD 14.3-RELEASE, nginx in a jail  &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;OpenBSD Host&lt;/strong&gt; → OpenBSD 7.8, nginx on the host  &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;NetBSD Host&lt;/strong&gt; → NetBSD 10.1, nginx on the host  &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Debian Host&lt;/strong&gt; → Debian 13.2, nginx on the host  &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Alpine Host&lt;/strong&gt; → Alpine 3.22, nginx on the host  &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Docker Container&lt;/strong&gt; → Alpine host, Debian 13 Docker container&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Everything uses the same nginx configuration file and the same static site.  &lt;/p&gt;
&lt;h2&gt;Static HTTP results&lt;/h2&gt;
&lt;p&gt;Let us start with plain HTTP, since this removes TLS from the picture and focuses on the kernel, network stack and nginx itself.  &lt;/p&gt;
&lt;h3&gt;HTTP, 4 threads, 50 concurrent connections&lt;/h3&gt;
&lt;p&gt;Approximate median &lt;code&gt;wrk&lt;/code&gt; results:  &lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Environment&lt;/th&gt;
&lt;th&gt;HTTP 50 connections&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SmartOS Debian LX&lt;/td&gt;
&lt;td&gt;~46.2 k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SmartOS Alpine LX&lt;/td&gt;
&lt;td&gt;~49.2 k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SmartOS Native&lt;/td&gt;
&lt;td&gt;~63.7 k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;FreeBSD Jail&lt;/td&gt;
&lt;td&gt;~63.9 k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OpenBSD Host&lt;/td&gt;
&lt;td&gt;~64.1 k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NetBSD Host&lt;/td&gt;
&lt;td&gt;~64.0 k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Debian Host&lt;/td&gt;
&lt;td&gt;~63.8 k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Alpine Host&lt;/td&gt;
&lt;td&gt;~63.9 k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Docker Container&lt;/td&gt;
&lt;td&gt;~63.7 k&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Two things stand out:  &lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;All the setups that are not LX zones - bare metal hosts, the FreeBSD jail, the SmartOS native zone, and the Docker container - cluster around 63 to 64k requests per second.  &lt;/li&gt;
&lt;li&gt;The two SmartOS LX zones sit slightly lower, in the 46 to 49k range, which is still very respectable for this hardware.  &lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In other words, as long as you are on the host or in something very close to it (FreeBSD jail, SmartOS native zone, NetBSD, OpenBSD, Linux on bare metal), static HTTP on nginx will happily max out around 64k requests per second with this small Intel N150 CPU.  &lt;/p&gt;
&lt;p&gt;The Debian and Alpine LX zones on SmartOS are a bit slower, but not dramatically so. They still deliver close to 50k requests per second and, in a real world scenario, you would probably saturate the network or the client long before hitting those numbers.  &lt;/p&gt;
&lt;h3&gt;HTTP, 4 threads, 10 concurrent connections&lt;/h3&gt;
&lt;p&gt;With fewer concurrent connections, absolute throughput drops, but the relative picture is similar:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;SmartOS Native around 44k  &lt;/li&gt;
&lt;li&gt;NetBSD and Alpine Host around 34 to 35k  &lt;/li&gt;
&lt;li&gt;FreeBSD, Debian, OpenBSD around 31 to 33k  &lt;/li&gt;
&lt;li&gt;The Docker Container sits slightly lower at ~30.2k req/s, showing a small overhead from the networking layer  &lt;/li&gt;
&lt;li&gt;The SmartOS LX zones sit slightly below, around 35 to 37k req/s  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The important conclusion is simple:  &lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;For plain HTTP static hosting, once nginx is installed and correctly configured, the choice between these operating systems makes very little difference on this hardware. Zones and jails add negligible overhead; LX zones add a small one.  &lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;If you are only serving static content over HTTP, your choice of OS should be driven by other factors: ecosystem, tooling, update strategy, your own expertise and preference.  &lt;/p&gt;
&lt;h2&gt;Static HTTPS results&lt;/h2&gt;
&lt;p&gt;TLS is where things start to diverge more clearly and where CPU utilization becomes interesting.  &lt;/p&gt;
&lt;h3&gt;HTTPS, 4 threads, 50 concurrent connections&lt;/h3&gt;
&lt;p&gt;Approximate medians:  &lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Environment&lt;/th&gt;
&lt;th&gt;HTTPS 50 connections&lt;/th&gt;
&lt;th&gt;CPU notes at 50 HTTPS connections&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SmartOS Debian LX&lt;/td&gt;
&lt;td&gt;~51.4 k&lt;/td&gt;
&lt;td&gt;CPU saturated&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SmartOS Alpine LX&lt;/td&gt;
&lt;td&gt;~40.4 k&lt;/td&gt;
&lt;td&gt;CPU saturated&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SmartOS Native&lt;/td&gt;
&lt;td&gt;~52.8 k&lt;/td&gt;
&lt;td&gt;CPU saturated&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;FreeBSD Jail&lt;/td&gt;
&lt;td&gt;~62.9 k&lt;/td&gt;
&lt;td&gt;around 60% CPU idle&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OpenBSD Host&lt;/td&gt;
&lt;td&gt;~39.7 k&lt;/td&gt;
&lt;td&gt;CPU saturated&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NetBSD Host&lt;/td&gt;
&lt;td&gt;~40.4 k&lt;/td&gt;
&lt;td&gt;CPU saturated&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Debian Host&lt;/td&gt;
&lt;td&gt;~62.8 k&lt;/td&gt;
&lt;td&gt;about 20% CPU idle&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Alpine Host&lt;/td&gt;
&lt;td&gt;~62.4 k&lt;/td&gt;
&lt;td&gt;small idle headroom, around 7% idle&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Docker Container&lt;/td&gt;
&lt;td&gt;~62.7 k&lt;/td&gt;
&lt;td&gt;CPU saturated&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;These numbers tell a more nuanced story.  &lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;FreeBSD, Debian and Alpine on bare metal form a “fast TLS” group.&lt;/strong&gt;&lt;br /&gt;
All three sit around 62 to 63k requests per second with 50 concurrent HTTPS connections.  &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;FreeBSD does this while using significantly less CPU.&lt;/strong&gt;&lt;br /&gt;
During the HTTPS tests with 50 connections, the FreeBSD host still had around 60% CPU idle. It is the platform that handled TLS load most comfortably in terms of CPU headroom.  &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Debian and Alpine are close in throughput, but push the CPU harder.&lt;/strong&gt;&lt;br /&gt;
Debian still had some idle time left, Alpine even less. In practice, all three are excellent here, but FreeBSD gives you more room before you hit the wall.  &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;SmartOS, NetBSD and OpenBSD form a “good but heavier” TLS group.&lt;/strong&gt;&lt;br /&gt;
Their HTTPS throughput is in the 40 to 52k req/s range and they reach full CPU usage at 50 concurrent connections. OpenBSD and NetBSD stabilize around 39 to 40k req/s. SmartOS native and the Debian LX zone manage slightly better (around 51 to 53k) but still with the CPU pegged.  &lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;HTTPS, 4 threads, 10 concurrent connections&lt;/h3&gt;
&lt;p&gt;With lower concurrency:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;FreeBSD, Debian and Alpine still sit in roughly the 29 to 31k req/s range  &lt;/li&gt;
&lt;li&gt;SmartOS Native and LX zones are in the mid to high 30k range  &lt;/li&gt;
&lt;li&gt;The Docker Container drops slightly to ~27.8k req/s  &lt;/li&gt;
&lt;li&gt;NetBSD and OpenBSD sit around 26 to 27k req/s  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The relative pattern is the same: for this TLS workload, FreeBSD and modern Linux distributions on bare metal appear to make better use of the cryptographic capabilities of the CPU, delivering higher throughput or more headroom or both.  &lt;/p&gt;
&lt;h2&gt;What TLS seems to highlight&lt;/h2&gt;
&lt;p&gt;The HTTPS tests point to something that is not about nginx itself, but about the TLS stack and how well it can exploit the hardware.  &lt;/p&gt;
&lt;p&gt;On this Intel N150, my feeling is:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;FreeBSD, with the userland and crypto stack I am running, is very efficient at TLS here. It delivers the highest throughput while keeping plenty of CPU in reserve.  &lt;/li&gt;
&lt;li&gt;Debian and Alpine, with their recent kernels and libraries, are also strong performers, close to FreeBSD in throughput, but with less idle CPU.  &lt;/li&gt;
&lt;li&gt;NetBSD, OpenBSD and SmartOS (native and LX) are still perfectly capable of serving a lot of HTTPS traffic, but they have to work harder to keep up and they hit 100% CPU much earlier.  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This matches what I see in day to day operations: TLS performance is often less about “nginx vs something else” and more about the combination of:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the TLS library version and configuration  &lt;/li&gt;
&lt;li&gt;how well the OS uses the CPU crypto instructions  &lt;/li&gt;
&lt;li&gt;kernel level details in the network and crypto paths  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I suspect the differences here are mostly due to how each system combines its TLS stack (OpenSSL, LibreSSL and friends), its kernel and its hardware acceleration support. It would take a deeper dive into profiling and configuration knobs to attribute the gaps precisely.  &lt;/p&gt;
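&lt;p&gt;A quick first step in that direction is to check which TLS library nginx was built against and whether it is actually using the CPU's AES instructions. A minimal sketch, with illustrative commands that were not part of the original tests:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;# TLS library nginx was built against (printed among the configure info)
nginx -V
# Does the CPU expose AES instructions?
grep -m1 -o aes /proc/cpuinfo        # Linux
grep -o AESNI /var/run/dmesg.boot    # FreeBSD
# Rough single-core AES-256-GCM throughput as seen by the TLS library
openssl speed -evp aes-256-gcm
&lt;/code&gt;&lt;/pre&gt;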
&lt;p&gt;In any case, on this specific mini PC, if I had to pick a platform to handle a large amount of HTTPS static traffic, FreeBSD, Debian and Alpine would be my first candidates, in that order.  &lt;/p&gt;
&lt;h2&gt;Zones, jails, containers and Docker: overhead in practice&lt;/h2&gt;
&lt;p&gt;Another interesting part of the story is the overhead introduced by different isolation technologies.  &lt;/p&gt;
&lt;p&gt;From these tests and the &lt;a href="https://it-notes.dragas.net/2025/09/19/freebsd-vs-smartos-whos-faster-for-jails-zones-bhyve/"&gt;previous virtualization article on the same N150 machine&lt;/a&gt;, the picture is consistent:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;FreeBSD jails behave almost like bare metal and are significantly more efficient than Docker.&lt;/strong&gt;&lt;br /&gt;
For both HTTP and HTTPS, running nginx in a jail on FreeBSD 14.3-RELEASE produces numbers practically identical to native hosts.&lt;br /&gt;
The contrast with Docker is striking: while the Docker container needed 100% CPU to reach its peak HTTP and HTTPS throughput, &lt;strong&gt;the FreeBSD jail delivered the same speed with ~60% of the CPU sitting idle&lt;/strong&gt;. In terms of CPU cost per request, jails are drastically cheaper.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;SmartOS native zones are also very close to the metal.&lt;/strong&gt;&lt;br /&gt;
Static HTTP performance reaches the same 64k req/s region and HTTPS is only slightly behind the "fast TLS" group, although with higher CPU usage.  &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;SmartOS LX zones introduce a noticeable but modest overhead.&lt;/strong&gt;&lt;br /&gt;
Both Debian and Alpine LX zones on SmartOS perform slightly worse than the native zone or FreeBSD jails. For static HTTP they are still very fast. For HTTPS the Debian LX zone remains competitive but costs more CPU, while the Alpine LX zone is slower.  &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Docker on Linux performs efficiently but eats the margins.&lt;/strong&gt;&lt;br /&gt;
I ran an additional test using a Debian 13 Docker container running on the Alpine Linux host.
At peak load (50 connections), the throughput was impressive and virtually identical to bare metal: ~63.7k req/s for HTTP and ~62.7k req/s for HTTPS.
However, there is a clear cost. First, while the bare metal host maintained a small CPU buffer (~7% idle) during the HTTPS test, Docker &lt;strong&gt;saturated the CPU to 100%&lt;/strong&gt;.
Second, at lower concurrency (10 connections), the overhead became visible. The Docker container scored ~30.2k req/s for HTTP and ~27.8k req/s for HTTPS, slightly trailing the ~31-34k and ~29-31k range of the bare metal counterparts. The abstraction layers (NAT, bridging, namespaces) are extremely efficient, but they are not completely free.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This leads to a clear conclusion on efficiency: &lt;strong&gt;FreeBSD Jails provide the highest throughput with the lowest CPU cost.&lt;/strong&gt; LX zones and Docker containers can match the speed (or come close), but they burn significantly more CPU cycles to do so.&lt;/p&gt;
&lt;h2&gt;What this means for real workloads&lt;/h2&gt;
&lt;p&gt;It is easy to get lost in tables and percentages, so let us go back to the initial question.  &lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;A client wants static hosting.&lt;br /&gt;
Does the choice between FreeBSD, SmartOS, NetBSD or Linux matter in terms of performance?  &lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;For &lt;strong&gt;plain HTTP&lt;/strong&gt; on this hardware, with nginx and the same configuration:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Not really.&lt;br /&gt;
All the native hosts and FreeBSD jails deliver roughly the same maximum throughput, in the 63 to 64k req/s range. SmartOS LX zones are slightly slower but still strong.  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For &lt;strong&gt;HTTPS&lt;/strong&gt;:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Yes, it starts to matter a bit more.  &lt;/li&gt;
&lt;li&gt;FreeBSD stands out for how relaxed the CPU is under high TLS load.  &lt;/li&gt;
&lt;li&gt;Debian and Alpine are very close in throughput, with more CPU used but still with some headroom.  &lt;/li&gt;
&lt;li&gt;SmartOS, NetBSD and OpenBSD can still push a lot of HTTPS traffic, but they reach 100% CPU earlier and stabilize at lower request rates.  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Does this mean you should always choose FreeBSD or Debian or Alpine for static HTTPS hosting?  &lt;/p&gt;
&lt;p&gt;Not necessarily.  &lt;/p&gt;
&lt;p&gt;In real deployments, the bottleneck is rarely the TLS performance of a single node serving a small static site. Network throughput, storage, logging, reverse proxies, CDNs and application layers all play a role.  &lt;/p&gt;
&lt;p&gt;However, knowing that FreeBSD and current Linux distributions can squeeze more out of a small CPU under TLS is useful when you are:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;sizing hardware for small VPS nodes that must serve many HTTPS requests  &lt;/li&gt;
&lt;li&gt;planning to consolidate multiple services on a low power box  &lt;/li&gt;
&lt;li&gt;deciding whether you can afford to keep some CPU aside for other tasks (cache, background jobs, monitoring, and so on)  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As always, the right answer depends on the complete picture: your skills, your tooling, your backups, your monitoring, the rest of your stack, and your tolerance for troubleshooting when things go sideways.  &lt;/p&gt;
&lt;h2&gt;Final thoughts&lt;/h2&gt;
&lt;p&gt;From these small tests, my main takeaways are:  &lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Static HTTP is basically solved on all these platforms.&lt;/strong&gt;&lt;br /&gt;
On a modest Intel N150, every system tested can push around 64k static HTTP requests per second with nginx set to almost default settings. For many use cases, that is already more than enough.  &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;TLS performance is where the OS and crypto stack start to matter.&lt;/strong&gt;&lt;br /&gt;
FreeBSD, Debian and Alpine squeeze more HTTPS requests out of the N150, and FreeBSD in particular does it with a surprising amount of idle CPU left. NetBSD, OpenBSD and SmartOS need more CPU to reach similar speeds and stabilize at lower throughput once the CPU is saturated.  &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Jails and native zones are essentially free, LX zones cost a bit more.&lt;/strong&gt;&lt;br /&gt;
FreeBSD jails and SmartOS native zones show very little overhead for this workload. SmartOS LX zones are still perfectly usable, but if you are chasing every last request per second you will see the cost of the translation layer.  &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Benchmarks are only part of the story.&lt;/strong&gt;&lt;br /&gt;
If your team knows OpenBSD inside out and has tooling, scripts and workflows built around it, you might happily accept using more CPU on TLS in exchange for security features, simplicity and familiarity. The same goes for NetBSD or SmartOS in environments where their specific strengths shine.  &lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I will not choose an operating system for a client just because a benchmark looks nicer. These numbers are one of the many inputs I consider. What matters most is always the combination of reliability, security, maintainability and the human beings who will have to operate the system at three in the morning when something goes wrong.  &lt;/p&gt;
&lt;p&gt;Still, it is nice to know that if you put a tiny Intel N150 in front of a static site and you pick FreeBSD or a modern Linux distribution for HTTPS, you are giving that little CPU a fair chance to shine.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Wed, 19 Nov 2025 09:16:00 +0100</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2025/11/19/static-web-hosting-intel-n150-freebsd-smartos-netbsd-openbsd-linux/</guid><category>freebsd</category><category>smartos</category><category>illumos</category><category>linux</category><category>netbsd</category><category>openbsd</category><category>jail</category><category>zones</category><category>docker</category><category>hosting</category><category>server</category><category>sysadmin</category><category>ownyourdata</category></item><item><title>FreeBSD vs. SmartOS: Who's Faster for Jails, Zones, and bhyve VMs?</title><link>https://it-notes.dragas.net/2025/09/19/freebsd-vs-smartos-whos-faster-for-jails-zones-bhyve/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/server_rack.webp" alt="A server rack with some servers and cables"&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt;&lt;br /&gt;
These benchmarks were performed on my specific hardware and tuned for the workloads I expect to run.&lt;br /&gt;
They should not be taken as absolute or universally applicable results.&lt;br /&gt;
Different CPUs, storage, networking setups, or workload profiles could produce very different outcomes.&lt;br /&gt;
What I’m sharing here is a faithful snapshot of &lt;em&gt;my&lt;/em&gt; test environment and use case - a guidepost, not a final verdict.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Years ago, I installed a PCEngines APU at a client's site. It dutifully ran Proxmox with a few small VMs inside. It wasn't a speed demon, but it got the job done. Tasked with running in a closed, uncooled, and unsupervised server closet, it soldiered on for about seven years.&lt;/p&gt;
&lt;p&gt;Then, while I was at BSDCan, I got the call. A series of power outages and surges had finally taken their toll, and the APU was dead. It was probably just the power supply, but given its age, we decided it was time for a replacement. I set up a remote bypass to keep them running, but I knew I'd need to install something more powerful soon.&lt;/p&gt;
&lt;p&gt;I ordered a modern MiniPC based on the low-power Intel Processor N150 platform, but with 16GB of RAM and more than enough performance to serve as a decent workstation. I have a similar one in my office running openSUSE Tumbleweed, and it works beautifully.&lt;/p&gt;
&lt;p&gt;This time, however, I decided to replace Proxmox with a different virtualization system. This decision wasn't made in a vacuum. In the past, I've put bhyve head-to-head with Proxmox, and my findings were clear: &lt;strong&gt;bhyve on FreeBSD is an extremely efficient hypervisor, &lt;a href="https://it-notes.dragas.net/2024/06/10/proxmox-vs-freebsd-which-virtualization-host-performs-better/"&gt;often outperforming KVM on Proxmox in my tests&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;This positive experience is what made FreeBSD with bhyve a top contender. The other path was a KVM-style approach (which would require fewer changes to the VMs), where my options would be NetBSD or an &lt;a href="https://www.illumos.org/"&gt;illumos-based&lt;/a&gt; OS like &lt;a href="https://www.tritondatacenter.com/smartos"&gt;SmartOS&lt;/a&gt;. Since I had the new hardware on hand, I decided to run some tests to see how these different technologies stacked up against each other, and against the bare metal itself.&lt;/p&gt;
&lt;h3&gt;The Lineup: What I Put on the Test Bench&lt;/h3&gt;
&lt;p&gt;My goal was to test every reasonable option on this Intel N150 hardware. The final lineup covered the entire spectrum:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;The Baseline:&lt;/strong&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;FreeBSD 14.3-RELEASE Bare Metal:&lt;/strong&gt; The ground truth for performance on this hardware.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;OS-Level Virtualization (Containers):&lt;/strong&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;SmartOS Native Zone:&lt;/strong&gt; The baseline native container on SmartOS.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SmartOS LX Zone:&lt;/strong&gt; Running &lt;strong&gt;Ubuntu 24.04&lt;/strong&gt; and &lt;strong&gt;Alpine Linux&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;FreeBSD Native Jail:&lt;/strong&gt; The baseline native container on FreeBSD.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;FreeBSD Jail with Linux:&lt;/strong&gt; A jail running an &lt;strong&gt;Ubuntu 22.04&lt;/strong&gt; userland.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Full Hardware Virtualization (HVM):&lt;/strong&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;SmartOS bhyve Zone:&lt;/strong&gt; A FreeBSD guest inside the bhyve hypervisor on a SmartOS host.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SmartOS KVM Zone:&lt;/strong&gt; A FreeBSD guest inside the KVM hypervisor on a SmartOS host.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;FreeBSD bhyve VM:&lt;/strong&gt; A FreeBSD guest inside the bhyve hypervisor on a FreeBSD host.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h3&gt;The Benchmark: My &lt;code&gt;sysbench&lt;/code&gt; Commands&lt;/h3&gt;
&lt;p&gt;To keep the comparison fair and simple, I used two core &lt;code&gt;sysbench&lt;/code&gt; commands. To ensure consistency, I even compiled &lt;code&gt;sysbench&lt;/code&gt; from scratch on the SmartOS native zone to match the versions and compile options on the other systems as closely as possible.&lt;/p&gt;
&lt;p&gt;The commands I used in each environment were:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;For CPU performance:&lt;/strong&gt; &lt;code&gt;sysbench --test=cpu --cpu-max-prime=20000 run&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;For memory performance:&lt;/strong&gt; &lt;code&gt;sysbench --test=memory run&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
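&lt;p&gt;To keep the runs repeatable across so many environments, a tiny wrapper like the following is handy. This is only a sketch of how the output could be collected; the script and file names are my own additions, not part of the original runs.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;#!/bin/sh
# Run both sysbench tests and keep the raw output per environment (sketch)
TAG=${1:-$(hostname)}
sysbench --test=cpu --cpu-max-prime=20000 run &amp;gt; &amp;quot;sysbench-cpu-$TAG.txt&amp;quot;
sysbench --test=memory run &amp;gt; &amp;quot;sysbench-memory-$TAG.txt&amp;quot;
# Pull out the headline figures
grep &amp;quot;events per second&amp;quot; &amp;quot;sysbench-cpu-$TAG.txt&amp;quot;
grep &amp;quot;MiB transferred&amp;quot; &amp;quot;sysbench-memory-$TAG.txt&amp;quot;
&lt;/code&gt;&lt;/pre&gt;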
&lt;h3&gt;First Look: CPU and Memory on the Intel N150&lt;/h3&gt;
&lt;p&gt;My initial tests on the Intel N150 hardware immediately revealed some interesting trends. The &lt;code&gt;sysbench&lt;/code&gt; CPU results from any native FreeBSD environment (bare metal or jail) were on a completely different scale from the Linux and SmartOS guests, making a direct comparison meaningless.&lt;/p&gt;
&lt;p&gt;However, by excluding the incompatible FreeBSD-native results, we get a very clear picture of the overhead between the various container technologies.&lt;/p&gt;
&lt;h4&gt;Valid CPU Performance Comparison (Single Thread, Intel N150)&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left;"&gt;Host OS&lt;/th&gt;
&lt;th style="text-align: left;"&gt;Container Tech&lt;/th&gt;
&lt;th style="text-align: left;"&gt;Guest OS&lt;/th&gt;
&lt;th style="text-align: left;"&gt;CPU Performance (Events/sec)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;FreeBSD&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Jail (OS-level)&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Ubuntu 22.04&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;1108.18&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;SmartOS&lt;/td&gt;
&lt;td style="text-align: left;"&gt;LX Zone (OS-level)&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Ubuntu 24.04&lt;/td&gt;
&lt;td style="text-align: left;"&gt;1107.13&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;SmartOS&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Native Zone (OS-level)&lt;/td&gt;
&lt;td style="text-align: left;"&gt;SmartOS&lt;/td&gt;
&lt;td style="text-align: left;"&gt;1107.04&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;SmartOS&lt;/td&gt;
&lt;td style="text-align: left;"&gt;LX Zone (OS-level)&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Alpine Linux&lt;/td&gt;
&lt;td style="text-align: left;"&gt;1022.81&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The takeaway here was clear: for CPU-bound work, the overhead of these containers is basically a rounding error; neither SmartOS Zones nor FreeBSD Jails will be a bottleneck.&lt;/p&gt;
&lt;p&gt;The memory results, which were consistent across all platforms, were far more revealing.&lt;/p&gt;
&lt;h4&gt;Overall Memory Performance Comparison (Intel Processor N150)&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left;"&gt;Host OS&lt;/th&gt;
&lt;th style="text-align: left;"&gt;Virtualization Tech&lt;/th&gt;
&lt;th style="text-align: left;"&gt;Guest OS&lt;/th&gt;
&lt;th style="text-align: left;"&gt;Memory Performance (Transfer Rate)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;SmartOS&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;LX Zone (OS-level)&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;Ubuntu 24.04&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;4970.54 MiB/sec&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;SmartOS&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Native Zone (OS-level)&lt;/td&gt;
&lt;td style="text-align: left;"&gt;SmartOS (Native)&lt;/td&gt;
&lt;td style="text-align: left;"&gt;4549.97 MiB/sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;FreeBSD&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Jail (OS-level)&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Ubuntu 22.04&lt;/td&gt;
&lt;td style="text-align: left;"&gt;4348.32 MiB/sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;FreeBSD&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;Bare Metal&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;FreeBSD (Native)&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;4005.08 MiB/sec&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;FreeBSD&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Native Jail (OS-level)&lt;/td&gt;
&lt;td style="text-align: left;"&gt;FreeBSD (Native)&lt;/td&gt;
&lt;td style="text-align: left;"&gt;3990.13 MiB/sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;SmartOS&lt;/td&gt;
&lt;td style="text-align: left;"&gt;LX Zone (OS-level)&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Alpine Linux&lt;/td&gt;
&lt;td style="text-align: left;"&gt;3803.72 MiB/sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;FreeBSD&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;bhyve VM (Full HVM)&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;FreeBSD&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;3636.01 MiB/sec&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;SmartOS&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;bhyve Zone (Full HVM)&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;FreeBSD&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;3020.15 MiB/sec&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;SmartOS&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;KVM Zone (Full HVM)&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;FreeBSD&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;205.18 MiB/sec&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;These initial numbers led to a few conclusions: a virtual layer could be a performance boost, the userland matters, and bhyve clearly outclassed the legacy KVM on SmartOS. However, one result was nagging at me: the performance gap between FreeBSD bare metal (&lt;code&gt;4005.08 MiB/sec&lt;/code&gt;) and a native bhyve VM (&lt;code&gt;3636.01 MiB/sec&lt;/code&gt;) was about 9%. This was a larger drop than I expected. It prompted a new question: was this overhead inherent to bhyve, or was it a quirk of the new N150 hardware?&lt;/p&gt;
&lt;h3&gt;Going deeper: Testing on an Intel i7-7500U&lt;/h3&gt;
&lt;p&gt;To see if more mature, better-supported hardware would tell a different story, I replicated the FreeBSD tests on an older Qotom Mini-PC powered by an Intel i7-7500U. The results were illuminating and dramatically changed the narrative.&lt;/p&gt;
&lt;h4&gt;CPU Performance Comparison (Intel i7-7500U)&lt;/h4&gt;
&lt;p&gt;Once again, the CPU tests produced strange results. The native FreeBSD environments all reported incredibly high numbers in the millions of events/sec, while the Ubuntu Linuxulator jail's result was on a completely different, incompatible scale. Frankly, given the massive discrepancy between FreeBSD-native and Linux-based environments, I'm unsure that the &lt;code&gt;sysbench&lt;/code&gt; CPU figures can be considered totally reliable in absolute terms.&lt;/p&gt;
&lt;p&gt;However, what &lt;em&gt;is&lt;/em&gt; useful is comparing the native FreeBSD results against each other. This tells us about relative overhead.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left;"&gt;Platform&lt;/th&gt;
&lt;th style="text-align: left;"&gt;CPU Performance (Events/sec)&lt;/th&gt;
&lt;th style="text-align: left;"&gt;Overhead vs. Bare Metal&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;FreeBSD Bare Metal&lt;/td&gt;
&lt;td style="text-align: left;"&gt;6,377,778&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Baseline&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;FreeBSD Native Jail&lt;/td&gt;
&lt;td style="text-align: left;"&gt;6,379,271&lt;/td&gt;
&lt;td style="text-align: left;"&gt;~0.0%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;FreeBSD bhyve VM&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;6,346,852&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;-0.48%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Even if we're skeptical of the absolute numbers, the relative comparison is crystal clear: the CPU overhead of bhyve is &lt;strong&gt;less than half a percent&lt;/strong&gt;. This is the key takeaway.&lt;/p&gt;
&lt;h4&gt;Memory Performance Comparison (Intel i7-7500U)&lt;/h4&gt;
&lt;p&gt;The memory benchmarks, in contrast, were consistent and highly informative.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left;"&gt;Platform&lt;/th&gt;
&lt;th style="text-align: left;"&gt;Memory Performance (Transfer Rate)&lt;/th&gt;
&lt;th style="text-align: left;"&gt;Overhead vs. Bare Metal&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;Ubuntu 22.04 Jail&lt;/td&gt;
&lt;td style="text-align: left;"&gt;4856.23 MiB/sec&lt;/td&gt;
&lt;td style="text-align: left;"&gt;+7.55%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;FreeBSD Native Jail&lt;/td&gt;
&lt;td style="text-align: left;"&gt;4517.73 MiB/sec&lt;/td&gt;
&lt;td style="text-align: left;"&gt;+0.05%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;FreeBSD Bare Metal&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;4515.24 MiB/sec&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;Baseline&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;FreeBSD bhyve VM&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;4491.60 MiB/sec&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;-0.52%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;This is where the real story is. The memory performance of a bhyve VM was a mere &lt;strong&gt;0.52%&lt;/strong&gt; slower than bare metal. This is the kind of near-native performance one hopes for from a top-tier hypervisor and stands in stark contrast to the 9% drop seen on the newer N150.&lt;/p&gt;
&lt;h3&gt;Breaking Down the Results: What I Learned From Both Tests&lt;/h3&gt;
&lt;p&gt;This comprehensive two-platform analysis paints a much clearer picture.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1. Hardware &lt;em&gt;Really&lt;/em&gt; Matters&lt;/strong&gt;
Performance is not an absolute. The difference between the two platforms was stark: on the mature i7-7500U, bhyve’s overhead was less than 1%, while on the newer, budget N150, it was a more significant 9%. This suggests the performance dip is likely due to missing optimizations for that specific CPU architecture, rather than a fundamental flaw in bhyve itself.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;2. bhyve's True Potential is Near-Native Speed&lt;/strong&gt;
The i7 tests prove that &lt;strong&gt;bhyve is an exceptionally efficient hypervisor&lt;/strong&gt; on well-supported hardware. The relative CPU overhead was a negligible -0.48%, and more importantly, the reliable memory benchmarks showed a performance drop of just 0.52% compared to bare metal. This is the gold standard for virtualization.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;3. FreeBSD Jails are Feather-Light&lt;/strong&gt;
On both platforms, native FreeBSD jails demonstrated almost zero performance overhead. On the i7, both CPU and memory performance were virtually identical to bare metal (a 0.05% difference). The N150 CPU tests further showed that FreeBSD's container implementation is so efficient that running a Linux userland inside a jail delivered the best CPU scores of the entire lineup.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;4. SmartOS Zones Are Also Extremely Efficient&lt;/strong&gt;
Just like Jails, SmartOS's native Zones proved to be remarkably lightweight. The N150 CPU tests confirm this, showing that native and LX zones have virtually identical, top-tier performance. On the memory front, the native Zone delivered performance over 13% &lt;em&gt;faster&lt;/em&gt; than the FreeBSD bare-metal baseline, pointing to the high efficiency of the illumos kernel.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;5. The Linux Userland Excels at Throughput&lt;/strong&gt;
A clear pattern emerged on both testbeds: the Ubuntu userland consistently delivered excellent benchmark results. On the CPU front, Ubuntu on both FreeBSD and SmartOS delivered the highest, and nearly identical, performance scores on the N150. For memory, the story was even more dramatic: the Ubuntu LX Zone on SmartOS was the top performer, beating bare-metal FreeBSD by nearly 25%, while the Ubuntu jail on the i7 also surpassed its host by over 7%.&lt;/p&gt;
&lt;h3&gt;Final Thoughts: The Verdict for My Client's New Server&lt;/h3&gt;
&lt;p&gt;So, what's the bottom line for my client's new MiniPC? This benchmarking journey has made the path forward much clearer.&lt;/p&gt;
&lt;p&gt;At the beginning of this process, my main question was whether to stick with a KVM-based setup or make the switch to bhyve. The performance data answers that decisively. The legacy KVM on SmartOS showed a crippling performance penalty, making it a non-starter. Given that, the extra effort to migrate the existing VMs to a bhyve-compatible format is absolutely worth it. The performance gain is just too significant to ignore.&lt;/p&gt;
&lt;p&gt;The final question, then, is &lt;em&gt;which&lt;/em&gt; host OS to use for bhyve: SmartOS or FreeBSD? This is a much tougher call, as both platforms demonstrated incredible strengths.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;SmartOS&lt;/strong&gt;, powered by the illumos kernel, was a true surprise. It delivered astonishing performance on the target N150 hardware. Its key advantage is the raw speed of its containerization for both CPU and memory tasks. The Ubuntu LX Zone not only ran flawlessly but delivered top-tier CPU scores and outperformed the bare-metal FreeBSD baseline in memory by a massive 25% margin. This points to a highly efficient kernel and offers the tantalizing prospect of running ultra-fast Linux containers alongside performant bhyve VMs on the same host.&lt;/p&gt;
&lt;p&gt;On the other hand, &lt;strong&gt;FreeBSD&lt;/strong&gt; proved its mastery of bhyve virtualization. The tests on the i7 hardware showed its implementation to be the gold standard, offering virtually zero performance overhead for full hardware virtualization. Its native Jails are equally efficient, and its Linux compatibility layer is so effective that an Ubuntu jail delivered the &lt;em&gt;fastest CPU performance&lt;/em&gt; of all containers tested on FreeBSD. For workloads that &lt;em&gt;must&lt;/em&gt; live in a full VM, FreeBSD offers the most performant and native bhyve experience, with the reasonable expectation that its support for newer hardware like the N150 will only improve over time.&lt;/p&gt;
&lt;p&gt;Ultimately, the choice comes down to the primary workload. It's a decision between the raw container speed and Linux flexibility of SmartOS versus the pure, uncompromising HVM performance of FreeBSD.&lt;/p&gt;
&lt;p&gt;But one thing is certain: thanks to this deep dive, the path forward is much clearer, and it's paved by bhyve.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Fri, 19 Sep 2025 10:50:00 +0200</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2025/09/19/freebsd-vs-smartos-whos-faster-for-jails-zones-bhyve/</guid><category>freebsd</category><category>smartos</category><category>illumos</category><category>linux</category><category>jail</category><category>bhyve</category><category>zones</category><category>data</category><category>hosting</category><category>server</category><category>sysadmin</category><category>virtualization</category><category>ownyourdata</category></item><item><title>Launching BSSG - My Journey from Dynamic CMS to Bash Static Site Generator</title><link>https://it-notes.dragas.net/2025/04/07/launching-bssg-my-journey-from-dynamic-cms-to-bash-static-site-generator/</link><description>&lt;p&gt;&lt;img src="https://unsplash.com/photos/0gkw_9fy0eQ/download?ixid=M3wxMjA3fDB8MXxhbGx8fHx8fHx8fHwxNzQzOTQ2ODgyfA&amp;force=true&amp;w=1920" alt="Photo by Patrick Fore on Unsplash"&gt;&lt;/p&gt;&lt;p&gt;I've had my own website practically forever. Back in the late '90s, I already had a web page on my ISP's server, and since at least 2001, I've had my own homepage on my own server. I've never been a great graphic designer, let alone a skilled webmaster, so I've always tried to keep things minimal and compatible.&lt;/p&gt;
&lt;p&gt;Initially, like many others, I wrote HTML pages by hand. Then I used WYSIWYG creation tools, and eventually, I landed on CMS (Content Management Systems).&lt;/p&gt;
&lt;h2&gt;The Era of Dynamic CMS&lt;/h2&gt;
&lt;p&gt;I liked &lt;a href="https://en.wikipedia.org/wiki/Content_management_system"&gt;CMS&lt;/a&gt; because they allowed me to focus on the content and not on the correctness of the generated HTML. Thanks to them, I started writing my first blog shortly afterward.&lt;/p&gt;
&lt;p&gt;Over the years, I've used many tools like PHPNuke, FlatNuke (created and developed by my friend &lt;a href="https://simonevellei.com/"&gt;Simone Vellei&lt;/a&gt;), eventually moving through Joomla and WordPress. WordPress always seemed like the most suitable tool for the job, and I used it for many years. Even today, mainly on the sysadmin side, I manage hundreds of WordPress sites, and they are reasonably reliable, aside from the plugins (because &lt;a href="https://www.youtube.com/live/_IdH5YTBAGs?t=9801"&gt;the problem with WordPress isn't the software itself, but many of the external plugins&lt;/a&gt;).&lt;/p&gt;
&lt;p&gt;But this is precisely the problem: all dynamic CMS require constant and continuous security updates because, without them, the chances of defacement are extremely high.&lt;/p&gt;
&lt;h2&gt;Discovering Static Site Generators&lt;/h2&gt;
&lt;p&gt;And that's precisely why, when I discovered Carlos Fenollosa's &lt;a href="https://github.com/cfenollosa/bashblog"&gt;bashblog&lt;/a&gt; in 2014, it immediately became clear that, indeed, there was no reason to continue down the path of dynamic CMS. I don't write often, I don't update often, there's no reason to regenerate all the content with every visit. Sure, WordPress caching plugins are often quite effective, but they are still add-ons that need to be kept up to date. And I'm not a fan of adding more moving parts just to speed things up. Often, less is more.&lt;/p&gt;
&lt;p&gt;So, I started using bashblog for some 'secondary' projects until, in 2015, I &lt;a href="https://www.dragas.net/posts/da-wordpress-a-pelican/"&gt;migrated my 'old' Italian blog from WordPress to Pelican&lt;/a&gt;. Shortly after, I &lt;a href="https://www.dragas.net/posts/da-pelican-a-nikola/"&gt;moved from Pelican to Nikola&lt;/a&gt;, and that blog is still generated by Nikola, although (that blog's) updates are now extremely rare (so much so that I consider it almost abandoned). I also created the first Docker container for Nikola and, for a long time, it was listed among the deployment methods on their site.&lt;/p&gt;
&lt;h2&gt;Building My Own: BSSG&lt;/h2&gt;
&lt;p&gt;But bashblog continued to fascinate me. So in 2015, for fun, I started developing my own Static Site Generator from scratch. I called it (with little imagination), &lt;a href="https://bssg.dragas.net"&gt;BSSG - Bash Static Site Generator&lt;/a&gt;. The plan was for it to be compatible with the main OSes I use, to remain sufficiently simple and straightforward (!!!), and to be tailored to my needs. I intended to use it only and exclusively for small private things, starting with a sort of diary of mine - more professional than personal - and leave the 'official' blogs to more tested and 'professional' tools.&lt;/p&gt;
&lt;p&gt;As time went by, I added some small features I liked: theming support, archives, tags (initially absent). Over time, many functions were added, and the script grew large – large enough to make me pause and ask myself some questions about the long-term stability of this solution. So, it remained only for my 'diary', which, however, grew year after year to the point where I needed to devise some kind of optimization. I then developed (more for fun than out of real necessity) a caching system. On rebuild, only what needs to be rebuilt is reconstructed, making the operation sufficiently fast even as the number of posts grows. Obviously, there are limits: using bash and external tools, the efficiency cannot be compared to that of a proper programming language.&lt;/p&gt;
&lt;h2&gt;Brief Detour: ITNBlog&lt;/h2&gt;
&lt;p&gt;And it's here that I decided, in preparation for opening a new blog (this one), to create a new tool called &lt;a href="https://itnblog.dragas.net"&gt;ITNBlog&lt;/a&gt;. I would develop it in Python and focus a bit more on performance and completeness. But ITNBlog stalled very quickly: time was limited, I'm not a full-time developer, so I realized I would spend too much time on development and too little on content creation.&lt;/p&gt;
&lt;p&gt;Therefore, in 2018, I launched this blog but using &lt;a href="https://ghost.org/"&gt;Ghost&lt;/a&gt;, a solution that gave me good results, including performance-wise. I chose Ghost because I thought that, writing content also from my phone while on the go, a real CMS would be useful. Spoiler: no, it didn't turn out that way, so a few years later I decided to migrate this blog to &lt;a href="https://gohugo.io/"&gt;Hugo&lt;/a&gt;. Nevertheless, I continued to develop ITNBlog on and off, as a hobby, without any particular ambitions.&lt;/p&gt;
&lt;p&gt;At some point, however, I found myself in an unpleasant situation: Hugo deprecated some features, and the theme I had chosen moved on. Using the latest version of Hugo with the current version of the theme produced unacceptable output; staying with the old version of Hugo while waiting for a theme update meant accepting a compromise. To complicate matters, I build the blog from several devices, and they all have different versions of Hugo installed. Change the theme? Feasible, but I would have had to modify almost the entire site.&lt;/p&gt;
&lt;p&gt;I considered migrating to &lt;a href="https://github.com/gyptazy/manpageblog"&gt;manpageblog&lt;/a&gt; by &lt;a href="https://gyptazy.com/"&gt;gyptazy&lt;/a&gt; – I personally love its simplicity and retro look, and it was the main candidate to replace Hugo. I also created a script and migrated all my posts into the correct format.&lt;/p&gt;
&lt;h2&gt;BSSG to the Rescue (and ITNBlog's Role)&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;That's when I realized: I would implement the few missing features needed to make ITNBlog sufficiently complete, and this blog would be published using it, ensuring I'd be committed to its development. However, ITNBlog is not mature enough to be released publicly, so for now, it will remain the engine just for my blog. Then I thought again about BSSG – development had stalled some time ago, but it was still in use – and figured that perhaps, with a little tidying up, I could release &lt;em&gt;it&lt;/em&gt;.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Because I'm tired of seeing people use dynamic CMS even to implement primarily static blogs or websites – and BSSG, despite its limitations and inefficiencies, works. And there are many themes to choose from. In short, you can install it and generate your blog in seconds.&lt;/p&gt;
&lt;h2&gt;Why Choose BSSG?&lt;/h2&gt;
&lt;p&gt;BSSG is the result of a 10-year evolution. The code isn't extremely consistent, some interesting features are missing (which I plan to implement), and it could use refactoring as the build script is monstrously large. But it works, it's portable (and much of the complexity increased precisely because of portability), and it generates sites that achieve very high accessibility and speed scores.&lt;/p&gt;
&lt;p&gt;Here are some highlights:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;✅ &lt;strong&gt;Portability:&lt;/strong&gt; Uses native OS tools (e.g., &lt;code&gt;md5sum&lt;/code&gt; on Linux, &lt;code&gt;md5&lt;/code&gt; on OpenBSD and NetBSD). Portability itself added much of the complexity!&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Simple Theming:&lt;/strong&gt; Themes are just simple CSS files, so the structure remains the same – simplifying theme switching or creating new ones. More than 50 themes &lt;a href="https://bssg.dragas.net/example"&gt;are already available&lt;/a&gt;!&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Essential Features:&lt;/strong&gt; Supports RSS feed generation, sitemap.xml, OpenGraph tags (to improve social sharing), internationalization (the blog can be in languages other than English – but not multilingual, at least for now), etc.&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Built-in Backup and Restore script:&lt;/strong&gt; It will just copy the configuration file, posts, and pages. Nothing else.&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Minimal Dependencies.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Markdown Support:&lt;/strong&gt; Posts and pages are in Markdown (CommonMark, Pandoc, and markdown.pl are supported).&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Feature Images.&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;Optional GNU Parallel Integration:&lt;/strong&gt; To speed up build times when there are many posts. This feature significantly impacts the code and has caused me numerous headaches over time. But it's optional (if &lt;code&gt;parallel&lt;/code&gt; isn't found, it proceeds traditionally) and only provides benefits when the number of posts increases: with few posts, performance actually degrades.&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;High Accessibility and Performance Scores:&lt;/strong&gt; Sites built with BSSG achieve excellent scores.&lt;/li&gt;
&lt;li&gt;✅ &lt;strong&gt;BSD Licensed:&lt;/strong&gt; Released under a BSD license.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;One of the problems I've always had with all CMS and SSGs has been choosing a theme. In some cases (like Hugo), the theme heavily influences the output, which is both good and bad. Good because it makes each site unique, but bad because it makes switching themes difficult. In the past, I've sometimes found myself having to change themes because they were abandoned and no longer updated. BSSG works differently: theming comes from using a different CSS file, which makes its structure more rigid, but switching from one theme to another is trivial. To help with the choice, I created a script that will build your site using all the themes present in the &lt;code&gt;themes&lt;/code&gt; directory, just like on the examples page of the official website. This way, it will be easy to see and test your site with all available themes. If you want to add a touch of originality, you can choose the 'random' theme, and one will be chosen randomly from the list at each site regeneration.&lt;/p&gt;
&lt;h2&gt;Admin Interface (Experimental)&lt;/h2&gt;
&lt;p&gt;BSSG is in production use by some clients (for their internal sites), for whom I also created a basic admin interface (using Node Express, partly to chew on a bit of Node), but I don't feel ready to release it immediately as it's not sufficiently tested. It has an integrated Markdown editor and allows post scheduling, generating the files and launching BSSG with the right options at the right time. This could be that connecting link between traditional CMS and SSGs. There are others, but this one is tightly integrated with BSSG.&lt;/p&gt;
&lt;h2&gt;BSSG is Available Today&lt;/h2&gt;
&lt;p&gt;Starting today, BSSG is publicly available. It's not perfect, it probably doesn't make sense to do something of this complexity in bash, development will proceed slowly – but it's here, available to anyone who might find it useful.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://bssg.dragas.net"&gt;Happy blogging everyone!&lt;/a&gt;&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Mon, 07 Apr 2025 08:11:36 +0200</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2025/04/07/launching-bssg-my-journey-from-dynamic-cms-to-bash-static-site-generator/</guid><category>bssg</category><category>ssg</category><category>ownyourdata</category><category>freebsd</category><category>openbsd</category><category>netbsd</category><category>linux</category><category>server</category><category>web</category><category>blogging</category></item><item><title>Managing ZFS Full Pool Issues with Reserved Space</title><link>https://it-notes.dragas.net/2024/11/28/managing-zfs-full-pool-issues-with-reserved-space/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/hard_disk.webp" alt="Managing ZFS Full Pool Issues with Reserved Space"&gt;&lt;/p&gt;&lt;p&gt;Yesterday morning, I received a panicked call from a developer:&lt;br /&gt;
"I accidentally filled up the storage, and now I can't perform any operations! My ZFS pool is full!"&lt;/p&gt;
&lt;p&gt;I immediately reassured them because I had anticipated this kind of issue. One of the things I almost always do when managing ZFS file systems is to reserve space in a specially created dataset.&lt;/p&gt;
&lt;p&gt;This is because ZFS, like all CoW (Copy-on-Write) file systems, can find itself unable to free up space when completely full. By using reserved space, I can always free it up and delete other data, restoring the system to normal operations.&lt;/p&gt;
&lt;p&gt;To reserve space, simply create a dataset and assign it a reserved size. Of course, this dataset should not be used for anything else; otherwise, the entire purpose would be defeated.&lt;/p&gt;
&lt;p&gt;To create it and reserve space, you only need two simple commands. For example:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs create zroot/reserved
zfs set reservation=5G zroot/reserved
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This creates the dataset and assigns it 5 GB of reserved space.&lt;/p&gt;
&lt;p&gt;Here’s the situation before the operation:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs list zroot
NAME    USED  AVAIL  REFER  MOUNTPOINT
zroot  3.02G   109G    96K  /zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And here’s the situation after:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs list zroot
NAME    USED  AVAIL  REFER  MOUNTPOINT
zroot  8.02G   104G    96K  /zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;As you can see, the 5 GB are subtracted from the available space and reported as used, even though the dataset is actually empty.&lt;/p&gt;
&lt;p&gt;In case of a full file system, you can delete this dataset (or reduce its size) to return to normal file system operation.&lt;/p&gt;
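&lt;p&gt;For completeness, this is what the recovery step looks like on the example pool above - a quick sketch using the dataset name from the example; either variant frees the space immediately:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# Shrink (or drop) the reservation to make space writable again
zfs set reservation=none zroot/reserved
# ...or remove the dataset entirely - it holds no data by design
zfs destroy zroot/reserved
&lt;/code&gt;&lt;/pre&gt;
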
&lt;p&gt;Even with this technique, I still recommend not filling ZFS pools beyond 80% of their capacity, as performance degrades significantly past that point.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Thu, 28 Nov 2024 20:25:00 +0100</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2024/11/28/managing-zfs-full-pool-issues-with-reserved-space/</guid><category>zfs</category><category>freebsd</category><category>linux</category><category>data</category><category>filesystems</category><category>recovery</category><category>tipsandtricks</category><category>tutorial</category><category>series</category></item><item><title>From Proxmox to FreeBSD - Story of a Migration</title><link>https://it-notes.dragas.net/2024/10/21/from-proxmox-to-freebsd-story-of-a-migration/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/server_rack.webp" alt="From Proxmox to FreeBSD - Story of a Migration"&gt;&lt;/p&gt;&lt;p&gt;I have been managing a client's server for several years. I inherited a setup (actually, partly done by me at the time, but following the directions of their internal administrator) based on Proxmox. Originally, the VM disks were &lt;code&gt;qcow2&lt;/code&gt; files, but over time and with Proxmox updates, I managed to create a ZFS pool and move them onto it. For backups, I continued to use Proxmox Backup Server (even though virtualized on bhyve) - a solution we've been using for several years.&lt;/p&gt;
&lt;p&gt;The existing VMs/containers were:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;An LXC container based on Debian&lt;/strong&gt;, with Apache as a reverse proxy. The choice of Apache was mainly tied to specific configurations from the company that produces the underlying software. It has always done an excellent job.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;A VM running OPNsense&lt;/strong&gt; — the choice was related to direct use by the client, as it allows easy creation/modification of users for the internal VPN. Some users can access via VPN (OpenVPN) and perform specific operations, such as SSH connections. It has always worked well and is up to date.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Two VMs based on Rocky Linux 9&lt;/strong&gt; — both with components of the management software in Java, with the reverse proxy directing requests based on the requested site.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Two VMs based on Rocky Linux 8&lt;/strong&gt; — these also have parts of the management software and the databases. These machines are used both by providing specific URLs and by the other two as databases.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The load is not particularly high, and the machines have good performance. Suddenly, however, I received a notification: one of the NVMe drives died abruptly, and the server rebooted. ZFS did its job, and everything remained sufficiently secure, but since it's a leased server and already several years old, I spoke with the client and proposed getting more recent hardware and redoing the setup &lt;a href="https://it-notes.dragas.net/2022/01/24/why-were-migrating-many-of-our-servers-from-linux-to-freebsd/"&gt;based&lt;/a&gt; on a &lt;a href="https://it-notes.dragas.net/2024/10/03/i-solve-problems-eurobsdcon/"&gt;FreeBSD&lt;/a&gt; host.&lt;/p&gt;
&lt;p&gt;Unfortunately, I cannot change the setup of the VMs. In my opinion, there are no contraindications since the databases are PostgreSQL and the management applications are Java applications with Tomcat. However, the software house only certifies that type of setup, and considering they are the ones performing updates and fixes, it's appropriate to maintain the setup they suggest. It's a technically sound choice in my opinion, so I can't say anything negative.&lt;/p&gt;
&lt;h2&gt;Acquiring New Hardware&lt;/h2&gt;
&lt;p&gt;The first step was, of course, to get the new server. It took a few days because I requested ECC RAM (for a small price variation), and the time extended. As soon as the physical server was ready, I got to work.&lt;/p&gt;
&lt;p&gt;I installed &lt;strong&gt;FreeBSD 14.1-RELEASE&lt;/strong&gt;, root on ZFS, creating a mirror setup between the NVMe drives. The I/O is not particularly heavy in this setup, but I don't want to waste potential: even if I can reduce the length of an operation by a few seconds, I see no reason not to do it.&lt;/p&gt;
&lt;p&gt;As the first step, I decided to manually create the bridge to which I will connect both the VMs and the reverse proxy, which will be installed inside a FreeBSD jail.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;vm-bhyve&lt;/code&gt;, the tool I use to manage VMs, allows creating bridges, but I prefer to manage it manually and maintain more complete control over everything. I will also enable &lt;code&gt;pf&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;I decided to use &lt;code&gt;zstd&lt;/code&gt; as the compression algorithm and disable &lt;code&gt;atime&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zfs set compression=zstd zroot
zfs set atime=off zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I then modified the &lt;code&gt;/etc/rc.conf&lt;/code&gt; file as follows:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;cloned_interfaces=&amp;quot;bridge0 lo1&amp;quot;
ifconfig_lo1_name=&amp;quot;bastille0&amp;quot;
ifconfig_bridge0=&amp;quot;inet 192.168.33.1 netmask 255.255.255.0&amp;quot;
gateway_enable=&amp;quot;YES&amp;quot;
pf_enable=&amp;quot;YES&amp;quot;
bastille_enable=&amp;quot;YES&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I then updated FreeBSD by running the usual command:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;freebsd-update fetch install
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Configuring the Firewall&lt;/h2&gt;
&lt;p&gt;I configured the firewall with a basic setup:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-pf"&gt;ext_if=&amp;quot;igb0&amp;quot;

set block-policy return
set skip on bridge0

table &amp;lt;jails&amp;gt; persist
nat on $ext_if from {192.168.33.0/24} to any -&amp;gt; ($ext_if)
nat on $ext_if from &amp;lt;jails&amp;gt; to any -&amp;gt; ($ext_if:0)

# nginx-proxy - to the jail
rdr on $ext_if inet proto tcp from any to PUBLIC_IP port = 80 -&amp;gt; 192.168.33.254 port 80
rdr on $ext_if inet proto tcp from any to PUBLIC_IP port = 443 -&amp;gt; 192.168.33.254 port 443

# opnsense
rdr on $ext_if inet proto tcp from any to PUBLIC_IP port = 1194 -&amp;gt; 192.168.33.253 port 1194
rdr on $ext_if inet proto udp from any to PUBLIC_IP port = 1194 -&amp;gt; 192.168.33.253 port 1194

rdr-anchor &amp;quot;rdr/*&amp;quot;

block in all
antispoof for $ext_if inet
pass in inet proto icmp
pass out quick keep state
pass in inet proto tcp from any to any port {http,https} flags S/SA keep state
pass in inet proto {tcp,udp} from any to any port 1194 flags S/SA keep state
# This will be closed at the end of the setup and will be allowed only via VPN
pass in inet proto tcp from any to any port ssh flags S/SA keep state
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Once finished, I rebooted everything to load the new kernel. No problems.&lt;/p&gt;
&lt;h2&gt;Installing Necessary Tools&lt;/h2&gt;
&lt;p&gt;I then installed some useful packages:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;pkg install tmux py311-zfs-autobackup mbuffer rsync vm-bhyve-devel edk2-bhyve grub2-bhyve bastille
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I created the datasets for the jails and VMs:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zfs create zroot/bastille
zfs create zroot/VMs
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I then started configuring everything. First, &lt;strong&gt;BastilleBSD&lt;/strong&gt;: I modified &lt;code&gt;/usr/local/etc/bastille/bastille.conf&lt;/code&gt; by adding:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;## ZFS options
bastille_zfs_enable=&amp;quot;YES&amp;quot;
bastille_zfs_zpool=&amp;quot;zroot/bastille&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then I enabled and configured &lt;code&gt;vm-bhyve&lt;/code&gt;, enabling the serial console on tmux:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;sysrc vm_enable=&amp;quot;YES&amp;quot;
sysrc vm_dir=&amp;quot;zfs:zroot/VMs&amp;quot;
vm init
cp /usr/local/share/examples/vm-bhyve/* /zroot/VMs/.templates/
vm switch create -t manual -b bridge0 public
vm set console=tmux
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now I bootstrapped FreeBSD 14.1-RELEASE on BastilleBSD:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;bastille bootstrap 14.1-RELEASE update
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Setting Up the Reverse Proxy Jail&lt;/h2&gt;
&lt;p&gt;I then started by creating the jail for the reverse proxy:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;bastille create -B apache 14.1-RELEASE 192.168.33.254/24 bridge0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Once created, I had to modify the default gateway of the jail because by default it is set to that of the host. It was enough to set &lt;code&gt;192.168.33.1&lt;/code&gt; in &lt;code&gt;/usr/local/bastille/jails/apache/root/etc/rc.conf&lt;/code&gt;.&lt;/p&gt;
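&lt;p&gt;Concretely, that means pointing the jail's default route at the bridge address. A minimal sketch of the relevant line (the rest of the file stays as Bastille generated it):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;# /usr/local/bastille/jails/apache/root/etc/rc.conf
defaultrouter=&amp;quot;192.168.33.1&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
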
&lt;p&gt;The configuration of Apache will not be described here as it is closely dependent on the setup, but this jail can reach (in bridge mode) all the VMs that will be placed on the same bridge.&lt;/p&gt;
&lt;h2&gt;Migrating the VMs&lt;/h2&gt;
&lt;p&gt;For migrating the VMs, I decided to proceed as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Updating the operating systems&lt;/strong&gt; of the original VMs - to have the latest version of the kernel and all userland. The VMs are not UEFI, so it will be necessary to have the exact names of the kernel and &lt;code&gt;initrd&lt;/code&gt; as they will need to be specified in the configuration file of &lt;code&gt;vm-bhyve&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Creating templates&lt;/strong&gt; of the VMs with &lt;code&gt;vm-bhyve&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Snapshotting the VM disks&lt;/strong&gt; and initial transfer to the new server with the VMs still running.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Shutting down all the source VMs&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Creating a new snapshot&lt;/strong&gt; and performing an incremental transfer.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Adjusting configurations&lt;/strong&gt; and changing DNS.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I updated everything using &lt;code&gt;dnf&lt;/code&gt;, and rebooted when there was also a kernel update. I took note of the versions to use, namely &lt;code&gt;vmlinuz-4.18.0-513.18.1.el8_9.x86_64&lt;/code&gt; and &lt;code&gt;initramfs-4.18.0-513.18.1.el8_9.x86_64.img&lt;/code&gt; for Rocky Linux 8.10, and &lt;code&gt;vmlinuz-5.14.0-362.18.1.el9_3.0.1.x86_64&lt;/code&gt; and &lt;code&gt;initramfs-5.14.0-362.18.1.el9_3.0.1.x86_64.img&lt;/code&gt; for Rocky Linux 9.4.&lt;/p&gt;
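&lt;p&gt;To grab those exact file names, it is enough to list them inside each guest before shutting it down (an illustrative command, not part of the original procedure):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;# On each Rocky Linux VM: note the newest kernel and its matching initramfs
ls -1 /boot/vmlinuz-* /boot/initramfs-*.img
&lt;/code&gt;&lt;/pre&gt;
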
&lt;h3&gt;Migrating Rocky Linux 8.10 VMs&lt;/h3&gt;
&lt;p&gt;On the destination server, I created the VMs. I used the &lt;code&gt;linux-zvol&lt;/code&gt; template, then modified the configurations:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;vm create -t linux-zvol -s 1G -m 8G -c 4 vm100
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I created a 1 GB virtual disk only because it will be replaced by the dataset sent from the current production server, so the size is just a placeholder. At that point, I deleted the &lt;code&gt;zvol&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zfs destroy zroot/VMs/vm100/disk0
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I performed an initial snapshot of the source VMs:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zfs snapshot zfspool/vm-100-disk-0@Move01
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And then I copied this first snapshot to the destination machine/VM:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zfs send -R zfspool/vm-100-disk-0@Move01 | mbuffer -s 128k -m 512M | ssh root@destMachine &amp;quot;zfs receive -F zroot/VMs/vm100/disk0&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;At the end of the copy, you can do a test boot. For the Rocky Linux 8.10 VMs, I had no problems because the &lt;code&gt;/boot&lt;/code&gt; partition is in &lt;code&gt;ext4&lt;/code&gt;. I then configured the VM as follows, by running &lt;code&gt;vm configure vm100&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;loader=&amp;quot;grub&amp;quot;
cpu=&amp;quot;8&amp;quot;
memory=&amp;quot;12G&amp;quot;
network0_type=&amp;quot;virtio-net&amp;quot;
network0_switch=&amp;quot;public&amp;quot;
disk0_type=&amp;quot;virtio-blk&amp;quot;
disk0_name=&amp;quot;disk0&amp;quot;
disk0_dev=&amp;quot;sparse-zvol&amp;quot;
uuid=&amp;quot;the-uuid&amp;quot;
network0_mac=&amp;quot;the-mac&amp;quot;
grub_run0=&amp;quot;linux /vmlinuz-4.18.0-553.22.1.el8_10.x86_64 root=/dev/vda3&amp;quot;
grub_run1=&amp;quot;initrd /initramfs-4.18.0-553.22.1.el8_10.x86_64.img&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;All I needed to do was pass GRUB the names of the kernel and the &lt;code&gt;initramfs&lt;/code&gt;. It is now possible to start the machine, run a quick test, and connect to the console:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;vm start vm100
vm console vm100
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In theory, the machine should boot correctly. It will probably be necessary to adjust the network interface configuration - in my case, even keeping &lt;code&gt;virtio&lt;/code&gt; and the same MAC address, the interface name changed from &lt;code&gt;ens18&lt;/code&gt; to &lt;code&gt;enp0s5&lt;/code&gt; - but everything else should be fine.&lt;/p&gt;
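&lt;p&gt;As a minimal sketch - assuming the legacy &lt;code&gt;ifcfg&lt;/code&gt; files still used on Rocky Linux 8, and that the old and new interface names are the ones above - fixing the network configuration can be as simple as:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;# inside the VM - adjust the file and interface names to your setup
mv /etc/sysconfig/network-scripts/ifcfg-ens18 /etc/sysconfig/network-scripts/ifcfg-enp0s5
sed -i 's/ens18/enp0s5/g' /etc/sysconfig/network-scripts/ifcfg-enp0s5
nmcli connection reload
&lt;/code&gt;&lt;/pre&gt;
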
&lt;p&gt;At this point, if the source VM was already off, unreachable, or there is simply nothing new to synchronize, the migration is complete. Otherwise, it will be necessary to shut down the source VM, create a new snapshot, and transfer it. Shut down both the source VM and the destination one (&lt;code&gt;vm stop vm100&lt;/code&gt;), roll back the destination to the snapshot already transferred, create a new snapshot on the source, and perform an incremental transfer:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zfs rollback zroot/VMs/vm100/disk0@Move01
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;On the source server:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zfs snapshot zfspool/vm-100-disk-0@Move02
zfs send -R -i zfspool/vm-100-disk-0@Move01 zfspool/vm-100-disk-0@Move02 | mbuffer -s 128k -m 512M | ssh root@destMachine &amp;quot;zfs receive -F zroot/VMs/vm100/disk0&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In this way, we have updated the destination VM and transferred only the differences, without altering the configuration of &lt;code&gt;vm-bhyve&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Start the machine - if all goes well, the migration of this VM is finished.&lt;/p&gt;
&lt;h3&gt;Migrating Rocky Linux 9.4 VMs&lt;/h3&gt;
&lt;p&gt;For the Rocky Linux 9.4 VMs, the situation was more complex: the partitions were all - even &lt;code&gt;/boot&lt;/code&gt; - in &lt;strong&gt;XFS&lt;/strong&gt;. And, for some reason, the GRUB launched by bhyve cannot read from XFS. Therefore, I had to proceed differently.&lt;/p&gt;
&lt;p&gt;I copied, using &lt;code&gt;rsync&lt;/code&gt;, the &lt;code&gt;/boot&lt;/code&gt; of the original VPS (in ZFS) to a directory on the FreeBSD server (specifically, on the VPS I ran the command: &lt;code&gt;rsync -avhHPx --numeric-ids /boot root@FreeBSDHOST:/tmp/&lt;/code&gt;). In this way, I kept the files available.&lt;/p&gt;
&lt;p&gt;I shut down the machine on the original server and copied its &lt;code&gt;zvol&lt;/code&gt; to the new FreeBSD host using the same method as the others. At that point, I recreated the &lt;code&gt;/boot&lt;/code&gt; partition of my VPS in &lt;code&gt;ext3&lt;/code&gt; - directly from the FreeBSD host:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;pkg install fusefs-ext2
kldload fusefs
mkfs.ext3 /dev/zvol/zroot/VMs/vm104/disk0s1
fuse-ext2 -o force /dev/zvol/zroot/VMs/vm104/disk0s1 /mnt/
cd /mnt
rsync -avhHPx /tmp/boot/. .
umount /mnt
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In this way, GRUB will be able to access the &lt;code&gt;/boot&lt;/code&gt; partition to load the kernel and &lt;code&gt;initramfs&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Here is the &lt;code&gt;vm-bhyve&lt;/code&gt; configuration for this VM:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;loader=&amp;quot;grub&amp;quot;
cpu=&amp;quot;8&amp;quot;
memory=&amp;quot;12G&amp;quot;
network0_type=&amp;quot;virtio-net&amp;quot;
network0_switch=&amp;quot;public&amp;quot;
disk0_type=&amp;quot;virtio-blk&amp;quot;
disk0_name=&amp;quot;disk0&amp;quot;
disk0_dev=&amp;quot;sparse-zvol&amp;quot;
uuid=&amp;quot;the-uuid&amp;quot;
network0_mac=&amp;quot;the-mac&amp;quot;
grub_run0=&amp;quot;linux /vmlinuz-5.14.0-427.37.1.el9_4.x86_64 root=/dev/vda3&amp;quot;
grub_run1=&amp;quot;initrd /initramfs-5.14.0-427.37.1.el9_4.x86_64.img&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;When launching the machine, however, it will hang during boot and, after a timeout, ask for administrator credentials to enter a recovery shell. This happens because the UUID of the &lt;code&gt;/boot&lt;/code&gt; partition has changed, while the VM's &lt;code&gt;fstab&lt;/code&gt; still references the old XFS partition. In this case, I ran the &lt;code&gt;blkid&lt;/code&gt; command inside the VM and copied the UUID of the new partition, then edited the VM's &lt;code&gt;/etc/fstab&lt;/code&gt; to use the new UUID and changed "xfs" to "ext3". After a reboot, the system should start without problems, again with a network card to reconfigure.&lt;/p&gt;
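&lt;p&gt;As a reference sketch - the UUID below is purely illustrative, use the one &lt;code&gt;blkid&lt;/code&gt; actually prints:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;blkid /dev/vda1    # or whichever partition holds /boot
# then point the /boot line in /etc/fstab at the new UUID and file system, e.g.:
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /boot  ext3  defaults  1 2
&lt;/code&gt;&lt;/pre&gt;
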
&lt;p&gt;This procedure worked correctly for all VMs. The storage, however, stays on &lt;code&gt;virtio-blk&lt;/code&gt;, while it would be better to switch the driver to &lt;code&gt;nvme&lt;/code&gt;. To make this change, enter each VM and create a file called &lt;code&gt;/etc/dracut.conf.d/00-custom.conf&lt;/code&gt; with the following content:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-conf"&gt;add_drivers+=&amp;quot; nvme &amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;And regenerate the &lt;code&gt;initramfs&lt;/code&gt; - in this way, the &lt;code&gt;nvme&lt;/code&gt; driver will be supported at boot:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;dracut --regenerate-all --force
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;It will now be enough to change the &lt;code&gt;vm-bhyve&lt;/code&gt; configuration - &lt;code&gt;virtio-blk&lt;/code&gt; becomes &lt;code&gt;nvme&lt;/code&gt;, and &lt;code&gt;/dev/vda3&lt;/code&gt; becomes &lt;code&gt;/dev/nvme0n1p3&lt;/code&gt;, for example:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;loader=&amp;quot;grub&amp;quot;
cpu=&amp;quot;8&amp;quot;
memory=&amp;quot;12G&amp;quot;
network0_type=&amp;quot;virtio-net&amp;quot;
network0_switch=&amp;quot;public&amp;quot;
disk0_type=&amp;quot;nvme&amp;quot;
disk0_name=&amp;quot;disk0&amp;quot;
disk0_dev=&amp;quot;sparse-zvol&amp;quot;
uuid=&amp;quot;the-uuid&amp;quot;
network0_mac=&amp;quot;the-mac&amp;quot;
grub_run0=&amp;quot;linux /vmlinuz-5.14.0-427.37.1.el9_4.x86_64 root=/dev/nvme0n1p3&amp;quot;
grub_run1=&amp;quot;initrd /initramfs-5.14.0-427.37.1.el9_4.x86_64.img&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;Migrating the OPNsense VM&lt;/h3&gt;
&lt;p&gt;For the OPNsense VM, the procedure was even simpler. It was enough to create a VM from the &lt;code&gt;freebsd-zvol&lt;/code&gt; template and copy the disk as done for the others. In this case, I replicated the MAC address of the original virtualized network interface (from the Proxmox server) so that the FreeBSD underlying OPNsense would recognize it as the same interface. FreeBSD is less picky about these things.&lt;/p&gt;
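&lt;p&gt;As a rough sketch, the creation and disk copy mirror the steps used for the Linux VMs - the VM name, source dataset name, and disk size below are placeholders:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;# on the destination FreeBSD host - the 1 GB disk is a placeholder and will be replaced
vm create -t freebsd-zvol -s 1G -m 1G -c 2 opnsense
zfs destroy zroot/VMs/opnsense/disk0
# on the source server - dataset name is illustrative
zfs snapshot zfspool/vm-105-disk-0@Move01
zfs send -R zfspool/vm-105-disk-0@Move01 | mbuffer -s 128k -m 512M | ssh root@destMachine &amp;quot;zfs receive -F zroot/VMs/opnsense/disk0&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
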
&lt;p&gt;The final &lt;code&gt;vm-bhyve&lt;/code&gt; configuration file will be:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;loader=&amp;quot;bhyveload&amp;quot;
cpu=&amp;quot;2&amp;quot;
memory=&amp;quot;1G&amp;quot;
network0_type=&amp;quot;virtio-net&amp;quot;
network0_switch=&amp;quot;public&amp;quot;
disk0_type=&amp;quot;virtio-blk&amp;quot;
disk0_name=&amp;quot;disk0&amp;quot;
disk0_dev=&amp;quot;sparse-zvol&amp;quot;
uuid=&amp;quot;my-uuid&amp;quot;
network0_mac=&amp;quot;my-mac&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Configuring Automatic VM Startup&lt;/h2&gt;
&lt;p&gt;To ensure that the VMs all start at boot, just add them to &lt;code&gt;/etc/rc.conf&lt;/code&gt; as follows, giving a 15-second delay between one VM and another:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;[...]
vm_enable=&amp;quot;YES&amp;quot;
vm_dir=&amp;quot;zfs:zroot/VMs&amp;quot;
vm_list=&amp;quot;vm100 vm101 [...] opnsense&amp;quot;
vm_delay=&amp;quot;15&amp;quot;
[...]
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Setting Up Backups&lt;/h2&gt;
&lt;p&gt;At this point, I configured external backups, every hour, similarly to &lt;a href="https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/"&gt;how I described in a previous article&lt;/a&gt;. I also added a local snapshot every 15 minutes, always using &lt;code&gt;zfs-autobackup&lt;/code&gt;. To do this, I created a new tag:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;zfs set autobackup:localsnap=true zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then I modified the &lt;code&gt;/etc/crontab&lt;/code&gt; file, adding this line:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-cron"&gt;*/15    *       *       *       *       root    /usr/local/bin/zfs-autobackup localsnap --keep-source 15min3h,1h1d &amp;gt; /dev/null 2&amp;gt;&amp;amp;1
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;That is, it takes a snapshot every 15 minutes, keeping the 15-minute snapshots for 3 hours and one snapshot per hour for a day. In this way, if a quick recovery is needed after a problem or error, I won't have to transfer the entire dataset/VM back from the backup.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;The migration is complete: after changing the DNS, the client performed some tests, and everything works properly. The setup has been active for over a week, and there have been no issues. The performance is excellent; I did not perform tests compared to the previous setup since the hardware of the new FreeBSD host is more powerful and modern, so it wouldn't make sense.&lt;/p&gt;
&lt;p&gt;Apart from the manager, no user was informed of the change, and in the last week, no reports have been received. The VMs are stable, and the host's load is very low.&lt;/p&gt;
&lt;p&gt;The operation actually took less than two hours in total - most of the time has been consumed by the send/receive operations - and gave excellent results. The alternative would have been to install Proxmox on a new host and move the VMs - I probably would have saved a few minutes, but now I can use bhyve, as well as the simplicity and power of the underlying FreeBSD.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Mon, 21 Oct 2024 07:33:00 +0200</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2024/10/21/from-proxmox-to-freebsd-story-of-a-migration/</guid><category>freebsd</category><category>proxmox</category><category>ownyourdata</category><category>bhyve</category><category>virtualization</category><category>server</category><category>filesystems</category><category>linux</category><category>snapshots</category><category>zfs</category></item><item><title>Automating ZFS Snapshots for Peace of Mind</title><link>https://it-notes.dragas.net/2024/08/21/automating-zfs-snapshots-for-peace-of-mind/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/hard_disk.webp" alt="Automating ZFS Snapshots for Peace of Mind"&gt;&lt;/p&gt;&lt;p&gt;One feature I couldn't live without anymore is snapshots. As system administrators, we often find ourselves in situations where we've made a mistake, need to revert to a previous state, or need access to a log that has been rotated and disappeared. Since I started using ZFS, all of this has become incredibly simple, and I feel much more at ease when making any modifications.&lt;/p&gt;
&lt;p&gt;However, since I don't always remember to create a manual snapshot before starting to work, I use an automatic snapshot system. For this type of snapshot, I use the &lt;a href="https://github.com/psy0rz/zfs_autobackup"&gt;excellent &lt;code&gt;zfs-autobackup&lt;/code&gt; tool&lt;/a&gt; - which I also &lt;a href="https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/"&gt;use for backups&lt;/a&gt;. The goal is to have a single, flexible, and configurable tool without having to learn different syntaxes.&lt;/p&gt;
&lt;h2&gt;Why zfs-autobackup?&lt;/h2&gt;
&lt;p&gt;&lt;code&gt;zfs-autobackup&lt;/code&gt; has several advantages that make it perfect (or nearly so) for my purpose:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;It operates based on "tags" set on individual datasets. I don't have to specify the dataset; I just assign a specific tag (of my choice) to the datasets, and &lt;code&gt;zfs-autobackup&lt;/code&gt; will operate on those datasets, transparently with respect to others. This ensures it will work even on datasets in different zpools.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;It's extremely flexible in management. For example, by setting the correct tag to "zroot", it will automatically manage all underlying datasets. However, it's possible to exclude some for more granular snapshot management.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;It works well on both FreeBSD and Linux - I use it with satisfaction on both platforms.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Different tags allow for different levels of data retention and operation. For example, the "mylocalsnap" tag will be for local snapshots, while "backup_offsite" will be for backups that will be copied off-site. The two tags (and related snapshots) will be independent, even though they operate on the same datasets.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Installation&lt;/h2&gt;
&lt;p&gt;On FreeBSD, installation is straightforward, as there's a ready-made package:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;pkg install py311-zfs-autobackup
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;On Linux, it depends on the specific distribution. Being written in Python, it can always be &lt;a href="https://github.com/psy0rz/zfs_autobackup/wiki"&gt;installed using pip&lt;/a&gt;.&lt;/p&gt;
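&lt;p&gt;On a distribution without a native package, something along these lines should work - check the project wiki for the currently recommended method:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;pip install --upgrade zfs-autobackup
&lt;/code&gt;&lt;/pre&gt;
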
&lt;h2&gt;Configuration&lt;/h2&gt;
&lt;p&gt;Once installed, you just need to assign the tag to the dataset. For example:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs set autobackup:mylocalsnap=true zroot
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This will set a tag called "mylocalsnap" on zroot and underlying datasets, i.e., on the entire main file system of FreeBSD.&lt;/p&gt;
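&lt;p&gt;Conversely, a child dataset can be excluded from the recursive tag by overriding the property on it - &lt;code&gt;zfs-autobackup&lt;/code&gt; is documented to skip datasets where the tag is explicitly set to false. For example, assuming you don't care about snapshots of &lt;code&gt;zroot/tmp&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs set autobackup:mylocalsnap=false zroot/tmp
&lt;/code&gt;&lt;/pre&gt;
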
&lt;h2&gt;Usage&lt;/h2&gt;
&lt;p&gt;Now, you just need to run &lt;code&gt;zfs-autobackup&lt;/code&gt;, specifying both the tag and the snapshot retention criteria:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;/usr/local/bin/zfs-autobackup mylocalsnap --keep-source 5min1h,1h1d
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In this case, it will take a (recursive) snapshot of the datasets that have the "mylocalsnap" tag set to true, keeping one snapshot every 5 minutes for an hour, and one every hour for a day.&lt;/p&gt;
&lt;p&gt;On subsequent executions of &lt;code&gt;zfs-autobackup&lt;/code&gt;, snapshots that don't meet the previous retention criteria will be deleted.&lt;/p&gt;
&lt;p&gt;After running this command, here's the result:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;root@fbsnap:~ # zfs list -t snapshot
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
zroot@mylocalsnap-20240820150115                  0B      -    96K  -
zroot/ROOT@mylocalsnap-20240820150115             0B      -    96K  -
zroot/ROOT/default@mylocalsnap-20240820150115     0B      -  1.00G  -
zroot/home@mylocalsnap-20240820150115             0B      -    96K  -
zroot/tmp@mylocalsnap-20240820150115              0B      -   104K  -
zroot/usr@mylocalsnap-20240820150115              0B      -    96K  -
zroot/usr/ports@mylocalsnap-20240820150115        0B      -    96K  -
zroot/usr/src@mylocalsnap-20240820150115          0B      -    96K  -
zroot/var@mylocalsnap-20240820150115              0B      -    96K  -
zroot/var/audit@mylocalsnap-20240820150115        0B      -    96K  -
zroot/var/crash@mylocalsnap-20240820150115        0B      -    96K  -
zroot/var/log@mylocalsnap-20240820150115          0B      -   144K  -
zroot/var/mail@mylocalsnap-20240820150115         0B      -    96K  -
zroot/var/tmp@mylocalsnap-20240820150115          0B      -    96K  -
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;As you can see, snapshots have been taken of all datasets with the tag set.&lt;/p&gt;
&lt;h2&gt;Automation&lt;/h2&gt;
&lt;p&gt;To automate the process, simply modify the &lt;code&gt;/etc/crontab&lt;/code&gt; file and add a line like this:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;*/5     *       *       *       *       root    /usr/local/bin/zfs-autobackup mylocalsnap --keep-source 5min1h,1h1d
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, wait a few minutes and check again:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;root@fbsnap:~ # zfs list -t snapshot
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
zroot@mylocalsnap-20240820150115                  0B      -    96K  -
zroot/ROOT@mylocalsnap-20240820150115             0B      -    96K  -
zroot/ROOT/default@mylocalsnap-20240820150115   212K      -  1.00G  -
zroot/ROOT/default@mylocalsnap-20240820151000   128K      -  1.00G  -
zroot/home@mylocalsnap-20240820150115             0B      -    96K  -
zroot/tmp@mylocalsnap-20240820150115             72K      -   104K  -
zroot/tmp@mylocalsnap-20240820151000              0B      -   104K  -
zroot/usr@mylocalsnap-20240820150115              0B      -    96K  -
zroot/usr/ports@mylocalsnap-20240820150115        0B      -    96K  -
zroot/usr/src@mylocalsnap-20240820150115          0B      -    96K  -
zroot/var@mylocalsnap-20240820150115              0B      -    96K  -
zroot/var/audit@mylocalsnap-20240820150115        0B      -    96K  -
zroot/var/crash@mylocalsnap-20240820150115        0B      -    96K  -
zroot/var/log@mylocalsnap-20240820150115         64K      -   144K  -
zroot/var/log@mylocalsnap-20240820151000         60K      -   144K  -
zroot/var/mail@mylocalsnap-20240820150115         0B      -    96K  -
zroot/var/tmp@mylocalsnap-20240820150115         64K      -    96K  -
zroot/var/tmp@mylocalsnap-20240820151000          0B      -    96K  -
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;If everything went as it should, you'll notice that unmodified datasets won't have new snapshots, while those that have been modified since the previous manual execution (e.g., zroot/var/log) will contain both the previous snapshot and the automatic one.&lt;/p&gt;
&lt;h2&gt;Recovering Files from Snapshots&lt;/h2&gt;
&lt;p&gt;There are various ways to recover a file from a previous snapshot. One option is to roll the dataset back to the snapshot, but that is often not the best choice, since it reverts the entire dataset and discards everything written after the snapshot.&lt;/p&gt;
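&lt;p&gt;For completeness, such a rollback is a single command (names taken from the example above); note that &lt;code&gt;zfs rollback&lt;/code&gt; only accepts the most recent snapshot unless you pass &lt;code&gt;-r&lt;/code&gt; to destroy the newer ones:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs rollback zroot/var/log@mylocalsnap-20240820151000
&lt;/code&gt;&lt;/pre&gt;
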
&lt;p&gt;Another alternative is to go to the hidden snapshot directory. For example:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;root@fbsnap:~ # cd /var/log/.zfs/snapshot/mylocalsnap-20240820151000/
root@fbsnap:/var/log/.zfs/snapshot/mylocalsnap-20240820151000 # ls -l
total 39
-rw-------  1 root wheel     872 Aug 20 15:06 auth.log
-rw-r--r--  1 root wheel   79079 Aug 20 13:19 bsdinstall_log
-rw-------  1 root wheel    5401 Aug 20 15:10 cron
-rw-r--r--  1 root wheel      63 Aug 20 13:20 daemon.log
-rw-------  1 root wheel      63 Aug 20 13:20 debug.log
-rw-r--r--  1 root wheel      63 Aug 20 13:20 devd.log
-rw-r--r--  1 root wheel      63 Aug 20 13:20 lpd-errs
-rw-r-----  1 root wheel      63 Aug 20 13:20 maillog
-rw-r--r--  1 root wheel   17043 Aug 20 15:00 messages
-rw-r-----  1 root network    63 Aug 20 13:20 ppp.log
-rw-------  1 root wheel      63 Aug 20 13:20 security
-rw-r--r--  1 root wheel     197 Aug 20 15:00 utx.lastlogin
-rw-r--r--  1 root wheel     187 Aug 20 15:00 utx.log
-rw-------  1 root wheel      63 Aug 20 13:20 xferlog
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;From here, you can read and recover any file present in the individual snapshot. The snapshots are read-only, so you won't be able to write to them.&lt;/p&gt;
&lt;h2&gt;Creating a Writable Copy of a Snapshot&lt;/h2&gt;
&lt;p&gt;Should you need a read-write copy of a specific snapshot, you can use the zfs clone command:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;root@fbsnap:~ # zfs clone zroot/var/log@mylocalsnap-20240820151000 zroot/recover
root@fbsnap:~ # ls -l /zroot/recover/
total 39
-rw-------  1 root wheel     872 Aug 20 15:06 auth.log
-rw-r--r--  1 root wheel   79079 Aug 20 13:19 bsdinstall_log
-rw-------  1 root wheel    5401 Aug 20 15:10 cron
-rw-r--r--  1 root wheel      63 Aug 20 13:20 daemon.log
-rw-------  1 root wheel      63 Aug 20 13:20 debug.log
-rw-r--r--  1 root wheel      63 Aug 20 13:20 devd.log
-rw-r--r--  1 root wheel      63 Aug 20 13:20 lpd-errs
-rw-r-----  1 root wheel      63 Aug 20 13:20 maillog
-rw-r--r--  1 root wheel   17043 Aug 20 15:00 messages
-rw-r-----  1 root network    63 Aug 20 13:20 ppp.log
-rw-------  1 root wheel      63 Aug 20 13:20 security
-rw-r--r--  1 root wheel     197 Aug 20 15:00 utx.lastlogin
-rw-r--r--  1 root wheel     187 Aug 20 15:00 utx.log
-rw-------  1 root wheel      63 Aug 20 13:20 xferlog
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This creates a new dataset zroot/recover that is a writable copy of the snapshot. You can now modify these files as needed, without affecting the original snapshot or the live filesystem.&lt;/p&gt;
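&lt;p&gt;Once you have recovered what you need, the clone can simply be destroyed:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs destroy zroot/recover
&lt;/code&gt;&lt;/pre&gt;
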
&lt;h2&gt;Cleaning Up&lt;/h2&gt;
&lt;p&gt;Sometimes you may want to delete all snapshots generated by &lt;code&gt;zfs-autobackup&lt;/code&gt;. There are various ways, but the quickest is often a simple pipe:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;zfs list -t snapshot -o name | grep -i mylocalsnap | xargs -n 1 zfs destroy -vr
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;By implementing automatic ZFS snapshots, you can work with peace of mind, knowing that you can always revert changes or recover lost files. This setup provides an excellent balance between data protection and system performance.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Wed, 21 Aug 2024 08:41:00 +0200</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2024/08/21/automating-zfs-snapshots-for-peace-of-mind/</guid><category>zfs</category><category>freebsd</category><category>linux</category><category>backup</category><category>data</category><category>filesystems</category><category>snapshots</category><category>recovery</category></item><item><title>From Cloud Chaos to FreeBSD Efficiency</title><link>https://it-notes.dragas.net/2024/07/04/from-cloud-chaos-to-freebsd-efficiency/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/datacenter.webp" alt="From Cloud Chaos to FreeBSD Efficiency"&gt;&lt;/p&gt;&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;A few months ago, a client asked me to take care of their Kubernetes cluster (hosted on AWS and GCP). In their opinion, the costs were exorbitantly high for relatively simple and lean websites. Sure, they had many visits, but nothing too excessive development-wise.&lt;/p&gt;
&lt;p&gt;I kindly declined. Unfortunately, their situation is all too common these days: they hired developers accustomed to working that way, convinced that a system administrator is now unnecessary because "the cloud has infinite potential." They were used to considering optimization as secondary because "we have infinite power" (and this is already a spoiler for the ending).&lt;/p&gt;
&lt;p&gt;Being open to dialogue and new experiences, they asked for my opinion on the matter. We talked for a while, and I explained that, in my view, for the type of setup they had (standard, with various replicas and variants, but primarily based on two platforms), it didn't make sense. I saw it as complicating things. An over-engineering of something simple. Like taking a cruise ship to cross a river.&lt;/p&gt;
&lt;p&gt;They then asked me to create something simple that would serve as a development server and for backups, to understand what kind of solution I had in mind.&lt;/p&gt;
&lt;h2&gt;The Solution&lt;/h2&gt;
&lt;p&gt;So, I started building everything. I began with FreeBSD 13.2-RELEASE, but in the meantime, 14.0-RELEASE came out, so that’s the version I delivered.&lt;/p&gt;
&lt;p&gt;I installed the operating system on a physical server, leased from one of the main European providers. Benefiting from one of their auctions (good deals can be found on weekends), they found a sufficiently powerful machine, with 128GB of RAM, 2 NVMe drives of 1TB each, and two spinning disks of 2TB each for less than 100 euros per month. They also took another, less powerful one for additional backups and to back up the first one.&lt;/p&gt;
&lt;h2&gt;Implementation&lt;/h2&gt;
&lt;p&gt;I decided to keep the host as clean as possible and concentrated the services in jails (managed by BastilleBSD) and VMs. The machine was divided as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A series of bridges - to be used for different projects. Jails of the same project and/or type use the same bridge and can communicate with each other, sharing some resources (MariaDB, etc.).&lt;/li&gt;
&lt;li&gt;A bhyve VM with &lt;a href="https://alpinelinux.org/"&gt;Alpine Linux&lt;/a&gt; - in my opinion, the best distribution for running Docker containers. Do we really need systemd just to launch Docker? They mainly use it as a pre-production test bench, connected via VPN to their company LAN. It is the core of their "online" development, i.e., outside their computers. It has 32GB of RAM, 200GB of disk (obviously bhyve is configured with NVMe drivers), and 4 cores assigned.&lt;/li&gt;
&lt;li&gt;A VNET jail with a reverse proxy (nginx) - they know how to modify virtual hosts and generate certificates with certbot, pointing to the underlying jails.&lt;/li&gt;
&lt;li&gt;A series of "empty" VNET jails, to be cloned, for each type of setup (they mainly have CMS based on WordPress and Laravel, so with all dependencies inside - nginx, php, redis, etc. except the databases).&lt;/li&gt;
&lt;li&gt;A VNET jail with MariaDB installed, to be cloned, to be attached to different projects as needed.&lt;/li&gt;
&lt;li&gt;zfs-autobackup performs local snapshots, keeping: one every 15 minutes for 3 hours, one per hour for 24 hours, one per day for 3 days.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Backups &lt;a href="https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/"&gt;are also performed using zfs-autobackup&lt;/a&gt; and, to allow rapid disaster recovery, a zfs send (and corresponding zfs receive) runs every 10 minutes towards another machine (the other, smaller one, also taken at auction), with the same bridges, firewall rules, BastilleBSD, and bhyve installed - ready to start in case of disaster. Being a test server, we didn't consider implementing proper HA - at the moment, it wouldn't make sense.&lt;/p&gt;
&lt;p&gt;They also have another job with zfs-autobackup that performs an additional backup on a server (Debian in their offices). &lt;a href="https://my-notes.dragas.net/posts/2024/who-is-the-real-owner-of-your-data/"&gt;Safe data, in my opinion, are those in storage under your b...ench&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I delivered everything to them and gave a brief course to the more experienced devs on how to manage things. No explanation on the Alpine Linux VM, but I showed them the jails, how to clone, configure, and manage them.&lt;/p&gt;
&lt;h2&gt;Real-world Testing&lt;/h2&gt;
&lt;p&gt;I didn't hear from them for a while. After a few weeks, one of the devs contacted me urgently because a junior had unfortunately made a mistake and deleted an entire project from one of the jails. I explained that the local snapshots could be restored with a single command, and he was thrilled. He restored both the development jail and the database jail from the snapshots taken two minutes before the "mishap", and they were back up immediately.&lt;/p&gt;
&lt;p&gt;I realized that this event would change some of their procedures and criteria.&lt;/p&gt;
&lt;p&gt;I hadn't heard from anyone for months. This morning, I received a call from their manager, whom I hadn't heard from since the beginning, and he told me how things had been going these months.&lt;/p&gt;
&lt;h2&gt;Lessons Learned&lt;/h2&gt;
&lt;p&gt;First, this person has good communication and commercial skills but little technical background. He is open-minded and tends to study carefully what is proposed to him. He doesn't discard any solution a priori, without having touched its pros and cons.&lt;/p&gt;
&lt;p&gt;They had leased servers with cPanel and were inserting their content inside them. The devs who arrived a few years ago suggested making a technological transition, eliminating these "obsolete" servers and "outdated" methodologies, pushing everything to the cloud and containerizing everything. When we first talked, he told me how they were "lucky to make that transition because their load had increased enormously and the old servers probably wouldn't have handled the load", instead autoscaling saved them. I had some reservations about autoscaling without particular controls, but clearly, I cannot impose my choices on others.&lt;/p&gt;
&lt;p&gt;To cut a long story short: seeing what happened with that junior dev's mistake (and the simplicity with which it was possible to restart immediately), they decided to increase the use of FreeBSD jails and reduce, at least on secondary loads, the use of their Cloud managed with Kubernetes. As they transitioned to jails, however, they noticed some slowdowns. These slowdowns worsened day by day. According to the devs, it would have been appropriate to go back to having, again, autoscaling ("we need moar powaaaaar!!!") but, fortunately, their boss decided to investigate carefully. They realized that these workloads (based on &lt;a href="https://laravel.com/"&gt;Laravel&lt;/a&gt;) were storing sessions on files. Over time, these millions of files (several gigabytes per day) slowed everything down because, for specific operations, Laravel scanned the entire directory. In other words, on the "cloud," they needed much more power than necessary (and much more disk space, but that was cheaper) to carry this load, which was, in fact, unnecessary. After realizing this, they moved the sessions to Redis. Needless to say, everything became extremely faster, even compared to the previous setup on Kubernetes and autoscaling.&lt;/p&gt;
&lt;p&gt;At that point, it was clear that one of the problems with their setup is (as often happens) poor optimization. Today, there's a tendency to rush, "throw in" functions, features, libraries, plugins, etc. without considering the interactions and consequences. If it works, it's fine. Even if it increases computational complexity exponentially just to, for example, change the color of an icon (absurd example, but to give an idea).&lt;/p&gt;
&lt;p&gt;They then started moving even the main Laravel workloads (thanks to the optimization implemented). At this point, they began moving some of the WordPress sites even though they were extremely concerned. In the cluster, every day, at fairly irregular intervals, the load would rise and everything would slow down until autoscaling started scaling up to the imposed limits. CPU at 100% on all containers, and the devs noticed that the load came from a series of "php" processes. Recreating the containers helped for some minutes, but did not solve the problem.&lt;/p&gt;
&lt;p&gt;To their great surprise, all this did not happen on the FreeBSD jails. The load was significantly lower, without any of these spikes. Satisfied, they decided to use this as their final setup. One of the devs, however, wanted to get to the bottom of it and decided to run a test: he moved some of these WordPress sites to the Alpine VM, on Docker. At that point, the spikes resumed, saturating the CPU of the Alpine machine.&lt;/p&gt;
&lt;p&gt;Without going into details, they eventually realized that there was a vulnerability in one (or more) of the many plugins installed on the WordPress sites, which was being exploited to inject a process, probably a cryptominer. The name given to the process was "php" - so the devs, not being system experts, did not worry about understanding better whether it was really php or another process pretending to be it. On FreeBSD, all this did not happen because the injected executable could not run - there was no &lt;a href="https://docs.freebsd.org/en/books/handbook/linuxemu/"&gt;Linux compatibility&lt;/a&gt; activated on the server.&lt;/p&gt;
&lt;p&gt;Until then, they considered these (expensive) spikes as organic and did not worry too much about them. Paying to have their friendly intruders mine.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;They asked me to help, as much as possible, to &lt;a href="https://it-notes.dragas.net/2022/02/05/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-1-system-and-jails-setup/"&gt;move other services to FreeBSD&lt;/a&gt;. It won't be easy, probably we will need to use bhyve a lot, but they decided that this is the platform they want to focus on in the coming years.&lt;/p&gt;
&lt;p&gt;Undoubtedly, this is a success story of FreeBSD and, indirectly, of correct and careful management of one's resources. Too often today, there is the superficial belief that the cloud, with its "infinite" resources, is the solution to all problems. And that Kubernetes is the best solution for everything. I, on the other hand, have always believed that there is the right tool for everything. You can hammer a nail with a screwdriver, but it's not the most suitable and efficient tool.&lt;/p&gt;
&lt;p&gt;Today they spend about 1/10 of what they used to spend, and they have more control over their data and the tools they use. Undoubtedly, part of the problem was poor optimization and oversight by those managing the infrastructure, but the question is: how often do people decide that, in the end, it is okay to spend more (especially if it is someone else's money) rather than spend hours digging into a problem like this? Having defined and limited resources (even generous ones) poses different problems - problems of optimization. And in an age of energy and resource savings, it might be wise to give optimization more importance.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Abundance led to waste&lt;/em&gt;.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Thu, 04 Jul 2024 08:41:00 +0200</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2024/07/04/from-cloud-chaos-to-freebsd-efficiency/</guid><category>freebsd</category><category>zfs</category><category>backup</category><category>data</category><category>filesystems</category><category>snapshots</category><category>recovery</category><category>networking</category><category>security</category><category>server</category><category>hosting</category><category>linux</category><category>ownyourdata</category><category>jail</category><category>virtualization</category><category>alpine</category><category>bhyve</category><category>docker</category></item><item><title>Proxmox vs FreeBSD: Which Virtualization Host Performs Better?</title><link>https://it-notes.dragas.net/2024/06/10/proxmox-vs-freebsd-which-virtualization-host-performs-better/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/server_rack.webp" alt="Proxmox vs FreeBSD: Which Virtualization Host Performs Better?"&gt;&lt;/p&gt;&lt;p&gt;&lt;strong&gt;TL;DR&lt;/strong&gt;: Skip to the &lt;a href="/2024/06/10/proxmox-vs-freebsd-which-virtualization-host-performs-better/#heading-11"&gt;Conclusion&lt;/a&gt; for a summary. &lt;/p&gt;
&lt;h3&gt;Preamble&lt;/h3&gt;
&lt;p&gt;I have always been passionate about virtualization and have consistently used it. &lt;/p&gt;
&lt;p&gt;The first solution I installed on my infrastructures (and those of clients) was &lt;a href="https://it-notes.dragas.net/2023/08/27/that-old-netbsd-server-running-since-2010/"&gt;Xen on NetBSD&lt;/a&gt;, with great success. I then used Xen on Linux and, since 2012, OpenNebula, followed by Proxmox in 2013. &lt;a href="https://it-notes.dragas.net/categories/proxmox/"&gt;Proxmox&lt;/a&gt; has always given me great satisfaction, and even today I consider it a valuable platform that I install gladly. I have also used other hypervisors like &lt;a href="https://xcp-ng.org/"&gt;XCP-ng&lt;/a&gt; but less frequently, and in recent years, I have started to make extensive use of bhyve. &lt;/p&gt;
&lt;p&gt;About two and a half years ago, &lt;a href="https://it-notes.dragas.net/2022/01/24/why-were-migrating-many-of-our-servers-from-linux-to-freebsd/"&gt;we began a progressive process of migrating our servers (and those of our clients) from Linux to FreeBSD&lt;/a&gt;, &lt;a href="https://it-notes.dragas.net/2022/02/05/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-1-system-and-jails-setup/"&gt;using jails&lt;/a&gt; (when possible) or VMs on bhyve. In some cases, &lt;a href="https://it-notes.dragas.net/2023/03/14/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-3/"&gt;migrating setups from Proxmox to FreeBSD&lt;/a&gt; resulted in performance improvements, even with the same hardware. In some instances, 
I migrated VMs without notifying clients, and they contacted me a few days later to inquire if we had new hardware because they noticed better performance. &lt;/p&gt;
&lt;p&gt;After years, I decided to conduct a test to determine if this was just a perception or if there was a technical basis behind it. Of course, &lt;em&gt;this test has no scientific validity&lt;/em&gt;, and the results were obtained on specific hardware and at a specific time, so on different hardware, workload, and situations, the results could be entirely opposite. 
However, I tried to have as scientific and objective an approach as possible since I am comparing two solutions that I care about and use daily. &lt;/p&gt;
&lt;h3&gt;Hardware and Test Conditions&lt;/h3&gt;
&lt;p&gt;I often see comparative tests done on VMs from various providers. In my opinion, this comparison makes no sense because a VM from any provider shares its hardware with many other VMs, so the results will vary depending on the load of the "neighbors" and will never be reliable. &lt;/p&gt;
&lt;p&gt;For this test, I decided to take a physical server with the following characteristics: &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Intel Core i7-6700 &lt;/li&gt;
&lt;li&gt;2x SSD M.2 NVMe 512 GB &lt;/li&gt;
&lt;li&gt;4x RAM 16384 MB DDR4 &lt;/li&gt;
&lt;li&gt;NIC 1 Gbit Intel I219-LM &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The hardware is not recent, but still very widespread. On more recent hardware, the results might differ, but the test will be based on this configuration. &lt;/p&gt;
&lt;p&gt;I installed &lt;a href="https://www.proxmox.com/en/"&gt;Proxmox 8.2.2&lt;/a&gt; starting from the Debian template of the provider and &lt;a href="https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm"&gt;manually installed it following the instructions&lt;/a&gt;. I created a partition for Proxmox and left one partition free on each of the two NVME drives to create (at different times) the ZFS pool (in mirror) and the LVM on top of the Linux software raid. &lt;/p&gt;
&lt;p&gt;After all the tests, I installed &lt;a href="https://www.freebsd.org/"&gt;FreeBSD&lt;/a&gt; 14.1-RELEASE on ZFS on the same host, using &lt;em&gt;bsdinstall&lt;/em&gt; from an &lt;a href="https://mfsbsd.vx.sk/"&gt;mfsbsd image&lt;/a&gt; since the provider does not directly support installing FreeBSD from its panel or rescue mode. &lt;/p&gt;
&lt;p&gt;In both installations, I always trimmed the NVME drives before starting the tests, and in the case of ZFS, I set (both on Proxmox and FreeBSD) compression to zstd and atime to off. 
No other changes were made compared to the standard installation. &lt;/p&gt;
&lt;p&gt;On FreeBSD, the VM was created and managed with &lt;a href="https://github.com/churchers/vm-bhyve"&gt;vm-bhyve (devel)&lt;/a&gt;. &lt;/p&gt;
&lt;p&gt;On Proxmox, I tested the physical host on ZFS and ext4 and the VM on ZFS and LVM as LVM is the standard and most common setup in Proxmox. &lt;/p&gt;
&lt;p&gt;On FreeBSD, I tested the host on ZFS and the VM with both virtio and nvme drivers, on zvol, and as an image file within a ZFS dataset. &lt;/p&gt;
&lt;p&gt;I used &lt;a href="https://github.com/akopytov/sysbench"&gt;sysbench&lt;/a&gt; installed from the official Debian repository (on Proxmox and VM) and from the FreeBSD package on the respective host. &lt;/p&gt;
&lt;p&gt;The VMs, both on Proxmox and FreeBSD, have nearly identical characteristics and default configuration (apart from the nvme drivers set on bhyve, and for that reason, I also tested virtio).&lt;/p&gt;
&lt;p&gt;For those who want to reproduce my tests, here are the detailed configurations of the VMs used in bhyve:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;FreeBSD bhyve VM Configuration with NVMe Driver&lt;/strong&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;loader=&amp;quot;uefi&amp;quot;
cpu=4
memory=4096M
network0_type=&amp;quot;virtio-net&amp;quot;
network0_switch=&amp;quot;public&amp;quot;
disk0_type=&amp;quot;nvme&amp;quot;
disk0_name=&amp;quot;disk0.img&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;FreeBSD bhyve VM Configuration with virtio Driver&lt;/strong&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;loader=&amp;quot;uefi&amp;quot;
cpu=4
memory=4096M
network0_type=&amp;quot;virtio-net&amp;quot;
network0_switch=&amp;quot;public&amp;quot;
disk0_type=&amp;quot;virtio-blk&amp;quot;
disk0_name=&amp;quot;disk0.img&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;&lt;strong&gt;Proxmox VM Configuration&lt;/strong&gt;:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Component&lt;/strong&gt;&lt;/th&gt;
&lt;th&gt;&lt;strong&gt;Details&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Memory&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;4.00 GiB [balloon=0]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Processors&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;4 (1 sockets, 4 cores) [x86-64-v2-AES]&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;BIOS&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Default (SeaBIOS)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Display&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Default&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Machine&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Default (i440fx)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;SCSI Controller&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;VirtIO SCSI single&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;CD/DVD Drive (ide2)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;local:iso/debian-12.5.0-amd64-netinst.iso,media=cdrom,size=629M&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Hard Disk (scsi0)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;zfspool:vm-100-disk-0,cache=writeback,discard=on,iothread=1,size=50G,ssd=1&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Network Device (net0)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;virtio=BC:24:11:22:3D:F0,bridge=vmbr0&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;In all the configurations, I used Debian 12 as the VM operating system, with the file system on ext4.&lt;/p&gt;
&lt;p&gt;I chose Debian 12 as it is a stable, widespread, and modern Linux distribution. I did not test a FreeBSD VM because, in my setups, &lt;a href="https://it-notes.dragas.net/2023/11/27/migrating-from-vm-to-hierarchical-jails-freebsd/"&gt;I tend not to virtualize FreeBSD on FreeBSD but to use nested jails&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;All tests were performed multiple times, and I took the median results. CPU and RAM were tested only on the first VM (on Proxmox (ZFS) and FreeBSD (ZFS and nvme)) as they are not dependent on the underlying storage. Storage performance, on the other hand, was tested on all configurations.&lt;/p&gt;
&lt;h3&gt;CPU and RAM Tests on VMs&lt;/h3&gt;
&lt;p&gt;On both VMs:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-shell"&gt;sysbench --test=cpu --cpu-max-prime=20000 run
sysbench --test=memory run
&lt;/code&gt;&lt;/pre&gt;

&lt;h4&gt;Comparative Results&lt;/h4&gt;
&lt;h5&gt;CPU Test&lt;/h5&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Configuration&lt;/th&gt;
&lt;th&gt;Events per Second&lt;/th&gt;
&lt;th&gt;Total Time (s)&lt;/th&gt;
&lt;th&gt;Latency (avg) (ms)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Proxmox&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;498.08&lt;/td&gt;
&lt;td&gt;10.0010&lt;/td&gt;
&lt;td&gt;2.01&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;FreeBSD&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;473.65&lt;/td&gt;
&lt;td&gt;10.0019&lt;/td&gt;
&lt;td&gt;2.11&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h5&gt;CPU Percentage Analysis&lt;/h5&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Difference in Events per Second&lt;/strong&gt;: (473.65 - 498.08) / 498.08 ≈ -4.91%&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Difference in Total Time&lt;/strong&gt;: (10.0019 - 10.0010) / 10.0010 ≈ +0.009%&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Difference in Latency (avg)&lt;/strong&gt;: (2.11 - 2.01) / 2.01 ≈ +4.98%&lt;/li&gt;
&lt;/ul&gt;
&lt;h5&gt;RAM Test&lt;/h5&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Configuration&lt;/th&gt;
&lt;th&gt;Total Operations&lt;/th&gt;
&lt;th&gt;Operations per Second&lt;/th&gt;
&lt;th&gt;Total MiB Transferred&lt;/th&gt;
&lt;th&gt;MiB/sec&lt;/th&gt;
&lt;th&gt;Latency (avg) (ms)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Proxmox&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;64777227&lt;/td&gt;
&lt;td&gt;6476757.59&lt;/td&gt;
&lt;td&gt;63259.01&lt;/td&gt;
&lt;td&gt;6324.96&lt;/td&gt;
&lt;td&gt;0.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;FreeBSD&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;68621063&lt;/td&gt;
&lt;td&gt;6861139.06&lt;/td&gt;
&lt;td&gt;67012.76&lt;/td&gt;
&lt;td&gt;6700.33&lt;/td&gt;
&lt;td&gt;0.00&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h5&gt;RAM Percentage Analysis&lt;/h5&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Difference in Total Operations&lt;/strong&gt;: (68621063 - 64777227) / 64777227 ≈ +5.94%&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Difference in Operations per Second&lt;/strong&gt;: (6861139.06 - 6476757.59) / 6476757.59 ≈ +5.94%&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Difference in Total MiB Transferred&lt;/strong&gt;: (67012.76 - 63259.01) / 63259.01 ≈ +5.93%&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Difference in MiB/sec&lt;/strong&gt;: (6700.33 - 6324.96) / 6324.96 ≈ +5.93%&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;CPU and RAM Comparative Results Table&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Test&lt;/th&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Proxmox (KVM)&lt;/th&gt;
&lt;th&gt;FreeBSD (bhyve)&lt;/th&gt;
&lt;th&gt;Difference (%)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;CPU&lt;/td&gt;
&lt;td&gt;Events/s&lt;/td&gt;
&lt;td&gt;498.08&lt;/td&gt;
&lt;td&gt;473.65&lt;/td&gt;
&lt;td&gt;-4.91&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Time (s)&lt;/td&gt;
&lt;td&gt;10.0010&lt;/td&gt;
&lt;td&gt;10.0019&lt;/td&gt;
&lt;td&gt;+0.009&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Latency&lt;/td&gt;
&lt;td&gt;2.01&lt;/td&gt;
&lt;td&gt;2.11&lt;/td&gt;
&lt;td&gt;+4.98&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;RAM&lt;/td&gt;
&lt;td&gt;Ops&lt;/td&gt;
&lt;td&gt;64777227&lt;/td&gt;
&lt;td&gt;68621063&lt;/td&gt;
&lt;td&gt;+5.94&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Ops/s&lt;/td&gt;
&lt;td&gt;6476757.59&lt;/td&gt;
&lt;td&gt;6861139.06&lt;/td&gt;
&lt;td&gt;+5.94&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;MiB&lt;/td&gt;
&lt;td&gt;63259.01&lt;/td&gt;
&lt;td&gt;67012.76&lt;/td&gt;
&lt;td&gt;+5.93&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;MiB/s&lt;/td&gt;
&lt;td&gt;6324.96&lt;/td&gt;
&lt;td&gt;6700.33&lt;/td&gt;
&lt;td&gt;+5.93&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;/td&gt;
&lt;td&gt;Latency&lt;/td&gt;
&lt;td&gt;0.00&lt;/td&gt;
&lt;td&gt;0.00&lt;/td&gt;
&lt;td&gt;0.00&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h3&gt;Interpretation of CPU and RAM Results&lt;/h3&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;CPU Performance&lt;/strong&gt;:&lt;/li&gt;
&lt;li&gt;The VM on FreeBSD has slightly lower CPU performance compared to Proxmox (-4.91% in events per second).&lt;/li&gt;
&lt;li&gt;The total execution time is nearly identical, with a negligible difference.&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The average latency is slightly higher on FreeBSD (+4.98%).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;RAM Performance&lt;/strong&gt;:&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;The VM on FreeBSD has better RAM performance compared to Proxmox (+5.94% in operations and MiB/sec).&lt;/li&gt;
&lt;li&gt;The average latency is identical in both configurations.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In summary, Proxmox shows slightly better CPU throughput in this test, while FreeBSD demonstrates better memory performance. The choice between Proxmox and FreeBSD may therefore depend on the specific workload and on whether CPU or memory throughput matters more.&lt;/p&gt;
&lt;h3&gt;I/O Performance Tests&lt;/h3&gt;
&lt;p&gt;The test has been conducted using sysbench, with this command line:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;sysbench --test=fileio --file-total-size=30G prepare
sysbench --test=fileio --file-total-size=30G --file-test-mode=rndrw  --max-time=300 --max-requests=0 run
&lt;/code&gt;&lt;/pre&gt;
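
&lt;p&gt;After the runs, the test files created by the &lt;code&gt;prepare&lt;/code&gt; step can be removed with the matching cleanup step:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;sysbench --test=fileio --file-total-size=30G cleanup
&lt;/code&gt;&lt;/pre&gt;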

&lt;h4&gt;I/O Comparative Performance Data with Percentage Differences&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;VM on Proxmox (ZFS)&lt;/th&gt;
&lt;th&gt;VM on Proxmox (LVM)&lt;/th&gt;
&lt;th&gt;VM on FreeBSD (ZFS, NVMe)&lt;/th&gt;
&lt;th&gt;VM on FreeBSD (ZFS, Virtio)&lt;/th&gt;
&lt;th&gt;VM on FreeBSD (zvol)&lt;/th&gt;
&lt;th&gt;Host FreeBSD (ZFS)&lt;/th&gt;
&lt;th&gt;Host Proxmox (ZFS)&lt;/th&gt;
&lt;th&gt;Host Proxmox (ext4)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;File creation speed (MiB/s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;407.82&lt;/td&gt;
&lt;td&gt;461.52&lt;/td&gt;
&lt;td&gt;1467.83&lt;/td&gt;
&lt;td&gt;1398.81&lt;/td&gt;
&lt;td&gt;1333.64&lt;/td&gt;
&lt;td&gt;1625.67&lt;/td&gt;
&lt;td&gt;968.64&lt;/td&gt;
&lt;td&gt;633.13&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Reads per second&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;650.09&lt;/td&gt;
&lt;td&gt;504.80&lt;/td&gt;
&lt;td&gt;11183.44&lt;/td&gt;
&lt;td&gt;806.93&lt;/td&gt;
&lt;td&gt;11834.53&lt;/td&gt;
&lt;td&gt;1234.62&lt;/td&gt;
&lt;td&gt;920.95&lt;/td&gt;
&lt;td&gt;498.37&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Writes per second&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;433.40&lt;/td&gt;
&lt;td&gt;336.54&lt;/td&gt;
&lt;td&gt;7455.62&lt;/td&gt;
&lt;td&gt;537.95&lt;/td&gt;
&lt;td&gt;7889.69&lt;/td&gt;
&lt;td&gt;823.08&lt;/td&gt;
&lt;td&gt;613.96&lt;/td&gt;
&lt;td&gt;332.25&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;fsyncs per second&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;1387.08&lt;/td&gt;
&lt;td&gt;1076.97&lt;/td&gt;
&lt;td&gt;23858.08&lt;/td&gt;
&lt;td&gt;1721.79&lt;/td&gt;
&lt;td&gt;25247.36&lt;/td&gt;
&lt;td&gt;2634.01&lt;/td&gt;
&lt;td&gt;1964.96&lt;/td&gt;
&lt;td&gt;1063.19&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Read throughput (MiB/s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;10.16&lt;/td&gt;
&lt;td&gt;7.89&lt;/td&gt;
&lt;td&gt;174.74&lt;/td&gt;
&lt;td&gt;12.61&lt;/td&gt;
&lt;td&gt;184.91&lt;/td&gt;
&lt;td&gt;19.29&lt;/td&gt;
&lt;td&gt;14.39&lt;/td&gt;
&lt;td&gt;7.79&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Write throughput (MiB/s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;6.77&lt;/td&gt;
&lt;td&gt;5.26&lt;/td&gt;
&lt;td&gt;116.49&lt;/td&gt;
&lt;td&gt;8.41&lt;/td&gt;
&lt;td&gt;123.28&lt;/td&gt;
&lt;td&gt;12.86&lt;/td&gt;
&lt;td&gt;9.59&lt;/td&gt;
&lt;td&gt;5.19&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total events&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;741163&lt;/td&gt;
&lt;td&gt;575588&lt;/td&gt;
&lt;td&gt;12749157&lt;/td&gt;
&lt;td&gt;919952&lt;/td&gt;
&lt;td&gt;13491459&lt;/td&gt;
&lt;td&gt;1407592&lt;/td&gt;
&lt;td&gt;1049894&lt;/td&gt;
&lt;td&gt;568277&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Average latency (ms)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;0.40&lt;/td&gt;
&lt;td&gt;0.52&lt;/td&gt;
&lt;td&gt;0.02&lt;/td&gt;
&lt;td&gt;0.33&lt;/td&gt;
&lt;td&gt;0.02&lt;/td&gt;
&lt;td&gt;0.21&lt;/td&gt;
&lt;td&gt;0.29&lt;/td&gt;
&lt;td&gt;0.53&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;95th percentile latency (ms)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;2.30&lt;/td&gt;
&lt;td&gt;3.25&lt;/td&gt;
&lt;td&gt;0.06&lt;/td&gt;
&lt;td&gt;1.58&lt;/td&gt;
&lt;td&gt;0.05&lt;/td&gt;
&lt;td&gt;1.32&lt;/td&gt;
&lt;td&gt;1.79&lt;/td&gt;
&lt;td&gt;2.71&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Max latency (ms)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;22.65&lt;/td&gt;
&lt;td&gt;32.30&lt;/td&gt;
&lt;td&gt;35.49&lt;/td&gt;
&lt;td&gt;13.60&lt;/td&gt;
&lt;td&gt;77.53&lt;/td&gt;
&lt;td&gt;9.03&lt;/td&gt;
&lt;td&gt;9.47&lt;/td&gt;
&lt;td&gt;17.39&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total test time (s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;300.0475&lt;/td&gt;
&lt;td&gt;300.1147&lt;/td&gt;
&lt;td&gt;300.0020&lt;/td&gt;
&lt;td&gt;300.0226&lt;/td&gt;
&lt;td&gt;300.0012&lt;/td&gt;
&lt;td&gt;300.0416&lt;/td&gt;
&lt;td&gt;300.0159&lt;/td&gt;
&lt;td&gt;300.1381&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h4&gt;Percentage Differences Compared to VM on Proxmox (ZFS)&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;VM on Proxmox (LVM)&lt;/th&gt;
&lt;th&gt;VM on FreeBSD (ZFS, NVMe)&lt;/th&gt;
&lt;th&gt;VM on FreeBSD (ZFS, Virtio)&lt;/th&gt;
&lt;th&gt;VM on FreeBSD (zvol)&lt;/th&gt;
&lt;th&gt;Host FreeBSD (ZFS)&lt;/th&gt;
&lt;th&gt;Host Proxmox (ZFS)&lt;/th&gt;
&lt;th&gt;Host Proxmox (ext4)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;File creation speed (MiB/s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+13.18%&lt;/td&gt;
&lt;td&gt;+259.77%&lt;/td&gt;
&lt;td&gt;+242.99%&lt;/td&gt;
&lt;td&gt;+227.02%&lt;/td&gt;
&lt;td&gt;+298.62%&lt;/td&gt;
&lt;td&gt;+137.52%&lt;/td&gt;
&lt;td&gt;+55.25%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Reads per second&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-22.34%&lt;/td&gt;
&lt;td&gt;+1619.98%&lt;/td&gt;
&lt;td&gt;+24.13%&lt;/td&gt;
&lt;td&gt;+1720.45%&lt;/td&gt;
&lt;td&gt;+89.92%&lt;/td&gt;
&lt;td&gt;+41.67%&lt;/td&gt;
&lt;td&gt;-23.34%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Writes per second&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-22.35%&lt;/td&gt;
&lt;td&gt;+1620.26%&lt;/td&gt;
&lt;td&gt;+24.12%&lt;/td&gt;
&lt;td&gt;+1720.42%&lt;/td&gt;
&lt;td&gt;+89.91%&lt;/td&gt;
&lt;td&gt;+41.66%&lt;/td&gt;
&lt;td&gt;-23.34%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;fsyncs per second&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-22.36%&lt;/td&gt;
&lt;td&gt;+1620.02%&lt;/td&gt;
&lt;td&gt;+24.13%&lt;/td&gt;
&lt;td&gt;+1720.18%&lt;/td&gt;
&lt;td&gt;+89.90%&lt;/td&gt;
&lt;td&gt;+41.66%&lt;/td&gt;
&lt;td&gt;-23.35%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Read throughput (MiB/s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-22.34%&lt;/td&gt;
&lt;td&gt;+1619.88%&lt;/td&gt;
&lt;td&gt;+24.11%&lt;/td&gt;
&lt;td&gt;+1719.98%&lt;/td&gt;
&lt;td&gt;+89.86%&lt;/td&gt;
&lt;td&gt;+41.63%&lt;/td&gt;
&lt;td&gt;-23.33%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Write throughput (MiB/s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-22.30%&lt;/td&gt;
&lt;td&gt;+1620.68%&lt;/td&gt;
&lt;td&gt;+24.22%&lt;/td&gt;
&lt;td&gt;+1720.97%&lt;/td&gt;
&lt;td&gt;+89.96%&lt;/td&gt;
&lt;td&gt;+41.65%&lt;/td&gt;
&lt;td&gt;-23.33%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total events&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-22.34%&lt;/td&gt;
&lt;td&gt;+1620.16%&lt;/td&gt;
&lt;td&gt;+24.12%&lt;/td&gt;
&lt;td&gt;+1720.31%&lt;/td&gt;
&lt;td&gt;+89.92%&lt;/td&gt;
&lt;td&gt;+41.65%&lt;/td&gt;
&lt;td&gt;-23.31%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Average latency (ms)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+30.00%&lt;/td&gt;
&lt;td&gt;-95.00%&lt;/td&gt;
&lt;td&gt;-17.50%&lt;/td&gt;
&lt;td&gt;-95.00%&lt;/td&gt;
&lt;td&gt;-47.50%&lt;/td&gt;
&lt;td&gt;-27.50%&lt;/td&gt;
&lt;td&gt;+32.50%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;95th percentile latency (ms)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+41.30%&lt;/td&gt;
&lt;td&gt;-97.39%&lt;/td&gt;
&lt;td&gt;-31.30%&lt;/td&gt;
&lt;td&gt;-97.83%&lt;/td&gt;
&lt;td&gt;-42.61%&lt;/td&gt;
&lt;td&gt;-22.17%&lt;/td&gt;
&lt;td&gt;+17.83%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Max latency (ms)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+42.60%&lt;/td&gt;
&lt;td&gt;+56.69%&lt;/td&gt;
&lt;td&gt;-39.96%&lt;/td&gt;
&lt;td&gt;+242.30%&lt;/td&gt;
&lt;td&gt;-60.13%&lt;/td&gt;
&lt;td&gt;-58.19%&lt;/td&gt;
&lt;td&gt;-23.22%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total test time (s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+0.02%&lt;/td&gt;
&lt;td&gt;-0.02%&lt;/td&gt;
&lt;td&gt;-0.01%&lt;/td&gt;
&lt;td&gt;-0.02%&lt;/td&gt;
&lt;td&gt;-0.02%&lt;/td&gt;
&lt;td&gt;-0.01%&lt;/td&gt;
&lt;td&gt;+0.01%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h4&gt;Percentage Differences Compared to VM on Proxmox (LVM), the Standard Proxmox Setup&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;VM on Proxmox (ZFS)&lt;/th&gt;
&lt;th&gt;VM on FreeBSD (ZFS, NVMe)&lt;/th&gt;
&lt;th&gt;VM on FreeBSD (ZFS, Virtio)&lt;/th&gt;
&lt;th&gt;VM on FreeBSD (zvol)&lt;/th&gt;
&lt;th&gt;Host FreeBSD (ZFS)&lt;/th&gt;
&lt;th&gt;Host Proxmox (ZFS)&lt;/th&gt;
&lt;th&gt;Host Proxmox (ext4)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;File creation speed (MiB/s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-11.64%&lt;/td&gt;
&lt;td&gt;+218.04%&lt;/td&gt;
&lt;td&gt;+203.09%&lt;/td&gt;
&lt;td&gt;+188.97%&lt;/td&gt;
&lt;td&gt;+252.24%&lt;/td&gt;
&lt;td&gt;+109.88%&lt;/td&gt;
&lt;td&gt;+37.18%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Reads per second&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+28.78%&lt;/td&gt;
&lt;td&gt;+2115.42%&lt;/td&gt;
&lt;td&gt;+59.85%&lt;/td&gt;
&lt;td&gt;+2244.40%&lt;/td&gt;
&lt;td&gt;+144.58%&lt;/td&gt;
&lt;td&gt;+82.44%&lt;/td&gt;
&lt;td&gt;-1.27%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Writes per second&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+28.78%&lt;/td&gt;
&lt;td&gt;+2115.37%&lt;/td&gt;
&lt;td&gt;+59.85%&lt;/td&gt;
&lt;td&gt;+2244.35%&lt;/td&gt;
&lt;td&gt;+144.57%&lt;/td&gt;
&lt;td&gt;+82.43%&lt;/td&gt;
&lt;td&gt;-1.27%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;fsyncs per second&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+28.79%&lt;/td&gt;
&lt;td&gt;+2115.30%&lt;/td&gt;
&lt;td&gt;+59.87%&lt;/td&gt;
&lt;td&gt;+2244.30%&lt;/td&gt;
&lt;td&gt;+144.58%&lt;/td&gt;
&lt;td&gt;+82.45%&lt;/td&gt;
&lt;td&gt;-1.28%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Read throughput (MiB/s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+28.77%&lt;/td&gt;
&lt;td&gt;+2114.70%&lt;/td&gt;
&lt;td&gt;+59.82%&lt;/td&gt;
&lt;td&gt;+2243.60%&lt;/td&gt;
&lt;td&gt;+144.49%&lt;/td&gt;
&lt;td&gt;+82.38%&lt;/td&gt;
&lt;td&gt;-1.27%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Write throughput (MiB/s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+28.71%&lt;/td&gt;
&lt;td&gt;+2114.64%&lt;/td&gt;
&lt;td&gt;+59.89%&lt;/td&gt;
&lt;td&gt;+2243.73%&lt;/td&gt;
&lt;td&gt;+144.49%&lt;/td&gt;
&lt;td&gt;+82.32%&lt;/td&gt;
&lt;td&gt;-1.33%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total events&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;+28.77%&lt;/td&gt;
&lt;td&gt;+2114.98%&lt;/td&gt;
&lt;td&gt;+59.83%&lt;/td&gt;
&lt;td&gt;+2243.94%&lt;/td&gt;
&lt;td&gt;+144.55%&lt;/td&gt;
&lt;td&gt;+82.40%&lt;/td&gt;
&lt;td&gt;-1.27%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Average latency (ms)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-23.08%&lt;/td&gt;
&lt;td&gt;-96.15%&lt;/td&gt;
&lt;td&gt;-36.54%&lt;/td&gt;
&lt;td&gt;-96.15%&lt;/td&gt;
&lt;td&gt;-59.62%&lt;/td&gt;
&lt;td&gt;-44.23%&lt;/td&gt;
&lt;td&gt;+1.92%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;95th percentile latency (ms)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-29.23%&lt;/td&gt;
&lt;td&gt;-98.15%&lt;/td&gt;
&lt;td&gt;-51.38%&lt;/td&gt;
&lt;td&gt;-98.46%&lt;/td&gt;
&lt;td&gt;-59.38%&lt;/td&gt;
&lt;td&gt;-44.92%&lt;/td&gt;
&lt;td&gt;-16.62%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Max latency (ms)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-29.88%&lt;/td&gt;
&lt;td&gt;+9.88%&lt;/td&gt;
&lt;td&gt;-57.89%&lt;/td&gt;
&lt;td&gt;+140.03%&lt;/td&gt;
&lt;td&gt;-72.04%&lt;/td&gt;
&lt;td&gt;-70.68%&lt;/td&gt;
&lt;td&gt;-46.16%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Total test time (s)&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;-0.02%&lt;/td&gt;
&lt;td&gt;-0.04%&lt;/td&gt;
&lt;td&gt;-0.03%&lt;/td&gt;
&lt;td&gt;-0.04%&lt;/td&gt;
&lt;td&gt;-0.02%&lt;/td&gt;
&lt;td&gt;-0.03%&lt;/td&gt;
&lt;td&gt;+0.01%&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;h4&gt;Analysis of Performance Data&lt;/h4&gt;
&lt;p&gt;The performance data collected from various configurations of Proxmox and FreeBSD provides a comprehensive view of the I/O capabilities and highlights some significant differences. Here is an analysis of the key findings:&lt;/p&gt;
&lt;h5&gt;Comparative Analysis&lt;/h5&gt;
&lt;h6&gt;Hypothesis on NVMe Performance and fsync&lt;/h6&gt;
&lt;p&gt;An important observation from my tests is that VMs with the bhyve NVMe driver show significantly higher performance compared to the same VMs with the virtio driver or compared to the physical host system. This difference initially led me to hypothesize that the bhyve NVMe driver might not correctly respect fsync operations, returning a positive result before the underlying file system has confirmed the final write. However, this was just a theory based on benchmark results and is not supported by concrete data. Furthermore, some developers have reviewed the code and found no evidence to suggest this behavior, and I personally have never encountered any potential issues that would indicate such a problem.&lt;/p&gt;
&lt;p&gt;Specifically, I observed that:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The VM with the virtio driver has performance comparable to Proxmox.&lt;/li&gt;
&lt;li&gt;The VM with the NVMe driver, whether on a ZFS dataset or zvol, shows performance superior to the physical FreeBSD host.&lt;/li&gt;
&lt;/ul&gt;
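&lt;p&gt;For readers who want to check these observations on their own hardware: the metrics in the tables above (reads, writes and fsyncs per second, throughput, latency percentiles, total events over a roughly 300-second run) have the same shape as the output of a sysbench &lt;code&gt;fileio&lt;/code&gt; test. A sketch of such a run follows - the file size and test mode are assumptions, not necessarily the exact parameters used for these tables:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# create the test files, run a 300-second random read/write test, then clean up
sysbench fileio --file-total-size=4G prepare
sysbench fileio --file-total-size=4G --file-test-mode=rndrw --time=300 run
sysbench fileio --file-total-size=4G cleanup
&lt;/code&gt;&lt;/pre&gt;
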
&lt;h5&gt;Host Physical Systems and Filesystems&lt;/h5&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;File Creation Speed&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Host FreeBSD (ZFS)&lt;/strong&gt; shows the highest file creation speed at 1625.67 MiB/s, which is +68.03% compared to Host Proxmox (ZFS) and +156.72% compared to Host Proxmox (ext4).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Host Proxmox (ext4)&lt;/strong&gt; has a file creation speed of 633.13 MiB/s, which is -34.62% compared to Host Proxmox (ZFS).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Read and Write Operations per Second&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Host FreeBSD (ZFS)&lt;/strong&gt; demonstrates the highest read and write operations per second with 1234.62 reads/s and 823.08 writes/s.
&lt;ul&gt;
&lt;li&gt;Reads per second: +34.06% compared to Host Proxmox (ZFS) and +147.80% compared to Host Proxmox (ext4).&lt;/li&gt;
&lt;li&gt;Writes per second: +34.04% compared to Host Proxmox (ZFS) and +147.61% compared to Host Proxmox (ext4).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Host Proxmox (ext4)&lt;/strong&gt; shows a lower performance with 498.37 reads/s and 332.25 writes/s.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Host Proxmox (ZFS)&lt;/strong&gt; has 920.95 reads/s and 613.96 writes/s.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;fsync Operations per Second&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Host FreeBSD (ZFS)&lt;/strong&gt; achieves the highest fsync operations per second at 2634.01 fsyncs/s, which is +34.02% compared to Host Proxmox (ZFS) and +147.73% compared to Host Proxmox (ext4).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Host Proxmox (ext4)&lt;/strong&gt; has a lower performance with 1063.19 fsyncs/s.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Host Proxmox (ZFS)&lt;/strong&gt; achieves 1964.96 fsyncs/s.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Throughput&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Host FreeBSD (ZFS)&lt;/strong&gt; again leads in throughput with 19.29 MiB/s read and 12.86 MiB/s write.
&lt;ul&gt;
&lt;li&gt;Read throughput: +34.03% compared to Host Proxmox (ZFS) and +147.53% compared to Host Proxmox (ext4).&lt;/li&gt;
&lt;li&gt;Write throughput: +34.08% compared to Host Proxmox (ZFS) and +147.79% compared to Host Proxmox (ext4).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Host Proxmox (ext4)&lt;/strong&gt; has the lowest throughput with 7.79 MiB/s read and 5.19 MiB/s write.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Host Proxmox (ZFS)&lt;/strong&gt; has 14.39 MiB/s read and 9.59 MiB/s write.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Latency&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Host FreeBSD (ZFS)&lt;/strong&gt; shows the lowest average latency at 0.21 ms and 95th percentile latency at 1.32 ms.
&lt;ul&gt;
&lt;li&gt;Average latency: -27.59% compared to Host Proxmox (ZFS) and -60.38% compared to Host Proxmox (ext4).&lt;/li&gt;
&lt;li&gt;95th percentile latency: -26.27% compared to Host Proxmox (ZFS) and -51.29% compared to Host Proxmox (ext4).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Host Proxmox (ext4)&lt;/strong&gt; has the highest average latency at 0.53 ms and 95th percentile latency at 2.71 ms.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Host Proxmox (ZFS)&lt;/strong&gt; has an average latency of 0.29 ms and 95th percentile latency of 1.79 ms.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h5&gt;VMs vs Physical Hosts&lt;/h5&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;File Creation Speed&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (ZFS, NVMe)&lt;/strong&gt; demonstrates an outstanding file creation speed at 1467.83 MiB/s (+218.04% compared to VM on Proxmox (LVM) and +259.77% compared to VM on Proxmox (ZFS)).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (zvol)&lt;/strong&gt; achieves 1333.64 MiB/s, which is also significantly higher than VM on Proxmox (LVM) and VM on Proxmox (ZFS).&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Read and Write Operations per Second&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (ZFS, NVMe)&lt;/strong&gt; shows exceptional performance with 11183.44 reads/s and 7455.62 writes/s.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (zvol)&lt;/strong&gt; also performs excellently with 11834.53 reads/s and 7889.69 writes/s.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;fsync Operations per Second&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (ZFS, NVMe)&lt;/strong&gt; achieves 23858.08 fsyncs/s, and &lt;strong&gt;VM on FreeBSD (zvol)&lt;/strong&gt; achieves 25247.36 fsyncs/s, both significantly higher than any other configuration.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Throughput&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (ZFS, NVMe)&lt;/strong&gt; achieves the highest throughput with 174.74 MiB/s read and 116.49 MiB/s write.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (zvol)&lt;/strong&gt; also has high throughput at 184.91 MiB/s read and 123.28 MiB/s write.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Latency&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (ZFS, NVMe)&lt;/strong&gt; shows very low average latency at 0.02 ms and 95th percentile latency at 0.06 ms.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (zvol)&lt;/strong&gt; has similarly low latencies, indicating fast response times for I/O operations.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h5&gt;VM Configurations Comparison&lt;/h5&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;File Creation Speed&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Among VMs, &lt;strong&gt;VM on FreeBSD (ZFS, NVMe)&lt;/strong&gt; leads, followed by &lt;strong&gt;VM on FreeBSD (zvol)&lt;/strong&gt;, and then &lt;strong&gt;VM on FreeBSD (ZFS, Virtio)&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Read and Write Operations per Second&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (ZFS, NVMe)&lt;/strong&gt; and &lt;strong&gt;VM on FreeBSD (zvol)&lt;/strong&gt; both outperform &lt;strong&gt;VM on Proxmox (ZFS)&lt;/strong&gt; and &lt;strong&gt;VM on Proxmox (LVM)&lt;/strong&gt; configurations significantly.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VM on Proxmox (ZFS)&lt;/strong&gt; outperforms &lt;strong&gt;VM on Proxmox (LVM)&lt;/strong&gt; in read and write operations.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;fsync Operations per Second&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (ZFS, NVMe)&lt;/strong&gt; and &lt;strong&gt;VM on FreeBSD (zvol)&lt;/strong&gt; have significantly higher fsync operations compared to &lt;strong&gt;VM on Proxmox (ZFS)&lt;/strong&gt; and &lt;strong&gt;VM on Proxmox (LVM)&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Throughput&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (ZFS, NVMe)&lt;/strong&gt; and &lt;strong&gt;VM on FreeBSD (zvol)&lt;/strong&gt; have the highest throughput, followed by &lt;strong&gt;VM on Proxmox (ZFS)&lt;/strong&gt; and then &lt;strong&gt;VM on Proxmox (LVM)&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Latency&lt;/strong&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;VM on FreeBSD (ZFS, NVMe)&lt;/strong&gt; and &lt;strong&gt;VM on FreeBSD (zvol)&lt;/strong&gt; show the lowest latencies among the VMs, indicating faster response times.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;VM on Proxmox (ZFS)&lt;/strong&gt; shows lower latencies compared to &lt;strong&gt;VM on Proxmox (LVM)&lt;/strong&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Cache Settings and Performance Influence&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Cache settings can significantly influence the performance of virtualization systems. In my setup, I did not modify the cache settings for the NVMe and virtio drivers, keeping the default settings. It is possible that the observed performance differences are also due to how different operating systems manage the caches of NVMe devices. I encourage other system administrators to explore the cache settings of their systems to see if changes in this area can influence benchmark results.&lt;/p&gt;
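&lt;p&gt;As a concrete starting point on the Proxmox side, the cache mode of an existing virtual disk can be changed per device and the benchmark repeated after each change; bhyve block devices accept analogous flags (such as &lt;code&gt;nocache&lt;/code&gt; or &lt;code&gt;direct&lt;/code&gt;). The command below is only a sketch - the VM ID, storage name, and disk name are assumptions:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# switch the cache mode of an existing virtio disk (VM 100 is a placeholder)
qm set 100 --virtio0 local-zfs:vm-100-disk-0,cache=none
# other values worth comparing: cache=writethrough, cache=writeback
&lt;/code&gt;&lt;/pre&gt;
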
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;Regarding RAM and CPU, the performance of the VMs is comparable. There are slight differences in favor of Proxmox for CPU and FreeBSD for RAM, but in my opinion, these differences are so negligible that they wouldn't sway the decision towards one solution or the other.&lt;/p&gt;
&lt;p&gt;The I/O performance data clearly indicates that VM on FreeBSD with NVMe and ZFS outperforms all other configurations by a significant margin. This is evident in the file creation speed, read/write operations per second, fsync operations per second, throughput, and latency metrics. &lt;/p&gt;
&lt;p&gt;When comparing physical hosts, Host FreeBSD (ZFS) demonstrates excellent performance, particularly in comparison to Host Proxmox (ZFS) and Host Proxmox (ext4). &lt;/p&gt;
&lt;p&gt;When comparing VMs, VM on FreeBSD (ZFS, NVMe) and VM on FreeBSD (zvol) configurations stand out as the top performers. &lt;/p&gt;
&lt;p&gt;The VM using virtio on FreeBSD also shows strong performance, albeit not as high as the NVMe configuration. It significantly outperforms Proxmox configurations in terms of file creation speed, read/write operations per second, and throughput, while maintaining competitive latencies.&lt;/p&gt;
&lt;p&gt;The virtio driver provides a stable and reliable option, making it a suitable choice for environments where the NVMe driver cannot be used. This makes FreeBSD with virtio a balanced option for virtualization, offering both high performance and reliability.&lt;/p&gt;
&lt;p&gt;By examining these performance metrics, users can make informed decisions about their virtualization and storage configurations to optimize their systems for specific workloads and performance requirements.&lt;/p&gt;
&lt;p&gt;In light of these tests and experiments, I can therefore confirm my impression (shared by many users) that VMs feel "snappier" on FreeBSD. Certainly, Proxmox is a stable, feature-rich, battle-tested solution with many other strong points, but FreeBSD, especially with the NVMe driver, demonstrates very high performance and very low overhead in installation and operation.&lt;/p&gt;
&lt;p&gt;I will continue to use both solutions with great satisfaction, but I will be even more encouraged to implement virtualization servers based on FreeBSD and bhyve.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Mon, 10 Jun 2024 05:53:45 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2024/06/10/proxmox-vs-freebsd-which-virtualization-host-performs-better/</guid><category>freebsd</category><category>linux</category><category>proxmox</category><category>kvm</category><category>bhyve</category><category>hosting</category><category>filesystems</category><category>virtualization</category><category>zfs</category><category>debian</category><category>server</category></item><item><title>How to Set Up an Alpine Linux VM Hosting XRDP and XFCE for Secure Remote Desktop Access</title><link>https://it-notes.dragas.net/2024/05/14/How-to-Set-Up-an-Alpine-Linux-VM-Hosting-XRDP-and-XFCE-for-Secure-Remote-Desktop-Access/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/alps.webp" alt="How to Set Up an Alpine Linux VM Hosting XRDP and XFCE for Secure Remote Desktop Access"&gt;&lt;/p&gt;&lt;h2&gt;Introduction&lt;/h2&gt;
&lt;p&gt;A client recently asked if their approach to remote desktop access was correct. They leave their office PC on and connect to it via remote desktop. Their main requirement is to access internal resources via a browser (they use Brave, so the BSDs cannot currently be used) and they prefer not to use their home computers for security reasons. I can understand their concern – I wouldn’t be comfortable knowing that a home PC (possibly shared with others) could connect to the company VPN and have unrestricted access.&lt;/p&gt;
&lt;h2&gt;Setting Up Alpine Linux on a VM&lt;/h2&gt;
&lt;p&gt;To address this, I downloaded the &lt;a href="https://alpinelinux.org/downloads/"&gt;Alpine Linux Virt ISO from the official site&lt;/a&gt; and installed it on a VM in their office datacenter. They use Proxmox, which made the process quite straightforward. I allocated 20GB of disk space, 4GB of RAM, and 2 CPU cores to the VM. For added security, the installation process allows you to encrypt the disk. Note that if you choose this option, you’ll need to access the virtualizer console to re-enter the password every time the VM restarts.&lt;/p&gt;
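&lt;p&gt;If you prefer to script the VM creation rather than use the web UI, something along these lines should work. It is only a sketch: the VM ID, storage name, bridge, and ISO file name are assumptions to adapt to your environment:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;# create a VM with 2 cores, 4GB of RAM and a 20GB disk, booting the Alpine Virt ISO
qm create 201 --name alpine-rdp --cores 2 --memory 4096 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci --scsi0 local-zfs:20 \
  --ide2 local:iso/alpine-virt-3.20.0-x86_64.iso,media=cdrom \
  --boot order='scsi0;ide2'
qm start 201
&lt;/code&gt;&lt;/pre&gt;
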
&lt;p&gt;During the Alpine installation, create a non-privileged user who will be using the remote desktop we’re about to set up.&lt;/p&gt;
&lt;h2&gt;Initial Configuration&lt;/h2&gt;
&lt;p&gt;Once the installation is complete, you can log in via the console as root or use SSH with the newly created non-privileged user. In the latter case, you’ll first need to switch to the root user:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;su -
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Enable the community repository by uncommenting it in &lt;code&gt;/etc/apk/repositories&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;http://dl-cdn.alpinelinux.org/alpine/v3.20/main
http://dl-cdn.alpinelinux.org/alpine/v3.20/community
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Installing Required Packages&lt;/h2&gt;
&lt;p&gt;Next, install the main packages needed to manage the remote desktop:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;apk add xrdp xorgxrdp xorg-server xfce4 xfce4-terminal wireguard-tools ifupdown-ng-wireguard&lt;/code&gt; &lt;/p&gt;
&lt;p&gt;Edit the &lt;code&gt;/etc/xrdp/xrdp.ini&lt;/code&gt; file to ensure xrdp listens only on the VPN’s private IP, avoiding exposure to the LAN (or worse, the WAN):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;port=tcp://172.16.16.1:3389
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Enable xrdp:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;rc-update add xrdp
rc-update add xrdp-sesman
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;Configuring Wireguard&lt;/h2&gt;
&lt;p&gt;To set up Wireguard, navigate to &lt;code&gt;/etc/wireguard&lt;/code&gt; and create the keys:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;wg genkey | tee server.privatekey | wg pubkey &amp;gt; server.publickey&lt;/code&gt; &lt;/p&gt;
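&lt;p&gt;The client needs its own key pair as well. Assuming &lt;code&gt;wireguard-tools&lt;/code&gt; (or an equivalent client) is available on the client machine, the keys can be generated the same way:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;wg genkey | tee client.privatekey | wg pubkey &amp;gt; client.publickey&lt;/code&gt; &lt;/p&gt;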
&lt;p&gt;Create a configuration file &lt;code&gt;wg0.conf&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;[Interface]
Address = 172.16.16.1/24
ListenPort = 4242
PrivateKey = &amp;lt;server private key value&amp;gt; # the key from the previously generated privatekey file

[Peer]
PublicKey = &amp;lt;client public key value&amp;gt; # client’s public key
AllowedIPs = 172.16.16.2/32
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;On the client, the configuration should look like this:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;[Interface]
PrivateKey = &amp;lt;client private key value&amp;gt;
Address = 172.16.16.2/24

[Peer]
PublicKey = &amp;lt;server public key value&amp;gt;
AllowedIPs = 172.16.16.0/24
Endpoint = &amp;lt;server public ip&amp;gt;:4242
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then, open the &lt;code&gt;/etc/network/interfaces&lt;/code&gt; file and add:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;auto wg0
iface wg0 inet static
pre-up wg-quick up /etc/wireguard/wg0.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Reboot the VM, and everything should be ready. Just be sure to configure your router/firewall to forward UDP port 4242 to the VM's LAN IP so that Wireguard is reachable from outside. If the VM is directly exposed via a public IP, this won't be necessary, but remember that SSH will be exposed too, so take care.&lt;/p&gt;
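&lt;p&gt;If you prefer to verify everything before (or instead of) rebooting, the tunnel and the services can also be started by hand. This is a sketch that assumes the OpenRC service names enabled above and the &lt;code&gt;wg0&lt;/code&gt; stanza from &lt;code&gt;/etc/network/interfaces&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;ifup wg0                     # bring the tunnel up first: xrdp binds to the VPN address
rc-service xrdp-sesman start
rc-service xrdp start
wg show                      # the peer should appear here once the client connects
&lt;/code&gt;&lt;/pre&gt;
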
&lt;h2&gt;Connecting via Remote Desktop&lt;/h2&gt;
&lt;p&gt;Use your favorite RDP remote desktop client and point it to &lt;code&gt;172.16.16.1&lt;/code&gt;. You should see a login screen.&lt;/p&gt;
&lt;h2&gt;Installing Brave Browser&lt;/h2&gt;
&lt;p&gt;To install Brave Browser on Alpine Linux, the easiest way is to use Flatpak. Open a terminal and, as root, install Flatpak and Brave Browser:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;su -
apk add flatpak
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub com.brave.Browser
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;After logging out and back into the remote desktop, Brave should appear in the list of applications. Launch it, and you can synchronize it with the Brave installation on your work PC. This setup ensures that everything works seamlessly on the virtual remote desktop.&lt;/p&gt;
&lt;h2&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;This approach offers multiple benefits. By exposing the remote desktop via Wireguard, you significantly enhance security without compromising access to internal services. This method ensures that your internal resources remain protected while being easily accessible when needed.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Tue, 14 May 2024 08:05:51 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2024/05/14/How-to-Set-Up-an-Alpine-Linux-VM-Hosting-XRDP-and-XFCE-for-Secure-Remote-Desktop-Access/</guid><category>alpine</category><category>linux</category><category>server</category><category>networking</category><category>hosting</category><category>tutorial</category><category>desktop</category><category>ownyourdata</category></item><item><title>The Double-Edged Sword of Docker: Balancing Benefits and Risks</title><link>https://it-notes.dragas.net/2024/04/22/the-doubled-edge-sword-of-docker/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/containers.webp" alt="The Double-Edged Sword of Docker: Balancing Benefits and Risks"&gt;&lt;/p&gt;&lt;p&gt;As a systems administrator, I am deeply concerned about the consequences of the current widespread adoption of technologies like Docker. Having been a proponent and early adopter of containerization for many years, I recognized its potential early on and have been advocating for its use in many of the Linux-based setups I manage.&lt;/p&gt;
&lt;p&gt;Initially, this relieved me of some headaches. One recurring issue was dealing with developers requesting "exotic" setups—by exotic, I mean specific (sometimes multiple) versions of PHP on the same VPS, or unique combinations of PHP and MySQL (or MariaDB) that required adding external repositories of all sorts—creating future problems when one of these repositories ceases to exist or be updated, leaving us with an unstable, dangerous, or unupgradable system.&lt;/p&gt;
&lt;p&gt;In many cases, I resolved these issues by partitioning components into FreeBSD jails (one jail per service, one for data, with bind mounts as needed—perfect efficiency, excellent upgradability and stability, maximum security). However, this wasn't always feasible. Sometimes, the explicit use of Linux was required, prompting the need for an alternative solution. In the past, I separated components using LXC, similar to FreeBSD jails, but then Docker arrived, and the approach changed.&lt;/p&gt;
&lt;p&gt;At that point, the problem seemed solved: I just needed to provide a VPS with Docker, handle backups, data, monitoring, etc., but leave developers the freedom to include the specific versions of components they needed in their setups.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;But...&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Developers often make poor system administrators. And rightly so, because system administrators are often poor developers. However, this leads to a series of medium-term problems:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Continued use of outdated&lt;/em&gt; (or conversely, bleeding-edge and thus unstable) component versions in Dockerfiles, creating stability issues or, worse, security vulnerabilities.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;A habitual approach to software crashes as if they were normal&lt;/em&gt;. Well-developed software should not crash but autonomously manage issues. When a crash is inevitable, it should indicate a situation so severe that it requires a system administrator's intervention. Instead, the world is full of unstable stacks that crash at the slightest exception, with the mentality, "the container will just restart." This, to me, is unacceptable.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Lack of optimization&lt;/em&gt;: I often hear, "I've maxed out the resources on MySQL, we need to scale up." But upon reviewing, I realize that there has been no tuning of its configuration. After some adjustments, the load often decreases by 90%, making it entirely manageable. Yet, we are in an era dominated by major cloud players whose goal is not to optimize our costs (as they claim) but to make us spend more, seemingly simplifying tasks with tools like Kubernetes (and autoscaling) but actually encouraging us to unnecessarily complicate our infrastructure and spend more. Using more resources while contradicting the ecological awareness that marks our times.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Lack of big-picture thinking&lt;/em&gt;: As system administrators managing the overall system, we always have a holistic view. Developers, rightly focused on their projects, often lack the depth of understanding to identify the bottleneck in the entire setup. A typical comment I hear is, "the site is slow, we need a more powerful server." In 90% of cases, this is unnecessary and would be completely ineffective. A misimplemented feature launching 50 concurrent long PHP processes wouldn't be solved by increasing from 4 to 8 cores. It would help, sure, but it wouldn't be a solution. Solving it by reducing the processes to two would change everything.&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Lack of backup strategy&lt;/em&gt;: The average developer focuses on the reproducibility of their setup, maybe keeping a database dump (not always), but seldom addresses the issue of recovery time. I recently had a discussion with a colleague (who calls himself a DevOps) who told me he had "production database dumps, a .tar.gz of individual web app directories, and notes on how he set up that server." When I asked how many systems he managed, he said "over 100, on the same cluster." Asked about disaster recovery, he believed it was "impossible" for such a cluster to be unreachable for long (though the OVH Strasbourg fire should have taught us that nothing is impossible when data is concentrated in one place). Nevertheless, he thought he could restore operations in "about 2 hours per server"—thus, 100 servers would require 200 hours of work. For 99% of setups, these would be totally unacceptable timelines.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Today, we have hardware so powerful that it can handle unimaginable loads from just a few years ago. A bit of planning, optimization, and design can greatly reduce costs, and increase productivity, stability, and system reliability.&lt;/p&gt;
&lt;p&gt;Thus, I remain in favor of solutions like Docker, but the turn the entire IT industry is taking towards such solutions worries me because it might improve some aspects but will worsen others. We are simply shifting the problem elsewhere.&lt;/p&gt;
&lt;p&gt;In my view, there is no one-size-fits-all solution to any problem; each requires its own study and implementation.&lt;/p&gt;
&lt;p&gt;The solution to all the problems we have known was one: 42. And we all know how that turned out.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Mon, 22 Apr 2024 05:30:00 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2024/04/22/the-doubled-edge-sword-of-docker/</guid><category>docker</category><category>container</category><category>linux</category><category>server</category><category>freebsd</category><category>jail</category><category>lxc</category></item><item><title>Installing Alpine Linux in a FreeBSD Jail</title><link>https://it-notes.dragas.net/2024/01/18/installing-alpine-linux-on-a-freebsd-jail/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/alps.webp" alt="Installing Alpine Linux in a FreeBSD Jail"&gt;&lt;/p&gt;&lt;h2&gt;Installing Alpine Linux in a FreeBSD Jail&lt;/h2&gt;
&lt;p&gt;Alpine Linux is one of my favorite Linux distributions, particularly for specialized purposes. I recently faced the task of moving an Alpine Linux-based VPS into a FreeBSD host and considered two approaches:&lt;/p&gt;
&lt;h2&gt;1. Moving the VPS to bhyve on FreeBSD&lt;/h2&gt;
&lt;p&gt;The simplest and more conventional method involves transferring the VPS to the FreeBSD host and running it on bhyve. This solution is proven and stable. However, bhyve doesn't support &lt;a href="https://en.wikipedia.org/wiki/Memory_ballooning"&gt;memory ballooning&lt;/a&gt;. The assigned and unused memory - used by the Alpine VPS for caching but rendered unnecessary since FreeBSD performs its own caching - led me to explore another method.&lt;/p&gt;
&lt;h2&gt;2. Using Linuxulator&lt;/h2&gt;
&lt;p&gt;With the Linuxulator, I thought about creating a jail and copying Alpine Linux into it, attempting to run all necessary services (not many, but complex enough to initially discourage a direct migration to FreeBSD). This approach wasn't pioneering, as it's been used by others (and myself in other situations), but I had never applied it to Alpine Linux before.&lt;/p&gt;
&lt;h3&gt;The Implementation&lt;/h3&gt;
&lt;p&gt;I won't describe the entire process of what I did - it turned out to be straightforward: I prepared a Linux jail, configured it, performed an rsync of the entire filesystem of the original VPS, fixed a few things, and started it up. However, I will explain how I create and use Alpine Linux jails in FreeBSD hosts without any jail management software (like BastilleBSD, iocage, ezjail, etc.). Many aren't aware that managing the entire jail lifecycle is already integrated into FreeBSD and that these tools are helpful but not strictly necessary.&lt;/p&gt;
&lt;h4&gt;Enabling Linuxulator&lt;/h4&gt;
&lt;p&gt;First, enable and start the Linuxulator on the FreeBSD server:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;service linux enable
service linux start
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This loads all necessary modules for executing Linux-compiled executables.&lt;/p&gt;
&lt;h4&gt;Setting Up the Jail&lt;/h4&gt;
&lt;p&gt;Choose the path for installing the jail. There's no need to use ZFS for this type of setup; UFS is sufficient.&lt;/p&gt;
&lt;p&gt;Create the path, for example, &lt;code&gt;/var/jails&lt;/code&gt;, and the path for the individual jail, such as &lt;code&gt;/var/jails/alpine01&lt;/code&gt;:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;mkdir -p /var/jails/alpine01&lt;/code&gt; &lt;/p&gt;
&lt;p&gt;Download the Alpine Linux base filesystem directly from the &lt;a href="https://alpinelinux.org/downloads/"&gt;Alpine Linux website&lt;/a&gt; and decompress it in the newly created directory.&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;cd /var/jails/alpine01
fetch https://dl-cdn.alpinelinux.org/alpine/v3.20/releases/x86_64/alpine-minirootfs-3.20.2-x86_64.tar.gz
tar zxf alpine-minirootfs-3.20.2-x86_64.tar.gz
rm alpine-minirootfs-3.20.2-x86_64.tar.gz
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In theory, the base system is already functional, but some services might complain about missing configurations. It's advisable to create the file &lt;code&gt;/etc/network/interfaces&lt;/code&gt;:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;echo "auto lo" &amp;gt; /var/jails/alpine01/etc/network/interfaces&lt;/code&gt; &lt;/p&gt;
&lt;p&gt;And if you want to use openrc:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;touch /var/jails/alpine01/run/openrc/softlevel&lt;/code&gt; &lt;/p&gt;
&lt;h4&gt;Starting the Jail&lt;/h4&gt;
&lt;p&gt;The jail is now ready to be launched. There are two approaches: include its configuration in &lt;code&gt;/etc/jail.conf&lt;/code&gt; of the FreeBSD server or create a separate file and launch it manually. The latter allows for interesting possibilities: everything related to the jail lives in the &lt;code&gt;/var/jails&lt;/code&gt; directory, which could be on a different device, even NFS or external. This approach is also applicable for FreeBSD jails and is convenient for portable jails or those used from encrypted/separated devices.&lt;/p&gt;
&lt;p&gt;Following this second approach, create a file named &lt;code&gt;/var/jails/alpine01.conf&lt;/code&gt; containing:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;alpine01 {
    host.hostname = &amp;quot;${name}&amp;quot;;
    #exec.consolelog = &amp;quot;/var/log/jail_console_${name}.log&amp;quot;;
    ip4.addr= 192.168.1.111;
    interface = vtnet0;
    path=&amp;quot;/var/jails/${name}&amp;quot;;
    allow.raw_sockets=1;
    #exec.start='/sbin/openrc';
    exec.start='/bin/true';
    exec.stop='/bin/true';
    persist;
    mount.fstab=&amp;quot;/var/jails/${name}.fstab&amp;quot;;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I intentionally left the console log lines and the start of openrc commented out (not everyone wants them), and openrc is currently not present in the jail.&lt;/p&gt;
&lt;p&gt;Adjust the assigned IP and interface based on your configuration.&lt;/p&gt;
&lt;p&gt;Create the jail's fstab, &lt;code&gt;/var/jails/alpine01.fstab&lt;/code&gt; (the path referenced by &lt;code&gt;mount.fstab&lt;/code&gt; above):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;devfs           /var/jails/alpine01/dev      devfs           rw                      0       0
tmpfs           /var/jails/alpine01/dev/shm  tmpfs           rw,size=1g,mode=1777    0       0
fdescfs         /var/jails/alpine01/dev/fd   fdescfs         rw,linrdlnk             0       0
linprocfs       /var/jails/alpine01/proc     linprocfs       rw                      0       0
linsysfs        /var/jails/alpine01/sys      linsysfs        rw                      0       0
&lt;/code&gt;&lt;/pre&gt;

&lt;h4&gt;Launching and Testing the Jail&lt;/h4&gt;
&lt;p&gt;To launch the jail:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;jail -crm -f alpine01.conf&lt;/code&gt; &lt;/p&gt;
&lt;p&gt;For testing:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;jexec alpine01 login -f root&lt;/code&gt; &lt;/p&gt;
&lt;p&gt;Welcome to your new Alpine Linux jail. Once in the jail, don't forget to add a nameserver to &lt;code&gt;/etc/resolv.conf&lt;/code&gt;, or the jail won't be able to resolve DNS names:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;echo "nameserver 1.1.1.1" &amp;gt; /etc/resolv.conf&lt;/code&gt; &lt;/p&gt;
&lt;p&gt;To simulate a real Alpine Linux boot as closely as possible, I suggest installing openrc:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;apk add openrc&lt;/code&gt; &lt;/p&gt;
&lt;p&gt;Then you can modify the &lt;code&gt;alpine01.conf&lt;/code&gt; file. Exit the jail and adjust as follows:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;#exec.start='/bin/true';
exec.start='/sbin/openrc';
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now, you can restart the jail and see openrc start services (currently, none) inside the jail:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;jail -r alpine01
jail -crm -f alpine01.conf
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The jail is now ready.&lt;/p&gt;
&lt;p&gt;The Linuxulator can't emulate all Linux syscalls, so some things might not work or only work partially. However, it's more than adequate for many uses. The same approach can also be used for other distributions. A convenient method to download the root file systems of various distributions is to look for those already created for LXC, freely downloadable as regular files.&lt;/p&gt;
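&lt;p&gt;As a sketch of that last idea: the LXC project publishes plain rootfs tarballs for many distributions on its image server. The exact path changes with every build, so treat the URL below as a placeholder and browse &lt;a href="https://images.linuxcontainers.org/images/"&gt;images.linuxcontainers.org&lt;/a&gt; for the current one:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;mkdir -p /var/jails/debian01
cd /var/jails/debian01
# &amp;lt;build&amp;gt; is a placeholder: pick the current build directory from the image server
fetch https://images.linuxcontainers.org/images/debian/bookworm/amd64/default/&amp;lt;build&amp;gt;/rootfs.tar.xz
tar xf rootfs.tar.xz
rm rootfs.tar.xz
&lt;/code&gt;&lt;/pre&gt;
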
&lt;pre class="highlight"&gt;&lt;code&gt;FreeBSD jail's embrace,
Alpine's freedom finds its place,
Unix worlds in grace.
&lt;/code&gt;&lt;/pre&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Thu, 18 Jan 2024 14:05:51 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2024/01/18/installing-alpine-linux-on-a-freebsd-jail/</guid><category>freebsd</category><category>server</category><category>alpine</category><category>hosting</category><category>tutorial</category><category>jail</category><category>container</category><category>linux</category></item><item><title>Migrating from an Old Linux Server to a New FreeBSD Machine</title><link>https://it-notes.dragas.net/2023/10/25/migrating-from-an-old-linux-server-to-a-new-freebsd-machine/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/content/images/2023/10/3500d14b-cb7e-4de6-af9e-0ca135985b41.webp" alt="Migrating from an Old Linux Server to a New FreeBSD Machine"&gt;&lt;/p&gt;&lt;p&gt;&lt;em&gt;Preamble:&lt;/em&gt; I believe it's time to bid farewell to this venerable Linux server.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;TL;DR:&lt;/strong&gt;  &lt;em&gt;The article chronicles the journey of transitioning from an outdated Linux server, running for 1690 days without updates, to a modern FreeBSD machine. This migration involved using tools like mfsBSD, BastilleBSD, Borg Backup, and bhyve. Despite initial hesitations due to the Linux server's impeccable performance, the transition was smooth, resulting in improved manageability and efficiency. The piece emphasizes the importance of regular system updates and anticipates revisiting the topic in the future with new uptime achievements and updates.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This server loyally served for years as a secondary backup server while also providing a few minor services to users. As it often happens, it remained in operation, neglected and without updates for years. Stable operating systems have the "flaw" of being forgotten, giving the false impression that they don't need maintenance or updates. This machine continued its service without oversight for years. When approached for a service request (not due to malfunctions), I advised the client to upgrade the whole system. A mere update would not suffice, so I suggested starting afresh on new hardware with FreeBSD as the primary OS.&lt;/p&gt;
&lt;p&gt;The client was understandably hesitant given the uptime stats:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;08:58:43 up 1690 days, 21:32, 4 users, load average: 9.57, 10.15, 8.76&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;Not a single error, not a single hiccup. From his perspective, a similar setup to what was installed many years ago and still working flawlessly was preferred. Nevertheless, he trusted my expertise and let me proceed.&lt;/p&gt;
&lt;p&gt;This server had a plethora of duties, among which:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;One of the pivotal tasks was running &lt;a href="https://www.proxmox.com/en/proxmox-backup-server/overview"&gt;Proxmox Backup Server&lt;/a&gt; via Docker. Proxmox Backup Server requires Debian, but this server was running on Ubuntu 16.04 (previously upgraded from Ubuntu 14.04 – yes, ancient!). Hence, Proxmox Backup Server was still at version 1.x.&lt;/li&gt;
&lt;li&gt;Another critical function was storing backups made through &lt;a href="https://www.borgbackup.org/"&gt;BorgBackup&lt;/a&gt; on its file system. The /home directory used a mirrored btrfs file system, and each backed-up server had its own user on this system. Clients could back up (using a push method) only via VPN and only during specific time windows, which the server permitted by adding specific firewall rules via Jenkins (Jenkins also managed connection protocols, snapshots, backups, etc.).&lt;/li&gt;
&lt;li&gt;Among the lesser tasks, the server ran a few Docker containers with HandBrake on various presets. The client processed video conversions by uploading the original files via sftp, and after some hours, fetched the converted files from the destination directory. This will not be replicated on the new FreeBSD server since they now handle this operation locally on their high-performance MacBook Pro with Apple Silicon. However, a future restoration isn't off the table.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The first thing I did was install FreeBSD on the new hardware. Given that it's a physical server on Hetzner (an auction pick due to disk space needs over power), and FreeBSD wasn't an option, I used &lt;a href="https://mfsbsd.vx.sk/"&gt;mfsBSD&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;After booting the physical server in Linux rescue, I copied mfsBSD onto the disks using &lt;code&gt;dd&lt;/code&gt; and restarted. On boot, I SSHed into mfsBSD and executed the installation using &lt;code&gt;bsdinstall&lt;/code&gt;—a robust and efficient method.&lt;/p&gt;
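&lt;p&gt;For reference, the rescue-system step boils down to fetching the mfsBSD image and writing it to the first disk. This is only a sketch: the image path and the target device are placeholders that depend on the mfsBSD release and on how the rescue system names the disks:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;# inside the Linux rescue system; image path and /dev/sda are placeholders
wget https://mfsbsd.vx.sk/files/images/&amp;lt;release&amp;gt;/amd64/&amp;lt;image&amp;gt;.img -O mfsbsd.img
dd if=mfsbsd.img of=/dev/sda bs=1M
reboot
&lt;/code&gt;&lt;/pre&gt;
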
&lt;p&gt;I set up a raidz1 with all four 6 TB disks, resulting in a final storage space of 21.8T, ample for now without the video files.&lt;/p&gt;
&lt;p&gt;To ensure continuity, I kept a setup similar to the old one. The clients would essentially continue with their usual backup procedure without necessitating drastic changes to backup scripts. To avoid storing these backups directly in the physical machine's /home and to leave the door open for future services, I installed &lt;a href="https://bastillebsd.org/"&gt;BastilleBSD&lt;/a&gt; and began setting up several jails. I replaced the old Linux machine's behavior with a VNET FreeBSD jail. In past scenarios, I've created Linux jails (thanks to BastilleBSD) and transferred the old server into the jail using rsync, making minor configuration tweaks. While this usually works, it doesn't address the underlying issue of an obsolete setup. Given the opportunity, I opted for a modern toolset.&lt;/p&gt;
&lt;p&gt;Thus, I copied every home directory (along with their historic backups) in its entirety, installed BorgBackup, and re-established the VPN. With a VNET jail, I can craft networking devices and fine-tune configurations. After recreating user accounts, inputting the various SSH &lt;code&gt;authorized_keys&lt;/code&gt;, and checking all clients, I set up a snapshot plan on the host. This ensures that if a client is compromised with the potential (however remote) for breach and backup deletion, a ZFS snapshot of the entire jail remains available.&lt;/p&gt;
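&lt;p&gt;The snapshot plan itself can be as simple as a dated, recursive snapshot taken from cron on the host. A minimal sketch, assuming the jail lives on a dataset called &lt;code&gt;zroot/bastille/jails/backups&lt;/code&gt; (the name is an assumption):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;# /etc/crontab on the host: nightly recursive snapshot of the backup jail's dataset
0 3 * * * root zfs snapshot -r zroot/bastille/jails/backups@$(date +\%Y-\%m-\%d)
&lt;/code&gt;&lt;/pre&gt;
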
&lt;p&gt;As mentioned, one of the core tools on the old server was Proxmox Backup Server. It's not natively installable on FreeBSD, necessitating a VM. Enter the fantastic &lt;code&gt;bhyve&lt;/code&gt;, supported by &lt;code&gt;vm-bhyve&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;However, one issue arose: backups would consume vast amounts of space, and I wanted to avoid housing an enormous disk image (or a zvol) with both VMs and backups. So, I opted for a slightly less performant but more flexible solution: installing Debian 12 and Proxmox Backup Server on the VM while placing backups on a separate ZFS dataset on the physical machine, exported via NFS and mounted on the VM.&lt;/p&gt;
&lt;p&gt;Given that the physical server has an internal bridge "vm-public" with IP &lt;code&gt;192.168.124.1&lt;/code&gt; and the VM is at &lt;code&gt;192.168.124.2&lt;/code&gt;, I just created a dataset named &lt;code&gt;zroot/PBS&lt;/code&gt; and added the following line to &lt;code&gt;/etc/exports&lt;/code&gt;:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;/zroot/PBS -alldirs -maproot=root -network 192.168.124.2/32&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;To enable NFS, insert into &lt;code&gt;/etc/rc.conf&lt;/code&gt;:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;rpcbind_enable=&amp;quot;YES&amp;quot; 
nfs_server_enable=&amp;quot;YES&amp;quot; 
mountd_flags=&amp;quot;-r&amp;quot; 
rpc_lockd_enable=&amp;quot;YES&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
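
&lt;p&gt;To make the export active without rebooting the host, the NFS-related services can be started by hand; after any later change to &lt;code&gt;/etc/exports&lt;/code&gt;, a reload of mountd is enough:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;service rpcbind start
service mountd start
service nfsd start
service mountd reload   # re-read /etc/exports after editing it
&lt;/code&gt;&lt;/pre&gt;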

&lt;p&gt;Within the VM, create &lt;code&gt;/PBS&lt;/code&gt; and include in &lt;code&gt;/etc/fstab&lt;/code&gt;:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;192.168.124.1:/zroot/PBS /PBS nfs rw,async,soft,intr,noexec 0 0&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;After Proxmox Backup Server's installation, simply set up the datastores in &lt;code&gt;/PBS/&lt;/code&gt;, and they'll directly store on the physical machine's ZFS dataset.&lt;/p&gt;
&lt;p&gt;For firewall configurations, I exposed port 8007, redirecting it towards the VM, and everything started functioning smoothly. I then set a Proxmox Backup Server replica from the old to the new server. After completion, I changed the Proxmox Backup Server IP on all Proxmox hosts to point to the new server. Smooth sailing.&lt;/p&gt;
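&lt;p&gt;The exact rule depends on the firewall in use; with pf, the redirection is a one-liner in &lt;code&gt;/etc/pf.conf&lt;/code&gt;. A sketch, assuming the external interface is &lt;code&gt;igb0&lt;/code&gt; (an assumption - adjust it to your NIC):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;# forward the Proxmox Backup Server port to the VM on the internal bridge
rdr pass on igb0 proto tcp from any to (igb0) port 8007 -&amp;gt; 192.168.124.2 port 8007
&lt;/code&gt;&lt;/pre&gt;
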
&lt;p&gt;The old Ubuntu server also managed other minor services, which have become obsolete and weren't replicated.&lt;/p&gt;
&lt;p&gt;The transition was seamless, the client is pleased, and I'm content since each service is now neatly segregated into its jail or VM. The machine's load is minimal, which might pave the way for other tasks, via VPN. Everything now rests on ZFS, and the icing on the cake: I made the client promise not to reach another 1690 days of uptime but to timely update as required.&lt;/p&gt;
&lt;p&gt;I'm not entirely convinced the promise will hold—meaning, in a few years, I might yet be discussing this "new" server, highlighting another impressive uptime and another upgrade journey.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Wed, 25 Oct 2023 16:39:37 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2023/10/25/migrating-from-an-old-linux-server-to-a-new-freebsd-machine/</guid><category>freebsd</category><category>bhyve</category><category>borg</category><category>btrfs</category><category>container</category><category>data</category><category>docker</category><category>filesystems</category><category>jail</category><category>server</category><category>snapshots</category><category>virtualization</category><category>vpn</category><category>proxmox</category><category>backup</category><category>linux</category></item><item><title>Boosting Network Performance in FreeBSD's VNET Jails</title><link>https://it-notes.dragas.net/2023/08/14/boosting-network-performance-in-freebsds-vnet-jails/</link><description>&lt;p&gt;&lt;img src="https://images.unsplash.com/photo-1604869515882-4d10fa4b0492?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDIwfHxuZXR3b3JrfGVufDB8fHx8MTY5MjAzNTI4N3ww&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Boosting Network Performance in FreeBSD&amp;#x27;s VNET Jails"&gt;&lt;/p&gt;&lt;p&gt;In the world of FreeBSD, jails are a renowned feature that allows for system-level virtualization. As I was setting up the jails for BSDCafe, I stumbled upon an interesting discovery: the network performance of VNET jails was noticeably lower compared to that of VPS or standard jails. Rather than diving into this immediately, I decided to take a mental note and proceed.&lt;/p&gt;
&lt;p&gt;As I delved deeper with various tests, a pattern began to emerge. Anytime there was a NAT (Network Address Translation) acting between the internal bridge of the VNET jails - irrespective of whether it was local or bridged via a VPN - the outgoing performance took a nosedive.&lt;/p&gt;
&lt;p&gt;From using &lt;code&gt;tcpdump&lt;/code&gt; to carrying out MTU (Maximum Transmission Unit) tests, my endeavors seemed fruitless. However, a memory from the past struck me. I recalled setting up a FreeBSD VM on Proxmox (effectively pointing towards an issue with KVM) where I had to make specific tweaks.&lt;/p&gt;
&lt;p&gt;To remedy the situation, I made the following modifications:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Added the following to &lt;code&gt;/boot/loader.conf&lt;/code&gt;:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;hw.vtnet.X.csum_disable=1
hw.vtnet.lro_disable=1
&lt;/code&gt;&lt;/pre&gt;

&lt;ol&gt;
&lt;li&gt;Integrated these lines into &lt;code&gt;/etc/sysctl.conf&lt;/code&gt;:&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;net.link.bridge.pfil_member=0
net.link.bridge.pfil_bridge=0
net.link.bridge.pfil_onlyip=0
&lt;/code&gt;&lt;/pre&gt;

&lt;ol&gt;
&lt;li&gt;And appended to &lt;code&gt;/etc/rc.local&lt;/code&gt; (which I already use for initialization):&lt;/li&gt;
&lt;/ol&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;ifconfig vtnet0 -rxcsum
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The end result was exhilarating: not only did the VNET jails now perform at full bandwidth, but even those interconnected via VPN showcased commendable performance.&lt;/p&gt;
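&lt;p&gt;If you want to measure the effect on your own setup, a quick before/after comparison is enough. A sketch with iperf3, assuming it is installed both inside the VNET jail and on another machine on the network:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;# on the remote machine
iperf3 -s
# inside the VNET jail: measure outgoing throughput towards the remote machine
iperf3 -c &amp;lt;remote host&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
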
&lt;p&gt;Interestingly, this seems to be linked to a long-standing bug from 2012, &lt;a href="https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=165059"&gt;FreeBSD Bug 165059&lt;/a&gt;. This issue is even highlighted in the official PFSense documentation.&lt;/p&gt;
&lt;p&gt;In the vast landscape of tech, sometimes revisiting the past provides solutions for the present. All's well that ends well, and I'm pleased to share this resolution with my readers. For those dabbling in FreeBSD, I hope this piece offers some guidance in optimizing your VNET jail setups.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Mon, 14 Aug 2023 15:50:14 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2023/08/14/boosting-network-performance-in-freebsds-vnet-jails/</guid><category>freebsd</category><category>jail</category><category>kvm</category><category>linux</category><category>networking</category><category>proxmox</category><category>hosting</category><category>vpn</category><category>virtualization</category><category>server</category><category>container</category></item><item><title>Creating a Mikrotik CHR - RouterOS 7 - bhyve VM in FreeBSD</title><link>https://it-notes.dragas.net/2023/03/21/creating-a-mikrotik-chr-routeros-7-bhyve-vm-in-freebsd-2/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/content/images/2023/03/mikrotik-1.jpeg" alt="Creating a Mikrotik CHR - RouterOS 7 - bhyve VM in FreeBSD"&gt;&lt;/p&gt;&lt;p&gt;While I love &lt;a href="https://it-notes.dragas.net/2022/01/24/why-were-migrating-many-of-our-servers-from-linux-to-freebsd/"&gt;FreeBSD&lt;/a&gt;-based router solutions (OPNsense, PFsense, etc.), I also appreciate and use &lt;a href="https://mikrotik.com"&gt;MikroTik&lt;/a&gt; devices and software. They produce and sell reasonably-priced, efficient hardware, and have implemented some interesting proprietary solutions (EoIP, etc.). Additionally, they're a European company. From the smallest Wi-Fi router to the largest enterprise routing platform, they employ the same software with (almost) the same features.&lt;/p&gt;
&lt;p&gt;That's why I'm also implementing virtualized MikroTik CHR solutions - they're lightweight, efficient, and can handle a significant amount of traffic with minimal resource overhead.&lt;/p&gt;
&lt;p&gt;Our FreeBSD hypervisors run &lt;a href="https://it-notes.dragas.net/2023/03/14/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-3/"&gt;vm-bhyve&lt;/a&gt; as a lightweight, efficient, and intelligent management tool for VMs.&lt;/p&gt;
&lt;p&gt;Even though MikroTik states that bhyve is not supported (as they believe it's merely a paravirtualization software), CHR based on RouterOS 6 works flawlessly, &lt;a href="https://github.com/churchers/vm-bhyve/wiki/Supported-Guest-Examples"&gt;following the hints provided by the vm-bhyve documentation&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Unfortunately, things change when dealing with RouterOS 7. It doesn't boot at all, and if you upgrade from RoS 6 to RoS 7, it ceases to function entirely.&lt;/p&gt;
&lt;p&gt;It seems that the MikroTik CHR image based on RouterOS 7 has an unusual partition table, somewhere between MBR and GPT. For this reason, Proxmox (KVM) doesn't experience any issues, while bhyve seems unable to properly boot the VM. I conducted several tests, but the best solution was suggested by user &lt;a href="https://forum.mikrotik.com/viewtopic.php?t=184254"&gt;kriszos on the MikroTik forum&lt;/a&gt; (even if kriszos was dealing with Hyper-V).&lt;/p&gt;
&lt;p&gt;The script should be used on a machine with the following tools installed: gdisk, wget, unzip, qemu-img, qemu-nbd, and rsync:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;#!/bin/bash
wget --no-check-certificate https://download.mikrotik.com/routeros/7.8/chr-7.8.img.zip -O /tmp/chr.img.zip
unzip -p /tmp/chr.img.zip &amp;gt; /tmp/chr.img
rm -rf  chr.qcow2
qemu-img convert -f raw -O qcow2 /tmp/chr.img chr.qcow2
rm -rf /tmp/chr.im*
modprobe nbd
qemu-nbd -c /dev/nbd0 chr.qcow2
rm -rf /tmp/tmp*
mkdir /tmp/tmpmount/
mkdir /tmp/tmpefipart/
mount /dev/nbd0p1 /tmp/tmpmount/
rsync -a /tmp/tmpmount/ /tmp/tmpefipart/
umount /dev/nbd0p1
mkfs -t fat /dev/nbd0p1
mount /dev/nbd0p1 /tmp/tmpmount/
rsync -a /tmp/tmpefipart/ /tmp/tmpmount/
umount /dev/nbd0p1
rm -rf /tmp/tmp*
(
echo 2 # use GPT
echo t # change partition code
echo 1 # select first partition
echo 8300 # change code to Linux filesystem 8300
echo r # Recovery/transformation
echo h # Hybrid MBR
echo 1 2 # partitions added to the hybrid MBR
echo n # Place EFI GPT (0xEE) partition first in MBR (good for GRUB)? (Y/N)
echo   # Enter an MBR hex code (default 83)
echo y # Set the bootable flag? (Y/N)
echo   # Enter an MBR hex code (default 83)
echo n # Set the bootable flag? (Y/N)
echo n # Unused partition space(s) found. Use one to protect more partitions? (Y/N)
echo w # write changes to disk
echo y # confirm
) | gdisk /dev/nbd0
qemu-nbd -d /dev/nbd0
echo &amp;quot;script finished, created file chr.qcow2&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I used the script on a machine with Proxmox installed, and it worked correctly. Once the relevant image was obtained, it was converted to raw format to be fed to bhyve:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-bash"&gt;qemu-img convert -f qcow2 -O raw chr.qcow2 chr.img
&lt;/code&gt;&lt;/pre&gt;
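
&lt;p&gt;To feed the image to vm-bhyve, it just needs to sit in the VM's directory under the name referenced by the configuration shown right below. A minimal sketch - the VM name and the vm_dir path are examples, adapt them to your setup:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# create the VM definition, then paste the configuration below via vm configure
vm create chr
vm configure chr
# copy the converted raw image into the VM directory, matching disk0_name
cp chr.img /vm/chr/chr.img
vm start chr
&lt;/code&gt;&lt;/pre&gt;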

&lt;p&gt;Now that we have the correct image available, here's an example of a VM configuration managed by vm-bhyve:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;loader=&amp;quot;uefi&amp;quot;
graphics=&amp;quot;no&amp;quot;
cpu=&amp;quot;2&amp;quot;
memory=&amp;quot;128M&amp;quot;
network0_type=&amp;quot;virtio-net&amp;quot;
network0_switch=&amp;quot;public&amp;quot;
disk0_type=&amp;quot;virtio-blk&amp;quot;
disk0_name=&amp;quot;chr.img&amp;quot;
uuid=&amp;quot;cafecafe-cafe-cafe-cafe-cafecafecafe&amp;quot;
network0_mac=&amp;quot;ca:fe:ca:fe:ca:fe&amp;quot;
&lt;/code&gt;&lt;/pre&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Tue, 21 Mar 2023 12:27:34 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2023/03/21/creating-a-mikrotik-chr-routeros-7-bhyve-vm-in-freebsd-2/</guid><category>freebsd</category><category>proxmox</category><category>virtualization</category><category>hosting</category><category>linux</category><category>server</category><category>tutorial</category><category>kvm</category><category>mikrotik</category><category>bhyve</category></item><item><title>How we are migrating (many of) our servers from Linux to FreeBSD - Part 3 - Proxmox to FreeBSD</title><link>https://it-notes.dragas.net/2023/03/14/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-3/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/server_rack.webp" alt="How we are migrating (many of) our servers from Linux to FreeBSD - Part 3 - Proxmox to FreeBSD"&gt;&lt;/p&gt;&lt;p&gt;In recent years, &lt;a href="https://it-notes.dragas.net/2022/01/24/why-were-migrating-many-of-our-servers-from-linux-to-freebsd/"&gt;we've been migrating many of our servers from Linux to FreeBSD&lt;/a&gt; as part of our consolidation and optimization efforts. Specifically, we've been &lt;a href="https://it-notes.dragas.net/2022/02/05/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-1-system-and-jails-setup/"&gt;moving services that were previously deployed using Docker onto FreeBSD&lt;/a&gt;, and it has proven to be a great choice for handling workloads efficiently.&lt;/p&gt;
&lt;p&gt;To this end, we've also been migrating many of our virtual machines (VMs) to FreeBSD, deploying services within FreeBSD jails. In some cases, these jails have even replaced entire VMs and run bare metal. Although we prefer to move to native FreeBSD whenever possible, sometimes it's not the best option for all the services we offer. As a result, one of our most critical physical servers has been left behind for years.&lt;/p&gt;
&lt;div class="hc-toc"&gt;&lt;/div&gt;

&lt;p&gt;This server was a Proxmox server that we installed many years ago and updated to version 6.4. It hosted some critical services, but upgrading to Proxmox 7.x posed some challenges. In particular, &lt;a href="https://forum.proxmox.com/threads/unified-cgroup-v2-layout-upgrade-warning-pve-6-4-to-7-0/"&gt;some of the LXC containers required tweaks&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Unfortunately, this server was quite old, with only four physical disks and 64 GB of RAM. It was located in an OVH data center and had been running well until one of the disks started to malfunction once a week, on Sundays. This would trigger a RAID reconstruction that kept the system busy for about two days.&lt;/p&gt;
&lt;p&gt;Despite my preference for simple setups, this server had been deployed gradually over many years, and everything was tied together. As a result, unraveling the system to resolve the issues was not a simple task. &lt;em&gt;Sometimes the combination of simple things can make everything complex&lt;/em&gt;.&lt;/p&gt;
&lt;h3&gt;The Proxmox Server&lt;/h3&gt;
&lt;p&gt;The &lt;a href="https://www.proxmox.com/en/"&gt;Proxmox&lt;/a&gt; server was configured as the central hub for various services, including primary DNS, web hosting, VOIP, and more. It featured several bridges, each with its own specific purpose, and was connected to a virtual machine running &lt;a href="https://mikrotik.com"&gt;MikroTik CHR&lt;/a&gt;. This machine was responsible for consolidating all incoming VPNs from the MikroTik devices we managed, both ours and those belonging to our clients. Additionally, it provided a series of bridges to manage these devices and all server management VPNs and other services. The Proxmox server also housed several virtual machines running Linux, FreeBSD, OpenBSD, and NetBSD, as well as LXC containers.&lt;/p&gt;
&lt;p&gt;Over the last two years, we've been migrating most of these virtual machines and containers to FreeBSD-based VMs, which feature their own specific jails. Consequently, most of the VMs we've had to move were BSD-based, while only five Linux VMs remained. The LXC containers hosted a range of services, including servers managed by &lt;a href="https://www.virtualmin.com"&gt;Virtualmin&lt;/a&gt;, a large installation of &lt;a href="https://www.zimbra.com"&gt;Zimbra&lt;/a&gt; (which was hosted within an LXC container running CentOS 7), as well as some minor Alpine Linux-based machines. We located all these virtual machines and containers in a LAN created and managed by CHR. All public IPs were managed by CHR, which relied on NAT mappings to establish communication between them. CHR had thus become the heart of our system, and if it experienced any issues, it could potentially take down the entire system. Fortunately, it remained stable for years.&lt;/p&gt;
&lt;h3&gt;Migration - first steps&lt;/h3&gt;
&lt;p&gt;The first step I took was to install FreeBSD on the new server. Easy peasy. The next step was to find a way for the CHR to migrate to the new server (under &lt;a href="https://bhyve.org"&gt;bhyve&lt;/a&gt;) and continue to manage all the public IPs of the original server. The problem is that OVH, with its failover IPs, &lt;a href="https://it-notes.dragas.net/2022/01/14/freebsd-assign-ovh-failover-ips-to-freebsd-jails/"&gt;ties a specific MAC address to each individual IP address&lt;/a&gt;. Therefore, the only way was to create a bridge on the FreeBSD server (on the Proxmox server, I already had the bridge on the physical network card) and create an L2 tunnel between the two servers - I used OpenVPN with tap interfaces, specifically inserted into the bridges. I could have used other methods and techniques, but I wanted to experiment with a setup that would allow me, if necessary, to bridge a larger number of physical and virtual servers even if the IPs are all mapped to a single server. In fact, OVH does not allow splitting a class of IPs, so the entire class must be moved, not a single IP address.&lt;/p&gt;
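&lt;p&gt;On the FreeBSD side, the bridge and tap plumbing is plain ifconfig work; the OpenVPN part is a standard tap-based L2 tunnel. A rough sketch, where interface names are examples:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# create the bridge and the tap interface, then add both the physical NIC and the tap to the bridge
ifconfig bridge0 create
ifconfig tap0 create
ifconfig bridge0 addm igb0 addm tap0 up
# OpenVPN then runs with a tap device on both servers, so the two bridges share the same L2 segment
&lt;/code&gt;&lt;/pre&gt;
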
&lt;p&gt;Initially, MikroTik CHR 7 did not boot on bhyve. In the end, &lt;a href="https://it-notes.dragas.net/2023/03/21/creating-a-mikrotik-chr-routeros-7-bhyve-vm-in-freebsd-2/"&gt;I managed to make it work&lt;/a&gt;, but I had other problems, probably related to the MTU of the interfaces. So I thought about taking the opportunity to unbind the LXC containers and VMs from CHR and remove MikroTik from the setup. With RouterOS version 7, in fact, Wireguard-based VPNs are also supported, so within a few days, it was possible to update the few routers still on 6.x and recreate some VPNs using Wireguard. I mapped both the VMs and LXC containers directly to their respective public IPs, greatly simplifying the steps. Everything worked perfectly.&lt;/p&gt;
&lt;p&gt;The next step was to test the first migrations, starting from the VMs already on FreeBSD. For simplicity, I created a new FreeBSD VM in bhyve and copied (via zfs-send and zfs-receive) the datasets related to &lt;a href="https://bastillebsd.org"&gt;BastilleBSD&lt;/a&gt;. All services are installed in jails managed by Bastille, so this was enough to have, in a short time, a new operating server equivalent to the previous one. At that point, I shut down the original server, connected the VM to the bridge linked to the tunnel (after modifying its MAC address), turned on the new FreeBSD VM (on bhyve), and everything started to work correctly - but from the new physical server.&lt;/p&gt;
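&lt;p&gt;The copy itself is plain ZFS replication; something along these lines (pool and dataset names are examples) is enough to move a Bastille tree to the new VM:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# on the source server: snapshot the whole Bastille tree recursively
zfs snapshot -r zroot/bastille@migration
# stream it to the new FreeBSD VM, preserving the dataset hierarchy and properties
zfs send -R zroot/bastille@migration | ssh newvm zfs receive -F zroot/bastille
&lt;/code&gt;&lt;/pre&gt;
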
&lt;p&gt;One by one, I moved all the FreeBSD VMs. For Linux, NetBSD, and OpenBSD, I simply copied the images and pointed bhyve to them. A few small vm-bhyve configuration tweaks and everything started to work correctly. &lt;a href="https://it-notes.dragas.net/2024/06/10/proxmox-vs-freebsd-which-virtualization-host-performs-better/"&gt;Where possible&lt;/a&gt;, I replaced “virtio” with “nvme” as &lt;a href="https://klarasystems.com/articles/virtualization-showdown-freebsd-bhyve-linux-kvm/"&gt;it performs much better on bhyve&lt;/a&gt;.&lt;/p&gt;
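&lt;p&gt;In vm-bhyve terms this is a single line in the VM configuration file, for example:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;# before
disk0_type=&amp;quot;virtio-blk&amp;quot;
# after: emulate an NVMe device instead
disk0_type=&amp;quot;nvme&amp;quot;
&lt;/code&gt;&lt;/pre&gt;
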
&lt;h3&gt;Migration - LXC containers to Virtual Machines&lt;/h3&gt;
&lt;p&gt;For LXC containers, I initially thought of creating an Alpine Linux virtual machine, installing LXD, and copying each individual container. It worked for some of them, but for others, I started to encounter strange issues, similar to those that would have required manual intervention to upgrade from Proxmox 6.x to 7.x. As is often the case with Linux-based solutions, compatibility is not always preserved between updates, so I would have had to fine-tune all the containers, which I didn't feel like doing. The containers had been created (at the time) to optimize RAM usage on the Proxmox machine, but to date, they have caused more problems than benefits. In some cases, certain processes got "stuck," making it impossible to "reboot" the LXC container, requiring the entire physical node to be rebooted. If they had been virtual machines, I could have given a "kill" command from the virtualizer (to the respective KVM process, in that case) and restarted it.&lt;/p&gt;
&lt;p&gt;For greater compatibility and ease of future management, I decided to convert the LXC containers into actual VMs on bhyve. The process was simple:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Creating an empty VM with vm-bhyve and booting the VM with SystemRescueCD.&lt;/li&gt;
&lt;li&gt;Creating destination partitions and file systems in the VM, then doing a complete rsync of the original LXC container (a sketch of this step follows after this list).&lt;/li&gt;
&lt;li&gt;Adjusting the fstab file, installing the kernel on the destination VM, and creating the initrd (some containers were already copies of VMs, so the kernel remained installed and updated, even though it wasn't being used. The initrd, on the other hand, did not include the &lt;em&gt;nvme&lt;/em&gt; or &lt;em&gt;virtio&lt;/em&gt; drivers, so I had to regenerate it anyway.)&lt;/li&gt;
&lt;li&gt;Adjusting the bhyve vm configuration file, doing one last rsync after shutting down the services, shutting down the original LXC container, and starting the bhyve VM.&lt;/li&gt;
&lt;/ul&gt;
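&lt;p&gt;A rough sketch of the copy and initrd steps above, run from the SystemRescueCD environment booted inside the new VM (device names, mount points, and the container path are examples):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# partition the new virtual disk (fdisk/parted), then format and mount the root partition
mkfs.ext4 /dev/vda1
mount /dev/vda1 /mnt
# pull the LXC container's root filesystem from the old host
rsync -aHAX --numeric-ids root@old-host:/var/lib/lxc/mycontainer/rootfs/ /mnt/
# adjust /mnt/etc/fstab, then chroot and rebuild the initrd so the virtio/nvme drivers are included
mount --bind /dev /mnt/dev; mount --bind /proc /mnt/proc; mount --bind /sys /mnt/sys
chroot /mnt update-initramfs -u -k all   # or dracut / mkinitfs, depending on the distribution
&lt;/code&gt;&lt;/pre&gt;
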
&lt;p&gt;Everything worked correctly, so one by one, I moved all the containers. The largest one ended up on another physical node (also FreeBSD with bhyve) temporarily because the space on the new server was not sufficient to contain it. It didn't need to be on this server, so no problem.&lt;/p&gt;
&lt;p&gt;One by one, the LXC containers started on the new server. Apart from some minor adjustments to the destination VMs (different network interface names, etc.), I didn't encounter any particular problems even after several days. Everything works perfectly.&lt;/p&gt;
&lt;p&gt;At the very end, I re-created the MikroTik CHR VM. I’ll keep this setup separate for now, as it is strictly tied to EoIP interfaces. This was the main reason why I hadn’t performed the migration before: things were too tied together, and I had to untangle everything, step by step.&lt;/p&gt;
&lt;h3&gt;…and then one of the Linux VMs started to freeze&lt;/h3&gt;
&lt;p&gt;Several Linux VMs are just the basis on which Docker runs. One of them (not even among the busiest) started to freeze completely every 12-15 hours. It stopped responding to ping, and it was impossible to give any type of command from the console. In a word: &lt;em&gt;stuck&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Searching the web, I found some references to this problem and, observing the errors of an ssh session that was left connected (stuck, but still showing the last error), I found it to be a problem &lt;a href="https://forums.freebsd.org/threads/bhyve-debian-with-docker-unstable.87956/"&gt;similar to the one described in this post&lt;/a&gt;, namely:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;&amp;quot;watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [khugepaged:67]&amp;quot;
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;I tried various solutions such as changing the storage driver, the number of cores, the distribution (from Alpine to Debian), etc., but none of these operations solved the issue. I also noticed that the problem affects all Linux VMs, but only those with a recent kernel (&amp;gt; 5.10.x) actually freeze, while the others continue to work. The problem does not occur, however, with the *BSDs.&lt;/p&gt;
&lt;p&gt;In the end, I:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Reduced the number of cores to 1 for the VMs that did not have a high load (some kept multiple cores), hypothesising a problem with scheduling on already busy host cores&lt;/li&gt;
&lt;li&gt;Gave the command "&lt;em&gt;/usr/bin/echo 60 &amp;gt; /proc/sys/kernel/watchdog_thresh&lt;/em&gt;" inside the VM (see the sketch after this list).&lt;/li&gt;
&lt;/ul&gt;
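&lt;p&gt;A sketch of the second change and of how to make it survive a reboot of the guest (the sysctl.d path is the common convention, but it varies by distribution):&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# one-off change, from a root shell inside the guest
echo 60 &amp;gt; /proc/sys/kernel/watchdog_thresh
# persistent equivalent
echo &amp;quot;kernel.watchdog_thresh = 60&amp;quot; &amp;gt; /etc/sysctl.d/99-watchdog.conf
sysctl --system
&lt;/code&gt;&lt;/pre&gt;
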
&lt;p&gt;The VM became stable, and I have not seen that error/warning on any other machine since. I will investigate further, but I believe it is a problem related to the Linux kernel, which, for some reason, generates a kernel panic under particular CPU-concurrency conditions.&lt;/p&gt;
&lt;h3&gt;The End…and a nice OOM!&lt;/h3&gt;
&lt;p&gt;After moving everything, I was finally able to migrate the entire class of OVH IPs from one physical server to another. The operation was quite quick, but in order to avoid problems, I notified all users and performed the operation on a Sunday and during off-peak hours. The whole process took about 10 minutes and there were no hitches of any kind.&lt;/p&gt;
&lt;p&gt;For safety reasons, I kept the Proxmox machine active for a few more days, but there was no need to use it. However, after a couple of days, I encountered a problem: the largest VM, in some cases, was being "killed" because FreeBSD generated an OOM. I had never seen, from FreeBSD 13.0 onwards, any OOM related to "abuse" of RAM usage by ZFS, but in this case, it actually happened.&lt;/p&gt;
&lt;p&gt;In the end, I understood that ZFS, on FreeBSD, is able to release memory, but not quickly enough to manage any "spikes" in individual VMs. In fact, the VMs are unaware of the physical host's RAM situation, so they will tend to occupy all the space allotted to them (even if only for caching). A sudden spike (e.g. if you create and launch a new VM) could cause a sudden increase in RAM usage by the bhyve process, and FreeBSD could be forced to kill it, even if part of the RAM is only ARC cache. While Proxmox supports HA (i.e., control over whether the VM is running), vm-bhyve only launches the VM (bhyve process). I should manage it with tools like &lt;em&gt;&lt;a href="https://mmonit.com/monit/"&gt;monit&lt;/a&gt;&lt;/em&gt;, but for now, I preferred to simply set limits on ZFS RAM usage using "vfs.zfs.arc_max", and there have been no more problems.&lt;/p&gt;
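&lt;p&gt;Capping the ARC is a single tunable; a sketch follows - the value is only an example and should be sized against the RAM actually committed to the VMs:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# /boot/loader.conf - applied at the next boot
vfs.zfs.arc_max=&amp;quot;16G&amp;quot;
# on a recent FreeBSD the limit can usually also be adjusted at runtime (value in bytes, 16 GiB here)
sysctl vfs.zfs.arc_max=17179869184
&lt;/code&gt;&lt;/pre&gt;
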
&lt;h3&gt;Final considerations&lt;/h3&gt;
&lt;p&gt;The operation was long but linear. The most complex part was unraveling all the configurations related to MikroTik CHR and the VPNs linked to each individual LXC machine/container. Once everything was implemented on a dedicated VM, the operation was fairly straightforward.&lt;/p&gt;
&lt;p&gt;The hardware specifications of the destination physical server are slightly better than the starting one, but the final performance of the setup has greatly improved. The VMs are very responsive (even those that were previously LXC containers running directly on bare metal) and, thanks to ZFS, I can make local snapshots every 5 minutes. In addition, every 10 minutes, I can copy (using the excellent zfs-autobackup) all the VMs and jails to other nodes &lt;a href="https://it-notes.dragas.net/2022/05/30/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-2/"&gt;both as a backup and as an immediate restart in case of disaster&lt;/a&gt;. I just need to map the IPs, and everything will start working very quickly. Proxmox also allows you to perform this type of operation with ZFS, but you still need to have Proxmox (in a compatible version) on the target machine. With the current setup, I only need any FreeBSD node that supports bhyve.&lt;/p&gt;
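&lt;p&gt;As a reference, a replication job of this kind boils down to tagging the datasets and running zfs-autobackup from cron; a sketch, where the backup name, target node, and datasets are examples:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# tag the datasets to replicate (done once on the source host)
zfs set autobackup:offsite1=true zroot/vm zroot/bastille
# /etc/crontab entry: push the tagged datasets to another FreeBSD node every 10 minutes
*/10 * * * * root zfs-autobackup --ssh-target backupnode offsite1 backup/thishost
&lt;/code&gt;&lt;/pre&gt;
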
&lt;p&gt;Proxmox is an excellent tool, well-developed, open-source, efficient, and stable. We manage many installations, including complex ones (&lt;a href="https://it-notes.dragas.net/2020/06/29/create-automatic-snapshots-on-cephfs/"&gt;ceph clusters&lt;/a&gt;, etc.), and it has never let us down. However, not all tools are ideal for all situations, and for setups like the one described, the new configuration based on FreeBSD has shown significantly interesting performance and greater management and maintenance granularity.&lt;/p&gt;
&lt;p&gt;Virtualizing on vm-bhyve is not complex, but it is certainly not comparable, at the current state, to the simplicity of using a clean and complete interface like Proxmox's. A complete HA system is still missing (sure, it's achievable manually, but...), as well as complete management web interface. However, for knowledgeable users, it is undoubtedly a powerful tool that allows you to have excellent FreeBSD as a base. I'm totally satisfied with my migration and the result is far better than I expected.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Tue, 14 Mar 2023 13:00:00 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2023/03/14/how-we-are-migrating-many-of-our-servers-from-linux-to-freebsd-part-3/</guid><category>freebsd</category><category>alpine</category><category>data</category><category>bhyve</category><category>filesystems</category><category>docker</category><category>ha</category><category>hardware</category><category>hosting</category><category>linux</category><category>lxc</category><category>networking</category><category>ovh</category><category>proxmox</category><category>recovery</category><category>restore</category><category>server</category><category>snapshots</category><category>virtualization</category><category>web</category><category>zfs</category><category>backup</category><category>jail</category><category>container</category><category>mikrotik</category><category>ownyourdata</category><category>series</category></item><item><title>Deploying a piece of the Fediverse</title><link>https://it-notes.dragas.net/2023/01/15/deploying-a-piece-of-the-fediverse/</link><description>&lt;p&gt;&lt;img src="https://images.unsplash.com/photo-1456428746267-a1756408f782?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDEwNHx8c2VydmVyJTIwbmV0d29ya3xlbnwwfHx8fDE2NzM3NzQ3MDI&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="Deploying a piece of the Fediverse"&gt;&lt;/p&gt;&lt;p&gt;After &lt;a href="https://en.wikipedia.org/wiki/Acquisition_of_Twitter_by_Elon_Musk"&gt;Elon Musk’s Twitter deal&lt;/a&gt;, many users &lt;a href="https://www.theverge.com/2022/12/20/23518325/mastodon-monthly-active-users-twitter-elon-musk"&gt;decided to “fly away” from the "traditional" commercial Social Networks&lt;/a&gt;. Some for good, some just decided to increase their presence in other, alternative Social Network. That's what I'm doing.&lt;/p&gt;
&lt;p&gt;Many of those users decided to join the &lt;a href="https://fediverse.info"&gt;Fediverse&lt;/a&gt; - even if many of them just call it &lt;a href="https://joinmastodon.org"&gt;Mastodon&lt;/a&gt;, as they don’t understand that Mastodon is just a piece of software that allows you to join the Fediverse.&lt;/p&gt;
&lt;p&gt;The Fediverse is composed of thousands of “instances”: some are bigger (like &lt;a href="https://mastodon.social/explore"&gt;mastodon.social&lt;/a&gt;), some are personal (aka single-user instances), many are normal communities with their members. Many of them communicate using the same open protocol, &lt;a href="https://www.w3.org/TR/activitypub/"&gt;ActivityPub&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Because of this, there’s no “one size fits all” so I’ve started to explore the different solutions. I’ve been mainly focusing on running them on &lt;a href="https://it-notes.dragas.net/2022/01/24/why-were-migrating-many-of-our-servers-from-linux-to-freebsd/"&gt;FreeBSD&lt;/a&gt;, but I’ve had to fire up Linux for some tests. Here’s what I’ve found out.&lt;/p&gt;
&lt;div class="hc-toc"&gt;&lt;/div&gt;

&lt;h2&gt;Backends / Complete Solutions&lt;/h2&gt;
&lt;h3&gt;&lt;a href="https://joinmastodon.org"&gt;Mastodon&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;I won’t spend too much time on Mastodon as you may find almost everything about it, everywhere. Tons of articles have been written about Mastodon, so this one would be just another one, surely not the best one. Many consider it to be “the Fediverse” (they just say “Mastodon” to refer to the whole “Fediverse”, &lt;a href="https://blog.castopod.org/the-fediverse-is-so-much-bigger-than-mastodon/"&gt;and they’re wrong&lt;/a&gt;), and it’s by far the most installed solution. It’s so popular that there are plenty of clients (both for Android and iOS) that work perfectly with it. Mastodon has its own APIs - and many other Fediverse solutions are using them, just to be compatible with the Mastodon apps. It also supports backend-based “Hide replies” (only show new posts, not all the replies to other posts) in the timeline. It’s good as you can filter the replies from any frontend or mobile app: the backend won’t provide them at all, while the client doesn't need to be aware that you're filtering them.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Why I am suggesting it&lt;/strong&gt;: It’s stable and well done, there’s a lot of documentation and if you install and manage it correctly, you shouldn’t notice anything strange or unexpected. Remember, no software solution can be considered “set and forget” and Mastodon is not an exception. Please, don’t forget that running an instance is not just installing the software. Moderation is a serious issue. More about it later.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Please, consider that&lt;/strong&gt;: Mastodon is also quite heavy, not easy to scale (&lt;a href="https://hazelweakly.me/blog/scaling-mastodon/"&gt;even if there’s documentation around&lt;/a&gt;) and, by default, caches everything it sees and knows about. This means that both a single-user instance and one with thousands of users will (rapidly) grow, because any media will be locally cached.&lt;/p&gt;
&lt;p&gt;My first, single-user installation grew, in a week, to well over 100 GB of occupied storage because of all this caching. It can’t be avoided (you can just tell Mastodon to delete the cache after &lt;em&gt;x&lt;/em&gt; days). While it may make sense for a big instance, it can be considered overkill for a single-user one. This is a "known problem", but mainly considered a feature: all the media will be locally processed, all the contents will be locally stored. So malicious content hidden in a media file will be reprocessed by the local ffmpeg or ImageMagick and cleaned up.&lt;/p&gt;
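&lt;p&gt;For reference, the cleanup mentioned above is handled by tootctl; a periodic job along these lines (the retention period is just an example) keeps the cache from growing unbounded:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# run from the Mastodon installation directory, as the mastodon user
RAILS_ENV=production bin/tootctl media remove --days=7
&lt;/code&gt;&lt;/pre&gt;
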
&lt;p&gt;&lt;strong&gt;Why I’m suggesting another solution&lt;/strong&gt;: Monopoly is bad and Mastodon is becoming, for many, a synonym for the Fediverse. More, it requires a lot of space and it’s resource-hungry. It’s easy to install Mastodon and experience a huge resource drain in just a few days, especially if you’re not a skilled system administrator. While Mastodon is the best solution for a complete microblogging experience, other solutions exist.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;To sum up&lt;/strong&gt;: Installing Mastodon on FreeBSD was easy. Even if I had read about problems, &lt;a href="https://it-notes.dragas.net/2022/11/23/installing-mastodon-on-a-freebsd-jail/"&gt;I’ve documented how to do it and it’s stable and reliable&lt;/a&gt;. Mastodon is, IMHO, a good piece of software but keep in mind that it might not be the best solution for you, and a small, single-user instance can become huge in a few weeks. Also, keep in mind that it's the most deployed Fediverse software, so any mobile app, any web app, any hint will work perfectly with Mastodon. Your Fediverse experience will be smooth.&lt;/p&gt;
&lt;h3&gt;&lt;a href="https://akkoma.social"&gt;Akkoma&lt;/a&gt; (&lt;a href="https://pleroma.social"&gt;Pleroma&lt;/a&gt; fork)&lt;/h3&gt;
&lt;p&gt;I’ve read about Akkoma in a reddit thread about how difficult it was to install Mastodon on FreeBSD. It was described as a Pleroma fork, but actively developed and maintained, faster, and with more advanced features. That’s why I decided to try it and - at least for now - stick with it (and not Pleroma, but many of the things I’ll point out here apply to Pleroma, too).&lt;/p&gt;
&lt;p&gt;Akkoma (as all the Pleroma forks) is much, much lighter than Mastodon. It’s perfectly able to run a single user instance on a Raspberry PI. Moreover, &lt;a href="https://www.linkedin.com/in/christine-lemmer-webber-aa8b93210?challengeId=AQEV7hPAP5kmZAAAAYW1APUoLUR1KExbqsA00X_acs1iXnaskmdkm-me-JY7qjRW2oqQlm6bvuKE7PaY88WTXasMsRZZx1lUTA&amp;amp;submissionId=d99bcdc8-2575-3a17-da9c-44bf4c03cb36&amp;amp;challengeSource=AgFktTMs4QFFzgAAAYW1ARkWNfsdRcZLCwaiJ0-pl_6jhKinHY94qCISxxZTZnM&amp;amp;challegeType=AgH1fssT9S9nagAAAYW1ARkZPK6hfooac1aWFgkA8hAdAMf0TNCthsU&amp;amp;memberId=AgHEvyepJMJAUwAAAYW1ARkbjUhwUbj0a-vmoG1ulu1a9kc&amp;amp;recognizeDevice=AgHmFlNf0hJLQgAAAYW1ARkedPChO-KJr-SC5ZM9P_7ylZB7LNDa"&gt;Christine Lemmer-Webber&lt;/a&gt;, coauthor of the &lt;a href="https://www.w3.org/TR/activitypub/"&gt;ActivityPub protocol&lt;/a&gt;, said that &lt;a href="https://octodon.social/@cwebber/109546851049168850"&gt;Akkoma is a good solution&lt;/a&gt; - and I think that her opinion is a qualified one.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Why I am suggesting it&lt;/strong&gt;: First of all, for its documentation. As for Pleroma, there are installation instructions for a lot of operating systems (yes, also &lt;a href="https://docs.akkoma.dev/stable/installation/freebsd_en/#installing-frontends"&gt;FreeBSD&lt;/a&gt;, &lt;a href="https://docs.akkoma.dev/stable/installation/netbsd_en/"&gt;NetBSD&lt;/a&gt;, &lt;a href="https://docs.akkoma.dev/stable/installation/openbsd_en/"&gt;OpenBSD&lt;/a&gt;). It’s light and fast. It doesn’t cache remote media by default, so that’s perfect for a single user instance. You can enable it (both pre-fetching media as soon as the server gets the status, like Mastodon, or just downloading them and caching when the first user meets them), you can also proxy your local media. S3 storage and remote CDNs are supported and everything is customisable. Message size limit is set to 5000 characters by default, but can be adjusted. Everything is configurable and you can choose your favourite frontend. It has quote posts (while Mastodon doesn’t allow them, even if they’re perfectly visible if created by Akkoma).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Please, consider that&lt;/strong&gt;: Akkoma is not Mastodon. They “talk” using the same language but are different pieces of software. There’s less activity around it (the documentation is good, the support forum is good, but the number of Akkoma installations can’t be compared to Mastodon’s). Many Mastodon mobile apps seem to have problems with Akkoma and at the moment &lt;a href="https://meta.akkoma.dev/t/hashtags-from-akkoma-are-links-on-mastodon/"&gt;there’s a bug (probably it's a Mastodon bug, but users will think it's Akkoma's fault) that, if you’re posting a hashtag, a link will be shown on Mastodon&lt;/a&gt;. More, if you want to hide boosts or replies from your timeline, remember that Akkoma won’t do that at the backend level; it has to be done by the frontend. Not all frontends and mobile apps support it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Why I’m suggesting another solution:&lt;/strong&gt; Well, not exactly. Actually, I’m suggesting to try Akkoma. It’s a good piece of software, developed by friendly people, well accepted by the Fediverse instances’ administrators and has a lot of happy users and instance administrators. I've used Akkoma as my main instance software for more or less one month. The bug I've described and some problems here and there made me move back to Mastodon. I'm keeping my Akkoma instance up, even if not actively used.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;To sum up&lt;/strong&gt;: Akkoma installation is easy and well documented, you have a lot of settings to customise your instance and can fine-tune your installation for your hardware capabilities.&lt;/p&gt;
&lt;h3&gt;&lt;a href="https://join.misskey.page"&gt;Misskey&lt;/a&gt; (and its forks like &lt;a href="https://joinfirefish.org/"&gt;Firefish&lt;/a&gt;, etc.)&lt;/h3&gt;
&lt;p&gt;Misskey is a very nice piece of software. I had some trouble running it on FreeBSD (but I didn’t try that hard), so I decided to fire up a Linux machine and use Docker.&lt;/p&gt;
&lt;p&gt;The interface is nice - it’s a Japanese design, so it’s fancy and rich in emojis and effects. I’ve tried it just for a few hours and I appreciated it, but decided it wasn’t right for my needs (at least, for now).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Why I am suggesting it:&lt;/strong&gt; if you’re building a community, Misskey (or one of its forks) is a very good choice. It’s eye candy, complete and usable. It’s a part of the Fediverse, so no problems to talk to Mastodon, Akkoma, etc. It also has a “drive” feature, useful for many users that want to exchange files.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Why I’m suggesting another solution:&lt;/strong&gt; the main reason why I had to look at another solution is that it doesn’t support the Mastodon API, so no Mastodon Android or iOS app works with a Misskey instance. While it may be ok for many users, this could be a problem for others.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;To sum up&lt;/strong&gt;: Misskey is nice, worth trying and a very good solution if it fits your community’s needs. For me, it doesn’t give any advantage over other lighter solutions.&lt;/p&gt;
&lt;h3&gt;&lt;a href="https://friendi.ca"&gt;Friendica&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Friendica is a project that aims to be similar to Facebook. It’s federated and actively maintained, the interface is nice and familiar to Facebook users. I’ve been able to install it in a FreeBSD jail (as it’s written in PHP) and everything worked as expected. I didn’t spend too much time on Friendica but I’m planning to do a deeper test as I’m working on a community of former Facebook users and this could be the right choice.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Why I am suggesting it:&lt;/strong&gt; If you’re creating a community for (former) Facebook users, they’ll have a familiar feeling in Friendica. The interface is clean and usable, it supports a lot of protocols (ActivityPub, OStatus, diaspora), it supports plugins, and it can import websites via RSS (so automatic posting is easy).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Please, consider that&lt;/strong&gt;: Moderation tools are different from the ones you have on Mastodon or Akkoma (Pleroma, etc.) and you can’t easily report users, especially remote ones, as support for the Mastodon API is limited. With the current growth in users, it’s easy to find a bad actor trying to cause trouble. Not having effective ways to deal with it may be frustrating.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Why I’m suggesting another solution:&lt;/strong&gt; I didn’t try Friendica long enough to find some big problems with it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;To sum up:&lt;/strong&gt; Generally speaking, it is considered a solid and stable solution, actively maintained and, being written in PHP, portable. If you want to create a Facebook-like community, that's the way to go.&lt;/p&gt;
&lt;h3&gt;&lt;a href="https://codeberg.org/grunfink/snac2"&gt;snac2&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;snac2 is a simple, minimalistic ActivityPub instance that supports the Mastodon API. This makes it compatible with platforms like Pleroma, Akkoma, and Mastodon itself. It's written in portable C and it's been created to be light, easy to deploy, and with only two dependencies: &lt;em&gt;openssl&lt;/em&gt; and &lt;em&gt;curl&lt;/em&gt;. It heavily relies on hard links and &lt;em&gt;doesn't need any database&lt;/em&gt;. This is a big, big plus for me. I've performed many tests and found that this is one of the best lightweight solutions to join the Fediverse.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Why I am suggesting it:&lt;/strong&gt; snac2 is clean and polished. The federation with the other solutions is good, it works beautifully with &lt;a href="https://tusky.app/"&gt;Tusky&lt;/a&gt; (on Android) and &lt;a href="https://tooot.app/"&gt;tooot&lt;/a&gt; (both on Android and iOS) - also consider &lt;a href="https://enafore.social/"&gt;Enafore&lt;/a&gt; as a PWA or web interface - and its integrated web interface is minimal but effective. No javascript, no cookies - clean web. More, the dev is responsive and open to patches and contributions. I've helped with some stress tests and contributed with instructions and patches to make it work on FreeBSD and NetBSD, and they've been merged immediately. &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Please, consider that&lt;/strong&gt;: It's not Mastodon. Some of the Mastodon API features are (currently) not supported so the experience could be different from the other solutions. Some Mastodon apps don't work (the official Mastodon app, for example, can't log in). There's no open registration option (users must be registered manually from the CLI) and account migration is not supported at the moment. More, while it doesn't cache external media, locally published media will stay on the local drives (no S3 upload option), so be prepared to serve those files as well. Testing it from my FTTC connection, I almost DDoSed my own internet connection for 15 minutes by publishing a photo.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Why I’m suggesting another solution:&lt;/strong&gt; Actually, I'm suggesting to try snac2. I think it could be a great solution for a single user instance or for instances managed by tech people, as you can run it just 1 minute after the download. It's light, easy, straightforward and the dev is a nice person. I'd suggest other solutions if the priority is to offer a full, feature-rich Fediverse experience, with registration, big and featureful configuration panels, etc.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;To sum up:&lt;/strong&gt; snac2 is a lightweight and effective way to join the Fediverse. Currently, it's my favourite solution for small communities, as the "file only" approach and lack of dependencies are coherent with my ideas. I've migrated my instance from FreeBSD to NetBSD, and from external datacenters to my home network, and it's just been a matter of a single rsync &lt;strong&gt;&lt;em&gt;(-H)&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;
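&lt;p&gt;Since the whole instance is just files (and hard links), moving it really is a single copy; a sketch, where paths and hosts are examples:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# stop snac2 on the old host, then copy its data directory preserving hard links (-H)
rsync -aH --delete olduser@oldhost:/var/snac/ /var/snac/
# point the reverse proxy at the new host and start snac2 again
&lt;/code&gt;&lt;/pre&gt;
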
&lt;h3&gt;&lt;a href="https://gotosocial.org"&gt;GoToSocial&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;GoToSocial is a new microblogging platform. Its goal is to be a light, customisable, easy-to-manage, and integrated piece of software. It is still at its early alpha stage; it should evolve into a beta at some point in ~~2023~~ 2024. It’s developed in Go and installation is easy and fast; it supports Postgres, Mysql and Sqlite. Basic functionalities have already been integrated and it can federate with (almost) all the other ActivityPub implementations, even if with some small problems. FreeBSD installation was easy and fast, as they provide an amd64 FreeBSD binary. Being written in Go, it shouldn't be difficult to self-compile it for other architectures.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Why I am suggesting it:&lt;/strong&gt; While it doesn’t have an integrated frontend (Mastodon mobile apps or other frontends can be used, such as &lt;a href="https://enafore.social"&gt;Enafore&lt;/a&gt;, &lt;a href="https://github.com/BDX-town/Mangane"&gt;Mangane&lt;/a&gt;, etc.), it already supports many of the features you’d expect from an ActivityPub implementation. It's fast, suitable for installation on low-end hardware, and can be deployed without a reverse proxy as it has integrated support for Let's Encrypt certificates. But…&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Please, consider that&lt;/strong&gt;: It’s still at alpha stage. Things can still break and it doesn’t support any kind of account migration from Mastodon (while it’s supported by Pleroma and its forks).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Why I’m suggesting another solution:&lt;/strong&gt; While I think it could be a game changer, I think it’s a bit early to deploy it unless you’re a very skilled and experienced administrator. You should understand its “quirks and features”, mainly tied to its alpha status. While I’ve installed, and will keep installed, a GoToSocial instance, at the moment I’ll just keep it on a test server in order to follow its development status.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;To sum up:&lt;/strong&gt; GoToSocial could potentially become one of the most interesting ActivityPub microblogging platforms, at least for small and medium sized communities. It's a bit early to consider it ready for a production deployment, but it's already worth testing.&lt;/p&gt;
&lt;h3&gt;Other solutions&lt;/h3&gt;
&lt;p&gt;I’ve tried &lt;a href="https://takahe.social"&gt;Takahe&lt;/a&gt;, another microblogging platform. It’s under active, intense development, so, like GoToSocial, it should be kept on the radar as it’s quite promising. I've just fired up a Linux Docker installation to try it, so I haven't tried it on FreeBSD.&lt;/p&gt;
&lt;p&gt;While I’ve also tried platforms like &lt;a href="https://pixelfed.org"&gt;Pixelfed&lt;/a&gt;, &lt;a href="https://joinpeertube.org"&gt;Peertube&lt;/a&gt;, and &lt;a href="https://funkwhale.audio"&gt;FunkWhale&lt;/a&gt;, which are all part of the Fediverse, they cater to specific needs and are distinct from the microblogging platforms that are the focus of this list.&lt;/p&gt;
&lt;h2&gt;Frontends&lt;/h2&gt;
&lt;p&gt;All the Fediverse backend implementations have their own specific features, but users will just interact via a frontend. While Mastodon - like Pixelfed, Peertube, etc. - is presented as a specific stack (backend + frontend), there are many frontends that can interact via the Mastodon API.&lt;/p&gt;
&lt;p&gt;Of course, being the “Mastodon” API, not all the backends are perfectly compatible with/supported by all the frontends. Mobile apps use the Mastodon API, which is why their compatibility with other implementations like snac2, Pleroma, Akkoma, etc. may not always be perfect.&lt;/p&gt;
&lt;p&gt;I won’t describe them all. I’ll just enumerate the ones I’m using, with some notes:&lt;/p&gt;
&lt;h3&gt;Akkoma’s Pleroma-FE&lt;/h3&gt;
&lt;p&gt;With the 2022.12 release of Akkoma, Pleroma-FE has evolved into a proper, nice-looking PWA. After an initial configuration of its many options, I found it quite nice and effective. The only problem (common with many PWAs on iOS) is that when you suspend the app, it doesn’t detect it, so if you open it again after two hours, you’ll just see the recent posts, not all the posts of the last two hours. This is because of the way iOS deals with app suspend/resume and &lt;a href="https://github.com/elk-zone/elk/issues/750#issuecomment-1371966812"&gt;many PWAs don’t seem to understand they’ve been suspended.&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;&lt;a href="https://github.com/BDX-town/Mangane"&gt;Mangane&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Mangane is a Soapbox fork that aims to improve compatibility with Akkoma. It is nice and clean and the developers are improving it to support Akkoma's features, so it's great. At the moment I've noticed some visual problems, on iOS, with the icons - &lt;strong&gt;but &lt;a href="https://github.com/BDX-town/Mangane"&gt;Guérin&lt;/a&gt; contacted me to ask for information (after reading this article) and opened an issue to fix it&lt;/strong&gt;. Guérin has been nice and helpful, making Mangane even more appealing. I’m using it for planned posts on Akkoma, as Pleroma-FE doesn’t support them yet. I'm also using Mangane as a daily driver, from time to time, as I like it.&lt;/p&gt;
&lt;h3&gt;&lt;a href="https://enafore.social"&gt;Enafore&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;A light, fast, complete and usable web frontend. I’ve been using it when I needed a simple, clear frontend. It supports “hide replies”, which is great to improve timeline quality. It's among my favourite choices when using a webapp and is a good choice for snac2.&lt;/p&gt;
&lt;h3&gt;&lt;a href="https://elk.zone"&gt;Elk&lt;/a&gt;&lt;/h3&gt;
&lt;p&gt;Elk is a frontend for Mastodon API. It’s nice, clear, eye candy, intelligent. The developers are nice people, open to suggestions. Development is a bit slower compared to the initial pace, but the app is already very complete and stable.&lt;/p&gt;
&lt;p&gt;Elk is definitely a very good piece of software, and I recommend to try it.&lt;/p&gt;
&lt;h2&gt;Moderation and final considerations&lt;/h2&gt;
&lt;p&gt;One of the things you should consider is that deploying a piece of the Fediverse isn’t just installing software and interacting with others. Actually, that’s just the easiest part of the experience, at least if you’re not creating a single-user instance.&lt;/p&gt;
&lt;p&gt;Many of the people that joined the Fediverse in its early days decided to do it as they felt attacked on other social networks. In the last years, the commercial socials have proven to be the perfect place for negative people, attacking others without being blocked/stopped in an efficient way. &lt;em&gt;Hate causes addiction and the owners of those commercial socials make a lot of money if people interact, showing them ads every time they open the app/website&lt;/em&gt;. They make money (also) through people hating each other.&lt;/p&gt;
&lt;p&gt;The Fediverse gives you the possibility to mute and block users, but also to mute and block entire instances. One of the main tasks of an instance’s administrator is to make sure that everything is ok. While you can define your own instance’s rules, other instances’ admins may block you if they find you’re federating messages that go against their rules.&lt;/p&gt;
&lt;p&gt;As an administrator, you’re also responsible for keeping your users’ data safe and for avoiding the sharing/providing of illegal or offensive content. So you’re free to set your rules, but others are free to “defederate” you if they don’t like the content your instance is providing. While it’s not an issue for a single-user instance, you must be quite careful when opening registrations, as you may find out you've been defederated because of (your) lack of moderation.&lt;/p&gt;
&lt;p&gt;There’s some strong criticism against the Fediverse because of this, as many see it as more “censored” than the traditional, commercial social networks. I don’t think it’s true, as you’re free to fire up your instance, decide your rules and act as you want. But you can’t force others and their instances to follow you, even if you think you’re right. In a free world, everybody should be free to decide if they want to listen to you. But everybody should also be free to create a new space and start sharing their ideas. There’s no central content authority, so only the individual admins and users can choose what they want to see or avoid. There's no commercially driven algorithm that may decide you should see content that will cause you anger and hate just because it generates traffic (and money) for the network's owner.&lt;/p&gt;
&lt;p&gt;The Fediverse can be a beautiful place to stay.&lt;/p&gt;
&lt;p&gt;&lt;mastodon-comments host="mastodon.bsd.cafe" user="stefano" tootId="111732122236947352"&gt;&lt;/mastodon-comments&gt;&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Sun, 15 Jan 2023 09:29:45 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2023/01/15/deploying-a-piece-of-the-fediverse/</guid><category>fediverse</category><category>mastodon</category><category>freebsd</category><category>linux</category><category>docker</category><category>gotosocial</category><category>snac2</category><category>snac</category><category>mangane</category><category>twitter</category><category>facebook</category><category>social</category><category>hosting</category><category>server</category><category>web</category><category>akkoma</category><category>pleroma</category><category>elk</category><category>pixelfed</category><category>funkwhale</category><category>peertube</category><category>enafore</category></item><item><title>How to force reboot a frozen Linux or FreeBSD machine</title><link>https://it-notes.dragas.net/2023/01/08/how-to-force-reboot-a-frozen-linux-or-freebsd-servers/</link><description>&lt;p&gt;&lt;img src="https://images.unsplash.com/photo-1514747348279-46eb4082b804?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDl8fGZyb3plbnxlbnwwfHx8fDE2NzMxNzg1NzQ&amp;ixlib=rb-4.0.3&amp;q=80&amp;w=2000" alt="How to force reboot a frozen Linux or FreeBSD machine"&gt;&lt;/p&gt;&lt;p&gt;Sometimes your filesystem gets stuck. No operations can be done and a “reboot” will just cause an indefinite waiting time for I/O. You could be able to ssh into the server (or login via console), but no operations can be done as the storage devices are blocked. This may happen because of a file system failure or a strange kernel problem. This is more likely to happen when dealing with usb attached external interfaces.&lt;/p&gt;
&lt;p&gt;There’s a “magical” code that triggers a specific kernel condition - and a reboot. &lt;strong&gt;Be careful, those commands should be considered as the last resort. No disk flush will be performed, no shutdown procedure will be started so you might completely destroy your file system or any open file.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Yet, this could be the best option you have and won’t be more harmful than a hard reset or a “traditional” cable pull - but you can do this remotely.&lt;/p&gt;
&lt;p&gt;Remember to launch those commands with &lt;strong&gt;root&lt;/strong&gt; privileges:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;On Linux:&lt;/strong&gt;&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;echo b &amp;gt; /proc/sysrq-trigger
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This will trigger an immediate reboot - no further disk operation will be performed.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;On FreeBSD:&lt;/strong&gt;&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;sysctl debug.kdb.panic=1
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This will trigger a kernel panic that will (by default) cause a reboot.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Sun, 08 Jan 2023 10:56:41 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2023/01/08/how-to-force-reboot-a-frozen-linux-or-freebsd-servers/</guid><category>freebsd</category><category>server</category><category>tutorial</category><category>linux</category><category>ha</category><category>recovery</category></item><item><title>Creating an Alpine Linux VM on bhyve - with root on ZFS (optionally encrypted)</title><link>https://it-notes.dragas.net/2022/11/01/creating-an-alpine-vm-on-bhyve-with-root-on-zfs-optionally-encrypted/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/alps.webp" alt="Creating an Alpine Linux VM on bhyve - with root on ZFS (optionally encrypted)"&gt;&lt;/p&gt;&lt;p&gt;Bhyve is great - and we’re using it with a lot of guest operating systems.&lt;/p&gt;
&lt;p&gt;One of my favourite Linux distributions is &lt;a href="https://alpinelinux.org"&gt;Alpine Linux&lt;/a&gt; - it’s great as a docker or lxc/lxd host, is light, stable and easily manageable.&lt;/p&gt;
&lt;p&gt;On FreeBSD, &lt;a href="https://github.com/churchers/vm-bhyve"&gt;vm-bhyve&lt;/a&gt; already provides a good template for Alpine Linux, but it’s based on the plain standard image with boot on ext4.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;So we need the &lt;a href="https://alpinelinux.org/downloads/"&gt;alpine-extended iso&lt;/a&gt; -&lt;/strong&gt; with the zfs module.&lt;/p&gt;
&lt;p&gt;Let’s create a new Alpine Linux bhyve VM:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;vm create -t alpine -s 50G -m 4G -c 2 alpinevm&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Now let’s configure it:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;vm configure alpinevm&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Let’s change some options: &lt;em&gt;vmlinuz-vanilla&lt;/em&gt; and &lt;em&gt;initramfs-vanilla&lt;/em&gt; should be changed to &lt;em&gt;vmlinuz-lts&lt;/em&gt; and &lt;em&gt;initramfs-lts&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;More, we should instruct grub to boot from ZFS. The configuration should be similar to this:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code&gt;[…]
grub_install0=&amp;quot;linux /boot/vmlinuz-lts initrd=/boot/initramfs-lts alpine_dev=cdrom:iso9660 modules=loop,squashfs,sd-mod,usb-storage,sr-mod&amp;quot;
grub_install1=&amp;quot;initrd /boot/initramfs-lts&amp;quot;
grub_run0=&amp;quot;linux /boot/vmlinuz-lts root=rpool/ROOT/alpine rootfstype=zfs modules=ext4,zfs&amp;quot;
grub_run1=&amp;quot;initrd /boot/initramfs-lts&amp;quot;
[…]
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;It’s now time to start with the installation:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;vm install alpinevm &lt;em&gt;alpine-extended.iso&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Now, follow &lt;a href="https://wiki.alpinelinux.org/wiki/Root_on_ZFS_with_native_encryption"&gt;this Alpine Linux Wiki guide&lt;/a&gt;. Remember that we’re dealing with “&lt;em&gt;vda&lt;/em&gt;” devices, not &lt;em&gt;"sda"&lt;/em&gt;, so change them accordingly. If you don’t want to have an encrypted rootfs dataset, just avoid the encryption line in the zpool create command.&lt;/p&gt;
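&lt;p&gt;As a reference, the pool creation step from that guide, adapted to the bhyve disk naming, looks roughly like this - options and the partition number are illustrative, follow the wiki for the full procedure, and simply drop the encryption-related options for an unencrypted pool:&lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# example only: encrypted root pool on the third partition of the virtio disk
zpool create -f -o ashift=12 \
    -O compression=lz4 -O atime=off -O mountpoint=none \
    -O encryption=aes-256-gcm -O keyformat=passphrase -O keylocation=prompt \
    rpool /dev/vda3
&lt;/code&gt;&lt;/pre&gt;
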
&lt;p&gt;At the end of the installation procedure, reboot and enjoy your new Alpine Linux bhyve VM.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Tue, 01 Nov 2022 15:47:35 +0000</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2022/11/01/creating-an-alpine-vm-on-bhyve-with-root-on-zfs-optionally-encrypted/</guid><category>alpine</category><category>linux</category><category>server</category><category>tutorial</category><category>zfs</category><category>freebsd</category><category>hosting</category><category>filesystems</category><category>bhyve</category><category>virtualization</category></item></channel></rss>