Proxmox vs FreeBSD: Which Virtualization Host Performs Better?

TL;DR: Skip to the Conclusion for a summary.

Preamble

I have always been passionate about virtualization and have consistently used it.

The first solution I installed on my infrastructures (and those of clients) was Xen on NetBSD, with great success. I then used Xen on Linux and, since 2012, OpenNebula, followed by Proxmox in 2013. Proxmox has always given me great satisfaction, and even today I consider it a valuable platform that I install gladly. I have also used other hypervisors like XCP-ng but less frequently, and in recent years, I have started to make extensive use of bhyve.

About two and a half years ago, we began a progressive process of migrating our servers (and those of our clients) from Linux to FreeBSD, using jails (when possible) or VMs on bhyve. In some cases, migrating setups from Proxmox to FreeBSD resulted in performance improvements, even with the same hardware. In some instances, I migrated VMs without notifying clients, and they contacted me a few days later to inquire if we had new hardware because they noticed better performance.

After years of this, I decided to run a test to determine whether this was just a perception or whether there was a technical basis behind it. Of course, this test has no scientific validity: the results were obtained on specific hardware at a specific time, so with different hardware, workloads, and situations the results could be entirely different. However, I tried to take as scientific and objective an approach as possible, since I am comparing two solutions that I care about and use daily.

Hardware and Test Conditions

I often see comparative tests done on VMs from various providers. In my opinion, this comparison makes no sense because a VM from any provider shares its hardware with many other VMs, so the results will vary depending on the load of the “neighbors” and will never be reliable.

For this test, I decided to take a physical server with the following characteristics:

  • Intel Core i7-6700
  • 2x 512 GB M.2 NVMe SSD
  • 4x 16 GB (16384 MB) DDR4 RAM
  • NIC 1 Gbit Intel I219-LM

The hardware is not recent, but still very widespread. On more recent hardware, the results might differ, but the test will be based on this configuration.

I installed Proxmox 8.2.2 manually on top of the provider's Debian template, following the official instructions. I created a partition for Proxmox and left one partition free on each of the two NVMe drives, so that I could later create (at different times) the ZFS pool (in mirror) and the LVM volume on top of Linux software RAID.

After all the tests, I installed FreeBSD 14.1-RELEASE on ZFS on the same host, using bsdinstall from an mfsbsd image since the provider does not directly support installing FreeBSD from its panel or rescue mode.

In both installations, I always trimmed the NVMe drives before starting the tests, and in the case of ZFS I set (both on Proxmox and FreeBSD) compression to zstd and atime to off. No other changes were made compared to the standard installation.
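
For reference, these are roughly the commands involved on the ZFS side; the pool name zroot is a placeholder and will differ depending on the installation:

zpool trim zroot
zfs set compression=zstd zroot
zfs set atime=off zroot

OpenZFS on Linux and FreeBSD accept the same syntax, so the settings are directly comparable between the two hosts; for the ext4/LVM setups, fstrim serves the same purpose on Linux.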

On FreeBSD, the VM was created and managed with vm-bhyve (devel).
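
For those who have never used vm-bhyve, this is a minimal sketch of the kind of setup involved, assuming a dedicated dataset named zroot/vm and a public switch attached to the physical NIC (names are illustrative, not taken from my actual host):

pkg install vm-bhyve-devel bhyve-firmware
zfs create zroot/vm
sysrc vm_enable="YES" vm_dir="zfs:zroot/vm"
vm init
vm switch create public
vm switch add public em0
vm create -s 50G debian12
vm configure debian12
vm install debian12 debian-12.5.0-amd64-netinst.iso

vm configure opens the VM configuration in an editor, where the settings shown below can be applied; the ISO must first be placed in vm-bhyve's .iso directory (for example with vm iso <url>).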

On Proxmox, I tested the physical host on ZFS and ext4, and the VM on ZFS and on LVM, since LVM is the standard and most common storage setup in Proxmox.

On FreeBSD, I tested the host on ZFS and the VM with both the virtio and nvme drivers, backed either by a zvol or by an image file inside a ZFS dataset.

I used sysbench, installed from the official Debian repository (on the Proxmox host and in the VMs) and from the FreeBSD package on the FreeBSD host.
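
Installation is a one-liner on both sides, since sysbench is packaged in the standard repositories:

apt install sysbench    # Debian 12: Proxmox host and VMs
pkg install sysbench    # FreeBSD host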

The VMs, both on Proxmox and FreeBSD, have nearly identical characteristics and default configurations (apart from the nvme driver used on bhyve, which is also why I tested virtio).

For those who want to reproduce my tests, here are the detailed configurations of the VMs used in bhyve:

FreeBSD bhyve VM Configuration with NVMe Driver:

loader="uefi"
cpu=4
memory=4096M
network0_type="virtio-net"
network0_switch="public"
disk0_type="nvme"
disk0_name="disk0.img"

FreeBSD bhyve VM Configuration with virtio Driver:

loader="uefi"
cpu=4
memory=4096M
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"

Proxmox VM Configuration:

Component | Details
--- | ---
Memory | 4.00 GiB [balloon=0]
Processors | 4 (1 sockets, 4 cores) [x86-64-v2-AES]
BIOS | Default (SeaBIOS)
Display | Default
Machine | Default (i440fx)
SCSI Controller | VirtIO SCSI single
CD/DVD Drive (ide2) | local:iso/debian-12.5.0-amd64-netinst.iso,media=cdrom,size=629M
Hard Disk (scsi0) | zfspool:vm-100-disk-0,cache=writeback,discard=on,iothread=1,size=50G,ssd=1
Network Device (net0) | virtio=BC:24:11:22:3D:F0,bridge=vmbr0
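
For completeness, a roughly equivalent VM can be created from the Proxmox command line. This is a sketch rather than the exact commands I used; the VM ID, storage name, and ISO path simply mirror the table above:

qm create 100 --name debian12 --memory 4096 --balloon 0 \
  --cpu x86-64-v2-AES --sockets 1 --cores 4 \
  --scsihw virtio-scsi-single \
  --scsi0 zfspool:50,cache=writeback,discard=on,iothread=1,ssd=1 \
  --ide2 local:iso/debian-12.5.0-amd64-netinst.iso,media=cdrom \
  --net0 virtio,bridge=vmbr0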

In all the configurations, I used Debian 12 as the VM operating system, with the file system on ext4.

I chose Debian 12 as it is a stable, widespread, and modern Linux distribution. I did not test a FreeBSD VM because, in my setups, I tend not to virtualize FreeBSD on FreeBSD but to use nested jails.

All tests were performed multiple times, and I took the median results. CPU and RAM were tested only on the first VM of each platform (Proxmox on ZFS and FreeBSD on ZFS with the nvme driver), as those tests do not depend on the underlying storage. Storage performance, on the other hand, was tested on all configurations.

CPU and RAM Tests on VMs

On both VMs:

sysbench --test=cpu --cpu-max-prime=20000 run
sysbench --test=memory run
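
Since every test was run multiple times and the median taken (as described above), a small loop can automate the repetition. This is an illustrative sketch for the CPU test, with the run count and output parsing chosen arbitrarily:

for i in 1 2 3 4 5; do
  sysbench --test=cpu --cpu-max-prime=20000 run | awk '/events per second/ {print $4}'
done | sort -n | awk '{v[NR]=$1} END {print "median:", v[int((NR+1)/2)]}'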

Comparative Results

CPU Test
Configuration | Events per Second | Total Time (s) | Latency (avg) (ms)
--- | --- | --- | ---
Proxmox | 498.08 | 10.0010 | 2.01
FreeBSD | 473.65 | 10.0019 | 2.11
CPU Percentage Analysis
  • Difference in Events per Second: ((473.65 - 498.08) / 498.08 ≈ -4.91%)
  • Difference in Total Time: ((10.0019 - 10.0010) / 10.0010 ≈ +0.009%)
  • Difference in Latency (avg): ((2.11 - 2.01) / 2.01 ≈ +4.98%)
RAM Test
Configuration | Total Operations | Operations per Second | Total MiB Transferred | MiB/sec | Latency (avg) (ms)
--- | --- | --- | --- | --- | ---
Proxmox | 64777227 | 6476757.59 | 63259.01 | 6324.96 | 0.00
FreeBSD | 68621063 | 6861139.06 | 67012.76 | 6700.33 | 0.00
RAM Percentage Analysis
  • Difference in Total Operations: ((68621063 - 64777227) / 64777227 ≈ +5.94%)
  • Difference in Operations per Second: ((6861139.06 - 6476757.59) / 6476757.59 ≈ +5.94%)
  • Difference in Total MiB Transferred: ((67012.76 - 63259.01) / 63259.01 ≈ +5.93%)
  • Difference in MiB/sec: ((6700.33 - 6324.96) / 6324.96 ≈ +5.93%)

CPU and RAM Comparative Results Table

Test | Metric | Proxmox (KVM) | FreeBSD (bhyve) | Difference (%)
--- | --- | --- | --- | ---
CPU | Events/s | 498.08 | 473.65 | -4.91
CPU | Time (s) | 10.0010 | 10.0019 | +0.009
CPU | Latency (avg) (ms) | 2.01 | 2.11 | +4.98
RAM | Ops | 64777227 | 68621063 | +5.94
RAM | Ops/s | 6476757.59 | 6861139.06 | +5.94
RAM | MiB | 63259.01 | 67012.76 | +5.93
RAM | MiB/s | 6324.96 | 6700.33 | +5.93
RAM | Latency (avg) (ms) | 0.00 | 0.00 | 0.00

Interpretation of CPU and RAM Results

  1. CPU Performance:

    • The VM on FreeBSD has slightly lower CPU performance compared to Proxmox (-4.91% in events per second).
    • The total execution time is nearly identical, with a negligible difference.
    • The average latency is slightly higher on FreeBSD (+4.98%).
  2. RAM Performance:

    • The VM on FreeBSD has better RAM performance compared to Proxmox (+5.94% in operations and MiB/sec).
    • The average latency is identical in both configurations.

In summary, Proxmox delivers slightly better CPU performance, while FreeBSD shows superior memory throughput. The differences in both directions are small, so the choice between Proxmox and FreeBSD will depend more on the specific workload requirements than on these numbers.

I/O Performance Tests

The test was conducted using sysbench, with the following commands:

sysbench --test=fileio --file-total-size=30G prepare
sysbench --test=fileio --file-total-size=30G --file-test-mode=rndrw  --max-time=300 --max-requests=0 run
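
sysbench also provides a cleanup step that removes the test files created by prepare, which is worth running between configurations:

sysbench --test=fileio --file-total-size=30G cleanup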

I/O Comparative Performance Data with Percentage Differences

Metric | VM on Proxmox (ZFS) | VM on Proxmox (LVM) | VM on FreeBSD (ZFS, NVMe) | VM on FreeBSD (ZFS, Virtio) | VM on FreeBSD (zvol) | Host FreeBSD (ZFS) | Host Proxmox (ZFS) | Host Proxmox (ext4)
--- | --- | --- | --- | --- | --- | --- | --- | ---
File creation speed (MiB/s) | 407.82 | 461.52 | 1467.83 | 1398.81 | 1333.64 | 1625.67 | 968.64 | 633.13
Reads per second | 650.09 | 504.80 | 11183.44 | 806.93 | 11834.53 | 1234.62 | 920.95 | 498.37
Writes per second | 433.40 | 336.54 | 7455.62 | 537.95 | 7889.69 | 823.08 | 613.96 | 332.25
fsyncs per second | 1387.08 | 1076.97 | 23858.08 | 1721.79 | 25247.36 | 2634.01 | 1964.96 | 1063.19
Read throughput (MiB/s) | 10.16 | 7.89 | 174.74 | 12.61 | 184.91 | 19.29 | 14.39 | 7.79
Write throughput (MiB/s) | 6.77 | 5.26 | 116.49 | 8.41 | 123.28 | 12.86 | 9.59 | 5.19
Total events | 741163 | 575588 | 12749157 | 919952 | 13491459 | 1407592 | 1049894 | 568277
Average latency (ms) | 0.40 | 0.52 | 0.02 | 0.33 | 0.02 | 0.21 | 0.29 | 0.53
95th percentile latency (ms) | 2.30 | 3.25 | 0.06 | 1.58 | 0.05 | 1.32 | 1.79 | 2.71
Max latency (ms) | 22.65 | 32.30 | 35.49 | 13.60 | 77.53 | 9.03 | 9.47 | 17.39
Total test time (s) | 300.0475 | 300.1147 | 300.0020 | 300.0226 | 300.0012 | 300.0416 | 300.0159 | 300.1381

Percentage Differences Compared to VM on Proxmox (ZFS)

Metric | VM on Proxmox (LVM) | VM on FreeBSD (ZFS, NVMe) | VM on FreeBSD (ZFS, Virtio) | VM on FreeBSD (zvol) | Host FreeBSD (ZFS) | Host Proxmox (ZFS) | Host Proxmox (ext4)
--- | --- | --- | --- | --- | --- | --- | ---
File creation speed (MiB/s) | +13.18% | +259.77% | +242.99% | +227.02% | +298.62% | +137.52% | +55.25%
Reads per second | -22.34% | +1619.98% | +24.13% | +1720.45% | +89.92% | +41.67% | -23.34%
Writes per second | -22.35% | +1620.26% | +24.12% | +1720.42% | +89.91% | +41.66% | -23.34%
fsyncs per second | -22.36% | +1620.02% | +24.13% | +1720.18% | +89.90% | +41.66% | -23.35%
Read throughput (MiB/s) | -22.34% | +1619.88% | +24.11% | +1719.98% | +89.86% | +41.63% | -23.33%
Write throughput (MiB/s) | -22.30% | +1620.68% | +24.22% | +1720.97% | +89.96% | +41.65% | -23.33%
Total events | -22.34% | +1620.16% | +24.12% | +1720.31% | +89.92% | +41.65% | -23.31%
Average latency (ms) | +30.00% | -95.00% | -17.50% | -95.00% | -47.50% | -27.50% | +32.50%
95th percentile latency (ms) | +41.30% | -97.39% | -31.30% | -97.83% | -42.61% | -22.17% | +17.83%
Max latency (ms) | +42.60% | +56.69% | -39.96% | +242.30% | -60.13% | -58.19% | -23.22%
Total test time (s) | +0.02% | -0.02% | -0.01% | -0.02% | -0.02% | -0.01% | +0.01%

Percentage Differences Compared to VM on Proxmox (LVM), as this is the standard Proxmox setup

Metric | VM on Proxmox (ZFS) | VM on FreeBSD (ZFS, NVMe) | VM on FreeBSD (ZFS, Virtio) | VM on FreeBSD (zvol) | Host FreeBSD (ZFS) | Host Proxmox (ZFS) | Host Proxmox (ext4)
--- | --- | --- | --- | --- | --- | --- | ---
File creation speed (MiB/s) | -11.64% | +218.04% | +203.09% | +188.97% | +252.24% | +109.88% | +37.18%
Reads per second | +28.78% | +2115.42% | +59.85% | +2244.40% | +144.58% | +82.44% | -1.27%
Writes per second | +28.78% | +2115.37% | +59.85% | +2244.35% | +144.57% | +82.43% | -1.27%
fsyncs per second | +28.79% | +2115.30% | +59.87% | +2244.30% | +144.58% | +82.45% | -1.28%
Read throughput (MiB/s) | +28.77% | +2114.70% | +59.82% | +2243.60% | +144.49% | +82.38% | -1.27%
Write throughput (MiB/s) | +28.71% | +2114.64% | +59.89% | +2243.73% | +144.49% | +82.32% | -1.33%
Total events | +28.77% | +2114.98% | +59.83% | +2243.94% | +144.55% | +82.40% | -1.27%
Average latency (ms) | -23.08% | -96.15% | -36.54% | -96.15% | -59.62% | -44.23% | +1.92%
95th percentile latency (ms) | -29.23% | -98.15% | -51.38% | -98.46% | -59.38% | -44.92% | -16.62%
Max latency (ms) | -29.88% | +9.88% | -57.89% | +140.03% | -72.04% | -70.68% | -46.16%
Total test time (s) | -0.02% | -0.04% | -0.03% | -0.04% | -0.02% | -0.03% | +0.01%

Analysis of Performance Data

The performance data collected from various configurations of Proxmox and FreeBSD provides a comprehensive view of the I/O capabilities and highlights some significant differences. Here is an analysis of the key findings:

Comparative Analysis
Hypothesis on NVMe Performance and fsync

An important observation from my tests is that VMs with the bhyve NVMe driver show significantly higher performance compared to the same VMs with the virtio driver or compared to the physical host system. This difference led me to hypothesize that the bhyve NVMe driver might not correctly respect fsync operations, returning a positive result before the underlying file system has confirmed the final write. However, this is just a theory based on benchmark results and is not supported by concrete data.

Specifically, I observed that:

  • The VM with the virtio driver has performance comparable to Proxmox.
  • The VM with the NVMe driver, whether on a ZFS dataset or zvol, shows performance superior to the physical FreeBSD host.

These observations suggest that the bhyve NVMe driver might “cheat” by acknowledging fsync operations before they are actually completed. However, further testing and analysis are needed to confirm or refute this hypothesis.
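
One way to sanity-check the hypothesis (which I have not done as part of these tests) would be to measure pure synchronous-write performance inside the guest and compare it with the physical host: if the guest reports far more flushed writes per second than the host's NVMe devices can plausibly deliver, the flush is probably being acknowledged early. A rough sketch, with file sizes and durations chosen arbitrarily:

# inside the Debian guest: 1000 writes of 4 KiB, each synced to stable storage (GNU dd)
dd if=/dev/zero of=fsync-test bs=4k count=1000 oflag=dsync
rm fsync-test

# on the FreeBSD host: a comparable sysbench load that forces an fsync after every write
sysbench --test=fileio --file-total-size=1G prepare
sysbench --test=fileio --file-total-size=1G --file-test-mode=rndwr --file-fsync-all=on --max-time=60 --max-requests=0 run
sysbench --test=fileio --file-total-size=1G cleanup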

Host Physical Systems and Filesystems
  1. File Creation Speed:

    • Host FreeBSD (ZFS) shows the highest file creation speed at 1625.67 MiB/s, which is +68.03% compared to Host Proxmox (ZFS) and +156.72% compared to Host Proxmox (ext4).
    • Host Proxmox (ext4) has a file creation speed of 633.13 MiB/s, which is -34.62% compared to Host Proxmox (ZFS).
  2. Read and Write Operations per Second:

    • Host FreeBSD (ZFS) demonstrates the highest read and write operations per second with 1234.62 reads/s and 823.08 writes/s.
      • Reads per second: +34.06% compared to Host Proxmox (ZFS) and +147.80% compared to Host Proxmox (ext4).
      • Writes per second: +34.04% compared to Host Proxmox (ZFS) and +147.61% compared to Host Proxmox (ext4).
    • Host Proxmox (ext4) shows a lower performance with 498.37 reads/s and 332.25 writes/s.
    • Host Proxmox (ZFS) has 920.95 reads/s and 613.96 writes/s.
  3. fsync Operations per Second:

    • Host FreeBSD (ZFS) achieves the highest fsync operations per second at 2634.01 fsyncs/s, which is +34.02% compared to Host Proxmox (ZFS) and +147.73% compared to Host Proxmox (ext4).
    • Host Proxmox (ext4) has a lower performance with 1063.19 fsyncs/s.
    • Host Proxmox (ZFS) achieves 1964.96 fsyncs/s.
  4. Throughput:

    • Host FreeBSD (ZFS) again leads in throughput with 19.29 MiB/s read and 12.86 MiB/s write.
      • Read throughput: +34.03% compared to Host Proxmox (ZFS) and +147.53% compared to Host Proxmox (ext4).
      • Write throughput: +34.08% compared to Host Proxmox (ZFS) and +147.79% compared to Host Proxmox (ext4).
    • Host Proxmox (ext4) has the lowest throughput with 7.79 MiB/s read and 5.19 MiB/s write.
    • Host Proxmox (ZFS) has 14.39 MiB/s read and 9.59 MiB/s write.
  5. Latency:

    • Host FreeBSD (ZFS) shows the lowest average latency at 0.21 ms and 95th percentile latency at 1.32 ms.
      • Average latency: -27.59% compared to Host Proxmox (ZFS) and -60.38% compared to Host Proxmox (ext4).
      • 95th percentile latency: -26.27% compared to Host Proxmox (ZFS) and -51.29% compared to Host Proxmox (ext4).
    • Host Proxmox (ext4) has the highest average latency at 0.53 ms and 95th percentile latency at 2.71 ms.
    • Host Proxmox (ZFS) has an average latency of 0.29 ms and 95th percentile latency of 1.79 ms.
VMs vs Physical Hosts
  1. File Creation Speed:

    • VM on FreeBSD (ZFS, NVMe) demonstrates an outstanding file creation speed at 1467.83 MiB/s (+218.04% compared to VM on Proxmox (LVM) and +259.77% compared to VM on Proxmox (ZFS)).
    • VM on FreeBSD (zvol) achieves 1333.64 MiB/s, which is also significantly higher than VM on Proxmox (LVM) and VM on Proxmox (ZFS).
  2. Read and Write Operations per Second:

    • VM on FreeBSD (ZFS, NVMe) shows exceptional performance with 11183.44 reads/s and 7455.62 writes/s.
    • VM on FreeBSD (zvol) also performs excellently with 11834.53 reads/s and 7889.69 writes/s.
    • These values suggest that the NVMe driver might not be honoring fsync properly, resulting in inflated performance metrics.
  3. fsync Operations per Second:

    • VM on FreeBSD (ZFS, NVMe) achieves 23858.08 fsyncs/s, and VM on FreeBSD (zvol) achieves 25247.36 fsyncs/s, both significantly higher than any other configuration.
  4. Throughput:

    • VM on FreeBSD (ZFS, NVMe) achieves the highest throughput with 174.74 MiB/s read and 116.49 MiB/s write.
    • VM on FreeBSD (zvol) also has high throughput at 184.91 MiB/s read and 123.28 MiB/s write.
  5. Latency:

    • VM on FreeBSD (ZFS, NVMe) shows very low average latency at 0.02 ms and 95th percentile latency at 0.06 ms.
    • VM on FreeBSD (zvol) has similarly low latencies, indicating fast response times for I/O operations.
VM Configurations Comparison
  1. File Creation Speed:

    • Among VMs, VM on FreeBSD (ZFS, NVMe) leads, followed by VM on FreeBSD (zvol), and then VM on FreeBSD (ZFS, Virtio).
  2. Read and Write Operations per Second:

    • VM on FreeBSD (ZFS, NVMe) and VM on FreeBSD (zvol) both outperform VM on Proxmox (ZFS) and VM on Proxmox (LVM) configurations significantly.
    • VM on Proxmox (ZFS) outperforms VM on Proxmox (LVM) in read and write operations.
  3. fsync Operations per Second:

    • VM on FreeBSD (ZFS, NVMe) and VM on FreeBSD (zvol) have significantly higher fsync operations compared to VM on Proxmox (ZFS) and VM on Proxmox (LVM).
  4. Throughput:

    • VM on FreeBSD (ZFS, NVMe) and VM on FreeBSD (zvol) have the highest throughput, followed by VM on Proxmox (ZFS) and then VM on Proxmox (LVM).
  5. Latency:

    • VM on FreeBSD (ZFS, NVMe) and VM on FreeBSD (zvol) show the lowest latencies among the VMs, indicating faster response times.
    • VM on Proxmox (ZFS) shows lower latencies compared to VM on Proxmox (LVM).

Cache Settings and Performance Influence

Cache settings can significantly influence the performance of virtualization systems. In my setup, I did not modify the cache settings for the NVMe and virtio drivers, keeping the default settings. It is possible that the observed performance differences are also due to how different operating systems manage the caches of NVMe devices. I encourage other system administrators to explore the cache settings of their systems to see if changes in this area can influence benchmark results.
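
As a starting point for such experiments (treat these as examples to adapt, not as recommendations): on Proxmox the cache mode is a per-disk property, while vm-bhyve can pass extra options to bhyve's block-device backend through diskN_opts; check qm(1) and vm(8) on your versions for the exact values supported.

# Proxmox: switch the disk of VM 100 from writeback to no host caching
qm set 100 --scsi0 zfspool:vm-100-disk-0,cache=none,discard=on,iothread=1,ssd=1

# vm-bhyve: extra block-device options in the VM configuration file
disk0_opts="nocache"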

Conclusion

Regarding RAM and CPU, the performance of the VMs is comparable. There are slight differences in favor of Proxmox for CPU and FreeBSD for RAM, but in my opinion, these differences are so negligible that they wouldn’t sway the decision towards one solution or the other.

The I/O performance data clearly indicates that VM on FreeBSD with NVMe and ZFS outperforms all other configurations by a significant margin. This is evident in the file creation speed, read/write operations per second, fsync operations per second, throughput, and latency metrics. However, the exceptionally high performance of VM on FreeBSD with NVMe and ZFS suggests that there might be an underlying issue, such as the NVMe driver not honoring fsync properly. This could lead to the VM believing that data has been written when it has not, resulting in artificially inflated performance results.

When comparing physical hosts, Host FreeBSD (ZFS) demonstrates excellent performance, particularly in comparison to Host Proxmox (ZFS) and Host Proxmox (ext4).

When comparing VMs, VM on FreeBSD (ZFS, NVMe) and VM on FreeBSD (zvol) configurations stand out as the top performers. However, it’s important to consider the potential fsync issue with NVMe storage. VM on Proxmox (ZFS) shows better performance than VM on Proxmox (LVM), but both are outperformed by the FreeBSD configurations.

The VM using virtio on FreeBSD also shows strong performance, albeit not as high as the NVMe configuration. It significantly outperforms Proxmox configurations in terms of file creation speed, read/write operations per second, and throughput, while maintaining competitive latencies.

The virtio driver provides a stable and reliable option, making it a suitable choice for environments where the NVMe driver’s potential fsync issue might be a concern. This makes FreeBSD with virtio a balanced option for virtualization, offering both high performance and reliability.

In conclusion, while the VM on FreeBSD with NVMe and ZFS shows the best performance, it is essential to investigate the potential issue with fsync operations.

By examining these performance metrics, users can make informed decisions about their virtualization and storage configurations to optimize their systems for specific workloads and performance requirements.

In light of these tests and experiments, I can say that my impression (shared by many users) of greater “snappiness” of the VMs on FreeBSD is confirmed. Proxmox is certainly a stable solution, rich in features, battle-tested, and with many other strong points, but FreeBSD, especially with the nvme driver, demonstrates very high performance and very low overhead in installation and operation.

I will continue to use both solutions with great satisfaction, but I will be even more encouraged to implement virtualization servers based on FreeBSD and bhyve.

