<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"><channel xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><title>IT Notes - smartos</title><link>https://it-notes.dragas.net/categories/smartos/</link><description>Articles in category smartos</description><language>en</language><lastBuildDate>Wed, 19 Nov 2025 09:16:00 +0100</lastBuildDate><atom:link href="https://it-notes.dragas.net/categories/smartos/feed.xml" rel="self" type="application/rss+xml"></atom:link><item><title>Static Web Hosting on the Intel N150: FreeBSD, SmartOS, NetBSD, OpenBSD and Linux Compared  </title><link>https://it-notes.dragas.net/2025/11/19/static-web-hosting-intel-n150-freebsd-smartos-netbsd-openbsd-linux/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/server_rack.webp" alt="A server rack with some servers and cables"&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Update&lt;/strong&gt;: This post has been updated to include &lt;strong&gt;Docker&lt;/strong&gt; benchmarks and a comparison of container overhead versus FreeBSD Jails and illumos Zones.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: Some operating systems (FreeBSD and Linux) support kernel TLS (kTLS) and the related SSL_sendfile path in nginx, which can improve HTTPS performance for static files. Since this feature is not available on all the systems included in the comparison (for example NetBSD, OpenBSD and illumos), the benchmarks were run with a common baseline configuration that does not rely on kTLS. The goal is to compare the systems under similar conditions rather than to measure OS specific optimizations.&lt;/p&gt;
&lt;/blockquote&gt;
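&lt;p&gt;For reference, this is roughly what enabling the kTLS path looks like on FreeBSD, and it is exactly what was left &lt;em&gt;out&lt;/em&gt; of the common baseline configuration. A sketch from my own notes, so verify the details for your release:  &lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# FreeBSD: enable kernel TLS (not used in these benchmarks)
sysctl kern.ipc.tls.enable=1

# nginx 1.21.4 or later, in the TLS server block:
#   sendfile on;
#   ssl_sendfile on;
&lt;/code&gt;&lt;/pre&gt;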
&lt;p&gt;I often get very specific infrastructure requests from clients. Most of the time it is some form of hosting. My job is usually to suggest and implement the setup that fits their goals, skills and long term plans.  &lt;/p&gt;
&lt;p&gt;If there are competent technicians on the other side, and they are willing to learn or already comfortable with Unix style systems, my first choices are usually one of the BSDs or an illumos distribution. If they need a control panel, or they already have a lot of experience with a particular stack that will clearly help them, I will happily use Linux and it usually delivers solid, reliable results.  &lt;/p&gt;
&lt;p&gt;Every now and then someone asks the question I like the least:  &lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;“But how does it &lt;em&gt;perform&lt;/em&gt; compared to X or Y?”  &lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I have never been a big fan of benchmarks. At best they capture a very specific workload on a very specific setup. They are almost never a perfect reflection of what will happen in the real world.  &lt;/p&gt;
&lt;p&gt;For example, I discovered that idle bhyve VMs seem to use fewer resources when the host is illumos than when the host is FreeBSD. It looks strange at first sight, but the illumos people are clearly working very hard on this, and the result is a very capable and efficient platform.  &lt;/p&gt;
&lt;p&gt;Despite my skepticism, from time to time I enjoy running some comparative tests. I already did it with &lt;a href="https://it-notes.dragas.net/2024/06/10/proxmox-vs-freebsd-which-virtualization-host-performs-better/"&gt;Proxmox KVM versus FreeBSD bhyve&lt;/a&gt;, and I also &lt;a href="https://it-notes.dragas.net/2025/09/19/freebsd-vs-smartos-whos-faster-for-jails-zones-bhyve/"&gt;compared Jails, Zones, bhyve and KVM&lt;/a&gt; on the same Intel N150 box. That led to the FreeBSD vs SmartOS article where I focused on CPU and memory performance on this small mini PC.  &lt;/p&gt;
&lt;p&gt;This time I wanted to do something simpler, but also closer to what I see every day: &lt;strong&gt;static web hosting.&lt;/strong&gt;  &lt;/p&gt;
&lt;p&gt;Instead of synthetic CPU or I/O tests, I wanted to measure how different operating systems behave when they serve a small static site with nginx, both over HTTP and HTTPS.  &lt;/p&gt;
&lt;p&gt;This is &lt;strong&gt;not&lt;/strong&gt; meant to be a super rigorous benchmark. I used the default nginx packages, almost default configuration, and did not tune any OS specific kernel settings. In my experience, careful tuning of kernel and network parameters can easily move numbers by several tens of percentage points. The problem is that very few people actually spend time chasing such optimizations. Much more often, once a limit is reached, someone yells “we need mooooar powaaaar” while the real fix would be to tune the existing stack a bit.&lt;/p&gt;
&lt;p&gt;So the question I want to answer here is more modest and more practical:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;With default nginx and a small static site, how much does the choice of host OS really matter on this Intel N150 mini PC?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;em&gt;Spoiler&lt;/em&gt;: less than people think, at least for plain HTTP. Things get more interesting once TLS enters the picture.&lt;/p&gt;
&lt;hr /&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt;&lt;br /&gt;
These benchmarks are a snapshot of my specific hardware, network and configuration. They are useful to compare &lt;em&gt;relative&lt;/em&gt; behavior on this setup. They are not a universal ranking of operating systems. Different CPUs, NICs, crypto extensions, kernel versions or nginx builds can completely change the picture.  &lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr /&gt;
&lt;h2&gt;Test setup&lt;/h2&gt;
&lt;p&gt;The hardware is the same Intel N150 mini PC I used in my previous tests: a small, low power box that still has enough cores to be interesting for lab and small production workloads.  &lt;/p&gt;
&lt;p&gt;On it, I installed several operating systems and environments, always on the bare metal, not nested inside each other. On each OS I installed nginx from the official packages.  &lt;/p&gt;
&lt;h3&gt;Software under test&lt;/h3&gt;
&lt;p&gt;On the host:  &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;SmartOS&lt;/strong&gt;, with:&lt;br /&gt;
- a Debian 12 LX zone&lt;br /&gt;
- an Alpine Linux 3.22 LX zone&lt;br /&gt;
- a native SmartOS zone  &lt;/p&gt;
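&lt;p&gt;For those who have never touched SmartOS: LX zones are defined by a JSON manifest and created with &lt;code&gt;vmadm&lt;/code&gt;. A minimal sketch with placeholder values (the image UUID, kernel version and sizing below are illustrative, not the exact ones used here):  &lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# find and import a Debian LX image (IMAGE_UUID is a placeholder)
imgadm avail | grep -i debian
imgadm import IMAGE_UUID

# minimal LX zone manifest, then create the zone from it
cat &gt; lx-zone.json &lt;&lt;'EOF'
{
  "brand": "lx",
  "alias": "debian-lx",
  "image_uuid": "IMAGE_UUID",
  "kernel_version": "4.10",
  "max_physical_memory": 1024,
  "nics": [{ "nic_tag": "admin", "ips": ["dhcp"] }]
}
EOF
vmadm create -f lx-zone.json
&lt;/code&gt;&lt;/pre&gt;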
&lt;p&gt;&lt;strong&gt;FreeBSD&lt;/strong&gt; 14.3-RELEASE:&lt;br /&gt;
- nginx running inside a native jail  &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;OpenBSD&lt;/strong&gt; 7.8:&lt;br /&gt;
- nginx on the host  &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;NetBSD&lt;/strong&gt; 10.1:&lt;br /&gt;
- nginx on the host  &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Debian&lt;/strong&gt; 13.2:&lt;br /&gt;
- nginx on the host &lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Alpine Linux&lt;/strong&gt; 3.22:&lt;br /&gt;
- nginx on the host&lt;br /&gt;
- Docker: Debian 13 container running on the Alpine host (ports mapped)&lt;/p&gt;
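&lt;p&gt;The Docker setup was deliberately boring: the same &lt;code&gt;nginx.conf&lt;/code&gt; and web root, mounted into a Debian based container with the ports mapped. Something along these lines reproduces it (image tag and paths are illustrative, not the exact ones I used):  &lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;docker run -d --name web \
  -p 80:80 -p 443:443 \
  -v /srv/www:/var/www/html:ro \
  -v /etc/nginx/nginx.conf:/etc/nginx/nginx.conf:ro \
  -v /etc/nginx/ssl:/etc/nginx/ssl:ro \
  nginx:stable
&lt;/code&gt;&lt;/pre&gt;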
&lt;p&gt;I also tried to include &lt;strong&gt;DragonFly BSD&lt;/strong&gt;, but the NIC in this box is not supported. Using a different NIC just for one OS would have made the comparison meaningless, so I excluded it.  &lt;/p&gt;
&lt;h3&gt;nginx configuration&lt;/h3&gt;
&lt;p&gt;In all environments:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;nginx was installed from the system packages  &lt;/li&gt;
&lt;li&gt;&lt;code&gt;worker_processes&lt;/code&gt; was set to &lt;code&gt;auto&lt;/code&gt;  &lt;/li&gt;
&lt;li&gt;the web root contained the same static content  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The important part is that I used &lt;strong&gt;exactly the same &lt;code&gt;nginx.conf&lt;/code&gt; file for all operating systems and all combinations in this article&lt;/strong&gt;. I copied the same configuration file verbatim to every host, jail and zone. The only changes were the IP address and file paths where needed, for example for the TLS certificate and key.  &lt;/p&gt;
&lt;p&gt;The static content was a default build of the example site generated by &lt;a href="https://bssg.dragas.net/"&gt;&lt;strong&gt;BSSG&lt;/strong&gt;, my Bash static site generator&lt;/a&gt;. The web root was the same logical structure on every OS and container type.  &lt;/p&gt;
&lt;p&gt;There is no OS specific tuning in the configuration and no kernel level tweaks. This is very close to a “package install plus minimal config” situation.  &lt;/p&gt;
&lt;h3&gt;TLS configuration&lt;/h3&gt;
&lt;p&gt;For HTTPS I used a very simple configuration, identical on every host.  &lt;/p&gt;
&lt;p&gt;Self signed certificate created with:  &lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;openssl req -x509 -newkey rsa:4096 -nodes -keyout server.key -out server.crt -days 365 -subj &amp;quot;/CN=localhost&amp;quot;  
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Example nginx &lt;code&gt;server&lt;/code&gt; block for HTTPS (simplified):  &lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-nginx"&gt;server {  
listen 443 ssl http2;  
listen [::]:443 ssl http2;  

server_name _;  

ssl_certificate /etc/nginx/ssl/server.crt;  
ssl_certificate_key /etc/nginx/ssl/server.key;  

root /var/www/html;  
index index.html index.htm;  

location / {  
try_files $uri $uri/ =404;  
}  
}  
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The HTTP virtual host is also the same everywhere, with the root pointing to the BSSG example site.  &lt;/p&gt;
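&lt;p&gt;For completeness, the HTTP &lt;code&gt;server&lt;/code&gt; block is essentially the HTTPS one minus the TLS parts. A simplified sketch of what it looks like:  &lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-nginx"&gt;server {
    listen 80;
    listen [::]:80;

    server_name _;

    root /var/www/html;
    index index.html index.htm;

    location / {
        try_files $uri $uri/ =404;
    }
}
&lt;/code&gt;&lt;/pre&gt;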
&lt;h3&gt;Load generator&lt;/h3&gt;
&lt;p&gt;The tests were run from my workstation on the same LAN:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;client host: a mini PC connected at 2.5 Gbit/s  &lt;/li&gt;
&lt;li&gt;switch: 2.5 Gbit/s  &lt;/li&gt;
&lt;li&gt;test tool: &lt;code&gt;wrk&lt;/code&gt;  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For each target host I ran:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;wrk -t4 -c50 -d10s http://IP&lt;/code&gt;  &lt;/li&gt;
&lt;li&gt;&lt;code&gt;wrk -t4 -c10 -d10s http://IP&lt;/code&gt;  &lt;/li&gt;
&lt;li&gt;&lt;code&gt;wrk -t4 -c50 -d10s https://IP&lt;/code&gt;  &lt;/li&gt;
&lt;li&gt;&lt;code&gt;wrk -t4 -c10 -d10s https://IP&lt;/code&gt;  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each scenario was executed multiple times to reduce noise; the numbers below are medians (or very close to them) from the runs.&lt;/p&gt;
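&lt;p&gt;Reducing several &lt;code&gt;wrk&lt;/code&gt; runs to a single number is trivial to script. This is a sketch of the approach, not the exact script I used; the &lt;code&gt;awk&lt;/code&gt; parsing assumes the standard &lt;code&gt;Requests/sec:&lt;/code&gt; line that wrk prints:  &lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# median: print the median of the numbers on stdin
median() {
  sort -n | awk '{ v[NR] = $1 } END { print v[int((NR + 1) / 2)] }'
}

# each scenario boiled down to something like this:
#   for i in 1 2 3 4 5; do
#     wrk -t4 -c50 -d10s http://IP | awk '/Requests\/sec:/ { print $2 }'
#     sleep 5   # let TIME_WAIT sockets drain between runs
#   done | median

# sanity check with fake numbers:
printf '63.7\n63.9\n63.5\n64.1\n63.8\n' | median   # prints 63.8
&lt;/code&gt;&lt;/pre&gt;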
&lt;h2&gt;The contenders&lt;/h2&gt;
&lt;p&gt;To keep things readable, I will refer to each setup as follows:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;SmartOS Debian LX&lt;/strong&gt; → SmartOS host, Debian 12 LX zone  &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SmartOS Alpine LX&lt;/strong&gt; → SmartOS host, Alpine 3.22 LX zone  &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SmartOS Native&lt;/strong&gt; → SmartOS host, native zone  &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;FreeBSD Jail&lt;/strong&gt; → FreeBSD 14.3-RELEASE, nginx in a jail  &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;OpenBSD Host&lt;/strong&gt; → OpenBSD 7.8, nginx on the host  &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;NetBSD Host&lt;/strong&gt; → NetBSD 10.1, nginx on the host  &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Debian Host&lt;/strong&gt; → Debian 13.2, nginx on the host  &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Alpine Host&lt;/strong&gt; → Alpine 3.22, nginx on the host  &lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Docker Container&lt;/strong&gt; → Alpine host, Debian 13 Docker container&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Everything uses the same nginx configuration file and the same static site.  &lt;/p&gt;
&lt;h2&gt;Static HTTP results&lt;/h2&gt;
&lt;p&gt;Let us start with plain HTTP, since this removes TLS from the picture and focuses on the kernel, network stack and nginx itself.  &lt;/p&gt;
&lt;h3&gt;HTTP, 4 threads, 50 concurrent connections&lt;/h3&gt;
&lt;p&gt;Approximate median &lt;code&gt;wrk&lt;/code&gt; results:  &lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Environment&lt;/th&gt;
&lt;th&gt;HTTP 50 connections&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SmartOS Debian LX&lt;/td&gt;
&lt;td&gt;~46.2 k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SmartOS Alpine LX&lt;/td&gt;
&lt;td&gt;~49.2 k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SmartOS Native&lt;/td&gt;
&lt;td&gt;~63.7 k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;FreeBSD Jail&lt;/td&gt;
&lt;td&gt;~63.9 k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OpenBSD Host&lt;/td&gt;
&lt;td&gt;~64.1 k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NetBSD Host&lt;/td&gt;
&lt;td&gt;~64.0 k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Debian Host&lt;/td&gt;
&lt;td&gt;~63.8 k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Alpine Host&lt;/td&gt;
&lt;td&gt;~63.9 k&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Docker Container&lt;/td&gt;
&lt;td&gt;~63.7 k&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Two things stand out:  &lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;All the native, jail and container setups, apart from the LX zones, cluster around 63 to 64k requests per second.  &lt;/li&gt;
&lt;li&gt;The two SmartOS LX zones sit slightly lower, in the 46 to 49k range, which is still very respectable for this hardware.  &lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;In other words, as long as you are on the host or in something very close to it (FreeBSD jail, SmartOS native zone, NetBSD, OpenBSD, Linux on bare metal), static HTTP on nginx will happily max out around 64k requests per second with this small Intel N150 CPU.  &lt;/p&gt;
&lt;p&gt;The Debian and Alpine LX zones on SmartOS are a bit slower, but not dramatically so. They still deliver close to 50k requests per second and, in a real world scenario, you would probably saturate the network or the client long before hitting those numbers.  &lt;/p&gt;
&lt;h3&gt;HTTP, 4 threads, 10 concurrent connections&lt;/h3&gt;
&lt;p&gt;With fewer concurrent connections, absolute throughput drops, but the relative picture is similar:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;SmartOS Native around 44k  &lt;/li&gt;
&lt;li&gt;NetBSD and Alpine Host around 34 to 35k  &lt;/li&gt;
&lt;li&gt;FreeBSD, Debian, OpenBSD around 31 to 33k  &lt;/li&gt;
&lt;li&gt;The Docker Container sits slightly lower at ~30.2k req/s, showing a small overhead from the networking layer  &lt;/li&gt;
&lt;li&gt;The SmartOS LX zones sit slightly below the native zone, around 35 to 37k req/s  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The important conclusion is simple:  &lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;For plain HTTP static hosting, once nginx is installed and correctly configured, the choice between these operating systems makes very little difference on this hardware. Zones and jails add negligible overhead, LX zones add a small one.  &lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;If you are only serving static content over HTTP, your choice of OS should be driven by other factors: ecosystem, tooling, update strategy, your own expertise and preference.  &lt;/p&gt;
&lt;h2&gt;Static HTTPS results&lt;/h2&gt;
&lt;p&gt;TLS is where things start to diverge more clearly and where CPU utilization becomes interesting.  &lt;/p&gt;
&lt;h3&gt;HTTPS, 4 threads, 50 concurrent connections&lt;/h3&gt;
&lt;p&gt;Approximate medians:  &lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Environment&lt;/th&gt;
&lt;th&gt;HTTPS 50 connections&lt;/th&gt;
&lt;th&gt;CPU notes at 50 HTTPS connections&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;SmartOS Debian LX&lt;/td&gt;
&lt;td&gt;~51.4 k&lt;/td&gt;
&lt;td&gt;CPU saturated&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SmartOS Alpine LX&lt;/td&gt;
&lt;td&gt;~40.4 k&lt;/td&gt;
&lt;td&gt;CPU saturated&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SmartOS Native&lt;/td&gt;
&lt;td&gt;~52.8 k&lt;/td&gt;
&lt;td&gt;CPU saturated&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;FreeBSD Jail&lt;/td&gt;
&lt;td&gt;~62.9 k&lt;/td&gt;
&lt;td&gt;around 60% CPU idle&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;OpenBSD Host&lt;/td&gt;
&lt;td&gt;~39.7 k&lt;/td&gt;
&lt;td&gt;CPU saturated&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NetBSD Host&lt;/td&gt;
&lt;td&gt;~40.4 k&lt;/td&gt;
&lt;td&gt;CPU saturated&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Debian Host&lt;/td&gt;
&lt;td&gt;~62.8 k&lt;/td&gt;
&lt;td&gt;about 20% CPU idle&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Alpine Host&lt;/td&gt;
&lt;td&gt;~62.4 k&lt;/td&gt;
&lt;td&gt;small idle headroom, around 7% idle&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Docker Container&lt;/td&gt;
&lt;td&gt;~62.7 k&lt;/td&gt;
&lt;td&gt;CPU saturated&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;These numbers tell a more nuanced story.  &lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;FreeBSD, Debian and Alpine on bare metal form a “fast TLS” group.&lt;/strong&gt;&lt;br /&gt;
All three sit around 62 to 63k requests per second with 50 concurrent HTTPS connections.  &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;FreeBSD does this while using significantly less CPU.&lt;/strong&gt;&lt;br /&gt;
During the HTTPS tests with 50 connections, the FreeBSD host still had around 60% CPU idle. It is the platform that handled TLS load most comfortably in terms of CPU headroom.  &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Debian and Alpine are close in throughput, but push the CPU harder.&lt;/strong&gt;&lt;br /&gt;
Debian still had some idle time left, Alpine even less. In practice, all three are excellent here, but FreeBSD gives you more room before you hit the wall.  &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;SmartOS, NetBSD and OpenBSD form a “good but heavier” TLS group.&lt;/strong&gt;&lt;br /&gt;
Their HTTPS throughput is in the 40 to 52k req/s range and they reach full CPU usage at 50 concurrent connections. OpenBSD and NetBSD stabilize around 39 to 40k req/s. SmartOS native and the Debian LX zone manage slightly better (around 51 to 53k) but still with the CPU pegged.  &lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
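&lt;p&gt;The CPU notes in the table come from simply watching each host during the runs from a second SSH session, with the usual tools (exact column layout, and availability on minimal installs, varies per OS):  &lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# one second samples; the "id" column is the percentage of idle CPU
vmstat 1

# interactive alternative
top
&lt;/code&gt;&lt;/pre&gt;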
&lt;h3&gt;HTTPS, 4 threads, 10 concurrent connections&lt;/h3&gt;
&lt;p&gt;With lower concurrency:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;FreeBSD, Debian and Alpine still sit in roughly the 29 to 31k req/s range  &lt;/li&gt;
&lt;li&gt;SmartOS Native and LX zones are in the mid to high 30k range  &lt;/li&gt;
&lt;li&gt;The Docker Container drops slightly to ~27.8k req/s  &lt;/li&gt;
&lt;li&gt;NetBSD and OpenBSD sit around 26 to 27k req/s  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The relative pattern is the same: for this TLS workload, FreeBSD and modern Linux distributions on bare metal appear to make better use of the cryptographic capabilities of the CPU, delivering higher throughput or more headroom or both.  &lt;/p&gt;
&lt;h2&gt;What TLS seems to highlight&lt;/h2&gt;
&lt;p&gt;The HTTPS tests point to something that is not about nginx itself, but about the TLS stack and how well it can exploit the hardware.  &lt;/p&gt;
&lt;p&gt;On this Intel N150, my feeling is:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;FreeBSD, with the userland and crypto stack I am running, is very efficient at TLS here. It delivers the highest throughput while keeping plenty of CPU in reserve.  &lt;/li&gt;
&lt;li&gt;Debian and Alpine, with their recent kernels and libraries, are also strong performers, close to FreeBSD in throughput, but with less idle CPU.  &lt;/li&gt;
&lt;li&gt;NetBSD, OpenBSD and SmartOS (native and LX) are still perfectly capable of serving a lot of HTTPS traffic, but they have to work harder to keep up and they hit 100% CPU much earlier.  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This matches what I see in day to day operations: TLS performance is often less about “nginx vs something else” and more about the combination of:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the TLS library version and configuration  &lt;/li&gt;
&lt;li&gt;how well the OS uses the CPU crypto instructions  &lt;/li&gt;
&lt;li&gt;kernel level details in the network and crypto paths  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I suspect the differences here are mostly due to how each system combines its TLS stack (OpenSSL, LibreSSL and friends), its kernel and its hardware acceleration support. It would take a deeper dive into profiling and configuration knobs to attribute the gaps precisely.  &lt;/p&gt;
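&lt;p&gt;A quick way to probe that hypothesis, without involving nginx at all, would be to benchmark the TLS library directly on each system. Flags vary a bit between OpenSSL and LibreSSL versions, so treat this as a starting point rather than a recipe:  &lt;/p&gt;
&lt;pre class="highlight"&gt;&lt;code class="language-sh"&gt;# bulk cipher throughput (uses AES-NI when the stack supports it)
openssl speed -evp aes-256-gcm

# the main alternative AEAD cipher in modern TLS
openssl speed -evp chacha20-poly1305

# handshake cost for the RSA 4096 key used in these tests
openssl speed rsa4096
&lt;/code&gt;&lt;/pre&gt;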
&lt;p&gt;In any case, on this specific mini PC, if I had to pick a platform to handle a large amount of HTTPS static traffic, FreeBSD, Debian and Alpine would be my first candidates, in that order.  &lt;/p&gt;
&lt;h2&gt;Zones, jails, containers and Docker: overhead in practice&lt;/h2&gt;
&lt;p&gt;Another interesting part of the story is the overhead introduced by different isolation technologies.  &lt;/p&gt;
&lt;p&gt;From these tests and the &lt;a href="https://it-notes.dragas.net/2025/09/19/freebsd-vs-smartos-whos-faster-for-jails-zones-bhyve/"&gt;previous virtualization article on the same N150 machine&lt;/a&gt;, the picture is consistent:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;FreeBSD jails behave almost like bare metal and are significantly more efficient than Docker.&lt;/strong&gt;&lt;br /&gt;
For both HTTP and HTTPS, running nginx in a jail on FreeBSD 14.3-RELEASE produces numbers practically identical to native hosts.&lt;br /&gt;
The contrast with Docker is striking: while the Docker container needed 100% CPU to reach peak HTTP and HTTPS throughput, &lt;strong&gt;the FreeBSD jail delivered the same speed with ~60% of the CPU sitting idle&lt;/strong&gt;. In terms of CPU cost per request, jails are drastically cheaper.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;SmartOS native zones are also very close to the metal.&lt;/strong&gt;&lt;br /&gt;
Static HTTP performance reaches the same 64k req/s region and HTTPS is only slightly behind the "fast TLS" group, although with higher CPU usage.  &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;SmartOS LX zones introduce a noticeable but modest overhead.&lt;/strong&gt;&lt;br /&gt;
Both Debian and Alpine LX zones on SmartOS perform slightly worse than the native zone or FreeBSD jails. For static HTTP they are still very fast. For HTTPS the Debian LX zone remains competitive but costs more CPU, while the Alpine LX zone is slower.  &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Docker on Linux performs efficiently but eats the margins.&lt;/strong&gt;
I ran an additional test using a Debian 13 Docker container running on the Alpine Linux host.
At peak load (50 connections), the throughput was impressive and virtually identical to bare metal: ~63.7k req/s for HTTP and ~62.7k req/s for HTTPS.
However, there is a clear cost. First, while the bare metal host maintained a small CPU buffer (~7% idle) during the HTTPS test, Docker &lt;strong&gt;saturated the CPU to 100%&lt;/strong&gt;.
Second, at lower concurrency (10 connections), the overhead became visible. The Docker container scored ~30.2k req/s for HTTP and ~27.8k req/s for HTTPS, slightly trailing the ~31-34k and ~29-31k range of the bare metal counterparts. The abstraction layers (NAT, bridging, namespaces) are extremely efficient, but they are not completely free.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This leads to a clear conclusion on efficiency: &lt;strong&gt;FreeBSD Jails provide the highest throughput with the lowest CPU cost.&lt;/strong&gt; LX zones and Docker containers can match the speed (or come close), but they burn significantly more CPU cycles to do so.&lt;/p&gt;
&lt;h2&gt;What this means for real workloads&lt;/h2&gt;
&lt;p&gt;It is easy to get lost in tables and percentages, so let us go back to the initial question.  &lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;A client wants static hosting.&lt;br /&gt;
Does the choice between FreeBSD, SmartOS, NetBSD or Linux matter in terms of performance?  &lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;For &lt;strong&gt;plain HTTP&lt;/strong&gt; on this hardware, with nginx and the same configuration:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Not really.&lt;br /&gt;
All the native hosts and FreeBSD jails deliver roughly the same maximum throughput, in the 63 to 64k req/s range. SmartOS LX zones are slightly slower but still strong.  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For &lt;strong&gt;HTTPS&lt;/strong&gt;:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Yes, it starts to matter a bit more.  &lt;/li&gt;
&lt;li&gt;FreeBSD stands out for how relaxed the CPU is under high TLS load.  &lt;/li&gt;
&lt;li&gt;Debian and Alpine are very close in throughput, with more CPU used but still with some headroom.  &lt;/li&gt;
&lt;li&gt;SmartOS, NetBSD and OpenBSD can still push a lot of HTTPS traffic, but they reach 100% CPU earlier and stabilize at lower request rates.  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Does this mean you should always choose FreeBSD or Debian or Alpine for static HTTPS hosting?  &lt;/p&gt;
&lt;p&gt;Not necessarily.  &lt;/p&gt;
&lt;p&gt;In real deployments, the bottleneck is rarely the TLS performance of a single node serving a small static site. Network throughput, storage, logging, reverse proxies, CDNs and application layers all play a role.  &lt;/p&gt;
&lt;p&gt;However, knowing that FreeBSD and current Linux distributions can squeeze more out of a small CPU under TLS is useful when you are:  &lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;sizing hardware for small VPS nodes that must serve many HTTPS requests  &lt;/li&gt;
&lt;li&gt;planning to consolidate multiple services on a low power box  &lt;/li&gt;
&lt;li&gt;deciding whether you can afford to keep some CPU aside for other tasks (cache, background jobs, monitoring, and so on)  &lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As always, the right answer depends on the complete picture: your skills, your tooling, your backups, your monitoring, the rest of your stack, and your tolerance for troubleshooting when things go sideways.  &lt;/p&gt;
&lt;h2&gt;Final thoughts&lt;/h2&gt;
&lt;p&gt;From these small tests, my main takeaways are:  &lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Static HTTP is basically solved on all these platforms.&lt;/strong&gt;&lt;br /&gt;
On a modest Intel N150, every system tested can push around 64k static HTTP requests per second with nginx set to almost default settings. For many use cases, that is already more than enough.  &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;TLS performance is where the OS and crypto stack start to matter.&lt;/strong&gt;&lt;br /&gt;
FreeBSD, Debian and Alpine squeeze more HTTPS requests out of the N150, and FreeBSD in particular does it with a surprising amount of idle CPU left. NetBSD, OpenBSD and SmartOS need more CPU to reach similar speeds and stabilize at lower throughput once the CPU is saturated.  &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Jails and native zones are essentially free, LX zones cost a bit more.&lt;/strong&gt;&lt;br /&gt;
FreeBSD jails and SmartOS native zones show very little overhead for this workload. SmartOS LX zones are still perfectly usable, but if you are chasing every last request per second you will see the cost of the translation layer.  &lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Benchmarks are only part of the story.&lt;/strong&gt;&lt;br /&gt;
If your team knows OpenBSD inside out and has tooling, scripts and workflows built around it, you might happily accept using more CPU on TLS in exchange for security features, simplicity and familiarity. The same goes for NetBSD or SmartOS in environments where their specific strengths shine.  &lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;I will not choose an operating system for a client just because a benchmark looks nicer. These numbers are one of the many inputs I consider. What matters most is always the combination of reliability, security, maintainability and the human beings who will have to operate the system at three in the morning when something goes wrong.  &lt;/p&gt;
&lt;p&gt;Still, it is nice to know that if you put a tiny Intel N150 in front of a static site and you pick FreeBSD or a modern Linux distribution for HTTPS, you are giving that little CPU a fair chance to shine.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Wed, 19 Nov 2025 09:16:00 +0100</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2025/11/19/static-web-hosting-intel-n150-freebsd-smartos-netbsd-openbsd-linux/</guid><category>freebsd</category><category>smartos</category><category>illumos</category><category>linux</category><category>netbsd</category><category>openbsd</category><category>jail</category><category>zones</category><category>docker</category><category>hosting</category><category>server</category><category>sysadmin</category><category>ownyourdata</category></item><item><title>FreeBSD vs. SmartOS: Who's Faster for Jails, Zones, and bhyve VMs?</title><link>https://it-notes.dragas.net/2025/09/19/freebsd-vs-smartos-whos-faster-for-jails-zones-bhyve/</link><description>&lt;p&gt;&lt;img src="https://it-notes.dragas.net/featured/server_rack.webp" alt="A server rack with some servers and cables"&gt;&lt;/p&gt;&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt;&lt;br /&gt;
These benchmarks were performed on my specific hardware and tuned for the workloads I expect to run.&lt;br /&gt;
They should not be taken as absolute or universally applicable results.&lt;br /&gt;
Different CPUs, storage, networking setups, or workload profiles could produce very different outcomes.&lt;br /&gt;
What I’m sharing here is a faithful snapshot of &lt;em&gt;my&lt;/em&gt; test environment and use case - a guidepost, not a final verdict.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Years ago, I installed a PCEngines APU at a client's site. It dutifully ran Proxmox with a few small VMs inside. It wasn't a speed demon, but it got the job done. Tasked with running in a closed, uncooled, and unsupervised server closet, it soldiered on for about seven years.&lt;/p&gt;
&lt;p&gt;Then, while I was at BSDCan, I got the call. A series of power outages and surges had finally taken their toll, and the APU was dead. It was probably just the power supply, but given its age, we decided it was time for a replacement. I set up a remote bypass to keep them running, but I knew I'd need to install something more powerful soon.&lt;/p&gt;
&lt;p&gt;I ordered a modern mini PC based on the low-power Intel Processor N150 platform, but with 16GB of RAM and more than enough performance to serve as a decent workstation. I have a similar one in my office running openSUSE Tumbleweed, and it works beautifully.&lt;/p&gt;
&lt;p&gt;This time, however, I decided to replace Proxmox with a different virtualization system. This decision wasn't made in a vacuum. In the past, I've put bhyve head-to-head with Proxmox, and my findings were clear: &lt;strong&gt;bhyve on FreeBSD is an extremely efficient hypervisor, &lt;a href="https://it-notes.dragas.net/2024/06/10/proxmox-vs-freebsd-which-virtualization-host-performs-better/"&gt;often outperforming KVM on Proxmox in my tests&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;This positive experience is what made FreeBSD with bhyve a top contender. The other path was a KVM-style approach (which would require fewer changes to the VMs), where my options would be NetBSD or an &lt;a href="https://www.illumos.org/"&gt;illumos-based&lt;/a&gt; OS like &lt;a href="https://www.tritondatacenter.com/smartos"&gt;SmartOS&lt;/a&gt;. Since I had the new hardware on hand, I decided to run some tests to see how these different technologies stacked up against each other, and against the bare metal itself.&lt;/p&gt;
&lt;h3&gt;The Lineup: What I Put on the Test Bench&lt;/h3&gt;
&lt;p&gt;My goal was to test every reasonable option on this Intel N150 hardware. The final lineup covered the entire spectrum:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;The Baseline:&lt;/strong&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;FreeBSD 14.3-RELEASE Bare Metal:&lt;/strong&gt; The ground truth for performance on this hardware.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;OS-Level Virtualization (Containers):&lt;/strong&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;SmartOS Native Zone:&lt;/strong&gt; The baseline native container on SmartOS.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SmartOS LX Zone:&lt;/strong&gt; Running &lt;strong&gt;Ubuntu 24.04&lt;/strong&gt; and &lt;strong&gt;Alpine Linux&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;FreeBSD Native Jail:&lt;/strong&gt; The baseline native container on FreeBSD.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;FreeBSD Jail with Linux:&lt;/strong&gt; A jail running an &lt;strong&gt;Ubuntu 22.04&lt;/strong&gt; userland.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Full Hardware Virtualization (HVM):&lt;/strong&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;SmartOS bhyve Zone:&lt;/strong&gt; A FreeBSD guest inside the bhyve hypervisor on a SmartOS host.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SmartOS KVM Zone:&lt;/strong&gt; A FreeBSD guest inside the KVM hypervisor on a SmartOS host.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;FreeBSD bhyve VM:&lt;/strong&gt; A FreeBSD guest inside the bhyve hypervisor on a FreeBSD host.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
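&lt;p&gt;For reference, this is roughly how one of the HVM guests was defined on the SmartOS side. The manifest below is an illustrative sketch rather than my exact configuration: the alias and sizes are made up, and the image UUID placeholder would be replaced with the UUID of a FreeBSD image previously imported with &lt;code&gt;imgadm&lt;/code&gt;.&lt;/p&gt;

```json
{
  "alias": "fbsd-bhyve-test",
  "brand": "bhyve",
  "vcpus": 2,
  "ram": 2048,
  "bootrom": "uefi",
  "disks": [
    {
      "image_uuid": "PLACEHOLDER-FREEBSD-IMAGE-UUID",
      "boot": true,
      "model": "virtio"
    }
  ],
  "nics": [
    {
      "nic_tag": "admin",
      "ips": [ "dhcp" ],
      "model": "virtio"
    }
  ]
}
```

&lt;p&gt;Saved to a file, it is instantiated with &lt;code&gt;vmadm create -f manifest.json&lt;/code&gt;. The &lt;code&gt;kvm&lt;/code&gt; and &lt;code&gt;lx&lt;/code&gt; brands use the same manifest mechanism with their own brand-specific properties.&lt;/p&gt;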
&lt;h3&gt;The Benchmark: My &lt;code&gt;sysbench&lt;/code&gt; Commands&lt;/h3&gt;
&lt;p&gt;To keep the comparison fair and simple, I used two core &lt;code&gt;sysbench&lt;/code&gt; commands. To ensure consistency, I even compiled &lt;code&gt;sysbench&lt;/code&gt; from scratch on the SmartOS native zone to match the versions and compile options on the other systems as closely as possible.&lt;/p&gt;
&lt;p&gt;The commands I used in each environment were:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;For CPU performance:&lt;/strong&gt; &lt;code&gt;sysbench --test=cpu --cpu-max-prime=20000 run&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;For memory performance:&lt;/strong&gt; &lt;code&gt;sysbench --test=memory run&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
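&lt;p&gt;A note on syntax: &lt;code&gt;--test=&lt;/code&gt; is the legacy spelling; sysbench 1.0 and later prefer the positional form (&lt;code&gt;sysbench cpu --cpu-max-prime=20000 run&lt;/code&gt; and &lt;code&gt;sysbench memory run&lt;/code&gt;), though both run the same workload. To tabulate runs from several machines, it is handy to extract the key figure from the report text; a minimal sketch, using an illustrative sample rather than a real run:&lt;/p&gt;

```shell
# Extract the "events per second" figure from a saved sysbench CPU report.
# 'sample' stands in for the output of: sysbench cpu --cpu-max-prime=20000 run
sample='CPU speed:
    events per second:  1107.04

General statistics:
    total time:       10.0002s'
echo "$sample" | awk '/events per second/ { print $4 }'
```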
&lt;h3&gt;First Look: CPU and Memory on the Intel N150&lt;/h3&gt;
&lt;p&gt;My initial tests on the Intel N150 hardware immediately revealed some interesting trends. The &lt;code&gt;sysbench&lt;/code&gt; CPU results from any native FreeBSD environment (bare metal or jail) were on a completely different scale from the Linux and SmartOS guests, making a direct comparison meaningless.&lt;/p&gt;
&lt;p&gt;However, by excluding the incompatible FreeBSD-native results, we get a very clear picture of the overhead between the various container technologies.&lt;/p&gt;
&lt;h4&gt;Valid CPU Performance Comparison (Single Thread, Intel N150)&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left;"&gt;Host OS&lt;/th&gt;
&lt;th style="text-align: left;"&gt;Container Tech&lt;/th&gt;
&lt;th style="text-align: left;"&gt;Guest OS&lt;/th&gt;
&lt;th style="text-align: left;"&gt;CPU Performance (Events/sec)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;FreeBSD&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Jail (OS-level)&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Ubuntu 22.04&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;1108.18&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;SmartOS&lt;/td&gt;
&lt;td style="text-align: left;"&gt;LX Zone (OS-level)&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Ubuntu 24.04&lt;/td&gt;
&lt;td style="text-align: left;"&gt;1107.13&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;SmartOS&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Native Zone (OS-level)&lt;/td&gt;
&lt;td style="text-align: left;"&gt;SmartOS&lt;/td&gt;
&lt;td style="text-align: left;"&gt;1107.04&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;SmartOS&lt;/td&gt;
&lt;td style="text-align: left;"&gt;LX Zone (OS-level)&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Alpine Linux&lt;/td&gt;
&lt;td style="text-align: left;"&gt;1022.81&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The takeaway here was clear: the CPU overhead of these containers is basically a rounding error. For CPU-bound tasks, neither SmartOS Zones nor FreeBSD Jails will be a bottleneck.&lt;/p&gt;
&lt;p&gt;The memory results, which were consistent across all platforms, were far more revealing.&lt;/p&gt;
&lt;h4&gt;Overall Memory Performance Comparison (Intel Processor N150)&lt;/h4&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left;"&gt;Host OS&lt;/th&gt;
&lt;th style="text-align: left;"&gt;Virtualization Tech&lt;/th&gt;
&lt;th style="text-align: left;"&gt;Guest OS&lt;/th&gt;
&lt;th style="text-align: left;"&gt;Memory Performance (Transfer Rate)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;SmartOS&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;LX Zone (OS-level)&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;Ubuntu 24.04&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;4970.54 MiB/sec&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;SmartOS&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Native Zone (OS-level)&lt;/td&gt;
&lt;td style="text-align: left;"&gt;SmartOS (Native)&lt;/td&gt;
&lt;td style="text-align: left;"&gt;4549.97 MiB/sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;FreeBSD&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Jail (OS-level)&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Ubuntu 22.04&lt;/td&gt;
&lt;td style="text-align: left;"&gt;4348.32 MiB/sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;FreeBSD&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;Bare Metal&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;FreeBSD (Native)&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;4005.08 MiB/sec&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;FreeBSD&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Native Jail (OS-level)&lt;/td&gt;
&lt;td style="text-align: left;"&gt;FreeBSD (Native)&lt;/td&gt;
&lt;td style="text-align: left;"&gt;3990.13 MiB/sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;SmartOS&lt;/td&gt;
&lt;td style="text-align: left;"&gt;LX Zone (OS-level)&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Alpine Linux&lt;/td&gt;
&lt;td style="text-align: left;"&gt;3803.72 MiB/sec&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;FreeBSD&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;bhyve VM (Full HVM)&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;FreeBSD&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;3636.01 MiB/sec&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;SmartOS&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;bhyve Zone (Full HVM)&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;FreeBSD&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;3020.15 MiB/sec&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;SmartOS&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;KVM Zone (Full HVM)&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;FreeBSD&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;205.18 MiB/sec&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;These initial numbers led to a few conclusions: an OS-level virtualization layer can actually boost throughput, the userland matters, and bhyve clearly outclassed the legacy KVM on SmartOS. However, one result was nagging at me: the performance gap between FreeBSD bare metal (&lt;code&gt;4005.08 MiB/sec&lt;/code&gt;) and a native bhyve VM (&lt;code&gt;3636.01 MiB/sec&lt;/code&gt;) was about 9%. This was a larger drop than I expected. It prompted a new question: was this overhead inherent to bhyve, or was it a quirk of the new N150 hardware?&lt;/p&gt;
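&lt;p&gt;For transparency, the 9% figure is just the relative difference between the two transfer rates in the table above; the arithmetic can be reproduced in one line:&lt;/p&gt;

```shell
# Memory overhead of the FreeBSD bhyve VM vs. bare metal on the N150,
# using the transfer rates reported in the table above.
bare=4005.08
vm=3636.01
awk -v b="$bare" -v v="$vm" 'BEGIN { printf "%.1f%%\n", (b - v) * 100 / b }'
# prints 9.2%
```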
&lt;h3&gt;Going deeper: Testing on an Intel i7-7500U&lt;/h3&gt;
&lt;p&gt;To see if more mature, better-supported hardware would tell a different story, I replicated the FreeBSD tests on an older Qotom Mini-PC powered by an Intel i7-7500U. The results were illuminating and dramatically changed the narrative.&lt;/p&gt;
&lt;h4&gt;CPU Performance Comparison (Intel i7-7500U)&lt;/h4&gt;
&lt;p&gt;Once again, the CPU tests produced strange results. The native FreeBSD environments all reported incredibly high numbers in the millions of events/sec, while the Ubuntu Linuxulator jail's result was on a completely different, incompatible scale. Frankly, given the massive discrepancy between FreeBSD-native and Linux-based environments, I'm unsure that the &lt;code&gt;sysbench&lt;/code&gt; CPU figures can be considered totally reliable in absolute terms.&lt;/p&gt;
&lt;p&gt;However, what &lt;em&gt;is&lt;/em&gt; useful is comparing the native FreeBSD results against each other. This tells us about relative overhead.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left;"&gt;Platform&lt;/th&gt;
&lt;th style="text-align: left;"&gt;CPU Performance (Events/sec)&lt;/th&gt;
&lt;th style="text-align: left;"&gt;Overhead vs. Bare Metal&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;FreeBSD Bare Metal&lt;/td&gt;
&lt;td style="text-align: left;"&gt;6,377,778&lt;/td&gt;
&lt;td style="text-align: left;"&gt;Baseline&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;FreeBSD Native Jail&lt;/td&gt;
&lt;td style="text-align: left;"&gt;6,379,271&lt;/td&gt;
&lt;td style="text-align: left;"&gt;~0.0%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;FreeBSD bhyve VM&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;6,346,852&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;-0.48%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Even if we're skeptical of the absolute numbers, the relative comparison is crystal clear: the CPU overhead of bhyve is &lt;strong&gt;less than half a percent&lt;/strong&gt;. This is the key takeaway.&lt;/p&gt;
&lt;h4&gt;Memory Performance Comparison (Intel i7-7500U)&lt;/h4&gt;
&lt;p&gt;The memory benchmarks, in contrast, were consistent and highly informative.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left;"&gt;Platform&lt;/th&gt;
&lt;th style="text-align: left;"&gt;Memory Performance (Transfer Rate)&lt;/th&gt;
&lt;th style="text-align: left;"&gt;Overhead vs. Bare Metal&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;Ubuntu 22.04 Jail&lt;/td&gt;
&lt;td style="text-align: left;"&gt;4856.23 MiB/sec&lt;/td&gt;
&lt;td style="text-align: left;"&gt;+7.55%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;FreeBSD Native Jail&lt;/td&gt;
&lt;td style="text-align: left;"&gt;4517.73 MiB/sec&lt;/td&gt;
&lt;td style="text-align: left;"&gt;+0.05%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;FreeBSD Bare Metal&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;4515.24 MiB/sec&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;Baseline&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;FreeBSD bhyve VM&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;4491.60 MiB/sec&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left;"&gt;&lt;strong&gt;-0.52%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;This is where the real story is. The memory performance of a bhyve VM was a mere &lt;strong&gt;0.52%&lt;/strong&gt; slower than bare metal. This is the kind of near-native performance one hopes for from a top-tier hypervisor and stands in stark contrast to the 9% drop seen on the newer N150.&lt;/p&gt;
&lt;h3&gt;Breaking Down the Results: What I Learned From Both Tests&lt;/h3&gt;
&lt;p&gt;This comprehensive two-platform analysis paints a much clearer picture.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1. Hardware &lt;em&gt;Really&lt;/em&gt; Matters&lt;/strong&gt;
Performance is not an absolute. The difference between the two platforms was stark: on the mature i7-7500U, bhyve’s overhead was less than 1%, while on the newer, budget N150, it was a more significant 9%. This suggests the performance dip is likely due to missing optimizations for that specific CPU architecture, rather than a fundamental flaw in bhyve itself.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;2. bhyve's True Potential is Near-Native Speed&lt;/strong&gt;
The i7 tests prove that &lt;strong&gt;bhyve is an exceptionally efficient hypervisor&lt;/strong&gt; on well-supported hardware. The relative CPU overhead was a negligible -0.48%, and more importantly, the reliable memory benchmarks showed a performance drop of just 0.52% compared to bare metal. This is the gold standard for virtualization.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;3. FreeBSD Jails are Feather-Light&lt;/strong&gt;
On both platforms, native FreeBSD jails demonstrated almost zero performance overhead. On the i7, both CPU and memory performance were virtually identical to bare metal (a 0.05% difference). The N150 CPU tests further showed that FreeBSD's container implementation is so efficient that running a Linux userland inside a jail delivered the best CPU scores of the entire lineup.&lt;/p&gt;
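&lt;p&gt;For the curious, a Linux-userland jail follows the pattern documented in the FreeBSD handbook: enable the Linuxulator (&lt;code&gt;sysrc linux_enable=YES&lt;/code&gt;, then &lt;code&gt;service linux start&lt;/code&gt;), unpack an Ubuntu userland (for example with &lt;code&gt;debootstrap&lt;/code&gt;) into the jail root, and mount the Linux compatibility filesystems. The fragment below is a generic sketch, not my exact configuration; the paths and devfs ruleset are illustrative:&lt;/p&gt;

```
# /etc/jail.conf.d/ubuntu.conf -- illustrative sketch of a Linuxulator jail
ubuntu {
    path = "/usr/local/jails/ubuntu";
    host.hostname = "ubuntu-jail";
    exec.start = "/bin/true";
    exec.stop = "/bin/true";
    persist;
    allow.mount;
    allow.mount.devfs;
    devfs_ruleset = 4;
    mount += "devfs     $path/dev     devfs     rw 0 0";
    mount += "fdescfs   $path/dev/fd  fdescfs   rw,linrdlnk 0 0";
    mount += "linprocfs $path/proc    linprocfs rw 0 0";
    mount += "linsysfs  $path/sys    linsysfs  rw 0 0";
    mount += "tmpfs     $path/dev/shm tmpfs     rw,size=1g,mode=1777 0 0";
}
```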
&lt;p&gt;&lt;strong&gt;4. SmartOS Zones Are Also Extremely Efficient&lt;/strong&gt;
Just like Jails, SmartOS's native Zones proved to be remarkably lightweight. The N150 CPU tests confirm this, showing that native and LX zones have virtually identical, top-tier performance. On the memory front, the native Zone delivered performance over 13% &lt;em&gt;faster&lt;/em&gt; than the FreeBSD bare-metal baseline, pointing to the high efficiency of the illumos kernel.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;5. The Linux Userland Excels at Throughput&lt;/strong&gt;
A clear pattern emerged on both testbeds: the Ubuntu userland consistently delivered excellent benchmark results. On the CPU front, Ubuntu on both FreeBSD and SmartOS delivered the highest, and nearly identical, performance scores on the N150. For memory, the story was even more dramatic: the Ubuntu LX Zone on SmartOS was the top performer, beating bare-metal FreeBSD by nearly 25%, while the Ubuntu jail on the i7 also surpassed its host by over 7%.&lt;/p&gt;
&lt;h3&gt;Final Thoughts: The Verdict for My Client's New Server&lt;/h3&gt;
&lt;p&gt;So, what's the bottom line for my client's new MiniPC? This benchmarking journey has made the path forward much clearer.&lt;/p&gt;
&lt;p&gt;At the beginning of this process, my main question was whether to stick with a KVM-based setup or make the switch to bhyve. The performance data answers that decisively. The legacy KVM on SmartOS showed a crippling performance penalty, making it a non-starter. Given that, the extra effort to migrate the existing VMs to a bhyve-compatible format is absolutely worth it. The performance gain is just too significant to ignore.&lt;/p&gt;
&lt;p&gt;The final question, then, is &lt;em&gt;which&lt;/em&gt; host OS to use for bhyve: SmartOS or FreeBSD? This is a much tougher call, as both platforms demonstrated incredible strengths.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;SmartOS&lt;/strong&gt;, powered by the illumos kernel, was a true surprise. It delivered astonishing performance on the target N150 hardware. Its key advantage is the raw speed of its containerization for both CPU and memory tasks. The Ubuntu LX Zone not only ran flawlessly but delivered top-tier CPU scores and outperformed the bare-metal FreeBSD baseline in memory by a massive 25% margin. This points to a highly efficient kernel and offers the tantalizing prospect of running ultra-fast Linux containers alongside performant bhyve VMs on the same host.&lt;/p&gt;
&lt;p&gt;On the other hand, &lt;strong&gt;FreeBSD&lt;/strong&gt; proved its mastery of bhyve virtualization. The tests on the i7 hardware showed its implementation to be the gold standard, offering virtually zero performance overhead for full hardware virtualization. Its native Jails are equally efficient, and its Linux compatibility layer is so effective that an Ubuntu jail delivered the &lt;em&gt;fastest comparable CPU score&lt;/em&gt; of the entire lineup. For workloads that &lt;em&gt;must&lt;/em&gt; live in a full VM, FreeBSD offers the most performant and native bhyve experience, with the reasonable expectation that its support for newer hardware like the N150 will only improve over time.&lt;/p&gt;
&lt;p&gt;Ultimately, the choice comes down to the primary workload. It's a decision between the raw container speed and Linux flexibility of SmartOS versus the pure, uncompromising HVM performance of FreeBSD.&lt;/p&gt;
&lt;p&gt;But one thing is certain: thanks to this deep dive, the path forward is much clearer, and it's paved by bhyve.&lt;/p&gt;</description><dc:creator xmlns:dc="http://purl.org/dc/elements/1.1/">Stefano Marinelli</dc:creator><pubDate>Fri, 19 Sep 2025 10:50:00 +0200</pubDate><guid isPermaLink="false">https://it-notes.dragas.net/2025/09/19/freebsd-vs-smartos-whos-faster-for-jails-zones-bhyve/</guid><category>freebsd</category><category>smartos</category><category>illumos</category><category>linux</category><category>jail</category><category>bhyve</category><category>zones</category><category>data</category><category>hosting</category><category>server</category><category>sysadmin</category><category>virtualization</category><category>ownyourdata</category></item></channel></rss>