A client of mine has several Windows Server VMs, which I had not migrated to FreeBSD/bhyve until a few weeks ago. These VMs were originally installed with the traditional BIOS boot mode, not UEFI, on Proxmox. Fortunately, their virtual disks are on ZFS, which allowed me to test and achieve the final result in just a few steps.
This is because Windows VMs (server or otherwise) installed on KVM (Proxmox, etc.), especially older ones, often use the traditional BIOS boot mode rather than UEFI. bhyve doesn't support BIOS boot, but Windows allows changing the boot mode, so I could perform the migration directly on the target FreeBSD server.
Setting Up the Network
Before starting, as usual in my setups, I manually created the bridge and configured a few pf rules for outbound NAT, which these servers need. Let's begin with the modifications to /etc/rc.conf:
cloned_interfaces="bridge0"
ifconfig_bridge0="inet 192.168.33.1 netmask 255.255.255.0"
gateway_enable="YES"
pf_enable="YES"
Next, the pf.conf:
ext_if="igb0"
set block-policy return
set skip on bridge0
set skip on lo0
# Allow the VMs to connect to the outside world
nat on $ext_if from {192.168.33.0/24} to any -> ($ext_if)
block in all
antispoof for $ext_if inet
pass in inet proto icmp
pass out quick keep state
pass in inet proto tcp from any to any port ssh flags S/SA keep state
To ensure everything is working as expected, I recommend rebooting the server. It's not strictly necessary, since you could reload the network configuration and start pf without a reboot, but I prefer a few extra reboots in these cases. Why? Because system administrators are often afraid to reboot production servers, unsure whether they will come back up smoothly. A configuration error could prevent a successful boot, and that would be a significant problem on a server already in production. I've faced this situation myself, many times! To mitigate it, I reboot frequently, especially after changing network configurations, to make sure everything starts correctly on boot.
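For reference, applying the same changes without a reboot would look roughly like this (a sketch based on the rc.conf and pf.conf above; running it over SSH on the same interface may briefly interrupt the session):

```shell
# Create the cloned bridge and bring it up with the address from rc.conf
service netif cloneup
service netif start bridge0

# Enable IP forwarding for the NAT'd guests
sysctl net.inet.ip.forwarding=1

# Syntax-check the ruleset before loading it, then start pf
pfctl -n -f /etc/pf.conf
service pf start
pfctl -s nat   # verify the outbound NAT rule is loaded
```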
Installing and Configuring vm-bhyve
Next, I installed vm-bhyve on the target FreeBSD host and configured it:
pkg install vm-bhyve-devel bhyve-firmware
zfs create zroot/VMs
sysrc vm_enable="YES"
sysrc vm_dir="zfs:zroot/VMs"
vm init
cp /usr/local/share/examples/vm-bhyve/* /zroot/VMs/.templates/
vm switch create -t manual -b bridge0 public
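At this point, a quick sanity check that vm-bhyve picked everything up can't hurt; the exact output columns vary by version, but something like this should work:

```shell
# Confirm vm-bhyve sees the datastore and the switch
vm datastore list   # zroot/VMs should appear as the "default" datastore
vm switch list      # the "public" switch should be of type "manual" on bridge0
```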
Usually, I enable the serial console in tmux, but in this case I'll skip that since Windows VMs need a graphical console. If the FreeBSD server is not on your local network, I suggest connecting via SSH with port forwarding for the necessary VNC ports (e.g., -L5900:127.0.0.1:5900) to reach bhyve through an SSH tunnel. Never expose the VNC port directly!
Creating the VM Template
Now, I created an "empty" VM using the "Windows" template with vm-bhyve and adjusted the configuration afterward:
vm create -t windows -s 1G -m 16G -c 4 vm115
I created a virtual disk of 1 GB because it will be replaced by the dataset sent from the current production server, so its size is just a placeholder. At that point, I deleted the disk0 image file:
rm /zroot/VMs/vm115/disk0.img
Modifying the VM Configuration
Next, I modified the VM configuration as follows:
- Changed the disk configuration to use a zvol, simplifying the send/receive operation from the original Proxmox host.
- In some cases, older Windows installations may not support the nvme driver: after the conversion, Windows may fail to boot and display a BSOD. If this happens, use ahci-hd instead.
- Changed the network adapter driver to virtio-net (the driver was already installed on the Proxmox VM).
- Set the network adapter MAC address to match the original VM's.
After running vm configure vm115, the configuration should look something like this:
loader="uefi"
graphics="yes"
xhci_mouse="yes"
cpu="4"
memory="16G"
ahci_device_limit="8"
network0_type="virtio-net"
network0_switch="public"
disk0_type="nvme"
disk0_name="disk0"
disk0_dev="sparse-zvol"
utctime="no"
uuid="myUUID"
network0_mac="theProxmoxVirtualMachineMacAddress"
Copying the Virtual Disk
I took a snapshot of the virtual disks of the VMs on the original server and copied them to the target server:
zfs snapshot vmpool/vm-115-disk-0@toSend01
zfs send -Rcv vmpool/vm-115-disk-0@toSend01 | mbuffer -m 128M | ssh user@FreebsdHostIP "zfs receive -F zroot/VMs/vm115/disk0"
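Once the transfer completes, the received zvol should sit exactly where the disk0_name/disk0_dev settings above expect it. A quick check (output shown is indicative):

```shell
# List the VM dataset and everything beneath it
zfs list -t all -r zroot/VMs/vm115
# Expect a volume  zroot/VMs/vm115/disk0
# and a snapshot   zroot/VMs/vm115/disk0@toSend01  that came along with the send
```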
The VM’s disk will be transferred to the target host.
Converting BIOS to UEFI
At this point, bhyve expects a UEFI VM, but the VM we just copied is BIOS-based. In other words, it won’t boot as is.
Luckily, Microsoft provides a tool to convert partitions and change the server from BIOS to UEFI. Of course, I would only use such a tool on a snapshot, and while I could do it directly on the source server, I prefer minimizing downtime and avoiding touching the original server. I want to perform the conversion directly on the FreeBSD host, which also makes disaster recovery easier, allowing me to manage everything on FreeBSD without first restoring the VM on a Proxmox server.
The first step is to download a Windows installation ISO (in my case, Windows Server). I downloaded the ISO from here: Windows Server Evaluation Center.
Once ready, launch the ISO as if you were installing Windows on the VM:
vm install vm115 SERVER_EVAL_x64FRE_en-us.iso
Running the Windows Repair Tool
Now the VM will boot. Connect via VNC (if using VNC tunneling, connect to 127.0.0.1:5900) to proceed. The ISO will prompt you to press a key to boot from the CDROM—go ahead and do that.
Proceed as if you were repairing your installation ("Repair your computer"), then select Troubleshoot -> Command Prompt.
Once at the prompt, run a validation check first (to avoid making changes if the disk has any unusual configurations):
mbr2gpt /validate /disk:0
If the validation is successful, proceed with the actual conversion:
mbr2gpt /convert /disk:0
Windows will convert the partition and indicate success at the end.
Booting the VM
Close the command prompt and shut down the VM. Everything should be ready now—easy peasy.
Boot the VM with:
vm start vm115
Connecting via VNC to the VM will bring up the Windows boot screen. In case of a BSOD, change the nvme driver to ahci-hd and it should boot. If the snapshot was taken while the VM was powered on, Windows may perform a disk check. If everything went as planned, you should see the Windows login screen.
Additionally, Windows will likely require reactivation of the VM, as it will detect hardware changes.
Final Considerations
At this point, you need to decide whether to continue with this VM or resynchronize it with the original (e.g., if the contents have changed). In the first case, no further action is needed. In the second case, the proper procedure would be to power off the original VM, create a new snapshot, roll back the snapshot on the FreeBSD host, transfer it incrementally—a process I’ve described in a previous post—and then redo the conversion.
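Under those assumptions, the resynchronization would look roughly like this (snapshot names are illustrative and follow the ones used earlier):

```shell
# On the Proxmox host, after powering off the original VM,
# take a new snapshot of its disk
zfs snapshot vmpool/vm-115-disk-0@toSend02

# On the FreeBSD host, discard the changes made since the first transfer
# (-r also destroys any snapshots taken after toSend01)
zfs rollback -r zroot/VMs/vm115/disk0@toSend01

# Back on the Proxmox host, send only the delta between the two snapshots
zfs send -cv -i @toSend01 vmpool/vm-115-disk-0@toSend02 | \
    ssh user@FreebsdHostIP "zfs receive zroot/VMs/vm115/disk0"
```

After the incremental receive, the mbr2gpt conversion has to be repeated, since the rollback discards it.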
To configure the VM to boot automatically when the FreeBSD host boots, add the necessary settings to /etc/rc.conf, as described in a previous article.