In the ever-changing landscape of system administration, network configurations often need to grow and evolve to meet new challenges and requirements. This post details my journey from a simple VPS setup to a complex, multi-node network using FreeBSD, jails, VPNs, and advanced routing techniques. Along the way, I'll explore the reasons behind each change and delve into why certain solutions, while functional, may not always be ideal in the long run.
Initial Setup: The Single VPS
My story begins with a single VPS (let's call it VPSSmall) hosted on Hetzner, running FreeBSD. This initial configuration was straightforward and served its purpose well for a time:
- Created an internal bridge (`bridge0`) with IP 192.168.123.1
- Used BastilleBSD to create VNET jails with IPs in the 192.168.123.X range
- Set up port forwarding to the jails using pf's rdr rules
- Utilized a /64 IPv6 block, subdivided into /72 subnets for the bridge and jails
This setup allowed for easy management of multiple services within isolated jails, all sharing the same network namespace.
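As a sketch of that port forwarding, a pf rdr rule along these lines would send inbound web traffic to a jail (the interface name, jail IP, and ports here are assumptions for illustration, not the original config):

```shell
# /etc/pf.conf fragment (sketch; vtnet0 and the jail IP are assumed placeholders)
ext_if = "vtnet0"
web_jail = "192.168.123.10"

# Forward inbound HTTP/HTTPS from the public interface to a jail behind bridge0
rdr pass on $ext_if inet proto tcp to port { 80, 443 } -> $web_jail
```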
```mermaid
graph TD
A[Internet] --> B[VPSSmall]
B --> C[bridge0 192.168.123.1]
C --> D[Jail 1]
C --> E[Jail 2]
C --> F[Jail 3]
```
Growing Pains: Adding a Second VPS
As is often the case in system administration, my needs grew over time. The number of jails increased, and their resource requirements expanded. To address this, I added a second VPS (VPSBig) hosted on a Proxmox server. This VPS isn't directly exposed and relies on NAT to reach the outside world. That introduced a new challenge: how could I keep the flexibility to move jails between VPSs without changing their network configurations?
To solve this, I implemented the following setup:
- Installed ZeroTier on both VPSs in bridge mode
- Created `bridge0` on VPSBig (without an IP)
- Added ZeroTier interfaces to both bridges
This configuration allowed for seamless movement of jails between nodes by simply transferring the ZFS dataset. The jails could retain their IP addresses regardless of which physical VPS they were running on.
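On FreeBSD, wiring ZeroTier into the bridge looks roughly like this (the network ID and the ZeroTier interface name below are placeholders, not the original values):

```shell
# Sketch: join the ZeroTier network and attach its interface to the jail bridge
# (the 16-digit network ID is a placeholder; find the real zt* interface name
# with `ifconfig -l` after joining)
zerotier-cli join 1234567890abcdef
ifconfig bridge0 addm zt1234567890abcdef
```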
```mermaid
graph TD
A[Internet]
subgraph VPSSmall[" "]
B[VPSSmall]
D[bridge0 VPSSmall]
F[Jail 1]
G[Jail 2]
B --> D
D --> F
D --> G
end
subgraph VPSBig[" "]
C[VPSBig]
E[bridge0 VPSBig]
H[Jail 3]
I[Jail 4]
C --> E
E --> H
E --> I
end
A <--> |WAN/Zerotier| B
A <-.-> |NAT/Zerotier| C
D <--> |Via ZeroTier| E
```
While this setup was functional, it had some drawbacks that I'll discuss in the next section.
The Limitations of Bridging
The bridged setup using ZeroTier, while effective, wasn't without its issues. Here's why I found bridging, in this case, wasn't an ideal long-term solution:
- Performance Overhead: Bridging all traffic between VPSs can introduce additional latency and processing overhead, especially when dealing with high-volume traffic.
- Scalability Concerns: As the number of VPSs and jails grows, managing a large bridged network becomes increasingly complex. Each new node added to the network increases the potential for broadcast storms and can lead to unnecessary traffic across the entire network.
- Security Implications: In a bridged network, all nodes essentially exist on the same network segment. This can potentially allow for lateral movement between jails or VPSs if not carefully managed, increasing the attack surface.
- Dependency on ZeroTier: While ZeroTier is a powerful tool, relying on a third-party service for critical infrastructure introduces an external point of failure and potential security considerations.
- Limited Control: Bridging provides less granular control over traffic flow compared to routing. This can make it harder to implement complex network policies or optimize traffic paths.
- Broadcast Domain Size: Large bridged networks can result in expansive broadcast domains, which can lead to increased network congestion and reduced overall performance.
These limitations prompted me to seek a more robust, scalable, and controllable solution, leading to the next evolution of my network setup.
Refining the Setup: Wireguard and VXLAN
To address the limitations of the bridged setup, I implemented a new configuration involving Wireguard and VXLAN:
- Created a Wireguard VPN between VPSSmall and VPSBig
- Implemented a VXLAN over the Wireguard tunnel
- Replaced ZeroTier with the VXLAN for inter-VPS communication
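A minimal sketch of such a VXLAN interface on FreeBSD, using the Wireguard tunnel addresses that appear later in this post (the VXLAN ID is a placeholder):

```shell
# Create a VXLAN interface riding over the Wireguard tunnel and add it to the bridge
# (vxlanid 42 is a placeholder; 10.77.0.1/10.77.0.2 are the tunnel endpoints)
ifconfig vxlan0 create vxlanid 42 vxlanlocal 10.77.0.1 vxlanremote 10.77.0.2
ifconfig bridge0 addm vxlan0
```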
This setup, described in detail here, offered several advantages:
- Improved Security: Wireguard provided a secure, encrypted tunnel between the VPSs.
- Better Performance: Direct Wireguard VPN connection often results in lower latency compared to ZeroTier.
- Greater Control: By managing all the components of the VPN myself, I have more control over the network configuration and over troubleshooting when problems arise.
- Reduced Dependency: Eliminating ZeroTier removed a third-party dependency from my critical infrastructure.
While this setup was a significant improvement, it still relied on bridging via VXLAN, which didn't fully address all the scalability and control issues. This realization led to the final evolution of my network.
The Final Evolution: Routing Instead of Bridging
The last step in my network's evolution was to move from a bridged to a routed setup. This change offered even more flexibility and scalability. The new configuration:
- VPSSmall: Uses 192.168.122.x/24 and a /72 IPv6 subnet
- VPSBig: Uses 192.168.123.x/24 and its original /72 IPv6 subnet
- Future nodes can use new private IPv4 ranges and /72 IPv6 subnets as needed
This routed setup provides several key benefits:
- Improved Scalability: Each VPS or future node can have its own subnet, making it easier to add new nodes without reconfiguring the entire network.
- Better Traffic Control: Routing allows for more granular control over traffic flow, enabling complex network policies and optimizations.
- Enhanced Security: With distinct subnets, it's easier to implement security policies and control inter-subnet communication.
- Reduced Broadcast Domain: Each subnet forms its own broadcast domain, reducing unnecessary network traffic and improving overall performance.
To ensure jails on VPSBig use the Wireguard tunnel for outgoing traffic while VPSBig itself uses its default gateway, I leveraged FreeBSD's FIB feature. This allows for separate routing tables, providing even more flexibility in managing network traffic.
Implementing Multiple FIBs
Edit `/etc/sysctl.conf` to enable multiple FIBs (on systems where `net.fibs` can't be changed at runtime, it has to be set as a boot-time tunable in `/boot/loader.conf` instead):

```
net.fibs=2
```

This gives me two separate routing tables.
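Once the system has two FIBs, setfib(1) selects which table a command sees. A quick way to compare them (the ping target is a placeholder):

```shell
# FIB 0 is the default routing table; FIB 1 is the alternate one
setfib 0 netstat -rn -f inet   # routes the host itself uses
setfib 1 netstat -rn -f inet   # routes seen by processes started with `setfib 1`

# Run a one-off command against the alternate table
setfib 1 ping -c 1 192.168.122.1
```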
VPSBig Wireguard Configuration
```
[Interface]
PrivateKey = *VPSBigPrivateKey*
Address = 10.77.0.2/32,oneOfMyIpv6/128
Table = off
PostUp = route -q -n add -inet 0.0.0.0/0 -interface wg0 -fib 1
PostUp = route -q -n add -inet6 ::/1 -interface wg0 -fib 1
PostUp = route -q -n add -inet6 8000::/1 -interface wg0 -fib 1

[Peer]
PublicKey = *VPSSmallPublicKey*
AllowedIPs = 0.0.0.0/0,::0/0
Endpoint = *endpointip:port*
PresharedKey = *presharedkey*
PersistentKeepalive = 30
```
Let's break down this configuration:
- `Table = off`: disables Wireguard's automatic routing table management, because we want to configure the routing manually.
- The `PostUp` commands are crucial for our manual routing setup:
  - `route -q -n add -inet 0.0.0.0/0 -interface wg0 -fib 1`: adds a default route for IPv4 traffic through the Wireguard interface (`wg0`) in the alternate routing table (FIB 1).
  - The next two commands do the same for IPv6 traffic (`::/1` and `8000::/1` together cover the entire IPv6 address space).
- `AllowedIPs = 0.0.0.0/0,::0/0`: tells Wireguard to route all traffic through this peer. However, because we've set `Table = off`, Wireguard won't actually create these routes; we create them manually with the `PostUp` commands.
- `PersistentKeepalive = 30`: sends a keepalive packet every 30 seconds, which is necessary because VPSBig is behind NAT and needs to keep the NAT session alive.
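To sanity-check the tunnel and the manual routes after `wg0` comes up, something along these lines helps (output will vary with your setup):

```shell
# Show peer and handshake state for the tunnel
wg show wg0

# Confirm the default routes landed in FIB 1, not in the host's FIB 0
setfib 1 netstat -rn | grep wg0
```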
VPSSmall Wireguard Configuration
```
[Interface]
PrivateKey = *VPSSmallPrivateKey*
ListenPort = *port*
Address = 10.77.0.1/24,oneOfMyIpv6/128

[Peer]
PublicKey = *VPSBigPublicKey*
PresharedKey = *presharedkey*
AllowedIPs = 10.77.0.2/32, 192.168.123.0/24, *theRemote/72ipv6class*/72
```
In this configuration:
- `Address = 10.77.0.1/24,oneOfMyIpv6/128`: sets up the Wireguard interface with an IPv4 and an IPv6 address.
- `AllowedIPs = 10.77.0.2/32, 192.168.123.0/24, *theRemote/72ipv6class*/72`: tells Wireguard to route traffic for VPSBig's Wireguard IP (10.77.0.2), VPSBig's local subnet (192.168.123.0/24), and VPSBig's IPv6 subnet through this peer.
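For VPSSmall to actually forward the jails' traffic between `wg0` and its public interface, IP forwarding must be enabled (assuming it isn't already); in rc.conf terms:

```shell
# /etc/rc.conf on VPSSmall: act as a router for IPv4 and IPv6
gateway_enable="YES"        # sets net.inet.ip.forwarding=1
ipv6_gateway_enable="YES"   # sets net.inet6.ip6.forwarding=1
```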
The Role of FIB in Routing
The use of FIB (Forwarding Information Base) 1 in the VPSBig configuration is key to our setup. By adding routes to FIB 1, we're creating a separate routing table that can be used by our jails. This allows us to:
- Keep the main system (VPSBig itself) routing through its default gateway.
- Route all traffic from the jails through the Wireguard tunnel to VPSSmall.
We achieve this by configuring the jails to use FIB 1:
```
exec.prestart += "ifconfig epairXa fib 1";
exec.prestart += "ifconfig epairXb mtu 1380";
```
This ensures that each jail uses the alternate routing table (FIB 1) instead of the default routing table, effectively sending all its traffic through the Wireguard tunnel.
If the WireGuard MTU is 1420, setting the jail's MTU to 1380 should be safe enough.
NAT Configuration on VPSSmall
To complete the setup, we need to configure NAT on VPSSmall to allow the jails on VPSBig to access the internet:
```
nat on vtnet0 from 192.168.123.0/24 to ! <private> -> vtnet0:0
```
This NAT rule translates the source IP of packets coming from VPSBig's subnet (192.168.123.0/24) to VPSSmall's public IP when they're destined for non-private IP addresses.
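The `<private>` table referenced in that rule isn't shown in the snippet; a common definition covering the RFC 1918 ranges (an assumption about the original pf.conf) would be:

```shell
# pf.conf: table of private ranges excluded from NAT (sketch; RFC 1918 assumed)
table <private> const { 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16 }
```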
Final Network Diagram
```mermaid
graph TD
Internet[Internet]
subgraph VPSSmall[" "]
B["VPSSmall (192.168.122.x/24)"]
D[Jail 1 - Reverse Proxy]
E[Jail 2 - Mastodon Backup]
end
subgraph VPSBig[" "]
C["VPSBig (192.168.123.x/24)"]
F[Jail 3]
G[Jail 4]
H[Jail 5]
end
Internet <--> |WAN/Wireguard| B
Internet <-.-> |NAT/Wireguard| C
B <--> |Via Wireguard| C
B --> D
B --> E
C --> F
C --> G
C --> H
```
Conclusion
My journey from a simple single-VPS setup to this complex, multi-node network illustrates the power and flexibility of FreeBSD. By transitioning from a bridged to a routed setup, I've created a solution that offers improved scalability, security, and control.
Key takeaways from this evolution include:
- Adaptability is Crucial: As your needs grow, be prepared to evolve your network architecture.
- Understand the Tradeoffs: Each networking approach (bridging, VPNs, routing) has its pros and cons. Choose the one that best fits your current and future needs.
- Leverage Advanced Features: FreeBSD's features like jails, FIBs, and pf allow for powerful and flexible network configurations.
- Security is Paramount: Always consider the security implications of your network design, especially when dealing with multiple nodes and public-facing services.
- Documentation is Key: Keep detailed notes of your network evolution. It helps in troubleshooting and future planning.
Whether you're managing a small personal server or a large-scale infrastructure, the techniques described here can help you build a robust and adaptable network. Remember, network design is an iterative process. Don't be afraid to evolve your setup as your needs change.