Despite the ongoing RAM shortages, computers remain incredibly cheap. Recently there has been a glut of so-called "mini servers": tiny, sometimes fanless machines, most of which fit in the palm of your hand. These range anywhere from a hundred bucks to thousands of dollars, but even the cheapest ones are more than powerful enough to run a home lab, host some web sites, file servers, game servers, media servers -- anything, really.
So I got one: a quad-core (4 thread) Intel N150 with 8GB of RAM, a 2TB NVMe SSD, and two wired 2.5Gb ethernet ports. It also has some GPIO pins, which are unfortunately hidden away inside the case. It's got pretty low power consumption, so I plan to leave it running all the time.
I think the way I've set up my networking on this host is a little unusual, so I'm documenting it here. I also have a much more powerful desktop that I use as a workstation and for gaming, which I've set up in a similar way, but which I may turn on and off occasionally.
My (host) network stack
Much of my career with computers was spent chasing network issues, and I have grown a deep hatred for most kinds of NAT. So my goal with this setup is, to the extent possible, for every discrete workload to get its own, real, routable IP address, and to do this without depending on any esoteric router features, tunneling, or overlay networking.
I want the opposite of a VPN: where a VPN allows machines outside of your home to appear as if they're in your home network, I want machines inside of my home to appear as if they're outside of my home network. From a security standpoint, they are treated as an external network, which just happens to have really low latency ☺.
I also run several virtual machines, mostly 9front, and I want those VMs to be able to configure themselves automatically, such that if I change networks, I should not have to go through each VM to manually update their IP addresses. This is not just for the purposes of moving between networks or surviving network renumbering; I don't want to have to think about what IP address to give the machine; I should be able to create a short-lived VM with nothing more than a name.
Finally, I don't want to purchase any additional hardware beyond the server itself. I'm not interested in having a rack of servers in my closet. In my home I have a gaming PC, a laptop, the aforementioned mini server, a small ethernet switch, and a router which connects to my ISP's modem. The laptop connects via wifi; the other machines have wired connections to my router.
Here's an outline of the network topology within my mini server:
Each application resides in its own network namespace. If I'm running
a VM using QEMU, I use a macvtap
device. If I'm running an ordinary process, I use a macvlan device.
The macvlan devices, including the gateway device gw0, are all
attached to the same dummy interface. In this way, they all share the
same broadcast domain, and can communicate directly with one
another (I am using these devices in bridge mode).
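A sketch of that plumbing in plain iproute2 terms (the names host0, gw0, and macv0 match the command output later in this post; "myapp" is a hypothetical named namespace standing in for the anonymous ones I actually use, and ipmuxd, described below, does all of this on demand):
# the dummy device lives in the mux namespace and provides the shared
# broadcast domain for every macvlan attached to it
ip -n mux link add host0 type dummy
ip -n mux link set host0 up

# the gateway device stays in the mux namespace...
ip -n mux link add gw0 link host0 type macvlan mode bridge

# ...while per-application devices are pushed into their own namespaces
ip -n mux link add macv0 link host0 type macvlan mode bridge
ip -n mux link set macv0 netns myapp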
The magic happens in the mux namespace. I move all physical interfaces
into the mux network namespace, where dhcpcd will request a DHCPv6
Prefix Delegation from the network. If it receives a prefix, it will
assign it to gw0, and radvd will advertise that prefix to all the
other macvlan devices, so they can configure themselves.
I receive a /56 prefix from my ISP (Spectrum). I take one prefix out
of that for my home network, and I've configured my router to make the
remaining 254 prefixes available for DHCPv6 prefix delegation. Just about
every ISP I've used in the past 20 years has allowed me to use my own
router, and since I started getting IPv6 support (maybe 15 years now),
I've always gotten a /56 or larger delegation, so I'm comfortable relying
on this.
Network Security
The reason I delegate a separate IPv6 prefix to my server is so I can use a separate set of firewall rules for it. My router is configured to block most inbound, unsolicited traffic to my home network, which would be a non-starter for hosting internet-reachable applications.
I have a few goals for security:
- Traffic from my home network to a server's network must pass through my firewall; it must not go directly to my server.
- I don't want to have to do anything if my ISP changes my prefix.
- I don't want a compromised container to be able to snoop traffic on my home network, which has poorly secured IoT devices.
- Services should be reachable from the internet, on the ports of their choice.
- Traffic to and from my home network is prioritized over server traffic.
My biggest concern is that I catch the attention of a botnet or a particularly aggressive AI scraper, and they send enough traffic, for a long enough time, that my ISP decides they don't want my business anymore.
The general approach I'm taking is to allow most traffic from the internet to the service network, then filter within each network namespace using nftables, and within each VM using whatever firewall that VM's OS provides, exposing only those ports which I intend to use.
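For example, a namespace whose only job is to serve HTTPS might carry a minimal ruleset like this (a sketch; the table name and port are illustrative):
table inet svc {
	chain input {
		type filter hook input priority filter; policy drop;
		iif lo accept
		ct state established,related accept
		# v6-only namespaces still need ICMPv6 for neighbor discovery
		meta l4proto ipv6-icmp accept
		tcp dport 443 accept
	}
}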
I have set up some rudimentary monitoring that shows me traffic levels. If I don't like what I see, I'll blacklist prefixes, and if they still send me traffic, I will just give up and route traffic through a cheap VPS, where termination of service won't disrupt my home network.
Server Configuration
I use Guix SD to manage most of my computers, including this mini server. I've developed my own channel with additional package and service definitions needed for this configuration. With those in place, the configuration specific to my server is gathered into a handful of modules.
I wrote a daemon, called ipmuxd, which provisions ipvlan or macvlan
devices on demand on behalf of other processes. Together with the
unshare(1)
command and some other tools, I'm able to create anonymous network
namespaces and furnish them with network interfaces in a way that is
easy to secure (with ordinary file permissions on a unix socket), and
which automatically cleans up interfaces when a service exits.
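The client side is roughly this shape (ipmux-request is a hypothetical name for the piece that talks to ipmuxd over its unix socket):
# create an anonymous network namespace, ask ipmuxd for a device inside
# it, then exec into the service; when the service exits, the namespace
# and its interface evaporate with it
unshare --net -- sh -c '
	ipmux-request macv0   # hypothetical client
	exec my-service
'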
To move my physical network interfaces into the "mux" namespace early on in the boot process, I created a udev rules file:
ACTION=="add", SUBSYSTEM=="net", PROGRAM="test ! -e /var/run/netns/mux", RUN{program}+="ip netns add mux"
ACTION=="add", SUBSYSTEM=="net", ENV{INTERFACE}=="enp*|eth*" RUN{program}+="ip link set $env{INTERFACE} netns mux"
One thing I noticed is that when I stopped passing the net.ifnames=0
kernel parameter (which had restored the old-school "eth0, eth1"
style interface names), udev no longer got events for the ethernet devices
during the boot process, so I had to trigger them manually by adding this
to one of my startup scripts:
udevadm trigger --action=add --subsystem-match=net
With the physical interfaces moved into the "mux" namespace, I configure
dhcpcd(8) to request a prefix
delegation and assign it to the internal gw0 device:
# derive a stable DUID from the link-layer address
duid ll
# IPv6 only; don't request a DHCPv4 lease
ipv6only
# don't solicit router advertisements, and don't autoconfigure
# addresses from them, by default...
noipv6rs
ipv6ra_noautoconf
interface enp2s0
# ...except on the upstream interface, where we do solicit RAs
ipv6rs
# request a prefix delegation, and assign a /64 from it to gw0
ia_pd xxxxxx gw0/0/64/0
I run radvd(8) in the "mux" namespace with a pretty simple configuration:
interface gw0 {
	AdvSendAdvert on;
	prefix ::/64 {
		AdvOnLink on;
		AdvAutonomous on;
	};
};
The prefix ::/64 directive is special; it tells radvd to advertise any
non-local prefixes assigned to the device in response to any solicitations
received on the device. Together with the dhcpcd configuration above,
we can seamlessly advertise the delegation we received from the network,
without hard-coding the prefix. I can see dhcpcd receive the prefix:
dhcpcd[201]: gw0: adding address 2603:babe:cafe:8c01:dc81:e6ff:fe79:3776/64
dhcpcd[201]: gw0: adding route to 2603:babe:cafe:8c01::/64
In the "mux" namespace, the prefix is assigned to gw0:
$ sudo ip -n mux -brief addr show dev gw0
gw0@host0 UP 2603:babe:cafe:8c01:dc81:e6ff:fe79:3776/64 fe80::bafe:efae:fefe:3776/64
In the default network namespace, which my login session and some system
services like ntp and openssh use, I create a macvlan device, attached
to the same dummy device as gw0. The Linux kernel's SLAAC client
automatically configures v6 addresses on this device in response to
radvd's advertisements:
$ ip -brief addr show dev macv0
macv0@if2 UP 2603:babe:cafe:8c01:5f50:a4ad:784a:1afc/64 fe80::9711:792c:4015:ef5c/64
$ ip -6 route
2603:babe:cafe:8c01::/64 dev macv0 proto kernel metric 256 expires 78363sec pref medium
fe80::/64 dev macv0 proto kernel metric 256 pref medium
default via fe80::dc81:fafa:fefe:fafe dev macv0 proto ra metric 1024 expires 1550sec hoplimit 64 pref medium
I enabled IP forwarding in the mux namespace (sysctl -w net.ipv6.conf.all.forwarding=1). Since the delegated prefix is assigned to gw0, I don't need to do anything further; the kernel automatically creates routes in the "mux" namespace:
$ sudo ip -n mux -6 route
2603:babe:cafe:8c01::/64 dev gw0 proto dhcp metric 1005 expires 77848sec pref medium
default via fe80::1afd:bad:bad:babe dev enp2s0 proto ra metric 1003 expires 1500sec mtu 1480 pref medium
Since routes exist for both directions, packets will be forwarded between
gw0 and enp2s0, and traffic will flow:
$ ping6 -n google.com
PING google.com (2607:f8b0:4009:819::200e): 56 data bytes
64 bytes from 2607:f8b0:4009:819::200e: icmp_seq=0 ttl=115 time=30.774 ms
64 bytes from 2607:f8b0:4009:819::200e: icmp_seq=1 ttl=115 time=27.123 ms
Router Configuration
I have a MikroTik hAP ac, a combination switch/router/wireless access point running MikroTik's Linux distribution, RouterOS. I don't want to rely on any unique features of this hardware, because I don't want to limit myself to MikroTik if I need to replace it. So my router configuration is modest.
The router has 5 ethernet ports; one for the WAN, and 4 that are configured as a bridge together with the wireless interface. The first thing I did was to take the ports my servers connect to out of the bridge; I do not want them to be part of my home network's broadcast domain, where they could snoop on the mDNS requests flying around.
I configured its dhcp6 server to lease prefixes from the delegation
pool, good for 24 hours:
/ipv6 dhcp-server set dhcp6 address-pool=delegation interface=bridge lease-time=1d
The delegation pool is automatically filled by the router's DHCP client, with the
prefix delegated by my ISP.
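That pool is populated by the router's upstream DHCPv6 client; something like this, where ether1 as the WAN interface is an assumption and the parameter names are from memory:
/ipv6 dhcp-client add interface=ether1 request=address,prefix pool-name=delegation pool-prefix-length=64 add-default-route=yes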
Firewalling
Because my IPv6 prefix can change, I have to be careful about the firewall. What we
can do is match the bits between the /56 and /64:
/ipv6 firewall address-list add address=2000:0:0:1::/e000:0:0:00ff:: list="svc_net1"
/ipv6 firewall filter add action=accept dst-address-list=svc_net1
This allows all inbound traffic whose destination is a public address
(the e000:: part of the mask keeps the top three bits, and the value
2000:: selects the global unicast range 2000::/3) and whose subnet-ID
byte, the bits between the /56 and the /64, equals 1.
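Worked through against the prefix that shows up in the dhcpcd logs later in this post:
  2603:babe:cafe:8c01::   the delegated /64 (subnet-ID byte = 0x01)
& e000:   0:   0:  ff::   keep the top 3 bits plus the /56-to-/64 byte
= 2000:   0:   0:   1::   global unicast range, subnet ID 1: match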
It's pretty nice... except that RouterOS doesn't support it! Its underlying Linux kernel does; I can create a netfilter ruleset like so:
table inet filter {
	chain input {
		type filter hook input priority filter; policy drop;
		ip6 daddr & e000:0:0:ff:: == 2000:0:0:1:: counter accept
	}
}
or an ip6tables command line like so:
ip6tables -A INPUT -d 2000:0:0:1::/e000:0:0:ff:: -j ACCEPT
and it will allow traffic to the prefix delegated to my server, regardless
of what the ISP portion of the prefix is (as long as it's in the GUA range
2000::/3). Unfortunately, this useful functionality isn't reachable
through the RouterOS interface:
[admin@RouterOS] > /ipv6 firewall address-list add address=2000:0:0:1::/e000:0:0:ff:: list=svc_net1
failure: 2000:0:0:1::/e000:0:0:ff:: is not a valid dns name
So, instead, I have created static bindings for the IAID used by each server's DHCP client to a fixed prefix, and added those prefixes to an address list.
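Roughly like this; the binding parameters here are from memory, and the DUID and prefix are elided:
/ipv6 dhcp-server binding add server=dhcp6 duid=... iaid=1 address=2603:xxxx:xxxx:xx01::/64
/ipv6 firewall address-list add address=2603:xxxx:xxxx:xx01::/64 list=svc_net1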
A necessary evil: IPv4 access with NAT64
It's 2026, and yet:
$ dig +short steampowered.com AAAA github.com AAAA stackoverflow.com AAAA
$
(Are you kidding me?) While I am inclined to say "good riddance"
for many sites which don't have IPv6 support, there will no doubt
be times when I need to contact a v4 address. So, with reluctance,
I've configured my dhcp4 client to request a v4 address for the physical
interface in the mux namespace, by removing the ipv6only option.
Then I installed tayga, a NAT64 gateway which translates ipv6 packets to and from ipv4 packets with a 1:1 NAT. I am using addresses from the class E address range, which should be unused on the internet or my local network.
# tayga.conf
tun-device nat64
ipv4-addr 240.0.0.0
dynamic-pool 240.0.0.0/4
prefix 64:ff9b::/96
data-dir /var/db/tayga
Tayga receives packets from a TUN device, which I've named nat64. I've
assigned the NAT64 prefix 64:ff9b::/96 and the reverse prefix 240.0.0.0/4, so
the kernel will naturally forward NAT64 traffic into and out of it:
$ ip -n mux addr show dev nat64
2: nat64: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 500
link/none
inet 240.0.0.0/4 scope global nat64
valid_lft forever preferred_lft forever
inet6 64:ff9b::/96 scope global
valid_lft forever preferred_lft forever
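For reference, the plumbing to get there looks like this (tayga's --mktun flag creates the persistent TUN device; everything happens inside the mux namespace):
ip netns exec mux tayga --mktun
ip -n mux link set nat64 up
ip -n mux addr add 240.0.0.0/4 dev nat64
ip -n mux addr add 64:ff9b::/96 dev nat64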
My server runs its own DNS cache, unbound, which I've configured to
synthesize AAAA records if they don't exist for a domain. It runs like any
other container in the diagram above, but has the well-known address
fd64:cafe::53:
server:
	interface: "fd64:cafe::53"
	hide-version: "yes"
	hide-identity: "yes"
	access-control: fd64:cafe::/64 allow
	module-config: "dns64 validator iterator"
	use-syslog: "no"
	prefer-ip6: "yes"
	do-nat64: "yes"

remote-control:
	control-enable: "no"
I also added an address in the fd64:cafe::/64 prefix to the interface used by
radvd, so it will advertise the prefix to all attached namespaces. It can be
used to reach host-internal services like a DNS cache, although in most cases
I will favor unix sockets for IPC within the same host, when feasible.
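Because the ::/64 directive in the radvd configuration above advertises any non-local prefix on the interface, this amounts to a one-liner (the ::1 suffix is an arbitrary choice of gateway address; the ::53 address belongs to the unbound container):
ip -n mux addr add fd64:cafe::1/64 dev gw0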
Finally, to allow the outbound ipv4 packets to share the one IPv4
address assigned to this system, I added the following nftables
ruleset:
table inet nat64 {
	chain postrouting {
		type nat hook postrouting priority srcnat; policy accept;
		ip saddr 240.0.0.0/4 masquerade
	}
}
With everything in place, I can reach ipv4-only sites from my namespaces which only have ipv6 addresses available:
$ ping6 -n github.com
PING github.com (64:ff9b::8c52:7103): 56 data bytes
64 bytes from 64:ff9b::8c52:7103: icmp_seq=0 ttl=48 time=19.046 ms
64 bytes from 64:ff9b::8c52:7103: icmp_seq=1 ttl=48 time=15.442 ms
Stable(-ish) IPs
I'm using anonymous network namespaces and macvlan devices pretty heavily, so ethernet addresses will change between runs of a given application. While I'm relying on DNS to resolve namespace addresses, it's better if they don't change too frequently, so that I can set a reasonable TTL on my DNS records. To that end, I set the following sysctl options:
net.ipv6.conf.default.addr_gen_mode=2
net.ipv6.conf.default.stable_secret=<stable-secret>
and I derive the <stable-secret> value from a unique identifier for a
workload which doesn't change between reloads; its name. While I was
still building things out, addresses were stable enough with this config
that I was able to make do by keeping an /etc/hosts file synchronized
across my 3 machines.
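A minimal sketch of that derivation, assuming the workload's name is the only input (stable_secret wants an IPv6-address-shaped value, and must be set inside the namespace before its interface comes up):
# hash the name, then format the first 128 bits as an IPv6 address
name=myservice
secret=$(printf '%s' "$name" | sha256sum | cut -c1-32 | sed 's/..../&:/g; s/:$//')
sysctl -w net.ipv6.conf.default.addr_gen_mode=2
sysctl -w "net.ipv6.conf.default.stable_secret=$secret"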
DNS server
My server runs an instance of shibari, which serves AAAA records for
every namespace which registers on the host. I wanted a system which
would automatically detect when a process exited, and delete the
record for it. So I wrote a local service, phonebook. In a startup script for an
application, I open the socket /run/phonebook/publish and send the
domain name I wish to use, and the interface whose addresses I want it
to be associated with.
The phonebookd process listening on the other end will query and
monitor netlink for any changes to the interface's address, and keep a
file in the format expected by tinydns-data up to date with A and AAAA
records for every registered application. That file is then used to
build the DNS database, at most once per second. Sending a SIGHUP
signal to shibari makes it reload its database from disk.
The open socket then becomes a sort of reference count; as long as one
process has a file descriptor for the client end of the socket, phonebookd
will continue to maintain DNS records for its interface. Once all processes
with a file descriptor exit or close their copy, the record is removed.
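In a startup script, registration then looks something like this: a sketch using s6-ipcclient as the UCSPI client, and assuming a one-line "name interface" wire format:
#!/bin/sh
# s6-ipcclient connects the unix socket and execs the child with the
# connection on fds 6/7; my-service inherits fd 7, so the DNS record
# lives exactly as long as the service does
exec s6-ipcclient /run/phonebook/publish sh -c '
	printf "myapp.internal macv0\n" >&7
	exec my-service
'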
I have two zones per server: one named .internal., which contains the
host-local addresses, and one named $(hostname), which is a subdomain
of a public domain which I own.
Public DNS delegation
I set up a static delegation for each server to its own subdomain, which is the server's hostname. As long as my ISP does not change my IPv6 prefix, which it hasn't done in almost a year, I shouldn't have to change the delegation. If that becomes a problem, I can set up Dynamic DNS to update the delegation whenever the IP changes.
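In the parent zone, the delegation is just a static NS record plus a glue AAAA; hypothetically, for a server named minis under example.com:
minis.example.com.     IN NS    ns.minis.example.com.
ns.minis.example.com.  IN AAAA  2603:babe:cafe:8c01::53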
With the delegation in place, I can reach published servers from the internet, provided their firewall rules allow it.
LetsEncrypt certificates
With public DNS in place for each container, I'm able to answer the HTTP-01 challenge to acquire LetsEncrypt certificates, so I can serve TLS that other devices will trust.
Firewalling
As I've mentioned earlier, I am concerned about attracting too much unwanted attention. While it won't prevent a DDoS attack from affecting my internet connection and my relationship with my ISP, I'm following these general rules of thumb for firewall rules:
- New connections over a certain rate from a given IP will result in a blanket ban for that /48 prefix for an hour (see the sketch after this list).
- Each namespace only accepts packets for ports that its application intends to send or receive.
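A sketch of the first rule, extending the per-namespace ruleset from earlier with an nftables dynamic set (this simplified version bans the single offending address rather than its whole /48, and the port and rate are placeholders):
table inet svc {
	set banned {
		type ipv6_addr
		flags dynamic,timeout
	}
	chain input {
		type filter hook input priority filter; policy drop;
		ct state established,related accept
		ip6 saddr @banned drop
		# over-rate sources get added to the set for an hour, then dropped
		tcp dport 443 ct state new limit rate over 10/second add @banned { ip6 saddr timeout 1h } drop
		tcp dport 443 accept
	}
}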
I surface the size of the denylist in the top right corner of my screen, and periodically log it to a file. I will keep an eye on it, and if I don't like what I see, I'll look into tunneling traffic from a VPS which I can shut off if things get bad, instead of serving traffic from the internet directly.
In addition, I configured some QoS in my home router to prioritize the prefix used for my home LAN over other prefixes.
Problems and Future work
My configuration is a work in progress. A lot of fiddling was required to get the correct routes advertised to the correct interfaces, to avoid attracting traffic to the wrong destinations. Even now, I am still working through an issue where radvd is unable to reply to solicitations from interfaces in newly created network namespaces for almost a minute.
There is always something to improve, but here is what is top of mind.
- I should set up my DNS cache to persist its cache to disk, so I don't kick off a storm of queries on every restart.
- My workstation and mini-server are connected to the same switch. It would be nice if they talked directly through the switch instead of going out to my router and back. That would require a proper routing protocol rather than rinky-dink dhcpv6-pd.
- I would like to restrict other machines from requesting DHCP prefix delegations. For now, I'm comfortable just making the pool small, so at least they cannot request too many.
- I'd like to extend the phonebookd service to make it feasible to manage the TXT records required by the DNS-01 challenge for LetsEncrypt, so that I do not have to run an HTTP server to answer the HTTP-01 challenge.