Deploying Anycast DNS Using OpenBSD and BGP
DNS for a mesh network using vmm/vmd + OpenBGPD + relayd + unbound/nsd
23 September 2018
Posted in: OpenBSD BGP DNS NYCMesh routing

My home network is connected to NYCMesh, a community-owned open network. Recently, the failure of an SD card inside a Raspberry Pi at an adjacent large hub has left my area of the network without a caching recursive resolver to serve DNS for both the .mesh TLD and the wider internet. I stood up my own instance of the anycast DNS resolver to service DNS in my neighborhood of the network.


Inside the mesh, DNS is serviced at a shared anycast IP address; each resolver announces a BGP route for that address. Nodes near me will use my instance for DNS resolution because the routing topology prefers my instance over a distant one.

The major components of this build will be:

- a virtual machine running under vmm(4)/vmd(8), hosting nsd(8) as the authoritative server and unbound(8) as the caching recursive resolver
- relayd(8) on the gateway, health-checking the resolver and inserting the anycast route into the kernel routing table
- OpenBGPD (bgpd(8)) on the gateway, announcing the anycast route to mesh peers

The gateway machine has already been configured as a router to allow forwarding of packets, and functions as a router, LAN DNS forwarder, and web server.

Setup Virtual Machine

The virtual machine base system is installed mostly using the autoinstall(8) facility; you may prefer a manual installation. The VM itself is defined in /etc/vm.conf:


vm "nycmesh-dns" {
    owner jon:wheel
    memory 512M
    # First disk from 'vmctl create "/home/vm/nycmesh-dns.img" -s 1G'
    disk "/home/vm/nycmesh-dns.img"
    #boot "/bsd.rd" # For install
    interface {
        switch "vmnet"
        locked lladdr 00:00:0A:46:91:C2
    }
}
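With vm.conf in place, the VM can be created and booted from the host. A sketch, using the disk path from the configuration above (enable the boot "/bsd.rd" line for the install pass, then disable it to boot from disk):

```shell
# Create the 1 GB disk image referenced in vm.conf
vmctl create /home/vm/nycmesh-dns.img -s 1G

# Enable and start vmd, then boot the VM; -c attaches the serial console (com0)
rcctl enable vmd
rcctl start vmd
vmctl start nycmesh-dns -c
```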


The host's dhcpd(8) hands the VM a fixed lease and points autoinstall at its response file:

option domain-name "";
use-host-decl-names on;
filename "auto_install";

# vmd service zone
subnet netmask {
  option routers;
  option domain-name-servers,,;

  host nycmesh-dns {
    hardware ethernet 00:00:0A:46:91:C2;
  }
}


# autoinstall response file for unattended installation
#Password for root account = plaintext / encrypt(1) / "*************" to disable
Password for root account = *************
Change the default console to com0 = yes
Which speed should com0 use = 19200
Public ssh key for root account = ssh-rsa AAAA…XYZZY
Start sshd(8) by default = yes
Do you expect to run the X Window System = no
Setup a user = no
Allow root ssh login = prohibit-password
What timezone are you in = America/New_York
Which disk is the root disk = sd0
URL to autopartitioning template for disklabel =
Location of sets = http
HTTP proxy URL = none
HTTP Server =
Server directory = /pub/OpenBSD/6.3/amd64
Set name(s) = -comp* -game* -x* -man*
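The response file is fetched over HTTP at install time. A sketch of serving it from the dhcpd host with httpd(8); the docroot path is an assumption, and autoinstall(8) looks for a response file such as install.conf on the server named by the DHCP options:

```shell
# Publish the response file where the installer can fetch it (assumed docroot)
install -o root -m 444 install.conf /var/www/htdocs/install.conf
rcctl enable httpd dhcpd
rcctl start httpd dhcpd
```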

We may now access this virtual machine only via ssh.

Zonefile pull on the VM

NYCMesh generally uses kresd/knot as their DNS server and keeps the zone files and configuration in a git repo. Because OpenBSD has a fairly old version of knot, I decided to use the base system DNS servers to serve the zone files. (I should probably move this to a Linux VM running kresd/knot to be in line with the rest of the mesh.)

First I checked out a copy of the git repo using anonymous HTTP so I wouldn’t need github credentials on the VM.

pkg_add git python-2.7.14p1 bash
git clone

I set up a script to auto-pull the zonefile updates, based on the same script for Linux/Unbound.


#!/bin/sh
export PATH=$PATH:/usr/local/bin

# OpenBSD + Unbound + NSD

cd /root/nycmesh-dns
git pull

NEWCOMMIT=`git rev-parse HEAD`
OLDCOMMIT=`cat commit`

# Nothing to do if the zone files have not changed
if [ "$NEWCOMMIT" = "$OLDCOMMIT" ]; then
  exit 0
fi

cp -f *.zone /var/nsd/zones/master
rcctl restart nsd unbound
git rev-parse HEAD > commit

I later added a cron entry.

*/10    *       *       *       *       cd /root/nycmesh-dns && /root/nycmesh-dns/ 2>&1 > /dev/null

Setup NSD and Unbound on the VM

First tweak the networking configuration (/etc/hostname.vio0):

inet alias

nsd will serve zone files from git.


server:
        hide-version: yes
        verbosity: 1
        database: "" # disable database

        ## bind to a specific address/port

remote-control:
        control-enable: yes

zone:
        name: "mesh"
        zonefile: "master/"
zone:
        name: ""
        zonefile: "master/"
zone:
        name: ""
        zonefile: "master/"
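Before restarting, the nsd configuration can be sanity-checked. A sketch; the config path is the OpenBSD default:

```shell
# Validate nsd.conf, then enable and start the authoritative server
nsd-checkconf /var/nsd/etc/nsd.conf
rcctl enable nsd
rcctl start nsd
```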

unbound will serve as a recursive resolver.


server:
        private-domain: "mesh"
        domain-insecure: "mesh"
        do-not-query-localhost: no
        interface:       # listen on alternative port
        interface: ::1
        do-ip6: no

        prefetch: yes

        # override the default "any" address to send queries; if multiple
        # addresses are available, they are used randomly to counter spoofing

        access-control: refuse
        access-control: allow
        access-control: allow
        access-control: allow
        access-control: ::0/0 refuse
        access-control: ::1 allow

        hide-identity: yes
        hide-version: yes

remote-control:
        control-enable: yes
        control-use-cert: no
        control-interface: /var/run/unbound.sock

stub-zone:
        name: "mesh."
stub-zone:
        name: ""
stub-zone:
        name: ""
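unbound's configuration can be validated the same way before it is started (paths are the OpenBSD defaults):

```shell
# Validate unbound.conf, then enable and start the resolver
unbound-checkconf /var/unbound/etc/unbound.conf
rcctl enable unbound
rcctl start unbound

# With control-enable set, status can be queried over the control socket
unbound-control status
```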

Start both servers and check that you can resolve ns.mesh.

nycmesh-dns# rcctl restart nsd unbound
nycmesh-dns# host ns.mesh
Using domain server:

ns.mesh has address

relayd health check

relayd adds a route for the anycast address while the health check passes. If the DNS server stops responding, the route is removed from the kernel and bgpd withdraws it from peers.


#!/bin/sh
# relayd(8) treats a zero exit status as failure, so negate host(1)'s result
! host -W 1 ns.mesh. $1


log updates

timeout 2000
interval 3
table <dns-servers> { ip ttl 1 retry 0 }
router "anycast-dns" {
  #forward to <dns-servers> check icmp
  forward to <dns-servers> check script "/usr/local/bin/"
  rtlabel export
}
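relayctl(8) can confirm that relayd considers the backend healthy (a sketch):

```shell
# The dns-servers table entry should show as "up" once the check script passes
relayctl show hosts
relayctl show summary
```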

Start relayd and verify the route gets added.

kibble# rcctl restart relayd
kibble# traceroute
traceroute to (, 64 hops max, 40 byte packets
 1 (  1.162 ms  0.327 ms  0.485 ms

BGP announcement of the anycast route

Setting up BGP is a whole task in and of itself, but I have included a partial BGP configuration for reference.


# global configuration
AS 65009
network inet static # This is the line that causes our dynamically inserted routes to get picked up
#network inet connected
# restricted socket for bgplg(8)
socket "/var/www/run/bgpd.rsock" restricted

# neighbors and peers
group "nycmesh" {
        neighbor {
                remote-as 64996
                descr   "Node 1340"
                announce self
        }
}

# do not send or use routes from EBGP neighbors without
# further explicit configuration
#deny from ebgp
#deny to ebgp

allow from group nycmesh
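Once bgpd is running, bgpctl(8) can verify that the session is up and that the dynamically inserted route was picked up via the "network inet static" statement (a sketch):

```shell
# Confirm peers are established and inspect the local RIB
bgpctl show summary
bgpctl show rib

# List the prefixes bgpd is announcing, including kernel-learned static routes
bgpctl show network
```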

Further reading

Full configuration is available on GitHub.