Network Architecture

HybridOps uses Proxmox SDN as the on-prem Layer 3 core, a dedicated on-prem VyOS edge for site extension, a Hetzner VyOS edge pair as the static public face for Site-A, and GCP Cloud Router as the cloud routing hub. EVE-NG remains an isolated lab path for Academy and validation work, not the live operator baseline.

Current operator baseline

HybridOps currently separates networking responsibilities this way:

  • On-prem core / segmentation
    Proxmox SDN provides VLAN-aware bridges, routed VNets, gateway IPs, DHCP, and inter-VLAN policy enforcement.

  • On-prem site extension
    A dedicated VyOS VM on Proxmox originates approved on-prem prefixes and extends the site into the routed WAN domain through dual tunnels.

  • Static WAN edge
    A Hetzner VyOS pair provides the fixed public face for Site-A, terminates the GCP HA VPN underlay, and hosts the shared control-plane services needed for DNS, observability, and decision logic.

  • Cloud routing hub
    GCP Cloud Router, HA VPN, and NCC provide the cloud-side routing hub.

  • Observability
    Observability remains isolated on its own subnet and is extended to the edge control-plane as a separate service layer.

  • Lab / Academy
    EVE-NG stays in the lab VLAN and is used for simulations, training, and validation. It is not in the operational WAN path.
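To make the Proxmox SDN responsibilities above concrete, a VLAN-zone VNet with a routed subnet and DHCP pool might look roughly like this on disk. This is an illustrative sketch, not the shipped config: the zone name "sitea" is hypothetical, exact keys vary by Proxmox VE release, and the dhcp-range stanza requires the SDN dnsmasq integration (Proxmox VE 8.1+).

```
# /etc/pve/sdn/zones.cfg  (zone name "sitea" is a placeholder)
vlan: sitea
        bridge vmbr0
        ipam pve

# /etc/pve/sdn/vnets.cfg  (VLAN 10 management VNet from the allocation table below)
vnet: vnetmgmt
        zone sitea
        tag 10

# /etc/pve/sdn/subnets.cfg
subnet: sitea-10.10.0.0-24
        vnet vnetmgmt
        gateway 10.10.0.1
        dhcp-range start-address=10.10.0.120,end-address=10.10.0.220
```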

Topology overview

flowchart TB
    subgraph onprem["On-prem Site"]
      prox["Proxmox SDN core\nvmbr0 + routed VNets"]
      vyos["On-prem VyOS edge\nsite extension"]
      lab["EVE-NG lab\nVLAN 50 only"]
      prox --> vyos
      prox -. lab only .-> lab
    end

    subgraph hetz["Hetzner Site-A edge"]
      edgea["VyOS edge-a"]
      edgeb["VyOS edge-b"]
      ctrl["Shared control host\nPowerDNS / runner / control services"]
    end

    subgraph gcp["GCP hub"]
      hub["Cloud Router + HA VPN + NCC"]
    end

    vyos --> edgea
    vyos --> edgeb
    edgea --> hub
    edgeb --> hub
    ctrl --- edgea
    ctrl --- edgeb

On-prem VLAN allocation

VLAN  VNet      Subnet        Gateway    DHCP Pool                Purpose
10    vnetmgmt  10.10.0.0/24  10.10.0.1  10.10.0.120-10.10.0.220  Management
11    vnetobs   10.11.0.0/24  10.11.0.1  10.11.0.120-10.11.0.220  Observability
12    vnetdata  10.12.0.0/24  10.12.0.1  10.12.0.120-10.12.0.220  Shared services / data
20    vnetdev   10.20.0.0/24  10.20.0.1  10.20.0.120-10.20.0.220  Development
30    vnetstag  10.30.0.0/24  10.30.0.1  10.30.0.120-10.30.0.220  Staging
40    vnetprod  10.40.0.0/24  10.40.0.1  10.40.0.120-10.40.0.220  Production
50    vnetlab   10.50.0.0/24  10.50.0.1  10.50.0.120-10.50.0.220  Lab / Academy

Design bands:

  • 10-19: platform plane
  • 20-29: development
  • 30-39: staging
  • 40-49: production
  • 50-59: lab / Academy
  • 60-99: reserved

Addressing model

Within each /24 on-prem subnet:

  • .1 is the routed gateway
  • .2-.9 are reserved for infrastructure services
  • .10-.99 are static assignments from NetBox IPAM
  • .100-.200 are DHCP allocations where enabled
  • .201-.254 remain reserved for future use

Static allocations should be consumed from NetBox/IPAM-backed outputs rather than copied into module or blueprint inputs by hand.
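The per-/24 layout above can be sketched as a small POSIX-shell helper that prints the standard slots for one VLAN. The VLAN number doubling as the second octet follows the allocation table; the vlan value here is just an example.

```shell
# Print the standard address slots for one on-prem /24, following the
# addressing model above. The VLAN ID doubles as the second octet
# (10.<vlan>.0.0/24), as in the VLAN allocation table.
vlan=20                      # example: the development VLAN
base="10.${vlan}.0"
echo "gateway:        ${base}.1"
echo "infra reserved: ${base}.2-${base}.9"
echo "static (IPAM):  ${base}.10-${base}.99"
echo "dhcp band:      ${base}.100-${base}.200"
echo "reserved:       ${base}.201-${base}.254"
```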

Routing boundaries

The current shipped routing model is:

  • Proxmox SDN owns east-west on-prem routing across the approved VLANs.
  • The on-prem VyOS edge originates approved on-prem prefixes into the Site-A routed domain.
  • The Hetzner VyOS pair presents Site-A to GCP with stable public endpoints.
  • GCP Cloud Router learns only the approved on-prem prefixes and does not learn lab-only or internal link networks.
  • EVE-NG remains out of the routed WAN baseline unless a lab exercise explicitly attaches it through temporary, controlled interfaces.

Use the Network routing contract for ASNs, tunnel /30 allocations, and import/export boundaries.
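The export boundary on the on-prem VyOS edge can be sketched with a prefix-list and route-map in VyOS 1.4-style syntax. This is a shape, not the shipped policy: the prefix-list/route-map names, the neighbor address, and the chosen prefixes are placeholders — real ASNs and tunnel /30 addresses come from the Network routing contract.

```
set policy prefix-list SITEA-EXPORT rule 10 action 'permit'
set policy prefix-list SITEA-EXPORT rule 10 prefix '10.10.0.0/24'
set policy prefix-list SITEA-EXPORT rule 20 action 'permit'
set policy prefix-list SITEA-EXPORT rule 20 prefix '10.40.0.0/24'
# deliberately no rule for 10.50.0.0/24: the lab VLAN stays out of the WAN baseline
set policy route-map SITEA-OUT rule 10 action 'permit'
set policy route-map SITEA-OUT rule 10 match ip address prefix-list 'SITEA-EXPORT'
set protocols bgp neighbor 169.254.10.2 address-family ipv4-unicast route-map export 'SITEA-OUT'
```

Applying the route-map on export means unlisted prefixes are dropped at the edge, so lab-only and internal link networks never reach Cloud Router regardless of what the edge itself can route.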

DNS and control-plane boundaries

The current shared control-plane model is:

  • PowerDNS primary and control services on the shared Hetzner control host
  • optional on-prem secondary/resolver path for local resilience
  • DNS cutover through HyOps DNS routing modules and blueprints

This keeps naming, decision control, and runner execution separate from the data-plane edge appliances.
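As one illustration of the DNS half of that split, a minimal pdns.conf for the PowerDNS primary on the control host might look like this. The backend choice, listen address, and AXFR range are assumptions for the sketch, not the shipped values.

```
# pdns.conf on the shared Hetzner control host (illustrative values only)
launch=gsqlite3
gsqlite3-database=/var/lib/powerdns/pdns.sqlite3
local-address=10.12.0.5        # hypothetical control-host service address
allow-axfr-ips=10.12.0.0/24    # hypothetical on-prem secondary/resolver range
```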

Validation pointers

For operator validation, use the HyOps runbooks rather than the old external repository walkthroughs.

Typical on-prem quick checks remain:

ip -d link show vmbr0
ip -o addr show | grep '10\.'
iptables -t nat -L POSTROUTING -n -v
systemctl status dnsmasq

Typical HyOps state checks:

hyops state show --env <env> --module core/onprem/network-sdn
hyops state show --env <env> --module org/hetzner/vyos-edge-foundation
hyops state show --env <env> --module platform/network/vyos-edge-wan
hyops state show --env <env> --module platform/network/vyos-site-extension-onprem
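The four module checks above can be swept in one pass. A POSIX-shell sketch that emits the commands for one environment — the env name here is a placeholder; pipe the output to sh to actually run the checks:

```shell
# Emit the per-module state checks listed above for one environment.
# "prod" is a placeholder env name; pipe this script's output to sh to run it.
env=prod
for m in core/onprem/network-sdn \
         org/hetzner/vyos-edge-foundation \
         platform/network/vyos-edge-wan \
         platform/network/vyos-site-extension-onprem; do
  echo "hyops state show --env $env --module $m"
done
```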

Lab boundary

EVE-NG remains important, but its role is deliberate:

  • Academy delivery
  • protocol and vendor interoperability labs
  • failure simulation
  • tooling validation before promoting changes into the live HyOps path

That lab surface should stay isolated from the operator baseline. Use ADR-0201 – EVE-NG Network Lab Architecture and the lab HOWTOs for that work.


Maintainer: HybridOps
License: MIT-0 for code, CC-BY-4.0 for documentation unless otherwise stated.