@scyto · Last active April 27, 2025
my proxmox cluster

ProxMox Cluster - Soup-to-Nutz

aka what i did to get from nothing to done.

note: these are designed primarily as a re-install guide for myself (writing things down helps me memorize the knowledge), so don't take any of this on blind faith - some areas are well tested and the docs are very robust, some items less so. YMMV

Purpose of Proxmox cluster project

Required Outcomes of cluster project

The first 3 NUCs are the new proxmox cluster; the second set of 3 NUCs are the old Hyper-V nodes.

Updates as of 2025.04.20: Been running great, but i still had issues with the IPv4 dual fabric. Have refactored that with some great suggestions from commenters. Now need to see if long-haul tests prove out whether these changes have helped.

Also made ceph take a hard dependency on the frr service being started - this may help some scenarios, but not if the thunderbolt interfaces are down; still not sure how to help folks there (this applies mostly to MS-101 users, see the comments sections of the individual gists, esp. the old deprecated openfabric mesh gist).
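One way to express that hard dependency is a systemd drop-in; this is a minimal sketch, assuming the stock `ceph.target` and `frr.service` unit names that ship with Proxmox 8, not necessarily the exact override i used:

```ini
# /etc/systemd/system/ceph.target.d/frr-dependency.conf
# Make the Ceph target wait for FRR, so the mesh routes exist
# before the mons/OSDs try to bind their cluster addresses.
[Unit]
After=frr.service
Requires=frr.service
```

Run `systemctl daemon-reload` afterwards. Note that `Requires=` also propagates stops (stopping frr takes ceph.target down with it); `Wants=` plus `After=` is the softer alternative if you only care about ordering.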

Outcomes

  1. Hardware and Base Proxmox Install

  2. Thunderbolt Mesh Networking Setup

  3. v2 - Enable Dual Stack (IPv4 / IPv6) Openfabric Routing Mesh

    • Enable Dual Stack (IPv4 and IPv6) Openfabric Routing on Mesh Network (deprecated - old gist here)
  4. Setup Cluster

  5. Setup Ceph and High Availability

  6. Create CephFS and storage for ISOs and CT Templates

  7. Setup HA Windows Server VM + TPM

  8. How to migrate Gen2 Windows VM from Hyper-V to Proxmox

    1. Notes on migrating my real world domain controller #2
    2. Notes on migrating my real world domain controller #1 (FSMO holder, AAD Sync and CA server)
    3. Notes on migrating my windows (server 2019) admin center VM
  9. Migrate HomeAssistant VM from Hyper-V

  10. Migrate my debian VM based docker swarm from Hyper-V to proxmox

  11. Extra Credit (optional):

    1. Enable vGPU Passthrough (+ Windows guest, CT guest configs)
    2. Install Lets Encrypt Cert (CloudFlare as DNS Provider)
    3. Azure Active Directory Auth
    4. Install Proxmox Backup Server (PBS) on synology with CIFS backend
    5. Send email alerts via O365 using Postfix HA Container
  12. Random Notes & Troubleshooting

TODO

  • add TLS to the mail relay? with LE certs? maybe?
  • maybe send syslog to my syslog server (securely)
  • figure out ceph public/cluster running on different networks - unclear its needed for this size of install
  • get all nodes listening to my network UPS and shut down before power runs out
  • using one of these three ceph volume plugins: Brindster/docker-plugin-cephfs, flaviostutz/cepher, n0r1sk/docker-volume-cephfs. Each has different strengths and weaknesses (i will likely choose either the n0r1sk or the Brindster one). Until i figure out ceph networking more this is dead in the water, as ceph isn't reachable from the LAN or the docker swarm VMs - so using virtiofs, linked in the main items above.
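On the public/cluster network question above: the split is done with two directives in `ceph.conf`. A sketch of what it would look like here, with illustrative subnets (the actual LAN and mesh ranges in this cluster may differ):

```ini
# /etc/pve/ceph.conf fragment (sketch; subnets are assumptions)
# public_network  - client/mon traffic, must be reachable by consumers
# cluster_network - OSD replication/heartbeat traffic (the TB mesh)
[global]
public_network = 192.168.1.0/24
cluster_network = 10.0.0.80/29
```

Putting the public network on the LAN instead of the mesh is also what would make ceph reachable from the docker swarm VMs; for a 3-node cluster the performance benefit of the split is debatable, which is why this is still a TODO.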

Purpose of cluster

I have been using Hyper-V for my docker swarm cluster VM hosts (see my other gists). The original intention was to try to get Thunderbolt networking and clustered storage for the VMs going on a Hyper-V cluster. This turned out to be super hard when using NUCs as cluster nodes, due to too few disks. I looked at solar winds as an alternative, but it was both complex and not pervasive.

I had been watching proxmox for years and thought now was a good time to jump in and see what it is all about. (i had never booted or looked at the proxmox UI before doing this - so this documentation is soup-to-nuts and intended to let me repro the setup if needed)

Goals of Cluster

  1. VMs running on clustered storage {completed}
  2. Use of Thunderbolt for ~26Gb/s cluster VM operations (replication, failover, etc.)
    • Thunderbolt mesh with OSPF routing {completed}
    • Ceph over thunderbolt mesh {completed}
    • VM running with live migration {completed}
    • VM running with HA failover on node failure {completed}
    • Separate VM/CT migration network over thunderbolt mesh {not started}
  3. Use low-powered off-the-shelf Intel NUCs {completed}
  4. Migrate VMs from Hyper-V:
    • Windows Server Domain Controller / DNS / DHCP / CA / AAD Sync VMs {not started}
    • Debian Docker host VMs (for my running 3-node swarm) {not started}
    • HomeAssistant VM {not started}
  5. Sized to last me 5+ years (lol, yeah, right)
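The OSPF mesh in goal 2 boils down to a small FRR config per node. This is an illustrative sketch only (the real config lives in the linked mesh-networking gist); the `en05`/`en06` interface names and the 10.0.0.x loopback addresses are assumptions, and `ospfd=yes` must also be set in `/etc/frr/daemons`:

```
# /etc/frr/frr.conf (one node; each node gets a unique /32 loopback)
interface lo
 ip address 10.0.0.81/32
 ip ospf area 0
!
# The two thunderbolt links are point-to-point, so skip DR election
interface en05
 ip ospf area 0
 ip ospf network point-to-point
!
interface en06
 ip ospf area 0
 ip ospf network point-to-point
!
router ospf
 ospf router-id 10.0.0.81
!
```

With all three nodes configured this way, each node learns routes to the other two loopbacks over whichever thunderbolt link is up, which is what gives the mesh its failover property.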

Hardware Selected

  1. 3x 13th Gen Intel NUCs (NUC13ANHi7):
    • Core i7-1360P Processor (12 cores / 16 threads, up to 5.0 GHz)
    • Intel Iris Xe Graphics
    • 64 GB DDR4 3200 CL22 RAM
    • Samsung 870 EVO SSD 1TB Boot Drive
    • Samsung 980 Pro NVME 2 TB Data Drive
    • 1x Onboard 2.5Gbe LAN Port
    • 2x Onboard Thunderbolt4 Ports
    • 1x 2.5Gbe using the Intel NUCIOALUWS NVMe expansion port
  2. 3 x OWC TB4 Cables

Key Software Components Used

  1. Proxmox v8.x
  2. Ceph (included with Proxmox)
  3. LLDP (included with Proxmox)
  4. Free Range Routing - FRR OSPF - (included with Proxmox)
  5. nano ;-)

Key Resources Leveraged

Proxmox/Ceph Guide from Packet Pushers

Proxmox Forum - several community members were invaluable in providing me a breadcrumb trail.

systemd.link manual pages

udevadm manual

udev manual

scyto commented Apr 26, 2025

> Just want to give you a big thanks for this guide. I followed it up to step 6 and did not have any issues. Speed tests are showing 26Gb/s across the nodes.

you are welcome, it was mostly notes for me on how i set up my system so i wouldn't forget. thank you for using it and validating the instructions! glad to hear it worked for you
