
📚 Proxmox FRR OpenFabric IPv6 Initial Setup (fc00::/128 Design)


🔢 Overview

This document describes the original setup used to establish an FRR (Free Range Routing) OpenFabric (IS-IS-based) IPv6 routed mesh over Thunderbolt networking between Proxmox nodes, using static /128 loopback addresses in the fc00::/8 ULA space.
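For illustration only, a minimal frr.conf along these lines might look like the sketch below. The NET, the interface names (en05/en06) and the fc00::81/128 loopback address are placeholders for this example, not values taken from the full gist; fabricd also has to be enabled in /etc/frr/daemons.

# /etc/frr/frr.conf (sketch; requires fabricd=yes in /etc/frr/daemons)
interface lo
 ipv6 router openfabric 1
 openfabric passive
!
interface en05
 ipv6 router openfabric 1
 openfabric hello-interval 1
!
interface en06
 ipv6 router openfabric 1
 openfabric hello-interval 1
!
router openfabric 1
 net 49.0000.0000.0001.00
!

The /128 address itself is assigned to lo in the interfaces configuration; OpenFabric then advertises it to the other nodes over the Thunderbolt links.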

This provided:

@scyto
scyto / ceph-ip-migration-on-proxmox.md
Last active April 27, 2025 05:17
migrate ceph network from /128 IP addresses to /64

🌟 Proxmox Ceph IPv6 Monitor IP Migration Best Practices

I learned an important lesson today: never, ever remove ms_bind_ipv4 = false from ceph.conf, or CephFS will break badly.
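For reference, the relevant messenger bind options sit in the [global] section of ceph.conf (on Proxmox this is /etc/pve/ceph.conf); a minimal excerpt of what an IPv6-only cluster needs to keep in place:

[global]
    ms_bind_ipv4 = false   # do not remove: dropping this re-enables IPv4 binding
    ms_bind_ipv6 = true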


📚 Purpose

This document describes a safe, production-grade method for migrating a Proxmox+Ceph cluster to new /64 IPv6 loopback addresses for monitor daemons (MONs) without downtime, following best practices and using native Proxmox tools (pveceph, GUI).
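A rough sketch of the rolling, one-MON-at-a-time approach this implies (the node name, the address and the verification step are illustrative, not a verbatim procedure from the gist; check pveceph help mon create for the exact options your version supports):

# on each node in turn, once its new /64 loopback address is configured and reachable
pveceph mon destroy <nodename>                 # remove the MON still bound to the old /128 address
pveceph mon create --mon-address <new-ipv6>    # recreate it on the new address
ceph -s                                        # wait for full quorum before touching the next node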

@scyto
scyto / dual-stack-openfabric-mesh-v2.md
Last active April 27, 2025 18:36
New version of my mesh network using openfabric

Enable Dual Stack (IPv4 and IPv6) OpenFabric Routing

Version 2.2 (2025.04.25)

this gist is part of this series

This assumes you are running Proxmox 8.4 and that the line source /etc/network/interfaces.d/* is at the end of the interfaces file (this is automatically added to both new and upgraded installations of Proxmox 8.2).

This changes the previous file design. Thanks to @NRGNet and @tisayama, the system is now much more reliable in general and more maintainable, especially for folks using IPv4 on the private cluster network (I still recommend the IPv6 FC00 network you will see in these docs).
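A hedged sketch of what such a file under /etc/network/interfaces.d might contain; the interface names (en05/en06), the addresses and the MTU are examples only, not necessarily the values used in the full gist:

# /etc/network/interfaces.d/thunderbolt (example)
auto lo:0
iface lo:0 inet static
        address 10.0.0.81/32

auto lo:6
iface lo:6 inet6 static
        address fc00::81/128

auto en05
iface en05 inet manual
        mtu 65520

auto en06
iface en06 inet manual
        mtu 65520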

@scyto
scyto / docker-cephfs-virtiods.md
Last active April 21, 2025 20:50
Hypervisor Host Based CephFS pass through with VirtioFS

Using VirtioFS backed by CephFS for bind mounts

This is currently work-in-progress documentation: rough notes for me that may be incomplete or wrong.

The idea is to replace GlusterFS running inside the VM with storage on my CephFS cluster. My Proxmox cluster provides both the storage and the hypervisor for my Docker VMs.
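Inside the guest, a VirtioFS share exposed by the hypervisor is mounted by its tag; a minimal sketch, assuming a share tagged docker-data on the Proxmox side:

# in the Docker VM
mkdir -p /mnt/docker-data
mount -t virtiofs docker-data /mnt/docker-data

# or persistently via /etc/fstab:
# docker-data  /mnt/docker-data  virtiofs  defaults  0  0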

Other possible approaches:

  • Ceph FUSE client in the VM to mount CephFS or Ceph RBD over IP
  • use of a Ceph Docker volume plugin (no usable version of this exists yet, but it is being worked on)

Assumptions:

@scyto
scyto / nas-debian.md
Last active February 24, 2025 09:27
NAS-homebrew-install

Install Debian

Non-graphical install: SSH and basic tools only.

apt-get install nano sudo nfs samba-common

usermod -aG sudo [your-username]

Switch to that username; all commands thereafter use sudo when needed.

add contrib sources (why?)
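For example, on Debian 12 (bookworm) - assumed here - adding the contrib component to the existing entries in /etc/apt/sources.list looks something like the lines below, followed by apt-get update:

deb http://deb.debian.org/debian bookworm main contrib
deb http://deb.debian.org/debian bookworm-updates main contrib
deb http://security.debian.org/debian-security bookworm-security main contrib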

@scyto
scyto / docker-auto-label.md
Last active September 6, 2024 03:57
docker auto label

Auto Label

This container puts a label on each machine based on a config file that matches service names. If the service is running on a node, the label is set to 1; if it is not running, the label is set to 0. This can be used with placement constraints to either co-locate services on a node with another service, or make sure a service doesn't land on a node with another service.
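For example, another service can then be pinned to (or kept away from) nodes carrying such a label via placement constraints in its stack file; the label name "db" and the images here are hypothetical:

services:
  backup:
    image: example/backup:latest
    deploy:
      placement:
        constraints:
          - node.labels.db == 1   # only run where the "db" service is running
  web:
    image: example/web:latest
    deploy:
      placement:
        constraints:
          - node.labels.db == 0   # never run where the "db" service is running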

I would love to find a better version of this that works without the need for a manual config file (you can use a file bind mount instead of a config if you prefer).

Swarm Consideration

State is all read-only in a config

@scyto
scyto / portception.md
Last active October 3, 2023 04:13
portception

Portception - deploying portainer with portainer in a swarm

No one should be like scyto; no one should do this..... be prepared to see your Portainer disappear in a puff of smoke if you get this wrong.

Prep

  1. This assumes all nodes are manager nodes.
  2. This assumes you already have agents managed as a stack / swarm service via Portainer (see my other not-recommended stack; a sketch of such a stack follows this list).
  3. This assumes you have the Portainer bind mounts on some shared medium (Ceph, Gluster, NFS, SMB; if you run it on one of the last two, don't blame me if it corrupts).
  4. My suggestion is to get your non-managed Portainer working with your shared storage before you go any further.
  5. BACKUP ALL YOUR STACKS / SECRETS AND CONFIGS - WORST CASE YOU CAN RECREATE EVERY STACK / SECRET / CONFIG BY HAND FAIRLY QUICKLY.
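A minimal sketch of what such a Portainer-in-swarm stack might look like (image tags, the /mnt/shared path and the published port are assumptions; adapt the data volume to your shared medium):

version: "3.8"

services:
  agent:
    image: portainer/agent:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    networks:
      - agent_network
    deploy:
      mode: global

  portainer:
    image: portainer/portainer-ce:latest
    command: -H tcp://tasks.agent:9001 --tlsskipverify
    ports:
      - "9443:9443"
    volumes:
      - /mnt/shared/portainer:/data        # must live on the shared medium
    networks:
      - agent_network
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]

networks:
  agent_network:
    driver: overlay
    attachable: true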

THIS GIST IS NOW DEPRECATED. THE NEW ONE IS AVAILABLE HERE. I WON'T BE UPDATING THIS ONE OR REPLYING TO COMMENTS ON IT (COMMENTS NOW DISABLED).

Enable Dual Stack (IPv4 and IPv6) OpenFabric Routing

this gist is part of this series

This assumes you are running Proxmox 8.2 and that the line source /etc/network/interfaces.d/* is at the end of the interfaces file (this is automatically added to both new and upgraded installations of Proxmox 8.2).

This changes the previous file design. Thanks to @NRGNet for the suggestion to move the Thunderbolt settings to a file in /etc/network/interfaces.d; it makes the system much more reliable in general and more maintainable, especially for folks using IPv4 on the private cluster network (I still recommend the IPv6 FC00 network you will see in these docs).

@scyto
scyto / .migrate-docker-swarmVMs.md
Last active January 17, 2025 23:49
Migrate Docker Swarm VMs from Hyper-V to Proxmox

Introduction

This is the migration that has to work, even more so for the domain controllers. This is what my swarm looks like:

You may want to read from the bottom up, as the later migrations are where I had the process more locked down and did less experimentation.

The plan

So the plan is as follows (and is based, oddly enough, on my experience with Home Assistant):

  1. Back up the node 1 VM with Synology Hyper-V backup
@scyto
scyto / homeassistant-migration.md
Last active September 23, 2023 19:35
Migrating Home Assistant OVA VM from Hyper-V to Proxmox / QEMU

Migrating Home Assistant OVA VM from Hyper-V to Proxmox

Now that I have nailed the qm disk import command, and given that all Linux kernels after 5.6 include the virtio drivers, this should be a breeze!

Export

Export the VHD from Hyper-V to a share Proxmox can see (tbh, at this point if you don't know how...)

Create VM on proxmox

I created a 4GB VM with no disks at all and the virtio network device. Make sure you connect it to a live bridge or Home Assistant will hang at starting the network manager. I also added a TPM device.
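A sketch of the CLI equivalent of the steps above (the VM ID, bridge, storage name and paths are placeholders; the imported disk name may differ, so check qm config before attaching):

# create a 4GB VM with no disks and a virtio NIC on a live bridge
qm create 100 --name haos --memory 4096 --net0 virtio,bridge=vmbr0

# import the exported VHD onto the target storage (it lands as an unused disk)
qm disk import 100 /mnt/share/haos.vhdx local-lvm --format raw

# attach the imported disk and make it the boot device
qm set 100 --virtio0 local-lvm:vm-100-disk-0
qm set 100 --boot order=virtio0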