this gist is part of this series
- add the thunderbolt and thunderbolt-net kernel modules (this must be done on all nodes - yes i know it can sometimes work without them, but the thunderbolt-net one has interesting behaviour, so do as i say - add both ;-) using `nano /etc/modules`
- add the modules at the bottom of the file, one on each line
- save using ctrl-x, then y, then enter
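For reference, a minimal sketch of the end result (the rest of the file will vary per node), plus a quick way to load and confirm the modules without waiting for a reboot:

```bash
# /etc/modules should now end with these two lines, one per line:
#   thunderbolt
#   thunderbolt-net

# load them immediately (they will auto-load on future boots) and confirm
modprobe thunderbolt
modprobe thunderbolt-net
lsmod | grep thunderbolt
```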
Doing this means we don't have to give each thunderbolt a manual IPv6 or IPv4 address and that these addresses stay constant no matter what.
Add the following to each node using `nano /etc/network/interfaces`. If you see any sections called thunderbolt0 or thunderbolt1, delete them at this point.
The comments below are to remind you not to edit en05 and en06 in the GUI.
This fragment should go between the existing `auto lo` section and the adapter sections.
```
iface en05 inet manual
#do not edit in GUI
iface en06 inet manual
#do not edit in GUI
```
If you see any thunderbolt sections, delete them from the file before you save it.
**DO NOT DELETE** the `source /etc/network/interfaces.d/*` line - this will always exist on the latest versions and should be the last or next-to-last line in the /etc/network/interfaces file.
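As a rough sketch only (your NIC and bridge sections will differ and stay untouched), the relevant parts of /etc/network/interfaces end up looking like this:

```
auto lo
iface lo inet loopback

iface en05 inet manual
#do not edit in GUI

iface en06 inet manual
#do not edit in GUI

# ... your existing physical NIC and bridge sections remain here unchanged ...

source /etc/network/interfaces.d/*
```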
This is needed as proxmox doesn't recognize the thunderbolt interface names. There are various methods to do this. This method was selected after trial and error because:
- the thunderboltX naming is not fixed to a port (it seems to be based on the sequence you plug the cables in)
- the MAC address of the interfaces changes with most cable insertion and removal events
- use the `udevadm monitor` command to find your device IDs when you insert and remove each TB4 cable. Yes, you can use other ways to do this; i recommend this one as it is a great way to understand what udev does - the command proved more useful to me than the syslog or `lspci` command for troubleshooting thunderbolt issues and behaviours. In my case my two PCI paths are `0000:00:0d.2` and `0000:00:0d.3` - if you bought the same hardware this will be the same on all 3 units. Don't assume your PCI device paths will be the same as mine (a quick cross-check is sketched below).
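If you want a quick cross-check of the PCI path without watching the monitor output, here is a sketch (assuming your interface currently shows up as thunderbolt0 - adjust the name to whatever you actually have):

```bash
# print the sysfs path of the thunderbolt network interface;
# the 0000:00:0d.x portion is the PCI address to use (as pci-0000:00:0d.x) in the .link file
udevadm info -q path /sys/class/net/thunderbolt0
```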
- create a link file using `nano /etc/systemd/network/00-thunderbolt0.link` and enter the following content:
```
[Match]
Path=pci-0000:00:0d.2
Driver=thunderbolt-net
[Link]
MACAddressPolicy=none
Name=en05
```
- create a second link file using `nano /etc/systemd/network/00-thunderbolt1.link` and enter the following content:
```
[Match]
Path=pci-0000:00:0d.3
Driver=thunderbolt-net
[Link]
MACAddressPolicy=none
Name=en06
```
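Before rebooting you can sanity-check that udev will actually match the new link files - a sketch, again assuming the interfaces are currently still named thunderbolt0/thunderbolt1:

```bash
# dry-run the link policy against the current interface; if the match works,
# the output should reference 00-thunderbolt0.link and the new name en05
udevadm test-builtin net_setup_link /sys/class/net/thunderbolt0
```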
This section ensures that the interfaces will be brought up at boot or on cable insertion with whatever settings are in /etc/network/interfaces - this shouldn't need to be done; it seems like a bug in the way thunderbolt networking is handled (i assume this is debian-wide but haven't checked).
Huge thanks to @corvy for figuring out a script that should make this much, much more reliable for most.
- create a udev rule to detect cable insertion using `nano /etc/udev/rules.d/10-tb-en.rules` with the following content:
```
ACTION=="move", SUBSYSTEM=="net", KERNEL=="en05", RUN+="/usr/local/bin/pve-en05.sh"
ACTION=="move", SUBSYSTEM=="net", KERNEL=="en06", RUN+="/usr/local/bin/pve-en06.sh"
```
- save the file
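For what it's worth, the rules key off ACTION=="move" because udev sees the rename to en05/en06 as a move event. New rules files are normally picked up automatically, but if you want to force it without rebooting, a sketch:

```bash
# reload udev rules so the new 10-tb-en.rules is definitely in effect
udevadm control --reload
```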
- create the first script referenced above using `nano /usr/local/bin/pve-en05.sh` with the following content:
```bash
#!/bin/bash

LOGFILE="/tmp/udev-debug.log"
VERBOSE="" # Set this to "-v" for verbose logging
IF="en05"

echo "$(date): pve-$IF.sh triggered by udev" >> "$LOGFILE"

# If multiple interfaces go up at the same time,
# retry 10 times and break the retry when successful
for i in {1..10}; do
    echo "$(date): Attempt $i to bring up $IF" >> "$LOGFILE"
    /usr/sbin/ifup $VERBOSE $IF >> "$LOGFILE" 2>&1 && {
        echo "$(date): Successfully brought up $IF on attempt $i" >> "$LOGFILE"
        break
    }
    echo "$(date): Attempt $i failed, retrying in 3 seconds..." >> "$LOGFILE"
    sleep 3
done
```
Save the file and then:
- create the second script referenced above using `nano /usr/local/bin/pve-en06.sh` with the following content:
```bash
#!/bin/bash

LOGFILE="/tmp/udev-debug.log"
VERBOSE="" # Set this to "-v" for verbose logging
IF="en06"

echo "$(date): pve-$IF.sh triggered by udev" >> "$LOGFILE"

# If multiple interfaces go up at the same time,
# retry 10 times and break the retry when successful
for i in {1..10}; do
    echo "$(date): Attempt $i to bring up $IF" >> "$LOGFILE"
    /usr/sbin/ifup $VERBOSE $IF >> "$LOGFILE" 2>&1 && {
        echo "$(date): Successfully brought up $IF on attempt $i" >> "$LOGFILE"
        break
    }
    echo "$(date): Attempt $i failed, retrying in 3 seconds..." >> "$LOGFILE"
    sleep 3
done
```
and save the file
- make both scripts executable with `chmod +x /usr/local/bin/*.sh`
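You can also exercise a script by hand before trusting the udev trigger - it just calls ifup and logs to /tmp, so a quick sketch of a manual test:

```bash
# run one of the scripts manually and see what it logged
/usr/local/bin/pve-en05.sh
tail -n 20 /tmp/udev-debug.log
```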
- run `update-initramfs -u -k all` to propagate the new link files into initramfs
- reboot (restarting networking, init 1 and init 3 are not good enough, so reboot)
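After the reboot, a quick sanity check (state will vary if your cables aren't connected yet):

```bash
# the renamed interfaces should now exist
ip -br link show en05
ip -br link show en06
# and the kernel modules should be loaded
lsmod | grep thunderbolt
```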
### Install LLDP - this is great to see which nodes can see each other.
- install lldpd on all 3 nodes with `apt install lldpd`
- execute `lldpctl` - you should see neighbor info
If you are having speed issues, make sure the following is set on the kernel command line in the /etc/default/grub file: `intel_iommu=on iommu=pt`
Once set, be sure to run `update-grub` and reboot.
Everyone's grub command line is different; this is mine. I also have i915 virtualization entries (not shown here) - if you get this wrong you can break your machine, and if you are not doing i915 virtualization you don't need those entries.
`GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"`
(Note: if you have more things in your command line, DO NOT REMOVE them - just add the two intel ones; it doesn't matter where.)
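Once you have run update-grub and rebooted, you can confirm the parameters actually took effect:

```bash
# both intel_iommu=on and iommu=pt should appear in the running kernel's command line
cat /proc/cmdline
```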
To find your P and E cores, run `cat /sys/devices/cpu_core/cpus && cat /sys/devices/cpu_atom/cpus` - you should get two lines on an Intel system with P and E cores; the first line is your P cores and the second line your E cores.
for example on mine:
```
root@pve1:/etc/pve# cat /sys/devices/cpu_core/cpus && cat /sys/devices/cpu_atom/cpus
0-7
8-15
```
- make a file at `/etc/network/if-up.d/thunderbolt-affinity`
- add the following to it - make sure to replace `echo X-Y` with whatever the report told you your performance cores are, e.g. `echo 0-7`
```bash
#!/bin/bash

# Check if the interface is either en05 or en06
if [ "$IFACE" = "en05" ] || [ "$IFACE" = "en06" ]; then
    # Set Thunderbolt IRQ affinity to P-cores
    grep thunderbolt /proc/interrupts | cut -d ":" -f1 | xargs -I {} sh -c 'echo X-Y | tee "/proc/irq/{}/smp_affinity_list"'
fi
```
- save the file - done
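To confirm the affinity is actually being applied once en05/en06 come up (note the script only runs if it is executable, so `chmod +x` it if nothing seems to change), a small check sketch:

```bash
# list the thunderbolt IRQs and the CPUs they are pinned to;
# each should show your P-core range (e.g. 0-7)
for irq in $(grep thunderbolt /proc/interrupts | cut -d ":" -f1); do
    echo -n "IRQ $irq: "
    cat "/proc/irq/$irq/smp_affinity_list"
done
```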
I have only tried this on 6.8 kernels, so YMMV. If you want more TB messages in dmesg to see why a connection might be failing, here is how to turn on dynamic tracing.
For boot time you will need to add it to the kernel command line by adding `thunderbolt.dyndbg=+p` to your /etc/default/grub file, running `update-grub` and rebooting.
To expand the example above:
`GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt thunderbolt.dyndbg=+p"`
Don't forget to run `update-grub` after saving the change to the grub file.
For runtime debug you can run the following command (it will revert on next boot), so this can't be used to capture what happens at boot time.
`echo -n 'module thunderbolt =p' > /sys/kernel/debug/dynamic_debug/control`
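With either method enabled, the extra thunderbolt messages land in the kernel log; a convenient way to watch them live while plugging/unplugging cables:

```bash
# follow the kernel log (human-readable timestamps), thunderbolt lines only
dmesg -wT | grep -i thunderbolt
```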
These tools can be used to inspect your thunderbolt system. Note they rely on Rust being installed; you must use the rustup script below and not install Rust via the package manager at this time (9/15/24).
```bash
apt install pkg-config libudev-dev git curl
curl https://sh.rustup.rs -sSf | sh
git clone https://github.com/intel/tbtools
```
restart your ssh session
```bash
cd tbtools
cargo install --path .
```
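The binaries land in ~/.cargo/bin, which rustup adds to your PATH (hence restarting the ssh session). A sketch of checking what got installed - tblist is, if I remember right, one of the tbtools commands for listing devices, but check the repo README for the current set:

```bash
# see which tb* tools cargo installed
ls ~/.cargo/bin/
# e.g. enumerate connected thunderbolt devices (command name per the tbtools README)
tblist
```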
I think we will all likely get different results. It all depends on what iteration of this gist everyone followed. I have the latest version applied to my nodes as I had to redo some things recently and I figured I might as well go through and make sure the newest scripts were in place.
I think some will have en05/en06 if they are using the thunderbolt files and others will have everything merged into the interfaces file itself (that was my original config long ago, as the `source /etc/network/interfaces.d/*` call wasn't working for some reason).
I'm not sure that what I posted above will work for all, but my intention was to get anyone whose node is just plain stuck at what looks like an almost blank screen to at least booting, and then, as the next step, getting networking operational again. I did note some frr issues with that starting, but I think it was related to not going all of the way through (getting hung up on en05/en06 and just not processing the rest of the interfaces file).
So the first step to get the silly thing to boot was to remove the unlimited time for networking services to come up (not sure why they didn't have a timeout there). I set mine to 30 seconds for both the 'systemd-networkd-wait-online.service' and 'networking.service' files. Both of those were located under /etc/systemd/system/network-online.target.wants/networking.service/ for me. I'm not entirely sure that the systemd-networkd-wait-online.service is even needed and we might be able to just remove that service altogether.
I had issues with ifup for some reason giving me permission denied. I'm not sure what is up with that. I believe my issues were related to duplicate interfaces, some in the /etc/network/interfaces file as well as the /etc/network/interfaces.d/thunderbolt file.
I also made various earlier attempts at commenting out the `source /etc/network/interfaces.d/*` line as you have tried, and got different results each time, which I thought was odd. In the end though, what is working for me is not having `auto en05` and `auto en06` (commented out). Those are not set to "auto start" now, however they start anyway due to the other scripts. The method I used is obviously not ideal, but it should hopefully help with getting a completely broken node up and running, at least in a basic state with networking + ceph. frr is running without any additional changes for me. I do believe, and I'd have to double check, that only one link is working at a time (either en05 or en06), so there is something going on there. For now, I'm leaving 1 node upgraded and will wait until I have more time to investigate further, which likely won't be until later next week. I'm hoping someone smarter than me might sort this out before that time, but if not I will continue on my journey.
Lastly, I feel I need to mention that I tried a LOT of different modifications. I believe that I documented any changes I left and removed any that I reverted, but it is very possible that I missed something I modified in the process which might be very relevant to getting things working. When I boot, it still times out on both of the networking-related services, however things work for now. If anyone gets to where I'm at, be aware that when some servers are migrated to pve9, they might get hung trying to migrate back to pve8. You will have to shut down the VM and then migrate it if this happens. I also don't think it's a good idea to have things this way for very long, but this is my home lab so I'm not super concerned.