Lately I moved to a new VPS to get better bang for the buck; the old machine is called Carrie, the new machine is RBG. I have many different installations, and each needed different handling.
Using a naive rsync between the two Debian machines, I discovered the hard way that syncing over /opt/docker-data (even with both docker and containerd down) was not working, because different versions use different storage metadata formats and such.
You can either export stopped containers, or commit them to images. You can export and import images, but not volumes. Why? Because of stupid reasons. I was too tired and short of patience to find out. I had to trust the backups of mailcow and discourse, and I really hope I will not have any issues with tt-rss.
Update: Apparently I forgot I was an idiot. The dockerd on the new machine was using the default location /var/lib/docker, while on the old machine it's a symlink to the new location /opt/docker-data. I should have covered all my bases and put that in /etc/docker/daemon.json, but I was not aware of that option way back when I started. So maybe I can copy over the volumes with rsync after all. Duh.
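For the record, the daemon.json option is data-root. A minimal sketch (written to a temp file here so as not to clobber a live config; the real file is /etc/docker/daemon.json, and dockerd must be restarted after changing it):

```shell
# Point dockerd at a non-default storage location via daemon.json.
# Writing to a temp file for illustration only.
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
  "data-root": "/opt/docker-data"
}
EOF
cat "$conf"
```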
Next I did some syncing. Using rsync -avP --del I took in:
- /root
- /opt/mailcow
- /var/discourse
- /home
- /backup
- /usr/local/
- /var/lib/bind
- /var/cache/bind
- /var/lib/mysql
- /etc/.git
I then carefully used git diff to unify the old settings of my /etc with my new Debian's default files. That takes a while, but it's worth it.
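That workflow, sketched on a throwaway repo (file name and contents are made up for illustration): commit the old settings, overwrite with the distro defaults, and git diff shows exactly what needs merging back by hand.

```shell
# Simulate the /etc-under-git merge: old settings committed, new
# distro default written over them, git diff shows the delta.
etc=$(mktemp -d)
cd "$etc"
git init -q .
echo "PermitRootLogin no" > sshd_config
git add -A
git -c user.email=me@example.com -c user.name=me commit -qm "old settings"
echo "PermitRootLogin prohibit-password" > sshd_config   # new default
git diff
```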
mkdir -p /home/docker-backup/cont /home/docker-backup/img
cd /home/docker-backup/
for cont in $(docker container ls -q --all) ; do
  docker export -o "cont/$cont.tar" "$cont" ; echo "$cont"
done
# save images that can't be pulled from remote registries
docker image save mastofeeder > img/mastofeeder.tar
docker image save local_discourse/app > img/local_discourse_app.tar
When the stack is stopped, the backup is only partial. You have to back up while the stack is up, but make sure no new mail comes in while you take the final snapshot.
The trick is to stop nginx (the site is down anyway, so you don't get any hits on dynamic websites while moving) and also block SMTP. You do that either by literally blocking the POP, IMAP and SMTP ports in the firewall, or by killing dovecot and postfix, but kill the watchdog first so they don't respawn.
docker compose -f /opt/mailcow-dockerized/docker-compose.yml start
docker stop mailcow-watchdog-mailcow-1 mailcow-postfix-mailcow-1 mailcow-dovecot-mailcow-1
MAILCOW_BACKUP_LOCATION=/backup/mailcow/ \
/opt/mailcow-dockerized/helper-scripts/backup_and_restore.sh backup all
This is a painful issue. Discourse does its own backups; you can't schedule them with your own cron jobs, and you can't really tell it when to back up. You only get to tell it how often to dump a backup, at a granularity of days, not even hours.
I have an inkling this is not entirely correct and there must be a way to trigger a backup via the rails console. I will pursue this some other time; for now I assumed no new posts had happened since the last backup, because it is a very low volume site.
As always, I forgot to get all the TTLs shortened in advance from 1 week to 1 hour. Well, crap.
I moved the different bits of the system in /etc/bind, /var/cache/bind (DNSSEC keys) and /var/lib/bind (zones), cursed whoever at Debian decided it should be spread around like that, and stopped the old named after the new machine became the primary for my mirror DNS.
Then I had to change the NS records at the registrar. Crossing fingers.
The default docker packages from Debian are no good for some reason; Mailcow complains they are too old. So follow the instructions to install docker-ce.
Then I headed to /opt/mailcow-dockerized and ran the update script. The unbound container would not load, but I first wanted to do a recovery and see what happens.
MAILCOW_BACKUP_LOCATION=/backup/mailcow/ \
/opt/mailcow-dockerized/helper-scripts/backup_and_restore.sh restore
I ran the update again; this time unbound comes up, but instead of "started" it says "Healthy". Not sure what that is about, but I decided it's green, so I'm not going to worry about it, and just in case I ran the restore again. This time it also finished without any error message, so I am a tiny bit happier.
A few more restarts later, unbound is running; the netfilter service is dying 2-3 times a minute, but all in all the service works, so I decided to come back to this later.
Next came discourse. The basic idea was to rsync the /var/discourse directory and run launcher rebuild app, but that broke on the wrong ownership of the postgresql files. The solution was to move shared/standalone/postgres_data aside, let the rebuild create it again, see what ownership the fresh directory gets, chown the old postgres_data to match, and rename it back into place. The site came up like a charm.
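The ownership dance can be sketched like this, on throwaway directories (the real paths live under shared/standalone; GNU chown's --reference flag copies the owner from the freshly created directory):

```shell
# Stand-ins for the old and freshly recreated postgres_data dirs:
# make old_data's owner match new_data's, then swap old_data back.
work=$(mktemp -d)
mkdir "$work/old_data" "$work/new_data"
chown --reference="$work/new_data" "$work/old_data"
mv "$work/new_data" "$work/new_data.unused"
mv "$work/old_data" "$work/new_data"
stat -c '%U' "$work/new_data"
```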
Next I had a few blogs that came up with the wrong encoding, all garbled mojibake.
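What that looks like, reproduced with iconv (my guess at the failure mode: UTF-8 bytes stored in latin1 tables being decoded as latin1 again):

```shell
# The UTF-8 encoding of "é" is the two bytes 0xC3 0xA9; reading
# those bytes back as latin1 yields the classic two-character mess.
printf 'café' | iconv -f LATIN1 -t UTF-8
```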
The solution in those cases is:
- take a mysqldump with special switches
- sed the latin1 tables to utf8
- jam it back in
e.g.:
mysqldump ira_blog --skip-set-charset --default-character-set=latin1 | \
sed 's/CHARSET=latin1 COLLATE=latin1_swedish_ci/CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci/g' > irablog.sql
mysql ira_blog --default-character-set=utf8mb4 < irablog.sql
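The sed substitution itself can be checked on a sample line before running it against a real dump (the sample is shaped like a mysqldump CREATE TABLE footer; real dumps may or may not print the COLLATE clause):

```shell
# Verify the charset rewrite on one representative line.
line=') ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_swedish_ci;'
echo "$line" | sed 's/CHARSET=latin1 COLLATE=latin1_swedish_ci/CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci/g'
```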
Then I had 3 blogs without any text rendered. I assumed this was a malfunction of some of the plugins, but using the WP CLI I discovered none of them had a default theme, and setting one resolved it.
File under "Shit that is still written in perl" :(
Logs said DBD::mysql was missing, but cpan could not build it because it was missing a mysql_config binary. Even once I found and installed libmariadb-dev, GCC would not compile it properly.
Finally, I discovered I needed apt install libclass-dbi-mysql-perl.
I was about to give up on that and installed the datadog agent to try it out. It's nice once you find the arms and legs in /etc/datadog, but the free version keeps only 1 day of stats. Leaving it alongside for now because it's free, and maybe I can hook it up to send me alerts for free...