Or in my case, to many disks. I have 4x Samsung 990 drives that I want to set up in RAID10 with BTRFS, and then move my /home directory over to these disks. Here are the steps I took to get everything set up and moved over. I went with RAID10 on BTRFS, but you can use this guide to move any subvolume to a new disk, whatever the setup might be.
All of this was done on Fedora 38 Workstation. Your mileage may vary.
WARNING: BACKUP YOUR DATA BEFORE DOING THIS!
Seriously, back up your data first if you care about it. I use restic and my fancy restic-systemd-units to take incremental backups to my TrueNAS Scale server and to a local secondary hard drive in my workstation. I will probably be moving to incremental local backups with snapper in the future.
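If you do not have a backup routine set up yet, even a one-off restic snapshot of /home is better than nothing. A minimal sketch, assuming restic is installed and you have an empty directory on another physical disk to use as the repository (the repository path here is just a placeholder):
restic init --repo /run/media/backup/restic-repo
sudo restic --repo /run/media/backup/restic-repo backup /home
restic will ask for a repository password on init and again for each backup unless you set RESTIC_PASSWORD or pass --password-file.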
Using the GNOME Disks app, I formatted the 4 drives as ext4 with LUKS encryption. Use the same LUKS password for each drive. Unmount the disks and make note of each /dev/mapper/luks-UUID device:
/dev/nvme3n1
/dev/mapper/luks-438ecb52-24f2-44f7-abf1-03c0be08f6d7
/dev/nvme4n1
/dev/mapper/luks-7f3d9233-8104-466e-80be-5582ab04d61c
/dev/nvme5n1
/dev/mapper/luks-17425e70-5942-4813-94cf-504e80afc961
/dev/nvme6n1
/dev/mapper/luks-2590bf2e-defe-4ae6-b54c-443c458eb35d
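If you would rather skip the GUI, the same result can be had from a terminal with cryptsetup. A rough sketch, assuming the drives are the same whole-disk devices listed above and that it is fine to wipe them (the ext4 filesystems are not actually needed, since mkfs.btrfs overwrites them later anyway):
for dev in /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1; do
    sudo cryptsetup luksFormat "$dev"          # prompts for the shared passphrase
    uuid=$(sudo cryptsetup luksUUID "$dev")    # UUID of the new LUKS header
    sudo cryptsetup open "$dev" "luks-$uuid"   # creates /dev/mapper/luks-<UUID>
done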
Add the new LUKS devices to /etc/crypttab so the system decrypts them after prompting for the password:
luks-438ecb52-24f2-44f7-abf1-03c0be08f6d7 UUID=438ecb52-24f2-44f7-abf1-03c0be08f6d7
luks-7f3d9233-8104-466e-80be-5582ab04d61c UUID=7f3d9233-8104-466e-80be-5582ab04d61c
luks-17425e70-5942-4813-94cf-504e80afc961 UUID=17425e70-5942-4813-94cf-504e80afc961
luks-2590bf2e-defe-4ae6-b54c-443c458eb35d UUID=2590bf2e-defe-4ae6-b54c-443c458eb35d
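Rather than copying the UUIDs out of GNOME Disks by hand, you can generate the crypttab lines. A small sketch, again assuming LUKS sits directly on the four whole-disk devices; paste the output into /etc/crypttab:
for dev in /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1; do
    uuid=$(sudo blkid -s UUID -o value "$dev")   # UUID of the LUKS container
    echo "luks-$uuid UUID=$uuid"
done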
Using the /dev/mapper/luks-UUID devices above, create the filesystem with the RAID10 settings. Also use -f, because there will be an ext4 filesystem on each device left over from being lazy with the LUKS setup in GNOME Disks above.
mkfs.btrfs -m raid10 -d raid10 /dev/mapper/luks-438ecb52-24f2-44f7-abf1-03c0be08f6d7 /dev/mapper/luks-7f3d9233-8104-466e-80be-5582ab04d61c /dev/mapper/luks-17425e70-5942-4813-94cf-504e80afc961 /dev/mapper/luks-2590bf2e-defe-4ae6-b54c-443c458eb35d -f
# mkfs.btrfs -m raid10 -d raid10 /dev/mapper/luks-438ecb52-24f2-44f7-abf1-03c0be08f6d7 /dev/mapper/luks-7f3d9233-8104-466e-80be-5582ab04d61c /dev/mapper/luks-17425e70-5942-4813-94cf-504e80afc961 /dev/mapper/luks-2590bf2e-defe-4ae6-b54c-443c458eb35d -f
btrfs-progs v6.3.2
See https://btrfs.readthedocs.io for more information.
NOTE: several default settings have changed in version 5.15, please make sure
      this does not affect your deployments:
      - DUP for metadata (-m dup)
      - enabled no-holes (-O no-holes)
      - enabled free-space-tree (-R free-space-tree)
Label: (null)
UUID: de7d7870-c33b-4bd2-a46d-b8ff806727f5
Node size: 16384
Sector size: 4096
Filesystem size: 7.28TiB
Block group profiles:
  Data:     RAID10     2.00GiB
  Metadata: RAID10   512.00MiB
  System:   RAID10    16.00MiB
SSD detected: yes
Zoned device: no
Incompat features: extref, skinny-metadata, no-holes, free-space-tree
Runtime features: free-space-tree
Checksum: crc32c
Number of devices: 4
Devices:
   ID  SIZE     PATH
    1  1.82TiB  /dev/mapper/luks-438ecb52-24f2-44f7-abf1-03c0be08f6d7
    2  1.82TiB  /dev/mapper/luks-7f3d9233-8104-466e-80be-5582ab04d61c
    3  1.82TiB  /dev/mapper/luks-17425e70-5942-4813-94cf-504e80afc961
    4  1.82TiB  /dev/mapper/luks-2590bf2e-defe-4ae6-b54c-443c458eb35d
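Before going any further you can double-check that all four devices really ended up in the new filesystem; btrfs filesystem show accepts the UUID from the output above:
sudo btrfs filesystem show de7d7870-c33b-4bd2-a46d-b8ff806727f5
Once the filesystem is mounted in the steps below, sudo btrfs filesystem df /mnt/btrfs should also report RAID10 for the Data, Metadata and System profiles.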
Take note of the UUID (de7d7870-c33b-4bd2-a46d-b8ff806727f5) from the output.
Take note of the old /home directory information:
$ df -h /home
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/luks-12a79b0a-d32f-44f0-b038-05e1f676bda8 930G 779G 149G 84% /home
# grep -F /home /etc/fstab
UUID=532c1b37-faf8-4e26-a0e7-3b54a5521c64 /home btrfs subvol=home,compress=zstd:1,x-systemd.device-timeout=0 0 0
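Since that fstab entry mounts /home as the home subvolume, you can also list the subvolumes on the old filesystem to see what you will be working with (on a stock Fedora Workstation layout this shows at least root and home):
sudo btrfs subvolume list /home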
- Create a mount point: sudo mkdir /mnt/btrfs
- Mount the new filesystem using the UUID from the mkfs.btrfs command output: sudo mount /dev/disk/by-uuid/de7d7870-c33b-4bd2-a46d-b8ff806727f5 /mnt/btrfs
- Create a read-only snapshot of the /home directory: sudo btrfs subvolume snapshot -r /home /home_$(date '+%Y%m%d')
- Send this read-only snapshot to the newly mounted RAID10 BTRFS filesystem: sudo btrfs send /home_$(date '+%Y%m%d') | sudo btrfs receive /mnt/btrfs/. This took me about 45 minutes to move 812G of data. Go for a walk or something!
- Now create another snapshot: sudo btrfs subvolume snapshot /mnt/btrfs/home_$(date '+%Y%m%d') /mnt/btrfs/home. You could also mv /mnt/btrfs/home_$(date '+%Y%m%d') to /mnt/btrfs/home and clear the read-only flag off the snapshot (see the sketch after this list), but if you want to do incremental backups of your /home directory with something like snapper, it is better to create a read-write snapshot of /mnt/btrfs/home_$(date '+%Y%m%d') instead.
- Edit /etc/fstab so the /home mount point uses the UUID from the mkfs.btrfs output.
- Reboot!
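For reference, clearing the read-only flag mentioned in step 5 is done with btrfs property. A quick sketch, assuming you went the mv route and the snapshot now lives at /mnt/btrfs/home:
sudo btrfs property get -ts /mnt/btrfs/home ro    # shows ro=true for the received snapshot
sudo btrfs property set -ts /mnt/btrfs/home ro false
The snapshot route in step 5 leaves the original received snapshot untouched, which is why it is the better fit for snapper-style incremental backups.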
An example of the updated /home entry in /etc/fstab for step 6 above:
# grep -F /home /etc/fstab
UUID=de7d7870-c33b-4bd2-a46d-b8ff806727f5 /home btrfs subvol=home,compress=zstd:1,x-systemd.device-timeout=0 0 0
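Before rebooting it does not hurt to sanity-check the edited file. findmnt has a verify mode that parses /etc/fstab and flags obvious mistakes, such as a source UUID that does not exist:
sudo findmnt --verify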
You are now running off of your new RAID10 BTRFS filesystem. Once you settle into your new /home, you can safely delete the home_$(date '+%Y%m%d') subvolume and the old home subvolume on your older drive (a rough sketch of that cleanup follows the df output below).
$ df -h /home
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/luks-de7d7870-c33b-4bd2-a46d-b8ff806727f5 3.7T 811G 2.9T 22% /home
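Both cleanups are plain btrfs subvolume delete calls. A sketch, assuming you clean up on the same day the snapshot was taken (otherwise substitute the real home_YYYYMMDD name) and that /mnt/old is a throwaway mount point I made up; the luks-12a79b0a device is the old drive from the df output earlier:
# drop the transfer snapshot from the new pool
sudo mount /dev/disk/by-uuid/de7d7870-c33b-4bd2-a46d-b8ff806727f5 /mnt/btrfs
sudo btrfs subvolume delete /mnt/btrfs/home_$(date '+%Y%m%d')
sudo umount /mnt/btrfs
# mount the old filesystem's top level and drop its home subvolume
sudo mkdir -p /mnt/old
sudo mount -o subvolid=5 /dev/mapper/luks-12a79b0a-d32f-44f0-b038-05e1f676bda8 /mnt/old
sudo btrfs subvolume delete /mnt/old/home
sudo umount /mnt/old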