# Recover Data from ReadyNas Duo RAID 1 drives
#
# DISCLAIMER : This could corrupt all your data - if you are at all worried, get a professional to recover the data for you
#
# The tools mentioned below have the ability to corrupt and wipe all your data
#
# #### USE AT YOUR OWN RISK ####
#
# With that out of the way....
#
# It worked for me: 2 x 1TB drives, one would not even be recognised as a drive, the other would mount but with lots of IO errors.
#
# You will need a new drive of the same size (or bigger) to clone to, and possibly another drive to then copy the files to.
# (The ReadyNas FS is a pain to mount - so I copied the repaired disk's files back to OSX)
#
# You will need Linux to repair and mount the drives.
# If you're on Windows or OSX, spin up a VM like Ubuntu Server (Alpine etc. is too lightweight and doesn't have the tools)
# If you use a VM, be sure to map the drives attached to the host over to the VM so it can access them (one way to do that is sketched below).
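# For example, if your VM happens to be in VirtualBox, you can wrap a physical disk in a raw VMDK on the
# host and attach that file to the VM as an existing disk (I actually used VMware and shared the drives
# through its own UI, so treat this as a sketch only - /dev/diskN is a placeholder for whatever your
# host calls the drive):
# VBoxManage internalcommands createrawvmdk -filename ~/bad-disk.vmdk -rawdisk /dev/diskN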
#
# First we need to clone one of the bad disks (from RAID 1) to a new good disk, to stop IO errors when working with the disk
#
# attach both the damaged drive and a working blank drive with equal or greater storage, and find the device names
#
sudo fdisk -l
# triple check you have the correct device names, if you get them the wrong way round you could wipe your data
# If in doubt, disconnect the good drive and run fdisk again to see what's attached as the bad drive.
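# As an extra sanity check, lsblk can print the model and serial of each disk, which makes it much
# easier to tell the bad drive from the blank one (these are standard lsblk columns):
sudo lsblk -o NAME,SIZE,MODEL,SERIAL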
# now install gddrescue: https://www.gnu.org/software/ddrescue/
sudo apt install gddrescue
# Clone the damaged drive data to the working drive
# https://datarecovery.com/rd/how-to-clone-hard-disks-with-ddrescue/
#
# There are many options for ddrescue, I went for disk-to-disk as I could not mount the drive successfully
# -f   force overwriting the destination drive (required for a disk to disk clone)
# -r3  retry the bad areas up to 3 times (does put more strain on the drive, but worked in my scenario)
#
# Copy FROM the BAD drive (sdb in my case) TO the GOOD drive (sdc in my case).
# Make sure this is the correct way round or you could lose data.
sudo ddrescue -f -r3 /dev/sdb /dev/sdc /tmp/ddrescue.log
# This will take some time. For a 1TB drive it took 30 hours to run!
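# Handy to know: ddrescue records its progress in that log/map file (/tmp/ddrescue.log above),
# so if the clone gets interrupted you can re-run the exact same command and it will resume
# where it left off rather than starting again:
# sudo ddrescue -f -r3 /dev/sdb /dev/sdc /tmp/ddrescue.log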
# .....
# 30 hours later
# check if you can see the partition tables on the NEW drive (for me /dev/sdc)
sudo fdisk -l
# For me no devices were listed under /dev/sdc, so no partitions could be found.
# If you CAN see partitions, then skip to mounting the drive - if not....
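# (If you prefer, parted gives the same information and also reports the partition table type,
# which is useful to know before running testdisk - sdc is just my device name)
sudo parted /dev/sdc print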
# You need to try to fix the partition table with testdisk
sudo apt install testdisk
# and run testdisk (it needs root to access the raw device)
sudo testdisk
# Follow the prompts (https://www.cleverfiles.com/howto/testdisk-review.html)
# > pick the new drive (/dev/sdc)
# > choose EFI GPT partition table
# > Select Analyse
# > Perform a Quick Search to find the partition tables
# > If it's listed with a P next to it, you can then choose "Write to disk" to restore the partition table.
# > If not, try a "Deep Search" - this can take hours (8 hrs for my 1TB drive) - and then restore the table (you can double check it afterwards, see below)
# > Quit testdisk
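# To double check the restored table, ask the kernel to re-read it and list the partitions again
# (partprobe ships with the parted package - this is just a sanity check):
sudo partprobe /dev/sdc
sudo fdisk -l /dev/sdc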
# Now you hopefully have a working clone of the ReadyNas drive
# To mount the newly created clone of the drive,
# search for the LVM volume group name
sudo vgscan
# > Found volume group "c" using metadata type lvm2
# "c" is the ReadyNas volume group.
# I had issues here with a metadata error (I think this was from mounting one of the corrupted drives previously, and then mounting the clone that had the same ID)
# to fix the metadata I ran
sudo vgck --updatemetadata c
# now activate the LVM group ("c" being the name of the group from vgscan)
sudo vgchange -ay c
# > 1 logical volume(s) in volume group "c" now active
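# If you want to see the logical volume itself (and the device path it gets), lvs can list the
# LVs in the group - on this ReadyNas layout the single LV ends up at /dev/c/c:
sudo lvs c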
# now mount the drive.
# We can't just use `mount` as the ReadyNas drives use an odd block size, so we need to use fuse
sudo apt install fuse
# (the fuse-ext2 command used below is packaged separately - on Debian/Ubuntu the package is called fuseext2)
# using fuse, mount the drive: /dev/c/c being the LVM location of the good drive, /media/readyNasDuo being the mount location.
sudo mkdir -p /media/readyNasDuo
sudo fuse-ext2 -o sync_read,allow_other,rw+ /dev/c/c /media/readyNasDuo
# finally check your data
ls /media/readyNasDuo
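# A quick way to confirm the volume is actually mounted (and see how much data is on it):
df -h /media/readyNasDuo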
# Next I copied this data back over to my host OSX
# I had a drive shared via VMWare, and used the vmware tools to mount the host drive
vmhgfs-fuse .host:/ /mnt/hgfs -o allow_other
# and now I could copy all the files from the newly recovered NAS drive over to my host
rsync -av /media/readyNasDuo /mnt/hgfs/myNewDrive
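# Optionally, once the copy finishes, a dry run with checksums is a cheap way to confirm nothing
# was missed or corrupted in transit - it should report no files left to transfer:
rsync -avn --checksum /media/readyNasDuo /mnt/hgfs/myNewDrive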
# And after about 24 hours of copying I had all my data back
# finally!