@popmonkey
Last active July 6, 2025 04:25
Time Machine-like backup based on rsync
#!/bin/bash
#==============================================================================
#
# Rewind: an Enhanced Time Machine-like Backup Script
#
# Description: A command-line tool for creating incremental, space-efficient
# backups using rsync and hard links. It supports creating new
# backups, restoring from existing ones, and listing all
# available backups.
#
# Author: Popmonkey and Gemini
# Version: 3.7
#
#==============================================================================
# --- Environment Setup ---
# Force a UTF-8 locale to handle special characters in filenames correctly across
# different systems (Linux, macOS). This prevents 'multibyte conversion' errors.
export LANG=C.UTF-8
export LC_ALL=C.UTF-8
# --- Global Variables ---
# Determine the script's own directory for reliable file access.
SCRIPT_DIR="$(dirname "$0")"
# --- Color Definitions ---
# Check if the script is running in an interactive terminal (stdout is a tty).
if [ -t 1 ]; then
# If it is a terminal, define color codes for formatted output.
C_RED='\033[0;31m'
C_GREEN='\033[0;32m'
C_CYAN='\033[0;36m'
C_YELLOW='\033[0;33m' # For the summary section
C_GRAY='\033[0;90m' # Light gray for informational messages
C_NC='\033[0m' # No Color (to reset text formatting)
else
# If output is redirected to a file or pipe, disable colors.
C_RED=''
C_GREEN=''
C_CYAN=''
C_YELLOW=''
C_GRAY=''
C_NC=''
fi
# --- Logging Functions ---
# Standardized functions for different types of console output.
# log_error: Prints a message in red to standard error.
log_error() {
echo -e "${C_RED}Error: $1${C_NC}" >&2
}
# log_success: Prints a message in green to standard output.
log_success() {
echo -e "${C_GREEN}$1${C_NC}"
}
# log_info: Prints a message in gray to standard output.
log_info() {
echo -e "${C_GRAY}$1${C_NC}"
}
# log_summary: Prints a message in yellow for the final summary.
log_summary() {
echo -e "${C_YELLOW}$1${C_NC}"
}
# --- Configuration ---
# Set default values for key script variables.
INPROGRESS_PREFIX="inprogress-"
# --- Usage Information ---
# Functions to display help messages for the script and its commands.
# usage: Displays the main help message with a list of commands.
usage() {
echo "Usage: $0 <command> [options]"
echo ""
log_info "A script to create, manage, and restore time machine-like backups."
echo ""
echo "Commands:"
echo " backup Create a new backup."
echo " restore Restore files from a specific backup."
echo " list List all available backups in a repository."
echo " help Display this help message."
echo ""
echo "For command-specific options, run: $0 <command> --help"
}
# usage_backup: Displays help specific to the 'backup' command.
usage_backup() {
echo "Usage: $0 backup --source <dir> --repository <dir> [options]"
echo ""
echo "Options for backup:"
echo " -s, --source <dir> (Required) The directory to back up."
echo " -r, --repository <dir> (Required) The directory where backups are stored."
echo " -e, --exclude <file> Path to a custom rsync exclude file."
echo " -n, --dry-run Simulate the backup without making changes."
}
# usage_restore: Displays help specific to the 'restore' command.
usage_restore() {
echo "Usage: $0 restore --repository <dir> --target <dir> [options]"
echo ""
echo "Options for restore:"
echo " -r, --repository <dir> (Required) The directory where backups are stored."
echo " -t, --target <dir> (Required) The directory to restore files to."
echo " -f, --from <name> The name of the backup to restore (e.g., 'back-...'). Defaults to 'current'."
echo " -n, --dry-run Simulate the restore without making changes."
}
# usage_list: Displays help specific to the 'list' command.
usage_list() {
echo "Usage: $0 list --repository <dir>"
echo ""
echo "Options for list:"
echo " -r, --repository <dir> (Required) The directory where backups are stored."
}
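# Illustrative invocations (the paths below are placeholders, not defaults):
#
#   ./rewind.sh backup  --source /home/me/documents --repository /mnt/usb/backups
#   ./rewind.sh list    --repository /mnt/usb/backups
#   ./rewind.sh restore --repository /mnt/usb/backups --target /tmp/recovered \
#                       --from back-2025-07-05T10-00-00 --dry-run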
# --- Core Functions ---
# print_rsync_summary: Parses the rsync log to print a summary of file operations.
print_rsync_summary() {
local log_file="$1"
if [ ! -s "$log_file" ]; then
return
fi
log_summary "--- Operation Summary ---"
# Count files that were newly copied or updated (i.e., data was transferred).
# These are files that were new or changed since the last backup.
local copied_updated
copied_updated=$(grep -c '^[>c]f' "$log_file")
log_summary " Files Copied/Updated: $copied_updated"
# Extract the regular-file count from rsync's stats output (hard links apply
# only to regular files); fall back to the total if no breakdown is present.
local total_files
total_files=$(awk '/Number of files:/ { if (match($0, /reg: [0-9,]+/)) n = substr($0, RSTART + 5, RLENGTH - 5); else n = $4; gsub(",", "", n); print n; exit }' "$log_file")
# In a --link-dest scenario, any file that wasn't copied/updated was
# successfully hard-linked from the previous backup.
# We also check if total_files has a value to avoid errors.
local linked=0
if [[ -n "$total_files" && "$total_files" -ge "$copied_updated" ]]; then
linked=$((total_files - copied_updated))
fi
log_summary " Files Hard-Linked: $linked"
# The 'Unchanged' category is removed as it's redundant and confusing in this model.
# A file is either new/updated or it's hard-linked from the previous state.
log_summary "-------------------------"
}
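# For reference, the parser above keys on rsync log lines roughly like the
# following (approximate rsync 3.x output; filenames illustrative):
#   >f+++++++++ a_new_file.dat       <- new file, counted as copied/updated
#   >f.st...... a_subdir/content.log <- changed file, counted as copied/updated
#   cd+++++++++ a_subdir/            <- directory, not matched by '^[>c]f'
#   Number of files: 9 (reg: 6, dir: 2, link: 1)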
# run_rsync: A wrapper function to execute rsync, handle progress, and manage logs.
run_rsync() {
local rsync_args=("$@")
local dry_run_flag=""
for arg in "${rsync_args[@]}"; do
if [[ "$arg" == "--dry-run" || "$arg" == "-n" ]]; then
dry_run_flag="--dry-run"
break
fi
done
rsync_args+=("--itemize-changes")
local rsync_full_log
rsync_full_log=$(mktemp)
local rsync_exit_code_file
rsync_exit_code_file=$(mktemp)
trap 'rm -f "$rsync_full_log" "$rsync_exit_code_file"' EXIT SIGHUP SIGINT SIGQUIT SIGTERM
if [ -z "$dry_run_flag" ] && command -v stdbuf >/dev/null 2>&1; then
log_info "Starting transfer..."
# AWK script to process rsync output and create a detailed progress bar.
# It keeps track of the last file being processed and displays it next to the progress counter.
local progress_awk_script='
BEGIN { current_file = ""; }
# Match itemized lines to capture the current filename.
/^[<>c.d*h]/ && NF > 1 {
current_file = "";
for (i = 2; i <= NF; i++) { current_file = current_file " " $i; }
sub(/^ /, "", current_file);
}
# Match progress lines to print the status.
/\(xfr#|to-chk=|ir-chk=/ {
match($0, /\(.*(xfr|to-chk|ir-chk).*\)/);
if (RSTART) {
progress_info = substr($0, RSTART, RLENGTH);
term_width = ENVIRON["COLUMNS"];
if (term_width == "" || term_width == 0) { term_width = 80; }
padded_progress = sprintf("%-30s", progress_info);
max_file_len = term_width - length(padded_progress) - 1;
if (max_file_len < 10) { max_file_len = 10; }
truncated_file = current_file;
if (length(truncated_file) > max_file_len) {
truncated_file = "..." substr(truncated_file, length(truncated_file) - max_file_len + 4);
}
output_str = padded_progress truncated_file;
# Print the formatted string, padding with spaces to clear the line.
printf "%s%*s\r", output_str, term_width - length(output_str), "";
}
}
'
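# The filter above expects --info=progress2 lines shaped roughly like
#   1,442,816  36%  12.21MB/s  0:00:04 (xfr#5, to-chk=42/97)
# (numbers illustrative); it keeps the parenthesised counters and pairs them
# with the most recent filename seen on an itemized line.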
# Group the commands to correctly capture the rsync exit code via PIPESTATUS.
{
stdbuf -oL rsync "${rsync_args[@]}" --info=stats2,progress2 --no-inc-recursive 2>&1 | tee "$rsync_full_log" | awk "$progress_awk_script"
# Capture the exit code of the first command in the pipe (rsync).
echo ${PIPESTATUS[0]} > "$rsync_exit_code_file"
}
rsync_exit_code=$(cat "$rsync_exit_code_file")
echo "" # Newline to move past the progress bar.
else
# Fallback for dry runs or if 'stdbuf' is not found.
if [ -n "$dry_run_flag" ]; then
log_info "Simulating operation (dry run)..."
else
log_info "stdbuf not found. Falling back to per-file progress."
fi
rsync "${rsync_args[@]}" -P --info=stats2 > "$rsync_full_log" 2>&1
rsync_exit_code=$?
fi
print_rsync_summary "$rsync_full_log"
local errors
if [ -s "$rsync_full_log" ]; then
errors=$(grep -E 'rsync error:|rsync: failed to|Permission denied|getcwd\(\):' "$rsync_full_log")
fi
if [ $rsync_exit_code -ne 0 ] || [ -n "$errors" ]; then
log_error "--- Rsync Errors/Warnings Detected ---"
log_info "For details, check the full log: $rsync_full_log"
trap - EXIT SIGHUP SIGINT SIGQUIT SIGTERM
else
rm -f "$rsync_full_log"
fi
return $rsync_exit_code
}
# perform_backup: Handles the logic for creating a new backup.
perform_backup() {
local src="$1"
local repo="$2"
local user_exclude_override="$3"
local dry_run_flag="$4"
log_info "--- Starting Backup ---"
log_info "Source: $src"
log_info "Repository: $repo"
if [ ! -d "$src" ]; then
log_error "Source directory '$src' not found."
return 1
fi
# Determine the exclude file to use.
local exclude_file
if [ -n "$user_exclude_override" ]; then
exclude_file="$user_exclude_override"
if [ ! -f "$exclude_file" ]; then
log_error "Custom exclude file '$exclude_file' not found."
return 1
fi
else
# Default to 'exclude_rsync' in the root of the source directory.
exclude_file="$src/exclude_rsync"
if [ ! -f "$exclude_file" ]; then
log_info "Default exclude file not found. Creating '$exclude_file'."
if ! touch "$exclude_file" >/dev/null 2>&1; then
log_error "Could not create default exclude file at '$exclude_file'. Permission denied?"
return 1
fi
echo "# Add files and directories to exclude from backup, one per line." > "$exclude_file"
echo "# This file lives in the root of your source directory." >> "$exclude_file"
fi
fi
log_info "Exclude File: $exclude_file"
[ "$dry_run_flag" == "--dry-run" ] && log_info "Mode: Dry Run"
log_info "-----------------------"
mkdir -p "$repo" || { log_error "Could not create repository directory '$repo'."; return 1; }
local latest_inprogress
latest_inprogress=$(find "$repo" -maxdepth 1 -type d -name "$INPROGRESS_PREFIX*" -print -quit)
local temp_backup_dir
if [ -z "$latest_inprogress" ]; then
log_info "No incomplete backup found. Starting a new one."
local timestamp
timestamp=$(date "+%Y-%m-%dT%H-%M-%S")
temp_backup_dir="$repo/$INPROGRESS_PREFIX$timestamp"
mkdir -p "$temp_backup_dir"
else
log_info "Found incomplete backup, attempting to resume: $(basename "$latest_inprogress")"
temp_backup_dir="$latest_inprogress"
fi
local link_dest_option=""
if [ -L "$repo/current" ]; then
link_dest_option="--link-dest=$(readlink -f "$repo/current")"
log_info "Linking against last backup: $(basename "$(readlink "$repo/current")")"
else
log_info "No previous backup found. Creating a full initial backup."
fi
local rsync_args=(-av)
[ -n "$dry_run_flag" ] && rsync_args+=("$dry_run_flag")
rsync_args+=(--exclude-from="$exclude_file")
[ -n "$link_dest_option" ] && rsync_args+=("$link_dest_option")
rsync_args+=("$src/" "$temp_backup_dir")
if ! run_rsync "${rsync_args[@]}"; then
log_error "rsync command failed. The 'current' symlink was not updated."
log_info "The incomplete backup directory '$(basename "$temp_backup_dir")' has been left for inspection."
return 1
fi
if [ -n "$dry_run_flag" ]; then
log_success "Dry run complete. No changes were made."
[ -z "$latest_inprogress" ] && rmdir "$temp_backup_dir"
return 0
fi
log_success "Rsync completed successfully."
local final_timestamp
final_timestamp=$(date "+%Y-%m-%dT%H-%M-%S")
local final_backup_dir="$repo/back-$final_timestamp"
if mv "$temp_backup_dir" "$final_backup_dir"; then
log_success "Finalized backup as: $(basename "$final_backup_dir")"
rm -f "$repo/current"
(cd "$repo" && ln -s "$(basename "$final_backup_dir")" "current")
log_success "Updated 'current' symlink to point to new backup."
else
log_error "Failed to rename temporary backup directory."
return 1
fi
}
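# After a couple of successful runs the repository looks roughly like this
# (timestamps illustrative):
#   repository/
#     back-2025-07-05T10-00-00/
#     back-2025-07-06T09-00-00/
#     current -> back-2025-07-06T09-00-00
# An interrupted run instead leaves an inprogress-<timestamp>/ directory,
# which the next backup attempt resumes.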
# perform_restore: Handles the logic for restoring files from a backup.
perform_restore() {
local repo="$1"
local backup_name="$2"
local restore_target="$3"
local dry_run_flag="$4"
[ -z "$backup_name" ] && backup_name="current"
if [ "$backup_name" == "current" ]; then
if [ -L "$repo/current" ]; then
local resolved_name
resolved_name=$(basename "$(readlink "$repo/current")")
log_info "Resolving 'current' to '$resolved_name'"
backup_name="$resolved_name"
else
log_error "The 'current' backup does not exist."
return 1
fi
fi
local backup_path="$repo/$backup_name"
if [ ! -d "$backup_path" ]; then
log_error "Backup '$backup_name' not found in repository '$repo'."
list_backups "$repo"
return 1
fi
log_info "--- Starting Restore ---"
log_info "Restoring from: $backup_path"
log_info "Restoring to: $restore_target"
[ "$dry_run_flag" == --dry-run ] && log_info "Mode: Dry Run"
log_info "------------------------"
mkdir -p "$restore_target" || { log_error "Could not create target directory '$restore_target'."; return 1; }
local rsync_args=(-a)
[ -n "$dry_run_flag" ] && rsync_args+=("$dry_run_flag")
rsync_args+=("$backup_path/" "$restore_target/")
if run_rsync "${rsync_args[@]}"; then
log_success "Restore operation completed."
else
log_error "Restore operation failed."
return 1
fi
}
# list_backups: Displays all completed backups in the repository.
list_backups() {
local repo="$1"
log_info "--- Available Complete Backups in '$repo' ---"
if [ ! -d "$repo" ]; then
log_info "Repository directory not found."
log_info "------------------------------------------"
return
fi
if [ -L "$repo/current" ]; then
local current_target
current_target=$(readlink "$repo/current")
echo -e " ${C_CYAN}current -> $(basename "$current_target")${C_NC}"
else
log_info " (No current backup set)"
fi
echo ""
find "$repo" -maxdepth 1 -type d -name "back-*" -exec basename {} \; | sort
log_info "------------------------------------------"
}
# handle_command: The main dispatcher that calls the appropriate function based on the command.
handle_command() {
local command=$1
shift
case $command in
backup)
if [ -z "$SRC_DIR" ] || [ -z "$REPO_DIR" ]; then
log_error "A --source and --repository are required for backup."
usage_backup
return 1
fi
perform_backup "$SRC_DIR" "$REPO_DIR" "$EXCLUDE_FILE" "$DRY_RUN"
;;
restore)
if [ -z "$REPO_DIR" ] || [ -z "$TARGET_DIR" ]; then
log_error "A --repository and --target are required for restore."
usage_restore
return 1
fi
perform_restore "$REPO_DIR" "$RESTORE_FROM" "$TARGET_DIR" "$DRY_RUN"
;;
list)
if [ -z "$REPO_DIR" ]; then
log_error "A --repository is required to list backups."
usage_list
return 1
fi
list_backups "$REPO_DIR"
;;
help | -h | --help | "")
usage
;;
*)
log_error "Unknown command: '$command'"
usage
return 1
;;
esac
}
# --- Main Script Logic ---
main() {
if [ $# -eq 0 ]; then
usage
exit 1
fi
local COMMAND=$1
# --- Unified Argument Parsing ---
local SRC_DIR=""
local REPO_DIR=""
local TARGET_DIR=""
local RESTORE_FROM=""
local EXCLUDE_FILE=""
local DRY_RUN=""
# Check for command-specific help first.
for arg in "$@"; do
if [[ "$arg" == "--help" || "$arg" == "-h" ]]; then
case $COMMAND in
backup) usage_backup; exit 0 ;;
restore) usage_restore; exit 0 ;;
list) usage_list; exit 0 ;;
*) usage; exit 0 ;;
esac
fi
done
# Parse all other arguments.
local remaining_args=()
local all_args=("$@")
i=0
while [[ $i -lt ${#all_args[@]} ]]; do
arg="${all_args[$i]}"
case $arg in
-s|--source) ((i++)); SRC_DIR="${all_args[$i]}" ;;
-r|--repository) ((i++)); REPO_DIR="${all_args[$i]}" ;;
-t|--target) ((i++)); TARGET_DIR="${all_args[$i]}" ;;
-f|--from) ((i++)); RESTORE_FROM="${all_args[$i]}" ;;
-e|--exclude) ((i++)); EXCLUDE_FILE="${all_args[$i]}" ;;
-n|--dry-run) DRY_RUN="--dry-run" ;;
*) remaining_args+=("$arg") ;;
esac
((i++))
done
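# Example (illustrative paths): './rewind.sh backup -s /data/docs -n -r /backups'
# yields SRC_DIR=/data/docs, REPO_DIR=/backups, DRY_RUN=--dry-run; option order
# does not matter.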
# Sanitize directory paths to remove any trailing slashes.
# This prevents issues like '//' in constructed paths.
[ -n "$SRC_DIR" ] && SRC_DIR="${SRC_DIR%/}"
[ -n "$REPO_DIR" ] && REPO_DIR="${REPO_DIR%/}"
[ -n "$TARGET_DIR" ] && TARGET_DIR="${TARGET_DIR%/}"
# Pass control to the command handler.
handle_command "$COMMAND" "${remaining_args[@]}"
local exit_code=$?
if [ $exit_code -eq 0 ]; then
log_success "Operation finished."
else
log_error "Operation failed."
fi
exit $exit_code
}
# This check prevents the script from executing when it is sourced for testing.
# The main function is only called when the script is executed directly.
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
main "$@"
fi
#!/bin/bash
#==============================================================================
#
# Rewind Test Suite
#
# Description: A comprehensive test script to validate the functionality
# of the 'rewind.sh' backup script. It covers initial backups,
# incremental changes, restore operations, and error handling.
#
# Usage: Place this script in the same directory as 'rewind.sh'
# and execute it directly: ./run_tests.sh
#
#==============================================================================
# --- Test Suite Configuration ---
# The name of the backup script to be tested.
REWIND_SCRIPT_NAME="rewind.sh"
# Directory where all test files and folders will be created.
# Using $$ ensures a unique directory for each run, preventing conflicts.
TEST_AREA="rewind_test_suite_$$"
# --- Color Definitions for Output ---
C_GREEN='\033[0;32m'
C_RED='\033[0;31m'
C_CYAN='\033[0;36m'
C_YELLOW='\033[0;33m'
C_GRAY='\033[0;90m'
C_NC='\033[0m'
# --- Test Counters ---
PASS_COUNT=0
FAIL_COUNT=0
# --- Helper Functions ---
# Prints a section header.
print_header() {
echo -e "\n${C_CYAN}======================================================================${C_NC}"
echo -e "${C_CYAN} $1"
echo -e "${C_CYAN}======================================================================${C_NC}"
}
# The core checking function. It evaluates a command's exit code and reports pass/fail.
# Usage: check [--invert] <description> <command_to_run>
# The --invert flag treats a non-zero exit code as a PASS.
check() {
local invert=false
if [[ "$1" == "--invert" ]]; then
invert=true
shift
fi
local description="$1"
shift
local command_to_run=("$@")
echo -e -n "${C_YELLOW}[TESTING]${C_NC} $description..."
# Capture stdout and stderr for analysis if needed
local output
output=$("${command_to_run[@]}" 2>&1)
local exit_code=$?
# Store output in a global variable for tests that need to check it
LAST_OUTPUT="$output"
# Invert logic: pass if --invert is true AND exit code is non-zero
# Normal logic: pass if --invert is false AND exit code is zero
if { $invert && [ $exit_code -ne 0 ]; } || { ! $invert && [ $exit_code -eq 0 ]; }; then
echo -e "\r${C_GREEN}[ PASS ]${C_NC} $description"
((PASS_COUNT++))
return 0
else
echo -e "\r${C_RED}[ FAIL ]${C_NC} $description"
echo -e "${C_GRAY} -> Exit code: $exit_code. Output:${C_NC}\n$output"
((FAIL_COUNT++))
return 1
fi
}
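# Typical calls, mirroring the test cases below:
#   check "Backup directory exists" [ -d "repository_dir/$first_backup_path" ]
#   check "--invert" "Fails gracefully with non-existent source" \
#       ./rewind.sh backup --source ./i_do_not_exist --repository ./repository_dir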
# --- Environment Setup and Teardown ---
# Sets up the entire testing directory structure.
setup_environment() {
print_header "Setting Up Test Environment"
# This test script expects to be in the same directory as the script being tested.
if [ ! -f "./$REWIND_SCRIPT_NAME" ]; then
echo -e "${C_RED}Error: The script '$REWIND_SCRIPT_NAME' was not found in the current directory.${C_NC}" >&2
echo "Please place this test script in the same directory as '$REWIND_SCRIPT_NAME'." >&2
exit 1
fi
# Create a dedicated test directory in the current location.
mkdir -p "$TEST_AREA"
# Copy the script to be tested into the test area.
cp "./$REWIND_SCRIPT_NAME" "$TEST_AREA/rewind.sh"
# Enter the test area to keep all test artifacts contained.
cd "$TEST_AREA" || exit 1
# Now create the rest of the structure inside the test area.
mkdir -p source_dir repository_dir restore_target_dir
chmod +x ./rewind.sh
echo "Test environment created in './$TEST_AREA/'"
}
# Cleans up all created test files and directories.
cleanup() {
# If we are in the test directory, navigate out of it first.
if [[ "$PWD" == *"$TEST_AREA" ]]; then
cd ..
fi
# Now, safely remove the test directory.
if [ -d "$TEST_AREA" ]; then
rm -rf "$TEST_AREA"
echo -e "\n${C_GREEN}Test environment cleaned up.${C_NC}"
fi
}
# --- Test Cases ---
test_case_1_initial_backup() {
print_header "Test Case 1: Initial Backup and Attribute Preservation"
# 1.1: Create a complex source directory
echo "-> Setting up a source directory with varied files..."
mkdir -p source_dir/a_subdir
touch source_dir/regular_file.txt
touch source_dir/FileA
touch source_dir/filea
echo "content" > source_dir/a_subdir/content.log
ln -s regular_file.txt source_dir/my_soft_link
chmod 755 source_dir/regular_file.txt # Give it some non-default permissions
# 1.2: Run the initial backup
./rewind.sh backup --source ./source_dir --repository ./repository_dir > /dev/null 2>&1
local first_backup_path
first_backup_path=$(readlink repository_dir/current)
check "Initial backup command runs successfully" [ -L "repository_dir/current" ]
check "Backup directory '$first_backup_path' was created" [ -d "repository_dir/$first_backup_path" ]
# 1.3: Verify backup integrity
# Use diff to compare. It should return exit code 0 (no differences).
check "Backup is an identical copy of the source (attributes preserved)" \
diff -r --exclude='exclude_rsync' ./source_dir/ "repository_dir/$first_backup_path/"
}
test_case_2_incremental() {
print_header "Test Case 2: Incremental Backup and Hard Linking"
local first_backup_path
first_backup_path=$(readlink repository_dir/current)
# 2.1: Modify the source directory
echo "-> Modifying the source directory..."
touch source_dir/a_new_file.dat
echo "modified content" > source_dir/a_subdir/content.log
# Add a 1-second delay to ensure the backup timestamp is different.
sleep 1
# 2.2: Run the second backup and check that the command itself succeeds.
local backup_output
if ! backup_output=$(./rewind.sh backup --source ./source_dir --repository ./repository_dir 2>&1); then
echo -e "\r${C_RED}[ FAIL ]${C_NC} Second backup command executes without error"
echo -e "${C_GRAY} -> Command failed unexpectedly. Output:${C_NC}\n$backup_output"
((FAIL_COUNT++))
return
fi
check "Second backup command executes without error" true
local second_backup_path
second_backup_path=$(readlink repository_dir/current)
if ! check "A new backup directory was created (symlink updated)" [ "$first_backup_path" != "$second_backup_path" ]; then
echo -e "${C_RED} -> Critical test failed. Subsequent checks in this test case will be skipped.${C_NC}"
return
fi
# 2.3: Verify summary, hard links, and new files
check "Summary reports 2 files copied/updated" echo "$backup_output" | grep -q "Files Copied/Updated: 2"
check "Summary reports 4 files hard-linked" echo "$backup_output" | grep -q "Files Hard-Linked: 4"
local inode1
inode1=$(ls -i "repository_dir/$first_backup_path/FileA" | awk '{print $1}')
local inode2
inode2=$(ls -i "repository_dir/$second_backup_path/FileA" | awk '{print $1}')
check "Unchanged files are hard-linked (inodes match)" [ -n "$inode1" ] && [ "$inode1" == "$inode2" ]
check "--invert" "Modified file has different content in new backup" \
diff -q "repository_dir/$first_backup_path/a_subdir/content.log" "repository_dir/$second_backup_path/a_subdir/content.log"
check "New file exists in the second backup" [ -f "repository_dir/$second_backup_path/a_new_file.dat" ]
}
test_case_5_deletion() {
print_header "Test Case 5: Deletion and Verification"
local second_backup_path
second_backup_path=$(readlink repository_dir/current)
# 5.1: Delete a file from the source
echo "-> Deleting a file from the source directory..."
rm source_dir/filea
sleep 1
# 5.2: Run the third backup
local backup_output
if ! backup_output=$(./rewind.sh backup --source ./source_dir --repository ./repository_dir 2>&1); then
echo -e "\r${C_RED}[ FAIL ]${C_NC} Third backup command executes without error"
echo -e "${C_GRAY} -> Command failed unexpectedly. Output:${C_NC}\n$backup_output"
((FAIL_COUNT++))
return
fi
check "Third backup command executes without error" true
local third_backup_path
third_backup_path=$(readlink repository_dir/current)
if ! check "A new backup directory was created after deletion" [ "$second_backup_path" != "$third_backup_path" ]; then
echo -e "${C_RED} -> Critical test failed. Subsequent checks in this test case will be skipped.${C_NC}"
return
fi
# 5.3: Verify file states and summary
check "Summary reports 0 files copied/updated after deletion" echo "$backup_output" | grep -q "Files Copied/Updated: 0"
check "Summary reports 5 files hard-linked after deletion" echo "$backup_output" | grep -q "Files Hard-Linked: 5"
check "--invert" "Deleted file does not exist in new backup" [ -e "repository_dir/$third_backup_path/filea" ]
check "Deleted file still exists in previous backup" [ -e "repository_dir/$second_backup_path/filea" ]
local inode2
inode2=$(ls -i "repository_dir/$second_backup_path/regular_file.txt" | awk '{print $1}')
local inode3
inode3=$(ls -i "repository_dir/$third_backup_path/regular_file.txt" | awk '{print $1}')
check "Unchanged files are still hard-linked after deletion" [ -n "$inode2" ] && [ "$inode2" == "$inode3" ]
}
test_case_3_restore() {
print_header "Test Case 3: Restore Functionality"
# This complex line robustly finds the FIRST backup created, not just the one before 'current'
local first_backup_path
first_backup_path=$(find repository_dir -maxdepth 1 -type d -name "back-*" | sort | head -n 1 | xargs basename)
# Restore from the *first* backup to ensure we're not just restoring 'current'
./rewind.sh restore --repository ./repository_dir --target ./restore_target_dir --from "$first_backup_path" > /dev/null 2>&1
check "Restore command runs successfully" [ "$(ls -A restore_target_dir)" ]
check "Restored content is identical to the original backup" \
diff -r "repository_dir/$first_backup_path/" ./restore_target_dir/
}
test_case_4_error_handling() {
print_header "Test Case 4: Error Condition Handling"
# Use --invert: The test PASSES if the rewind script exits non-zero (which is expected)
check "--invert" "Fails gracefully with non-existent source" \
./rewind.sh backup --source ./i_do_not_exist --repository ./repository_dir
check "Correct error message for non-existent source" \
echo "$LAST_OUTPUT" | grep -q "Source directory './i_do_not_exist' not found"
# The 'list' command should exit 0 even if the repo doesn't exist, so no --invert
check "Runs without error when listing a non-existent repository" \
./rewind.sh list --repository ./i_do_not_exist
check "Correct error message for non-existent repo (list)" \
echo "$LAST_OUTPUT" | grep -q "Repository directory not found"
# Use --invert for permission denied
chmod 555 ./repository_dir # Make read-only
check "--invert" "Fails gracefully with read-only repository" \
./rewind.sh backup --source ./source_dir --repository ./repository_dir
chmod 755 ./repository_dir # Restore permissions
check "Correct error message for read-only repo" \
echo "$LAST_OUTPUT" | grep -q "Could not create repository directory"
}
# --- Main Execution ---
main() {
# Ensure cleanup happens even if the script is interrupted
trap cleanup EXIT SIGHUP SIGINT SIGQUIT SIGTERM
setup_environment
test_case_1_initial_backup
test_case_2_incremental
test_case_5_deletion
test_case_3_restore
test_case_4_error_handling
print_header "Test Suite Finished"
echo -e "Summary: ${C_GREEN}${PASS_COUNT} passed, ${C_RED}${FAIL_COUNT} failed.${C_NC}"
# Return a non-zero exit code if any tests failed
if [ $FAIL_COUNT -ne 0 ]; then
exit 1
fi
exit 0
}
# Run the main function
main
@popmonkey commented Jul 5, 2025

worked with Gemini to rewrite

my prompts were around making separate commands with clear arguments, and the ability to list available backups and run the restore.

Gemini generated some syntax errors but was able to repair them when i showed it the error text as part of the prompt. there were minor logic errors as well that required a bit of debugging.

it took around 2 hours of iterating with additional prompting to convert my original script (https://gist.github.com/popmonkey/f715ef25faace68a245b5fc7296daa10/526b2bba02122eb2d93e752d72a57b6b958c0752) into this much richer tool, so overall very impressive.

@popmonkey commented Jul 5, 2025

i then had several iterations with Gemini to make things better and to provide a nice, quiet progress indicator. extracting progress from rsync was a lot of trial and error to get right; Gemini was definitely making a ton of mistakes reading stdin/stderr.

to get the rsync stuff done i started a new Gemini chat and created a simple test shell script. Iterated on that until "we" got it working.
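
for reference, here's a minimal sketch of the technique the final script settled on (stdbuf keeps rsync's output line-buffered so awk can redraw a single progress line; SRC/ and DEST/ are placeholders, and this is not the actual test script used):

#!/bin/bash
# sketch: pull rsync --info=progress2 counters onto one updating line
stdbuf -oL rsync -a --info=progress2 --no-inc-recursive SRC/ DEST/ 2>&1 |
awk '/xfr#|to-chk=/ {
    match($0, /\(.*\)/);   # grab "(xfr#N, to-chk=X/Y)"
    if (RSTART) printf "%-40s\r", substr($0, RSTART, RLENGTH);
}'
echo "" # move past the progress line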

Gemini is sometimes amazing and sometimes kinda dumb. For example, I asked it to make the script print every command it was running and it tried to generate echo statements instead of using something like set -x for testing.

anyway, after that "we" finally got it working and i took that info back to the original chat to provide a solution.

that was https://gist.github.com/popmonkey/f715ef25faace68a245b5fc7296daa10/2633143766bd914e833cdf55ab702fbcece7d81a

finally, i fed the entire script to a new Gemini 2.5 Pro chat and asked it to consolidate code, make it more readable, etc.

some new syntax errors were introduced but were easy to clean up by feeding the errors back into the chat.

getting this all working took around 1 hour. so 3 hours from my initial script to the current version.

@popmonkey commented Jul 6, 2025

so from https://gist.github.com/popmonkey/f715ef25faace68a245b5fc7296daa10/616f5f4558aa5321787830e7f58c12e7d82287af i asked Gemini to add a transfer summary at the end. then i did some testing, found quite a few subtle bugs, and addressed those.

funny thing tho - at the end i asked it to add a descriptive banner at the top and the mofo made one that included this:

Author: Gemini

lol

I said, umm no... "Popmonkey and Gemini" or you're fired!
