@Kerryliu
Last active October 15, 2025 12:10

UGREEN DXP4800 Plus TrueNAS Status LED Guide


The following is a quick guide on getting basic status LED functionality working with TrueNAS running on the UGREEN DXP4800 Plus. Theoretically, it should work on all models (with some small revisions to the script), but I only have a DXP4800 Plus. :)

This guide sets up a cron job that runs a script to update the LEDs every couple of minutes, but I'm sure it can be modified for blinky LEDs as well.

Steps:

  1. Manually build or download the ugreen_leds_cli tool from https://github.com/miskcoo/ugreen_dx4600_leds_controller.
  2. Plop it somewhere on your NAS (e.g., a dataset).
  3. In the same dataset, create the .sh script that controls the LEDs. At the bottom of this gist is my modified version of meyergru's.
  4. Make the script executable: chmod +x your-script.sh.
    • You may need to make ugreen_leds_cli executable as well.
  5. In TrueNAS, navigate to System Settings → Advanced.
  6. Under Init/Shutdown Scripts, create the following entry to load the i2c-dev module on boot:
    • Description: Enable i2c-dev
    • Type: Command
    • Command: modprobe i2c-dev
    • When: Pre Init
  7. Under Cron Jobs, create a task that runs every few minutes:
    • Description: Update Status LEDs
    • Command: /mnt/path/to/your/script.sh
    • Run as User: root
    • Schedule: */5 * * * * (or however often you desire)
  8. Reboot and wait a bit for your cron job to run.
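
Before relying on the cron job, you can sanity-check that the LED controller responds from a shell. A minimal test, assuming the tool sits in a dataset at /mnt/tank/scripts (an example path) and using the -status flag described in the controller's README:

sudo modprobe i2c-dev                # load the I2C module for this session
cd /mnt/tank/scripts                 # example path; adjust to your dataset
sudo ./ugreen_leds_cli all -status   # print the current state of every LED
sudo ./ugreen_leds_cli power -color 0 255 0 -on -brightness 64   # power LED green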


Example script:

#!/bin/bash

#set -x

SCRIPTPATH=$(dirname "$0")
echo "$SCRIPTPATH"

# LED state per slot: p=power, n=network down (off), u=network up,
# o=disk online, f=disk fault, x=no disk (off)
devices=(p n x x x x)
map=(power netdev disk1 disk2 disk3 disk4)

# Check network status
gw=$(ip route | awk '/default/ { print $3; exit }')
if [[ -n "$gw" ]] && ping -q -c 1 -W 1 "$gw" >/dev/null; then
    devices[1]=u
fi

# Map sdX1 to hardware device
declare -A hwmap
echo "Mapping devices..."
while read -r line; do
    MAP=($line)
    device=${MAP[0]}
    hctl=${MAP[1]}
    partitions=$(lsblk -l -o NAME | grep "^${device}[0-9]\+$")
    for part in $partitions; do
        hwmap[$part]=${hctl:0:1}
        echo "Mapped $part to ${hctl:0:1}"
    done
done <<< "$(lsblk -S -o NAME,HCTL | tail -n +2)"

# Print the hwmap for verification
echo "Hardware mapping (hwmap):"
for key in "${!hwmap[@]}"; do
    echo "$key: ${hwmap[$key]}"
done

# Check status of zpool disks
echo "Checking zpool status..."
while read -r line; do
    DEV=($line)
    partition=${DEV[0]}
    echo "Processing $partition with status ${DEV[1]}"
    if [[ -n "${hwmap[$partition]}" ]]; then
        index=$((${hwmap[$partition]} + 2))
        echo "Device $partition maps to index $index"
        if [ "${DEV[1]}" = "ONLINE" ]; then
            devices[$index]=o
        else
            devices[$index]=f
        fi
    else
        echo "Warning: No mapping found for $partition"
    fi
done <<< "$(zpool status -L | grep -E '^\s+sd[a-h][0-9]')"

# Output the final device statuses
echo "Final device statuses:"
for i in "${!devices[@]}"; do
    echo "$i: ${devices[$i]}"
    case "${devices[$i]}" in
        p)
            "$SCRIPTPATH/ugreen_leds_cli" ${map[$i]} -color 255 255 255 -on -brightness 64
            ;;
        u)
            "$SCRIPTPATH/ugreen_leds_cli" ${map[$i]} -color 255 255 255 -on -brightness 64
            ;;
        o)
            "$SCRIPTPATH/ugreen_leds_cli" ${map[$i]} -color 0 255 0 -on -brightness 64
            ;;
        f)
            "$SCRIPTPATH/ugreen_leds_cli" ${map[$i]} -color 255 0 0 -blink 400 600 -brightness 64
            ;;
        *)
            "$SCRIPTPATH/ugreen_leds_cli" ${map[$i]} -off
            ;;
    esac
done
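
To try the script by hand before the cron job fires, run it once as root and check the mapping output it echoes (the path is an example):

sudo /mnt/tank/scripts/your-script.sh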
@rybackisback commented Nov 30, 2024

Thanks to all of you for this. The KITT lights were driving me crazy...lol.

@Weingartens your script works great on my 8800, but one question: I'm horrible at coding, but I think I see it's set to blink green if there's a problem with a drive, correct?

I ask because my Drive 1 light is blinking green, but TrueNAS doesn't show any SMART errors or ZFS pool issues, so I'm not sure what I might have done wrong. I'm running 8 spinning NAS drives (mostly Seagate) in an 8800 Plus.

Thought I'd add: 6 disks are set up as one pool, 2 are in a mirror pool, and the third is a hot spare. The hot spare is set to power down after 10 minutes of inactivity, but all the others are always on. Could bay 1 have the hot spare in it and thus be powered down (hence the flashing)? I didn't label my disks and can't figure out which one (sda, sdb, etc.) is in which bay without shutting down, removing one, and booting up 7 times to label them all... so here I am...lol.

Any thoughts?

EDIT/UPDATE: Never mind... I shut down my 8800 and pulled each drive after taking a screenshot of serial numbers and drive names, then labeled each bay with the corresponding drive name. This led me to the answer: the drive in bay 1 is the hot spare for the mirrored pool. I'm assuming that since it's not a full member of the pool, it shows as flashing on the LEDs. That works for me.

Cheers!

@republicandaddy / @Weingartens Could you please provide some detailed steps on how to get this to work? I am getting the error below when I follow the steps:
sudo: process 6789 unexpected status 0x57f
Killed
Executed CronTask - /mnt/Data/led/leds.sh > /dev/null: sudo: process 6789 unexpected status 0x57f
Killed

@S2ciOnur commented Dec 18, 2024

@republicandaddy / @Weingartens Could you please provide some detailed steps on how to get this to work? I am getting the error below when I follow the steps: sudo: process 6789 unexpected status 0x57f Killed Executed CronTask - /mnt/Data/led/leds.sh > /dev/null: sudo: process 6789 unexpected status 0x57f Killed

Same here.
I changed zpool to /sbin/zpool in the script.
The CLI has rwxrwxrwx permissions.
It worked before I updated to the new stable TrueNAS.
UGREEN DXP4800 Plus
TrueNAS SCALE ElectricEel-24.10.1
But now I get this:

Mapping devices...
Mapped sda1 to 0
Mapped sdb1 to 1
Mapped sdc1 to 2
Mapped sdd1 to 3
Hardware mapping (hwmap):
sda1: 0
sdd1: 3
sdb1: 1
sdc1: 2
Checking zpool status...
Processing sda1 with status ONLINE
Device sda1 maps to index 2
Processing sdb1 with status ONLINE
Device sdb1 maps to index 3
Processing sdc1 with status ONLINE
Device sdc1 maps to index 4
Processing sdd1 with status ONLINE
Device sdd1 maps to index 5
Final device statuses:
0: p
sudo: process 187901 unexpected status 0x57f
./led.sh: line 60: 187901 Killed "$SCRIPTPATH/ugreen_leds_cli" ${map[$i]} -color 255 255 255 -on -brightness 64
1: u
sudo: process 187902 unexpected status 0x57f
./led.sh: line 60: 187902 Killed "$SCRIPTPATH/ugreen_leds_cli" ${map[$i]} -color 255 255 255 -on -brightness 64
2: o
sudo: process 187903 unexpected status 0x57f
./led.sh: line 60: 187903 Killed "$SCRIPTPATH/ugreen_leds_cli" ${map[$i]} -color 0 255 0 -on -brightness 64
3: o
sudo: process 187904 unexpected status 0x57f
./led.sh: line 60: 187904 Killed "$SCRIPTPATH/ugreen_leds_cli" ${map[$i]} -color 0 255 0 -on -brightness 64
4: o
sudo: process 187905 unexpected status 0x57f
./led.sh: line 60: 187905 Killed "$SCRIPTPATH/ugreen_leds_cli" ${map[$i]} -color 0 255 0 -on -brightness 64
5: o
sudo: process 187906 unexpected status 0x57f
./led.sh: line 60: 187906 Killed "$SCRIPTPATH/ugreen_leds_cli" ${map[$i]} -color 0 255 0 -on -brightness 64

@javanton commented Dec 27, 2024

@S2ciOnur the same happened to me. It's because of the latest TrueNAS update, 24.10.1: they fixed a bug that allowed scripts to run from the admin directory when it should not be possible: https://forums.truenas.com/t/shell-script-permission-denied-with-24-10-1/27941

Changing the script location to your data pool fixes it :)

Extracted directly from TrueNAS 24.10.1 Release Notes:

The boot pool is now properly enforcing the default setuid and noexec options (NAS-127825). This restores the default boot pool behavior to be restricted from general use. Users that are currently attempting to exec scripts from a /home or other boot pool location should move these to a data pool location.
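
If your script currently lives under /home/admin or /root on the boot pool, a minimal fix along these lines should work (the dataset path is an example; remember to point the cron job's Command at the new location):

mkdir -p /mnt/tank/scripts
mv /home/admin/leds.sh /home/admin/ugreen_leds_cli /mnt/tank/scripts/
chmod 755 /mnt/tank/scripts/leds.sh /mnt/tank/scripts/ugreen_leds_cli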

@rybackisback

Can someone please point me to where the ugreen_leds_cli tool exists on https://github.com/miskcoo/ugreen_dx4600_leds_controller? Is it just the cli folder under ugreen_leds_controller that I need to download?

@S2ciOnur

@S2ciOnur the same happened to me. It's because of the latest TrueNAS update, 24.10.1: they fixed a bug that allowed scripts to run from the admin directory when it should not be possible: https://forums.truenas.com/t/shell-script-permission-denied-with-24-10-1/27941

Changing the script location to your data pool fixes it :)

Extracted directly from TrueNAS 24.10.1 Release Notes:

The boot pool is now properly enforcing the default setuid and noexec options (NAS-127825). This restores the default boot pool behavior to be restricted from general use. Users that are currently attempting to exec scripts from a /home or other boot pool location should move these to a data pool location.

Thank you, I fixed it.

@S2ciOnur

Can someone please point me to where the ugreen_leds_cli tool exists on https://github.com/miskcoo/ugreen_dx4600_leds_controller? Is it just the cli folder under ugreen_leds_controller that I need to download?

https://github.com/miskcoo/ugreen_leds_controller/releases/download/v0.1-debian12/ugreen_leds_cli
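
For example, to fetch the prebuilt binary into a dataset (the path is an example):

cd /mnt/tank/scripts
wget https://github.com/miskcoo/ugreen_leds_controller/releases/download/v0.1-debian12/ugreen_leds_cli
chmod +x ugreen_leds_cli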

@rybackisback

Can someone please point me to where the ugreen_leds_cli tool exists on https://github.com/miskcoo/ugreen_dx4600_leds_controller? Is it just the cli folder under ugreen_leds_controller that I need to download?

https://github.com/miskcoo/ugreen_leds_controller/releases/download/v0.1-debian12/ugreen_leds_cli

Thank you!

@hgranillo

I tried this on my DXP4800 Plus, it worked like a charm. Thank you.

@flamel7ramond

Thanks for the code, it works like a charm!
BTW, for anyone who wants to place the files on the boot SSD (the OS drive):
just put the files ('your-script.sh' and 'ugreen_leds_cli') inside the "/root/" directory, then chmod them to 755.
(I don't know why chmod -X via the terminal wasn't working; just use 755 instead.)

@issuex commented Sep 3, 2025

inside "/root/" directory.

Is it lost after an upgrade?

@flamel7ramond

inside "/root/" directory.

Is it lost after an upgrade?

IDK, but it isn't deleted after a reboot.

@jagaliano commented Oct 14, 2025

For those running TrueNAS SCALE (CE) inside a Proxmox 9 VM, this script leverages the QEMU Guest Agent (installed by default) to query the VM for disk and zpool status, letting you visualize the state of your virtualized storage directly on the NAS hardware's LEDs.

Please read the comments.

#!/bin/bash
#
# Control Ugreen NAS LEDs for a specific Proxmox VM by name
# Model: Ugreen DXP6800 Pro
# Tested in Proxmox 9.0.11 & TrueNAS 25.10-RC.1 - Goldeye
# Requires: QEMU guest agent installed & running inside the VM
#
# Usage:
#   ./ugreen-leds.sh <vm_name>
#

set -euo pipefail

### ────────────────────────────────────────────────
### CONFIGURATION
### ────────────────────────────────────────────────
QMCMD="/usr/sbin/qm"
JQCMD="/usr/bin/jq"
IPCMD="/sbin/ip"
BRIGHTNESS=64
SCRIPTPATH=$(dirname "$0")
UGREEN_CLI="${SCRIPTPATH}/ugreen_leds_cli"

# If running in QEMU with virtualized disks use...
# SMARTSTR="SMART Health Status:"
# If passing through the SATA interface...
SMARTSTR="SMART overall-health self-assessment test result"


### ────────────────────────────────────────────────
### USAGE & INPUT VALIDATION
### ────────────────────────────────────────────────
if [[ $# -eq 0 || "$1" == "--help" || "$1" == "-h" ]]; then
    echo "Usage: $0 <vm_name>"
    echo "Controls Ugreen LEDs based on Host & VM disk/network/ZFS status."
    exit 0
fi
VM_NAME="$1"

### ────────────────────────────────────────────────
### CHECK DEPENDENCIES
### ────────────────────────────────────────────────
for cmd in "$QMCMD" "$JQCMD" "$IPCMD"; do
    [[ -x "$cmd" ]] || { echo "Error: required command not found: $cmd"; exit 1; }
done
[[ -f "$UGREEN_CLI" ]] || { echo "Error: Ugreen LED CLI tool not found at $UGREEN_CLI"; exit 1; }
[[ -x "$UGREEN_CLI" ]] || { echo "Error: Ugreen LED CLI is not executable"; exit 1; }

### ────────────────────────────────────────────────
### RESOLVE VMID
### ────────────────────────────────────────────────
VMID=$($QMCMD list | awk -v n="$VM_NAME" '$2==n {print $1; exit}')
[[ -n "$VMID" ]] || { echo "Error: No VM found with name '$VM_NAME'"; exit 1; }

VMSTATUS=$($QMCMD status "$VMID" | awk '{print $2}')
[[ "$VMSTATUS" == "running" ]] || { echo "Error: VM '$VM_NAME' ($VMID) is not running"; exit 1; }

### ────────────────────────────────────────────────
### HELPER: Execute inside VM via Guest Agent
### ────────────────────────────────────────────────
guest_exec() {
    local vmid="$1"; shift
    "$QMCMD" guest exec "$vmid" -- "$@" \
        | "$JQCMD" -r '.["out-data"]' \
        | sed 's/\\n/\n/g'
}

# Verify guest agent works
if ! guest_exec "$VMID" /bin/true &>/dev/null; then
    echo "Error: QEMU Guest Agent not responding in VM $VMID"
    exit 1
fi

### ────────────────────────────────────────────────
### LED MAP
### ────────────────────────────────────────────────

# This is the real map when passing through the SATA interface to VM
declare -A led_map=(
    ["disk1"]="2:0:0:0"
    ["disk2"]="3:0:0:0"
    ["disk3"]="4:0:0:0"
    ["disk4"]="5:0:0:0"
    ["disk5"]="0:0:0:0"
    ["disk6"]="1:0:0:0"
)

# This is the map for virtualized disks
# declare -A led_map=(
#    ["disk1"]="1:0:0:1"
#    ["disk2"]="2:0:0:2"
#    ["disk3"]="3:0:0:3"
#    ["disk4"]="4:0:0:4"
#    ["disk5"]="5:0:0:5"
#    ["disk6"]="6:0:0:6"
# )
declare -A led_status
led_status["power"]="-off"
led_status["netdev"]="-off"
for key in "${!led_map[@]}"; do
    led_status["$key"]="-off"
done

### ────────────────────────────────────────────────
### NETWORK LED & POWER
### ────────────────────────────────────────────────
# Check for active physical network connectivity (Ethernet only)
check_network_connectivity() {
    local iface carrier operstate found_up=0
    for dev in /sys/class/net/*; do
        [[ -e "$dev/device" ]] || continue    # Skip virtual/non-physical
        [[ -d "$dev/wireless" ]] && continue  # Skip wireless interfaces

        # Check carrier and operstate
        local carrier_file="$dev/carrier"
        [[ -r "$carrier_file" ]] || continue
        read -r carrier < "$carrier_file" || carrier=0
        read -r operstate < "$dev/operstate" || operstate=down

        if [[ "$carrier" -eq 1 && "$operstate" == "up" ]]; then
            found_up=1
            break  # Stop after first active link
        fi
    done
    (( found_up == 1 ))
}

# LED color based on connectivity
led_status["power"]="-color 255 255 255 -on -brightness $BRIGHTNESS" # White
if check_network_connectivity; then
    led_status["netdev"]="-color 0 0 255 -on -brightness $BRIGHTNESS" # Blue = connected
else
    led_status["netdev"]="-color 255 0 0 -on -brightness $BRIGHTNESS" # Red = no network
fi


### ────────────────────────────────────────────────
### DISK HEALTH LED
### ────────────────────────────────────────────────
# Get the currently active disks and their names
active_disks=$(guest_exec "$VMID" /usr/bin/lsblk -S -d -o name,hctl,tran | tail -n +2)

while IFS= read -r line; do
    # read splits the lsblk columns: device name, HCTL address, transport
    read -r disk_name hctl tran <<< "$line"

    # Map HCTL to LED
    led_name=""
    for key in "${!led_map[@]}"; do
        [[ "${led_map[$key]}" == "$hctl" ]] && led_name="$key"
    done
    [[ -n "$led_name" ]] || continue

    # SMART status
    smart_out=$(guest_exec "$VMID" /usr/sbin/smartctl -H "/dev/$disk_name" || true)
    smart_status=$(grep "$SMARTSTR" <<< "$smart_out" | awk '{print $4}' | tr -d '\n')

    case "${smart_status^^}" in
        OK|PASSED) led_status["$led_name"]="-color 0 255 0 -on -brightness $BRIGHTNESS" ;; # Green
        FAILED) led_status["$led_name"]="-color 255 0 0 -on -brightness $BRIGHTNESS" ;; # Red
        *) led_status["$led_name"]="-color 255 255 0 -on -brightness $BRIGHTNESS" ;; # Yellow
    esac

done <<< "$active_disks"

### ────────────────────────────────────────────────
### ZPOOL STATUS → BLINK ON DEGRADED / FAULTED
### ────────────────────────────────────────────────
check_zpool_status() {
    local zpool_output parent_disk hctl key device
    local unhealthy_devices=()
    zpool_output=$(guest_exec "$VMID" /sbin/zpool status -v)  # -v for verbose
    # Collect names from the config section whose STATE column is unhealthy
    # (skip the pool-level "state:" line; non-disk vdev names simply won't match below)
    mapfile -t unhealthy_devices < <(awk '$1 != "state:" && $2 ~ /^(DEGRADED|FAULTED|UNAVAIL|OFFLINE|REMOVED)$/ {print $1}' <<< "$zpool_output")
    for device in "${unhealthy_devices[@]}"; do
        # Strip any partition suffix to resolve the parent disk (assumes sdX naming)
        parent_disk=$(sed 's/[0-9]*$//' <<< "$device")
        hctl=$(guest_exec "$VMID" /usr/bin/lsblk -S -o name,hctl | awk -v d="$parent_disk" '$1==d {print $2}')
        for key in "${!led_map[@]}"; do
            [[ "${led_map[$key]}" == "$hctl" ]] && led_status["$key"]+=' -blink 500 500'
        done
    done
}

check_zpool_status

### ────────────────────────────────────────────────
### UPDATE LEDs (send to Ugreen CLI)
### ────────────────────────────────────────────────
for led in "${!led_status[@]}"; do
    "$UGREEN_CLI" "$led" ${led_status[$led]} || echo "Failed to set $led" >&2
done
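
Since the script calls qm, it runs on the Proxmox host rather than inside TrueNAS, so schedule it there. An /etc/cron.d entry along these lines should work (the script path and VM name are examples):

*/5 * * * * root /root/scripts/ugreen-leds.sh truenas >/dev/null 2>&1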
