@ChristopherA · Last active October 25, 2025 22:36
Alpine Linux on UTM: Automation Guide

A comprehensive guide to automating Alpine Linux VMs on UTM for rapid deploy-test-destroy workflows.


Document Information

Abstract: This guide documents Alpine-specific discoveries and workflows for UTM automation, focusing on P2P protocol testing, rapid iteration, and minimal resource overhead.

Version: 1.0.0 (2025-10-25) - SSH authentication fixed, production-ready. For information on this versioning scheme, see Status & Versioning.

Status: Production - Extensively tested automation with dual SSH authentication

Prerequisites: Read UTM Automation Guide first for:

  • UTM architecture and utmctl basics
  • config.plist editing and UTM configuration caching
  • UEFI boot requirements and removable boot paths
  • Serial console configuration (TCP mode)
  • Network modes and guest agent setup

This guide assumes that knowledge and focuses exclusively on Alpine Linux specifics.

Tested Environment: Alpine Linux 3.22, UTM 4.x, macOS 26.x, ARM64 (M-series)

Origin: Real-world automation for testing P2P protocols (Tor, Lightning Network, BitTorrent DHT).

Copyright & License:

  • Text: Copyright © 2025 by Christopher Allen, licensed under CC-BY-SA-4.0
  • Code: Released to the public domain under CC0-1.0

Tags: #alpine · #utm · #automation · #linux


Support This Work

If you find this guide valuable, consider supporting my open source and digital civil rights advocacy efforts.

I work to represent smaller developers in a vendor-neutral, platform-neutral way, advancing the open web, digital civil liberties, and human rights. Your sponsorship helps sustain this work and ensures I can continue creating resources like this guide.

Become a sponsor: GitHub Sponsors (from $5/month)

This isn't just a transaction—it's an opportunity to plug into a network advancing the digital commons. Let's collaborate!

-- Christopher Allen


Alpine-Specific Quick Tips

Already read the UTM Automation Guide? Here are the Alpine-specific gotchas:

  1. Automated template creation works! ⚡ For complete ISO-to-template automation using answer files and serial console, see the Quick Start section below or the UTM Automation Guide's Installation Automation section for generic patterns.

  2. GRUB module loading fix is critical: Alpine's BOOTAA64.EFI bypasses the generic shim. You must prepend insmod commands to /boot/grub/grub.cfg. See Alpine GRUB Boot Flow.

  3. Use apk, not apt: Alpine's package manager is apk add, apk update, apk upgrade, etc.

  4. Use OpenRC, not systemd: Services use rc-update add <service> <runlevel> and rc-service <service> start, not systemctl.

  5. Cloud images don't work locally: Stick with template cloning. See why.

  6. musl libc, not glibc: Some binary-only software may not work. Most source code builds fine.


🎯 Production-Ready Automation

Looking for complete automation? See utm-alpine-kit

This gist documents the principles and knowledge for Alpine automation on UTM. The GitHub repo provides production-ready scripts implementing these principles.

  • ✅ One-command template creation (~2 minutes)
  • ✅ Instant cloning (0-1 second) with automatic configuration
  • ✅ Dual SSH authentication (key + password)
  • ✅ Automated testing workflows
  • ✅ 2,500+ lines of documentation
  • ✅ Real-world troubleshooting

Continue reading this gist for deep technical understanding and Alpine-specific knowledge.


Table of Contents


Why Alpine for UTM?

Advantages

Minimal Footprint:

  • Installed system: ~500MB
  • RAM usage: ~150MB idle, comfortable at 512MB
  • Boot time: <5 seconds
  • Multiple VMs on modest hardware

Purpose-Built for Ephemeral Workloads:

  • Answer file support (setup-alpine -f)
  • Fast package installation (apk)
  • Stable releases (not rolling)
  • Built-in Alpine Local Backup (lbu) for diskless setups

Simple and Reliable:

  • musl libc (smaller, cleaner than glibc)
  • BusyBox utilities (minimal footprint)
  • OpenRC init system (fast, simple)
  • Clear upgrade paths between versions

Perfect Use Cases

  • ✅ Deploy-test-destroy workflows - Clone → provision → test → destroy in <5 min
  • ✅ P2P protocol testing - Run 3-5 node clusters on modest hardware
  • ✅ CI/CD environments - Fast, reproducible test environments
  • ✅ Multi-VM testing - Low resource overhead per VM
  • ✅ Rust development - Quick builds, minimal dependencies

Considerations

⚠️ Different from mainstream distributions:

  • musl libc instead of glibc (some binary compatibility differences)
  • OpenRC instead of systemd (different service management)
  • BusyBox instead of GNU coreutils (slightly different command flags)
  • Smaller package ecosystem (but comprehensive for development)

⚠️ Not ideal for:

  • Applications requiring glibc (though musl compatibility is good)
  • Complex systemd-dependent software
  • Desktop environments (Alpine focuses on servers/embedded)

Quick Start

For detailed setup instructions, see utm-alpine-kit repository.

Automated Template Creation (~2 minutes) ⚡

Complete ISO-to-template automation using answer files and serial console:

git clone https://github.com/ChristopherA/utm-alpine-kit.git
cd utm-alpine-kit
./scripts/create-alpine-template.sh --iso ~/.cache/vms/alpine-virt-3.22.0-aarch64.iso

Key automation techniques:

  • AppleScript VM creation (utmctl create doesn't exist)
  • Serial console automation (TCP mode)
  • Answer file installation (ROOTSSHKEY, DISKOPTS)
  • Post-install SSH configuration

Full documentation: utm-alpine-kit Setup Guide

Manual Template Creation (~15 minutes)

For step-by-step manual creation:

  1. Download Alpine virt ISO
  2. Create VM via UTM GUI
  3. Run setup-alpine interactively
  4. Apply GRUB module loading fix (see Alpine GRUB Boot Flow)
  5. Configure SSH keys and services

Detailed walkthrough: utm-alpine-kit Template Creation Guide

Critical requirement: Alpine requires GRUB module loading fix for clean boot. See Alpine GRUB Boot Flow section below or utm-alpine-kit GRUB Fix Doc.


Answer File Automation

Alpine's setup-alpine supports unattended installation via answer files. This enables fully automated template creation from ISO to ready-to-clone VM in ~2 minutes.

What Works via Answer File

  • ✅ Network configuration (DHCP or static)
  • ✅ Timezone and NTP
  • ✅ APK repositories
  • ✅ SSH key deployment (ROOTSSHKEY)
  • ✅ Disk mode selection (DISKOPTS)
  • ✅ Hostname and keymap

Critical Limitations Discovered

⚠️ Root password CANNOT be set

  • ROOTPASS variable exists but doesn't work
  • Password prompts appear even with ROOTSSHKEY configured
  • Workaround: Handle prompts in expect script, set password via SSH post-install

⚠️ DISKOPTS sets mode but doesn't execute

  • Sets setup-disk -m sys /dev/vda parameters
  • Does NOT actually run setup-disk
  • Workaround: Call setup-disk in same expect session after setup-alpine

Working pattern: Run setup-alpine + setup-disk in single expect session
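Since ROOTPASS cannot be set by the answer file, the password becomes a final SSH step once key-based login works. A minimal sketch (the helper name is illustrative; chpasswd is a BusyBox applet on Alpine):

```shell
#!/bin/sh
# Sketch: set the root password over SSH after install, completing dual
# (key + password) authentication. Helper name is an assumption.
set_root_password() {
    # $1 = VM IP address, $2 = new root password
    printf 'root:%s\n' "$2" | ssh "root@$1" chpasswd
}
```

Piping via stdin keeps the password out of the remote command line and process listings.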

Answer File Template Example

# Alpine Answer File Template
# Full version: templates/alpine-template.answers in utm-alpine-kit

# Keyboard and hostname
KEYMAPOPTS="us us"
HOSTNAMEOPTS="-n alpine-template"

# Networking (DHCP - bridged mode)
INTERFACESOPTS="auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
"

# Root SSH Key (placeholder replaced by script)
# This is the key automation feature - works perfectly!
ROOTSSHKEY="%%SSH_KEY%%"

# Timezone
TIMEZONEOPTS="-z America/Los_Angeles"

# APK repositories (use fastest mirror)
APKREPOSOPTS="-1"

# NTP client
NTPOPTS="-c chrony"

# Disk setup (CRITICAL: Sets mode only, doesn't execute!)
DISKOPTS="-m sys /dev/vda"

# SSH server
SSHDOPTS="-c openssh"

# Essential services and packages
USEROPTS="-a"  # No additional user
APKCACHEOPTS="/var/cache/apk"

Serving Answer Files to VM

Alpine installer needs network access to download answer file:

# 1. Prepare answer file with SSH key substitution
SSH_KEY=$(cat ~/.ssh/id_ed25519_alpine_vm.pub)
sed "s|%%SSH_KEY%%|${SSH_KEY}|" alpine-template.answers > /tmp/alpine-answer.txt

# 2. Get host IP (macOS methods)
HOST_IP=$(ipconfig getifaddr en0 || ipconfig getifaddr en1)

# 3. Start HTTP server (Python 3 built into macOS)
cd /tmp
python3 -m http.server 8888 &
HTTP_PID=$!

# 4. In VM (via serial console automation):
setup-alpine -f http://${HOST_IP}:8888/alpine-answer.txt

# 5. Cleanup after installation
kill $HTTP_PID
rm /tmp/alpine-answer.txt
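If the script exits between starting the server and cleanup, the HTTP server lingers. Wrapping the lifetime in a trap makes cleanup unconditional; a sketch where helper names are assumptions and the file names and port mirror the example above:

```shell
#!/bin/sh
# Sketch: render the answer file and serve it with guaranteed cleanup.
render_answers() {
    # $1 = template file, $2 = public key file, $3 = output file
    key=$(cat "$2")
    sed "s|%%SSH_KEY%%|${key}|" "$1" > "$3"
}

serve_with_cleanup() {
    workdir=$(mktemp -d)
    # Trap fires on normal exit, Ctrl-C, or termination.
    trap 'kill "$http_pid" 2>/dev/null; rm -rf "$workdir"' EXIT INT TERM
    render_answers alpine-template.answers \
        "$HOME/.ssh/id_ed25519_alpine_vm.pub" "$workdir/alpine-answer.txt"
    ( cd "$workdir" && exec python3 -m http.server 8888 >/dev/null 2>&1 ) &
    http_pid=$!
    # ... drive the serial-console installation here; the trap cleans up ...
}
```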

Automation with Expect Scripts

Handle password prompts and run setup-disk in same session:

#!/usr/bin/expect -f
# Simplified example - see scripts/lib/install-via-answerfile.exp for full version

set timeout 180
set host_ip [exec sh -c "ipconfig getifaddr en0 || ipconfig getifaddr en1"]
set answer_url "http://${host_ip}:8888/alpine-answer.txt"

# Connect to serial console (TCP mode)
spawn nc localhost 4444

# Wait for login
send "\r"
expect "login:"
send "root\r"
expect "#"

# Configure network
send "setup-interfaces\r"
expect "Which one" { send "eth0\r" }
expect "Ip address" { send "dhcp\r" }
expect "manual" { send "n\r" }
expect "#"

send "ifup eth0\r"
expect "#"
sleep 3

# Download answer file
send "setup-apkrepos -1\r"
expect "#"
send "apk update && apk add wget\r"
expect "#"
send "wget -O /tmp/answers.txt $answer_url\r"
expect "#"

# Run setup-alpine with answer file
send "setup-alpine -f /tmp/answers.txt\r"

# CRITICAL: Handle password prompts (even with ROOTSSHKEY!)
expect {
    -re "password.*root" {
        send "\r" ;# Skip - will set via SSH later (Tcl needs ;# for inline comments)
        exp_continue
    }
    -re "Retype password:" {
        send "\r"
        exp_continue
    }
    -re "Bad password|password.*unchanged" {
        # Expected - password can't be set via answer file
        exp_continue
    }
    -re "complete" {
        expect "#"
    }
}

# CRITICAL: Run setup-disk in SAME session
# (DISKOPTS sets parameters but doesn't execute)
send "setup-disk -m sys /dev/vda\r"

expect {
    -re "continue\\?" {
        send "y\r"
        exp_continue
    }
    -re "Installation is complete" {
        expect "#"
    }
}

# Done - VM will reboot automatically
close

Key insights from testing:

  • Serial console must use TCP mode (not PTY) for reliable automation
  • UTM must be restarted after config.plist changes (UTM caches configs)
  • Password prompts appear even with ROOTSSHKEY - handle in expect
  • setup-disk must run in same expect session (separate session loses state)

Complete Automation Architecture

Host (macOS)                          VM (Alpine)
├── AppleScript                       ├── Boot from ISO
│   └── Create VM                     ├── Serial console (TCP 4444)
├── PlistBuddy                        ├── Network via DHCP
│   └── Configure serial console      ├── Download answer file
├── HTTP Server (Python)              ├── setup-alpine -f
│   └── Serve answer file             ├── Handle password prompts
├── Expect Script                     ├── setup-disk (manual)
│   └── Automate installation         └── Reboot to disk
└── SSH
    └── Set password post-install

Time: ~2 minutes total

Full implementation: https://github.com/ChristopherA/utm-alpine-kit

  • scripts/create-alpine-template.sh (450 lines)
  • scripts/lib/install-via-answerfile.exp (196 lines)
  • templates/alpine-template.answers (140 lines)

See Also:


Alpine GRUB Boot Flow

The Problem: Boot Errors Without Fix

When Alpine boots without the GRUB fix, you'll see errors like:

error: ../../grub-core/script/function.c:119:can't find command `['.
error: ../../grub-core/script/function.c:119:can't find command `['.
error: ../../grub-core/script/function.c:119:can't find command `echo'.

The VM will still boot, but these errors indicate GRUB is trying to use commands before loading the modules that provide them.

Why This Happens

Alpine's grub-install --removable creates BOOTAA64.EFI with an embedded configuration inside the binary:

search --no-floppy --fs-uuid --set=root <UUID>
set prefix=($root)/grub
configfile ($root)/grub/grub.cfg

This loads /boot/grub/grub.cfg directly (NOT /boot/EFI/BOOT/grub.cfg).

However, grub-mkconfig (Alpine's default config generator) produces a /boot/grub/grub.cfg that starts with:

if [ -s $prefix/grubenv ]; then
  load_env
fi

Problem: GRUB tries to execute [ (test command) and echo before loading the modules that provide those commands!

The Solution

Prepend module loading commands to /boot/grub/grub.cfg before the auto-generated content:

insmod part_gpt
insmod part_msdos
insmod fat
insmod ext2
insmod test      # Provides [ and other conditionals
insmod echo      # Provides echo command
insmod normal    # Provides normal boot flow

The insmod command itself is built into GRUB's core image, so these lines work even before any modules are loaded.

Why the Generic Shim Doesn't Work for Alpine

The generic UTM guide suggests creating /boot/EFI/BOOT/grub.cfg as a shim. This works for some distributions but NOT Alpine because:

  1. Alpine's BOOTAA64.EFI has embedded config pointing directly to /grub/grub.cfg
  2. The shim at /EFI/BOOT/grub.cfg is never executed (it's orphaned)
  3. Must fix the actual /boot/grub/grub.cfg that GRUB loads

Verification

Check what's embedded in BOOTAA64.EFI:

strings /boot/EFI/BOOT/BOOTAA64.EFI | grep -A3 "configfile"
# Output:
# search --no-floppy --fs-uuid --set=root E56A-A2FD
# set prefix=($root)/grub
# configfile ($root)/grub/grub.cfg

This confirms it loads /boot/grub/grub.cfg directly.

When to Re-apply the Fix

Re-apply the GRUB module loading fix after:

  • Running grub-mkconfig (regenerates /boot/grub/grub.cfg)
  • Kernel updates that trigger config regeneration
  • Alpine version upgrades

Automation tip: Create a script to apply the fix automatically.
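Such a script might look like the following; the marker comment makes it safe to re-run after every grub-mkconfig (the function name, marker text, and default path are assumptions):

```shell
#!/bin/sh
# Sketch: idempotently prepend the GRUB module-loading fix to grub.cfg.
apply_grub_fix() {
    cfg="${1:-/boot/grub/grub.cfg}"
    marker="# utm-alpine-grub-fix"
    if grep -q "$marker" "$cfg"; then
        return 0    # already applied, nothing to do
    fi
    tmp=$(mktemp)
    {
        printf '%s\n' "$marker" \
            'insmod part_gpt' 'insmod part_msdos' 'insmod fat' 'insmod ext2' \
            'insmod test' 'insmod echo' 'insmod normal'
        cat "$cfg"
    } > "$tmp"
    mv "$tmp" "$cfg"
}
```

Run it after kernel updates or version upgrades; the marker check means a second invocation is a no-op.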

See Also:


Alpine Package Management (apk)

Alpine uses apk instead of apt. This is critical for automation scripts and provisioning.

Why it matters for UTM automation:

  • Faster package installation (optimized for Alpine)
  • Smaller package sizes (minimal dependencies)
  • Different command syntax (apk add not apt install)
  • Repository structure differs from Debian/Ubuntu

Key differences:

| Task | Alpine | Debian/Ubuntu |
|------|--------|---------------|
| Update index | `apk update` | `apt update` |
| Install | `apk add pkg` | `apt install pkg` |
| Remove | `apk del pkg` | `apt remove pkg` |

Alpine repository tiers:

  • main - Core packages (guaranteed stability)
  • community - Most development tools
  • testing - Bleeding edge (avoid in production)

Complete apk reference: utm-alpine-kit Alpine Reference

Common automation pattern:

# Update and install essentials in one go
apk update && apk add --no-cache build-base git curl

Critical for scripts: Always run apk update before apk add to avoid "package not found" errors.
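In provisioning scripts that rule can be baked into one helper so no call site forgets it; a minimal sketch (the helper name is illustrative):

```shell
#!/bin/sh
# Sketch: always refresh the index before installing, and avoid leaving
# an apk cache behind in throwaway VMs.
apk_install() {
    apk update || return 1      # stale index is the usual "package not found" cause
    apk add --no-cache "$@"
}
```

Usage: `apk_install build-base git curl` inside any SSH provisioning snippet.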


Alpine Service Management (OpenRC)

Alpine uses OpenRC instead of systemd. This affects how you manage services in automation.

Why it matters for UTM automation:

  • Different commands for service management
  • No systemctl - use rc-service and rc-update instead
  • Simpler, faster, smaller footprint
  • Runlevel-based (not target-based like systemd)

Key command differences:

| Task | Alpine (OpenRC) | systemd |
|------|-----------------|---------|
| Enable on boot | `rc-update add svc default` | `systemctl enable svc` |
| Start now | `rc-service svc start` | `systemctl start svc` |
| Check status | `rc-service svc status` | `systemctl status svc` |
| View logs | `tail /var/log/messages` | `journalctl -xe` |

Common runlevels:

  • boot - Early initialization (networking, hostname)
  • default - Normal operation (sshd, services)

Critical services for utm-alpine-kit:

  • qemu-guest-agent - Required for IP detection (utmctl ip-address)
  • sshd - SSH access
  • networking - Network connectivity

Complete OpenRC reference: utm-alpine-kit Alpine Reference

Common automation pattern:

# Enable and start service in one go
rc-update add qemu-guest-agent default && rc-service qemu-guest-agent start

Alpine Network Configuration

Alpine uses /etc/network/interfaces for network configuration. This is different from systemd-based distributions.

Why it matters for UTM automation:

  • Template VMs use DHCP by default (simple, works with bridged mode)
  • Each cloned VM gets unique IP via MAC address + DHCP
  • Static IPs possible but rarely needed for testing workflows

Common automation pattern:

# Template default (DHCP)
auto eth0
iface eth0 inet dhcp

# Networking must be enabled at boot
rc-update add networking boot

Complete network configuration reference: utm-alpine-kit Alpine Reference

Critical for cloning: DHCP works out-of-the-box with MAC address regeneration. Each clone automatically gets unique IP from your router.
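DHCP covers the normal case; for the rare node that needs a fixed address (e.g. a stable bootstrap peer), a static stanza can replace the DHCP one in /etc/network/interfaces. A sketch with illustrative addresses:

```
# /etc/network/interfaces - static variant (addresses are examples)
auto eth0
iface eth0 inet static
    address 192.168.64.50
    netmask 255.255.255.0
    gateway 192.168.64.1
```

Apply with rc-service networking restart. Remember that a static address defeats the clone-gets-unique-IP property, so reserve it for long-lived VMs only.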


Clone-Based Workflow

Core concept: Create template once, clone infinitely. Alpine's minimal size makes this ideal.

Why cloning works for Alpine:

  • Small template size (~500MB) = fast cloning
  • No cloud-init complexity
  • Pre-configured SSH keys and services
  • Clean, reproducible environments

Critical requirements for cloning:

  1. Unique MAC addresses - Required for DHCP (each clone gets different IP)
  2. UTM configuration caching - Must quit/restart UTM after config.plist edits
  3. QEMU guest agent - Enables IP detection via utmctl ip-address

Deploy-test-destroy workflow:

# Clone template (10-30 seconds)
./scripts/clone-vm.sh test-vm

# Run tests (time varies)
./scripts/provision-for-testing.sh test-vm 192.168.1.100 \
  https://github.com/user/repo.git "cargo test"

# Destroy when done (<10 seconds)
./scripts/destroy-vm.sh test-vm

Total time: < 2 minutes for complete cycle
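Between clone and provision, the VM must finish booting and start sshd; polling beats fixed sleeps for that. A sketch (helper name and timeouts are assumptions; IP detection relies on the QEMU guest agent):

```shell
#!/bin/sh
# Sketch: block until a freshly cloned VM answers SSH, then print its IP.
wait_for_ssh() {
    vm="$1"; limit="${2:-120}"; waited=0
    while [ "$waited" -lt "$limit" ]; do
        ip=$(utmctl ip-address "$vm" 2>/dev/null | head -n 1)
        # BatchMode prevents a password prompt from hanging the script.
        if [ -n "$ip" ] && ssh -o ConnectTimeout=2 -o BatchMode=yes \
               "root@$ip" true 2>/dev/null; then
            echo "$ip"
            return 0
        fi
        sleep 2
        waited=$((waited + 2))
    done
    echo "timed out waiting for $vm" >&2
    return 1
}
```

Usage: `VM_IP=$(wait_for_ssh test-vm) && ./scripts/provision-for-testing.sh test-vm "$VM_IP" ...`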

Production implementation:

Complete workflow examples: utm-alpine-kit Rust Testing

Key insight: UTM's configuration caching is the most common automation stumbling block. Always quit→edit→restart UTM when modifying config.plist. See utm-alpine-kit UTM Fundamentals for details.


Alpine Cloud Images: Why Not Use Them?

Summary

Tested: 2025-10-09
Result: Alpine cloud images are NOT suitable for local UTM use.

What We Tested

  • nocloud_alpine-3.22.1-aarch64-uefi-tiny-r0.qcow2 (155MB)
  • nocloud_alpine-3.22.1-aarch64-uefi-cloudinit-r0.qcow2 (216MB)

Findings

What worked:

  • ✅ Both variants boot cleanly (no UEFI/GRUB errors)
  • ✅ Serial console functional
  • ✅ Cloud-init partially ran (hostname set from metadata)

What failed:

  • ❌ SSH keys not configured (authentication impossible)
  • ❌ No default credentials (cannot access system)
  • ❌ No QEMU guest agent pre-installed
  • ❌ Cloud-init ISO not fully recognized by Alpine

Why Template Cloning is Better

| Aspect | Cloud Image | Template Clone |
|--------|-------------|----------------|
| Initial setup | Download 216MB | Manual install ~15 min |
| Authentication | ❌ Requires cloud-init | ✅ SSH keys pre-configured |
| Guest agent | ❌ Not included | ✅ Installed and working |
| Cloning | ⚠️ Need cloud-init per clone | ✅ Simple disk copy |
| Customization | ⚠️ Via cloud-init only | ✅ Full control |
| Time per clone | Unknown (couldn't test) | <1 minute |

Recommendation

Use manual template + cloning approach. The one-time 15-minute setup is worth the reliable, fast cloning workflow.

For complete technical details about what we tried with cloud images, see the full cloud images evaluation document in the project repository.


P2P Protocol Testing Setup

Alpine's minimal footprint makes it ideal for testing P2P protocols with multiple VMs.

Use Cases

  • Tor development: Run relay, guard, exit nodes in separate VMs
  • Lightning Network: Multi-node channel testing
  • BitTorrent DHT: Test node discovery and data exchange
  • Custom P2P protocols: Rust-based distributed systems

Rust Development Template

Create a specialized Rust template:

# Clone base template
./clone-vm.sh alpine-rust-dev

# Provision for Rust
VM_IP=$(utmctl ip-address alpine-rust-dev | head -1)

ssh root@${VM_IP} << 'EOF'
# Install Rust toolchain
apk add rust cargo

# Build essentials
apk add build-base openssl-dev

# Optimization: Use sccache
apk add sccache

# Configure cargo for faster builds
mkdir -p ~/.cargo
cat > ~/.cargo/config.toml << 'CARGO'
[build]
rustc-wrapper = "/usr/bin/sccache"
jobs = 2

[profile.dev]
opt-level = 0
debug = true

[profile.release]
opt-level = 3
lto = "thin"
CARGO

# Pre-warm cargo cache
cargo search --limit 0
EOF

# This becomes your Rust template - stop it and use as clone source
utmctl stop alpine-rust-dev

Tor Development Template

./clone-vm.sh alpine-tor-dev
VM_IP=$(utmctl ip-address alpine-tor-dev | head -1)

ssh root@${VM_IP} << 'EOF'
# Install Tor
apk add tor privoxy

# Configure Tor for development
cat > /etc/tor/torrc << 'TOR'
# SOCKS proxy
SocksPort 9050

# Control port for controller apps
ControlPort 9051
CookieAuthentication 1

# Logging
Log notice file /var/log/tor/notices.log
TOR

# Enable service
rc-update add tor default
rc-service tor start

# Verify
rc-service tor status
EOF

Network Analysis Tools

ssh root@${VM_IP} << 'EOF'
# Packet capture and analysis
apk add tcpdump wireshark-common

# Network monitoring
apk add iftop nethogs iperf3

# Network utilities
apk add nmap netcat-openbsd bind-tools

# Protocol testing
apk add curl wget httpie
EOF

Multi-VM P2P Testing Example

#!/bin/bash
# test-dht-cluster.sh - Create 5-node DHT test cluster

set -euo pipefail

# Create 5 VMs
for i in {1..5}; do
    echo "Creating dht-node-$i..."
    ./clone-vm.sh "dht-node-$i"
done

# Wait for all to boot
sleep 20

# Collect IPs
declare -a NODE_IPS
for i in {1..5}; do
    IP=$(utmctl ip-address "dht-node-$i" | head -1)
    NODE_IPS[$i]=$IP
    echo "dht-node-$i: $IP"
done

# Deploy code to all nodes
for i in {1..5}; do
    echo "Deploying to dht-node-$i..."
    ssh root@${NODE_IPS[$i]} << 'EOF'
cd /root
git clone https://github.com/your/dht-implementation
cd dht-implementation
cargo build --release
EOF
done

# Start DHT nodes (each knows about others)
for i in {1..5}; do
    echo "Starting DHT on dht-node-$i..."
    PEERS=""
    for j in {1..5}; do
        if [ $i -ne $j ]; then
            PEERS="$PEERS --peer ${NODE_IPS[$j]}:8080"
        fi
    done

    ssh root@${NODE_IPS[$i]} "cd /root/dht-implementation && nohup ./target/release/dht-node --port 8080 $PEERS > /var/log/dht.log 2>&1 &"
done

echo "✅ 5-node DHT cluster running"
echo "Monitor logs: ssh root@${NODE_IPS[1]} 'tail -f /var/log/dht.log'"

# Test DHT operations...
# ...your test code here...

# Cleanup when done
echo "Destroying cluster..."
for i in {1..5}; do
    ./destroy-vm.sh "dht-node-$i"
done

Performance: 5 VMs @ 512MB each = 2.5GB RAM total. Runs comfortably on 8GB host.


Troubleshooting

Most common issues and quick fixes:

  1. GRUB boot errors ("can't find command '['")

    • Cause: Alpine-specific GRUB module loading issue
    • Fix: See Alpine GRUB Boot Flow section above
    • Critical: This is the #1 issue for new Alpine UTM users
  2. No IP address / Network not starting

    • Cause: Networking service not enabled for boot runlevel
    • Fix: rc-update add networking boot && reboot
  3. QEMU guest agent not working

    • Cause: Service not installed or not running
    • Fix: apk add qemu-guest-agent && rc-update add qemu-guest-agent default && rc-service qemu-guest-agent start
  4. SSH connection refused

    • Cause: SSH server not installed or not running
    • Fix: apk add openssh && rc-update add sshd default && rc-service sshd start
  5. apk errors ("Unable to lock database", "UNTRUSTED signature")

    • Cause: Clock skew or stale package index
    • Fix: run apk update before every apk add; if signature errors persist, sync the clock first (rc-service chronyd restart)

Complete troubleshooting guide: utm-alpine-kit Troubleshooting

Key insight: Most Alpine issues stem from forgetting OpenRC's runlevel system (rc-update add <service> <runlevel>) vs systemd's systemctl enable.


Resources

Alpine Linux Official

Documentation

Related Guides

  • UTM Automation Guide - Generic UTM automation (distribution-agnostic)
  • CLOUD_IMAGES_EVALUATION.md - Detailed cloud images testing (local reference document)

Community


Complete Automation Suite

GitHub Repository: https://github.com/ChristopherA/utm-alpine-kit

Production-ready automation toolkit with:

  • ✅ One-command template creation (~2 minutes)
  • ✅ Instant VM cloning (0-1 second) with RAM/CPU resize options
  • ✅ Dual SSH authentication (key + password for flexibility)
  • ✅ Language-detecting provisioning (Rust, Go, Python, Node)
  • ✅ Complete documentation (2,500+ lines)
  • ✅ Real-world troubleshooting (bugs we actually hit)
  • ✅ 7 Rust testing workflow examples

Scripts:

  • create-alpine-template.sh - Automated template from ISO (450 lines)
  • clone-vm.sh - Clone with --ram/--cpu options (285 lines)
  • provision-for-testing.sh - Auto-detecting provisioning (270 lines)
  • destroy-vm.sh - Clean VM destruction (195 lines)
  • list-templates.sh - VM inventory (125 lines)

Documentation:

  • Complete macOS setup guide (420 lines)
  • Technical deep dive on template creation (540 lines)
  • Troubleshooting with real bugs and solutions (570 lines)
  • Rust testing examples (7 complete workflows, 450 lines)

This gist provides Alpine-specific knowledge; the repository provides production-ready automation.


Changelog

v1.0.0 (2025-10-25)

  • Fixed: SSH key persistence (CRITICAL FIX)
    • Added sync command to flush filesystem buffers before verification
    • SSH keys now persist reliably on templates and clones
    • Both SSH key and password authentication validated working
    • Root cause: Filesystem writes weren't being synced before verification
  • Improved: Performance
    • Template creation time: ~5 minutes → ~2 minutes (consistent)
    • Instant VM cloning (0-1 second) validated
    • Fast boot verified (~25 seconds to SSH-ready)
  • Validated: Production readiness
    • 100% success rate across 3 complete end-to-end tests from scratch
    • All warnings identified as harmless (cosmetic Alpine artifacts)
    • Dual authentication (key + password) working reliably
  • Updated: Documentation and timing claims
    • All timing references updated to reflect current performance
    • utm-alpine-kit repository upgraded to v1.0.0
  • Status: Production (extensively tested, production-ready)

v0.2.0 (2025-10-19)

  • Added: Automated template creation option
    • One-command ISO-to-VM automation (~2 minutes)
    • Answer file with SSH key deployment
    • Serial console + expect script automation
    • Complete working example with repo link
  • Added: Answer file automation section
    • What works via answer file (ROOTSSHKEY, DISKOPTS, etc.)
    • Critical limitations discovered (ROOTPASS, DISKOPTS execution)
    • Complete expect script patterns
    • HTTP server setup for answer file delivery
  • Updated: Clone scripts with production features
    • --ram and --cpu resize options documented
    • UTM configuration caching workflow explained
    • Robust error handling patterns
    • Deploy-test-destroy timing (<2 minutes)
  • Added: Cross-reference to utm-alpine-kit repository
    • Production-ready scripts (1,700+ lines)
    • Comprehensive documentation (2,500+ lines)
    • Real-world troubleshooting
  • Documented: Key automation discoveries
    • AppleScript required for VM creation (utmctl create doesn't exist)
    • UTM must be quit/restarted after config.plist changes
    • setup-disk must run in same expect session as setup-alpine
    • Password prompts appear even with ROOTSSHKEY
  • Status: Production (proven automation in real-world use)

v0.1.0 (2025-10-09)

  • Initial release - Alpine-specific automation guide
  • Extracted from generic UTM_AUTOMATION_GUIDE.md
  • Complete Alpine installation workflow
  • Alpine GRUB module loading fix documented
  • apk and OpenRC management
  • Clone-based workflow scripts
  • P2P testing setup
  • Cloud images evaluation
  • Comprehensive troubleshooting

Last Updated: 2025-10-25
Alpine Version: 3.22
UTM Version: 4.x
Tested Environment: macOS 26.x, ARM64 (M-series)
