A comprehensive guide to automating Alpine Linux VMs on UTM for rapid deploy-test-destroy workflows.
Abstract: This guide documents Alpine-specific discoveries and workflows for UTM automation, focusing on P2P protocol testing, rapid iteration, and minimal resource overhead.
Version: 1.0.0 (2025-10-25) - SSH authentication fixed, production-ready. For information on this versioning scheme, see Status & Versioning.
Status: ✅ Production - Extensively tested automation with dual SSH authentication
Prerequisites: Read UTM Automation Guide first for:
- UTM architecture and utmctl basics
- config.plist editing and UTM configuration caching
- UEFI boot requirements and removable boot paths
- Serial console configuration (TCP mode)
- Network modes and guest agent setup
This guide assumes that knowledge and focuses exclusively on Alpine Linux specifics.
Tested Environment: Alpine Linux 3.22, UTM 4.x, macOS 26.x, ARM64 (M-series)
Origin: Real-world automation for testing P2P protocols (Tor, Lightning Network, BitTorrent DHT).
Copyright & License:
- Text: Copyright © 2025 by Christopher Allen, licensed under CC-BY-SA-4.0
- Code: Released to the public domain under CC0-1.0
Tags: #alpine · #utm · #automation · #linux
If you find this guide valuable, consider supporting my open source and digital civil rights advocacy efforts.
I work to represent smaller developers in a vendor-neutral, platform-neutral way, advancing the open web, digital civil liberties, and human rights. Your sponsorship helps sustain this work and ensures I can continue creating resources like this guide.
Become a sponsor: GitHub Sponsors (from $5/month)
This isn't just a transaction—it's an opportunity to plug into a network advancing the digital commons. Let's collaborate!
-- Christopher Allen
Already read the UTM Automation Guide? Here are the Alpine-specific gotchas:
- Automated template creation works! ⚡ For complete ISO-to-template automation using answer files and serial console, see the Quick Start section below or the UTM Automation Guide's Installation Automation section for generic patterns.
- GRUB module loading fix is critical: Alpine's BOOTAA64.EFI bypasses the generic shim. You must prepend `insmod` commands to `/boot/grub/grub.cfg`. See Alpine GRUB Boot Flow.
- Use `apk`, not `apt`: Alpine's package manager is `apk add`, `apk update`, `apk upgrade`, etc.
- Use OpenRC, not systemd: Services use `rc-update add <service> <runlevel>` and `rc-service <service> start`, not `systemctl`.
- Cloud images don't work locally: Stick with template cloning. See why.
- musl libc, not glibc: Some binary-only software may not work. Most source code builds fine.
Looking for complete automation? See utm-alpine-kit
This gist documents the principles and knowledge for Alpine automation on UTM. The GitHub repo provides production-ready scripts implementing these principles.
- ✅ One-command template creation (~2 minutes)
- ✅ Instant cloning (0-1 second) with automatic configuration
- ✅ Dual SSH authentication (key + password)
- ✅ Automated testing workflows
- ✅ 2,500+ lines of documentation
- ✅ Real-world troubleshooting
Continue reading this gist for deep technical understanding and Alpine-specific knowledge.
- Alpine-Specific Quick Tips
- Production-Ready Automation
- Why Alpine for UTM?
- Quick Start
- Answer File Automation
- Alpine GRUB Boot Flow
- Alpine Package Management
- Alpine Service Management
- Alpine Network Configuration
- Clone-Based Workflow
- Alpine Cloud Images: Why Not Use Them?
- P2P Protocol Testing Setup
- Troubleshooting
- Resources
- Complete Automation Suite
- Changelog
Minimal Footprint:
- Installed system: ~500MB
- RAM usage: ~150MB idle, comfortable at 512MB
- Boot time: <5 seconds
- Multiple VMs on modest hardware
Purpose-Built for Ephemeral Workloads:
- Answer file support (`setup-alpine -f`)
- Fast package installation (`apk`)
- Stable releases (not rolling)
- Built-in Alpine Local Backup (`lbu`) for diskless setups
Simple and Reliable:
- musl libc (smaller, cleaner than glibc)
- BusyBox utilities (minimal footprint)
- OpenRC init system (fast, simple)
- Clear upgrade paths between versions
- ✅ Deploy-test-destroy workflows - Clone → provision → test → destroy in <5 min
- ✅ P2P protocol testing - Run 3-5 node clusters on modest hardware
- ✅ CI/CD environments - Fast, reproducible test environments
- ✅ Multi-VM testing - Low resource overhead per VM
- ✅ Rust development - Quick builds, minimal dependencies
- musl libc instead of glibc (some binary compatibility differences)
- OpenRC instead of systemd (different service management)
- BusyBox instead of GNU coreutils (slightly different command flags)
- Smaller package ecosystem (but comprehensive for development)
- Applications requiring glibc (though musl compatibility is good)
- Complex systemd-dependent software
- Desktop environments (Alpine focuses on servers/embedded)
For detailed setup instructions, see utm-alpine-kit repository.
Complete ISO-to-template automation using answer files and serial console:
```sh
git clone https://github.com/ChristopherA/utm-alpine-kit.git
cd utm-alpine-kit
./scripts/create-alpine-template.sh --iso ~/.cache/vms/alpine-virt-3.22.0-aarch64.iso
```

Key automation techniques:
- AppleScript VM creation (utmctl create doesn't exist)
- Serial console automation (TCP mode)
- Answer file installation (ROOTSSHKEY, DISKOPTS)
- Post-install SSH configuration
Full documentation: utm-alpine-kit Setup Guide
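The post-install SSH step above is where the v1.0.0 `sync` fix lives: flush filesystem buffers before verifying the key, or the verification can race the write. A minimal sketch, assuming hypothetical names and paths (the kit's actual code may differ):

```shell
#!/bin/sh
# Sketch: deploy an SSH public key and verify it, syncing first.
# deploy_ssh_key and its arguments are illustrative, not the kit's real API.
deploy_ssh_key() {
  home_dir=$1; pubkey=$2
  mkdir -p "$home_dir/.ssh"
  chmod 700 "$home_dir/.ssh"
  printf '%s\n' "$pubkey" >> "$home_dir/.ssh/authorized_keys"
  chmod 600 "$home_dir/.ssh/authorized_keys"
  sync   # flush buffers BEFORE verification (the v1.0.0 fix)
  grep -qF "$pubkey" "$home_dir/.ssh/authorized_keys"
}
```

On the template VM this runs over the serial console or SSH against `/root`; the final `grep` is the verification step.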
For step-by-step manual creation:
- Download Alpine virt ISO
- Create VM via UTM GUI
- Run `setup-alpine` interactively
- Apply GRUB module loading fix (see Alpine GRUB Boot Flow)
- Configure SSH keys and services
Detailed walkthrough: utm-alpine-kit Template Creation Guide
Critical requirement: Alpine requires GRUB module loading fix for clean boot. See Alpine GRUB Boot Flow section below or utm-alpine-kit GRUB Fix Doc.
Alpine's setup-alpine supports unattended installation via answer files. This enables fully automated template creation from ISO to ready-to-clone VM in ~2 minutes.
- ✅ Network configuration (DHCP or static)
- ✅ Timezone and NTP
- ✅ APK repositories
- ✅ SSH key deployment (ROOTSSHKEY)
- ✅ Disk mode selection (DISKOPTS)
- ✅ Hostname and keymap
❌ Root password CANNOT be set
- `ROOTPASS` variable exists but doesn't work
- Password prompts appear even with ROOTSSHKEY configured
- Workaround: Handle prompts in expect script, set password via SSH post-install

❌ DISKOPTS only records parameters
- Sets `setup-disk -m sys /dev/vda` parameters
- Does NOT actually run `setup-disk`
- Workaround: Call `setup-disk` in same expect session after `setup-alpine`
✅ Working pattern: Run setup-alpine + setup-disk in single expect session
```sh
# Alpine Answer File Template
# Full version: templates/alpine-template.answers in utm-alpine-kit

# Keyboard and hostname
KEYMAPOPTS="us us"
HOSTNAMEOPTS="-n alpine-template"

# Networking (DHCP - bridged mode)
INTERFACESOPTS="auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
"

# Root SSH Key (placeholder replaced by script)
# This is the key automation feature - works perfectly!
ROOTSSHKEY="%%SSH_KEY%%"

# Timezone
TIMEZONEOPTS="-z America/Los_Angeles"

# APK repositories (use fastest mirror)
APKREPOSOPTS="-1"

# NTP client
NTPOPTS="-c chrony"

# Disk setup (CRITICAL: Sets mode only, doesn't execute!)
DISKOPTS="-m sys /dev/vda"

# SSH server
SSHDOPTS="-c openssh"

# Essential services and packages
USEROPTS="-a"  # No additional user
APKCACHEOPTS="/var/cache/apk"
```

The Alpine installer needs network access to download the answer file:
```sh
# 1. Prepare answer file with SSH key substitution
SSH_KEY=$(cat ~/.ssh/id_ed25519_alpine_vm.pub)
sed "s|%%SSH_KEY%%|${SSH_KEY}|" alpine-template.answers > /tmp/alpine-answer.txt

# 2. Get host IP (macOS methods)
HOST_IP=$(ipconfig getifaddr en0 || ipconfig getifaddr en1)

# 3. Start HTTP server (Python 3 built into macOS)
cd /tmp
python3 -m http.server 8888 &
HTTP_PID=$!

# 4. In VM (via serial console automation):
setup-alpine -f http://${HOST_IP}:8888/alpine-answer.txt

# 5. Cleanup after installation
kill $HTTP_PID
rm /tmp/alpine-answer.txt
```

Handle password prompts and run setup-disk in same session:
```expect
#!/usr/bin/expect -f
# Simplified example - see scripts/lib/install-via-answerfile.exp for full version
set timeout 180
set host_ip [exec sh -c "ipconfig getifaddr en0 || ipconfig getifaddr en1"]
set answer_url "http://${host_ip}:8888/alpine-answer.txt"

# Connect to serial console (TCP mode)
spawn nc localhost 4444

# Wait for login
send "\r"
expect "login:"
send "root\r"
expect "#"

# Configure network
send "setup-interfaces\r"
expect "Which one" { send "eth0\r" }
expect "Ip address" { send "dhcp\r" }
expect "manual" { send "n\r" }
expect "#"
send "ifup eth0\r"
expect "#"
sleep 3

# Download answer file
send "setup-apkrepos -1\r"
expect "#"
send "apk update && apk add wget\r"
expect "#"
send "wget -O /tmp/answers.txt $answer_url\r"
expect "#"

# Run setup-alpine with answer file
send "setup-alpine -f /tmp/answers.txt\r"

# CRITICAL: Handle password prompts (even with ROOTSSHKEY!)
expect {
    -re "password.*root" {
        # Skip - will set via SSH later
        send "\r"
        exp_continue
    }
    -re "Retype password:" {
        send "\r"
        exp_continue
    }
    -re "Bad password|password.*unchanged" {
        # Expected - password can't be set via answer file
        exp_continue
    }
    -re "complete" {
        expect "#"
    }
}

# CRITICAL: Run setup-disk in SAME session
# (DISKOPTS sets parameters but doesn't execute)
send "setup-disk -m sys /dev/vda\r"
expect {
    -re "continue\\?" {
        send "y\r"
        exp_continue
    }
    -re "Installation is complete" {
        expect "#"
    }
}

# Done - VM will reboot automatically
close
```

Key insights from testing:
- Serial console must use TCP mode (not PTY) for reliable automation
- UTM must be restarted after config.plist changes (UTM caches configs)
- Password prompts appear even with ROOTSSHKEY - handle in expect
- setup-disk must run in same expect session (separate session loses state)
```
Host (macOS)                          VM (Alpine)
├── AppleScript                       ├── Boot from ISO
│   └── Create VM                     ├── Serial console (TCP 4444)
├── PlistBuddy                        ├── Network via DHCP
│   └── Configure serial console      ├── Download answer file
├── HTTP Server (Python)              ├── setup-alpine -f
│   └── Serve answer file             ├── Handle password prompts
├── Expect Script                     ├── setup-disk (manual)
│   └── Automate installation         └── Reboot to disk
└── SSH
    └── Set password post-install
```
Time: ~2 minutes total
Full implementation: https://github.com/ChristopherA/utm-alpine-kit
- `scripts/create-alpine-template.sh` (450 lines)
- `scripts/lib/install-via-answerfile.exp` (196 lines)
- `templates/alpine-template.answers` (140 lines)
See Also:
- UTM Automation Guide - Installation Automation - Generic patterns for any distribution
- utm-alpine-kit Setup Guide - Complete macOS setup walkthrough
- utm-alpine-kit Template Creation - Technical deep dive
When Alpine boots without the GRUB fix, you'll see errors like:
```
error: ../../grub-core/script/function.c:119:can't find command `['.
error: ../../grub-core/script/function.c:119:can't find command `['.
error: ../../grub-core/script/function.c:119:can't find command `echo'.
```
The VM will still boot, but these errors indicate GRUB is trying to use commands before loading the modules that provide them.
Alpine's grub-install --removable creates BOOTAA64.EFI with an embedded configuration inside the binary:
```
search --no-floppy --fs-uuid --set=root <UUID>
set prefix=($root)/grub
configfile ($root)/grub/grub.cfg
```
This loads /boot/grub/grub.cfg directly (NOT /boot/EFI/BOOT/grub.cfg).
However, grub-mkconfig (Alpine's default config generator) produces a /boot/grub/grub.cfg that starts with:
```
if [ -s $prefix/grubenv ]; then
  load_env
fi
```

Problem: GRUB tries to execute `[` (the test command) and `echo` before loading the modules that provide those commands!
Prepend module loading commands to /boot/grub/grub.cfg before the auto-generated content:
```
insmod part_gpt
insmod part_msdos
insmod fat
insmod ext2
insmod test    # Provides [ and other conditionals
insmod echo    # Provides echo command
insmod normal  # Provides normal boot flow
```

The `insmod` command itself is built into GRUB's core image, so these lines work even before any modules are loaded.
The generic UTM guide suggests creating /boot/EFI/BOOT/grub.cfg as a shim. This works for some distributions but NOT Alpine because:
- Alpine's `BOOTAA64.EFI` has embedded config pointing directly to `/grub/grub.cfg`
- The shim at `/EFI/BOOT/grub.cfg` is never executed (it's orphaned)
- Must fix the actual `/boot/grub/grub.cfg` that GRUB loads
Check what's embedded in BOOTAA64.EFI:
```sh
strings /boot/EFI/BOOT/BOOTAA64.EFI | grep -B2 "configfile"
# Output:
# search --no-floppy --fs-uuid --set=root E56A-A2FD
# set prefix=($root)/grub
# configfile ($root)/grub/grub.cfg
```

This confirms it loads /boot/grub/grub.cfg directly.
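This check can be scripted for automation; `embeds_grub_cfg` is a hypothetical helper name, and the file path is a parameter so the function can be exercised on any copy of the binary (`grep -a` searches the binary as text, so binutils `strings` isn't required):

```shell
#!/bin/sh
# Sketch: confirm an EFI binary's embedded config loads /grub/grub.cfg directly.
embeds_grub_cfg() {
  grep -aq 'configfile ($root)/grub/grub.cfg' "$1"
}
# On the VM: embeds_grub_cfg /boot/EFI/BOOT/BOOTAA64.EFI && echo "direct load confirmed"
```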
Re-apply the GRUB module loading fix after:
- Running `grub-mkconfig` (regenerates `/boot/grub/grub.cfg`)
- Kernel updates that trigger config regeneration
- Alpine version upgrades
Automation tip: Create a script to apply the fix automatically.
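Such a script can be as small as an idempotent prepend. A sketch, taking the config path as an argument so it can be dry-run on a copy (the kit's real script may differ):

```shell
#!/bin/sh
# Sketch: idempotently prepend the module-loading block to a grub.cfg.
fix_grub_cfg() {
  cfg=$1
  # Skip if the fix is already in place (safe to run after grub-mkconfig).
  if head -n 1 "$cfg" | grep -q '^insmod part_gpt'; then
    return 0
  fi
  tmp=$(mktemp)
  cat > "$tmp" <<'EOF'
insmod part_gpt
insmod part_msdos
insmod fat
insmod ext2
insmod test
insmod echo
insmod normal
EOF
  cat "$cfg" >> "$tmp"
  mv "$tmp" "$cfg"
}
# On the VM: fix_grub_cfg /boot/grub/grub.cfg
```

Running it from a post-upgrade hook keeps the fix in place across `grub-mkconfig` regenerations.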
See Also:
- utm-alpine-kit GRUB Boot Fix - Complete fix guide with automation
- UTM Automation Guide - GRUB Bootloader - Generic GRUB automation for all distributions
Alpine uses apk instead of apt. This is critical for automation scripts and provisioning.
Why it matters for UTM automation:
- Faster package installation (optimized for Alpine)
- Smaller package sizes (minimal dependencies)
- Different command syntax (`apk add` not `apt install`)
- Repository structure differs from Debian/Ubuntu
Key differences:
| Task | Alpine | Debian/Ubuntu |
|---|---|---|
| Update index | `apk update` | `apt update` |
| Install | `apk add pkg` | `apt install pkg` |
| Remove | `apk del pkg` | `apt remove pkg` |
Alpine repository tiers:
- main - Core packages (guaranteed stability)
- community - Most development tools
- testing - Bleeding edge (avoid in production)
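Since most development tools live in community, which is typically commented out on a fresh install, enabling it is a one-line edit. A sketch, with an illustrative function name and the file path passed in so it can be tested on a copy:

```shell
#!/bin/sh
# Sketch: uncomment the community repository line in an apk repositories file.
enable_community() {
  repos_file=$1
  sed -i 's|^#\(.*/community\)$|\1|' "$repos_file"
}
# On the VM: enable_community /etc/apk/repositories && apk update
```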
Complete apk reference: utm-alpine-kit Alpine Reference
Common automation pattern:
```sh
# Update and install essentials in one go
apk update && apk add --no-cache build-base git curl
```

Critical for scripts: Always run `apk update` before `apk add` to avoid "package not found" errors.
Alpine uses OpenRC instead of systemd. This affects how you manage services in automation.
Why it matters for UTM automation:
- Different commands for service management
- No `systemctl` - use `rc-service` and `rc-update` instead
- Simpler, faster, smaller footprint
- Runlevel-based (not target-based like systemd)
Key command differences:
| Task | Alpine (OpenRC) | Systemd |
|---|---|---|
| Enable on boot | `rc-update add svc default` | `systemctl enable svc` |
| Start now | `rc-service svc start` | `systemctl start svc` |
| Check status | `rc-service svc status` | `systemctl status svc` |
| View logs | `tail /var/log/messages` | `journalctl -xe` |
Common runlevels:
- `boot` - Early initialization (networking, hostname)
- `default` - Normal operation (sshd, services)
Critical services for utm-alpine-kit:
- `qemu-guest-agent` - Required for IP detection (`utmctl ip-address`)
- `sshd` - SSH access
- `networking` - Network connectivity
Complete OpenRC reference: utm-alpine-kit Alpine Reference
Common automation pattern:
```sh
# Enable and start service in one go
rc-update add qemu-guest-agent default && rc-service qemu-guest-agent start
```

Alpine uses `/etc/network/interfaces` for network configuration. This is different from systemd-based distributions.
Why it matters for UTM automation:
- Template VMs use DHCP by default (simple, works with bridged mode)
- Each cloned VM gets unique IP via MAC address + DHCP
- Static IPs possible but rarely needed for testing workflows
Common automation pattern:
```
# Template default (DHCP) in /etc/network/interfaces
auto eth0
iface eth0 inet dhcp
```

```sh
# Networking must be enabled at boot
rc-update add networking boot
```

Complete network configuration reference: utm-alpine-kit Alpine Reference
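For the rare test that does need a fixed address, the same file takes a classic ifupdown static stanza (addresses below are placeholders; match them to your bridged network):

```
# /etc/network/interfaces - static address example
auto eth0
iface eth0 inet static
    address 192.168.64.50
    netmask 255.255.255.0
    gateway 192.168.64.1
```

Apply with `rc-service networking restart`.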
Critical for cloning: DHCP works out-of-the-box with MAC address regeneration. Each clone automatically gets unique IP from your router.
Core concept: Create template once, clone infinitely. Alpine's minimal size makes this ideal.
Why cloning works for Alpine:
- Small template size (~500MB) = fast cloning
- No cloud-init complexity
- Pre-configured SSH keys and services
- Clean, reproducible environments
Critical requirements for cloning:
- Unique MAC addresses - Required for DHCP (each clone gets different IP)
- UTM configuration caching - Must quit/restart UTM after config.plist edits
- QEMU guest agent - Enables IP detection via `utmctl ip-address`
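The MAC regeneration requirement boils down to writing a fresh address into the clone's config.plist. Generating one is simple; 52:54:00 is QEMU's conventional locally-administered prefix, and the function name here is illustrative:

```shell
#!/bin/sh
# Sketch: random MAC in QEMU's 52:54:00 locally-administered range.
random_mac() {
  # Three random bytes from /dev/urandom, formatted as hex pairs.
  printf '52:54:00:%02x:%02x:%02x\n' $(od -An -N3 -tu1 /dev/urandom)
}
random_mac
```

The generated value then replaces the MAC field in the clone's config.plist (remember: quit UTM first, edit, then relaunch).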
Deploy-test-destroy workflow:
```sh
# Clone template (10-30 seconds)
./scripts/clone-vm.sh test-vm

# Run tests (time varies)
./scripts/provision-for-testing.sh test-vm 192.168.1.100 \
  https://github.com/user/repo.git "cargo test"

# Destroy when done (<10 seconds)
./scripts/destroy-vm.sh test-vm
```

Total time: < 2 minutes for complete cycle
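Between clone and provision there's a short window where the VM has an IP but sshd isn't accepting connections yet; a generic retry helper covers it (a sketch — clone-vm.sh's actual logic may differ):

```shell
#!/bin/sh
# Sketch: retry a command once per second until success or timeout.
wait_for() {
  seconds=$1; shift
  while [ "$seconds" -gt 0 ]; do
    if "$@" 2>/dev/null; then
      return 0
    fi
    sleep 1
    seconds=$((seconds - 1))
  done
  return 1
}
# Example: wait_for 60 ssh -o ConnectTimeout=2 -o BatchMode=yes root@"$VM_IP" true
```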
Production implementation:
- clone-vm.sh (285 lines) - Clone with --ram/--cpu options, MAC regeneration, IP detection
- provision-for-testing.sh (270 lines) - Language detection, auto-provisioning
- destroy-vm.sh (195 lines) - Clean VM destruction, SSH cleanup
Complete workflow examples: utm-alpine-kit Rust Testing
Key insight: UTM's configuration caching is the most common automation stumbling block. Always quit→edit→restart UTM when modifying config.plist. See utm-alpine-kit UTM Fundamentals for details.
Tested: 2025-10-09 Result: Alpine cloud images are NOT suitable for local UTM use.
- `nocloud_alpine-3.22.1-aarch64-uefi-tiny-r0.qcow2` (155MB)
- `nocloud_alpine-3.22.1-aarch64-uefi-cloudinit-r0.qcow2` (216MB)
What worked:
- ✅ Both variants boot cleanly (no UEFI/GRUB errors)
- ✅ Serial console functional
- ✅ Cloud-init partially ran (hostname set from metadata)
What failed:
- ❌ SSH keys not configured (authentication impossible)
- ❌ No default credentials (cannot access system)
- ❌ No QEMU guest agent pre-installed
- ❌ Cloud-init ISO not fully recognized by Alpine
| Aspect | Cloud Image | Template Clone |
|---|---|---|
| Initial setup | Download 216MB | Manual install ~15 min |
| Authentication | ❌ Requires cloud-init | ✅ SSH keys pre-configured |
| Guest agent | ❌ Not included | ✅ Installed and working |
| Cloning | | ✅ Simple disk copy |
| Customization | | ✅ Full control |
| Time per clone | Unknown (couldn't test) | <1 minute |
Use manual template + cloning approach. The one-time 15-minute setup is worth the reliable, fast cloning workflow.
For complete technical details about what we tried with cloud images, see the full cloud images evaluation document in the project repository.
Alpine's minimal footprint makes it ideal for testing P2P protocols with multiple VMs.
- Tor development: Run relay, guard, exit nodes in separate VMs
- Lightning Network: Multi-node channel testing
- BitTorrent DHT: Test node discovery and data exchange
- Custom P2P protocols: Rust-based distributed systems
Create a specialized Rust template:
```sh
# Clone base template
./clone-vm.sh alpine-rust-dev

# Provision for Rust
VM_IP=$(utmctl ip-address alpine-rust-dev | head -1)
ssh root@${VM_IP} << 'EOF'
# Install Rust toolchain
apk add rust cargo

# Build essentials
apk add build-base openssl-dev

# Optimization: Use sccache
apk add sccache

# Configure cargo for faster builds
mkdir -p ~/.cargo
cat > ~/.cargo/config.toml << 'CARGO'
[build]
rustc-wrapper = "/usr/bin/sccache"
jobs = 2

[profile.dev]
opt-level = 0
debug = true

[profile.release]
opt-level = 3
lto = "thin"
CARGO

# Pre-warm cargo cache
cargo search --limit 0
EOF

# This becomes your Rust template - stop it and use as clone source
utmctl stop alpine-rust-dev
```

Provision a Tor development VM:

```sh
./clone-vm.sh alpine-tor-dev
VM_IP=$(utmctl ip-address alpine-tor-dev | head -1)
ssh root@${VM_IP} << 'EOF'
# Install Tor
apk add tor privoxy

# Configure Tor for development
cat > /etc/tor/torrc << 'TOR'
# SOCKS proxy
SocksPort 9050

# Control port for controller apps
ControlPort 9051
CookieAuthentication 1

# Logging
Log notice file /var/log/tor/notices.log
TOR

# Enable service
rc-update add tor default
rc-service tor start

# Verify
rc-service tor status
EOF
```

Install network analysis tools:

```sh
ssh root@${VM_IP} << 'EOF'
# Packet capture and analysis
apk add tcpdump wireshark-common

# Network monitoring
apk add iftop nethogs iperf3

# Network utilities
apk add nmap netcat-openbsd bind-tools

# Protocol testing
apk add curl wget httpie
EOF
```

Example: spin up a 5-node DHT test cluster:

```bash
#!/bin/bash
# test-dht-cluster.sh - Create 5-node DHT test cluster
set -euo pipefail

# Create 5 VMs
for i in {1..5}; do
  echo "Creating dht-node-$i..."
  ./clone-vm.sh "dht-node-$i"
done

# Wait for all to boot
sleep 20

# Collect IPs
declare -a NODE_IPS
for i in {1..5}; do
  IP=$(utmctl ip-address "dht-node-$i" | head -1)
  NODE_IPS[$i]=$IP
  echo "dht-node-$i: $IP"
done

# Deploy code to all nodes
for i in {1..5}; do
  echo "Deploying to dht-node-$i..."
  ssh root@${NODE_IPS[$i]} << 'EOF'
cd /root
git clone https://github.com/your/dht-implementation
cd dht-implementation
cargo build --release
EOF
done

# Start DHT nodes (each knows about others)
for i in {1..5}; do
  echo "Starting DHT on dht-node-$i..."
  PEERS=""
  for j in {1..5}; do
    if [ $i -ne $j ]; then
      PEERS="$PEERS --peer ${NODE_IPS[$j]}:8080"
    fi
  done
  ssh root@${NODE_IPS[$i]} "cd /root/dht-implementation && nohup ./target/release/dht-node --port 8080 $PEERS > /var/log/dht.log 2>&1 &"
done

echo "✅ 5-node DHT cluster running"
echo "Monitor logs: ssh root@${NODE_IPS[1]} 'tail -f /var/log/dht.log'"

# Test DHT operations...
# ...your test code here...

# Cleanup when done
echo "Destroying cluster..."
for i in {1..5}; do
  ./destroy-vm.sh "dht-node-$i"
done
```

Performance: 5 VMs @ 512MB each = 2.5GB RAM total. Runs comfortably on 8GB host.
Most common issues and quick fixes:
- GRUB boot errors ("can't find command '['")
  - Cause: Alpine-specific GRUB module loading issue
  - Fix: See Alpine GRUB Boot Flow section above
  - Critical: This is the #1 issue for new Alpine UTM users
- No IP address / Network not starting
  - Cause: Networking service not enabled for boot runlevel
  - Fix: `rc-update add networking boot && reboot`
- QEMU guest agent not working
  - Cause: Service not installed or not running
  - Fix: `apk add qemu-guest-agent && rc-update add qemu-guest-agent default`
- SSH connection refused
  - Cause: SSH server not installed or not running
  - Fix: `apk add openssh && rc-update add sshd default`
- apk errors ("Unable to lock database", "UNTRUSTED signature")
  - Cause: Clock skew or stale package index
  - Fix: `apk update` (always run before `apk add`)
Complete troubleshooting guide: utm-alpine-kit Troubleshooting
Key insight: Most Alpine issues stem from forgetting OpenRC's runlevel system (`rc-update add <service> <runlevel>`) vs systemd's `systemctl enable`.
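OpenRC records runlevel membership as symlinks under `/etc/runlevels/<runlevel>/`, so the check above is scriptable without parsing `rc-update` output. A sketch with an illustrative function name; the root argument exists only so the function can be tested against a fake tree:

```shell
#!/bin/sh
# Sketch: is a service enabled in an OpenRC runlevel? Checks the symlink farm.
service_enabled() {
  svc=$1; runlevel=${2:-default}; root=${3:-}
  [ -e "$root/etc/runlevels/$runlevel/$svc" ]
}
# On the VM: service_enabled sshd default || echo "sshd not enabled at boot"
```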
- UTM Automation Guide - Generic UTM automation (distribution-agnostic)
- CLOUD_IMAGES_EVALUATION.md - Detailed cloud images testing (local reference document)
- Alpine Forums
- Alpine IRC
- Alpine GitLab
GitHub Repository: https://github.com/ChristopherA/utm-alpine-kit
Production-ready automation toolkit with:
- ✅ One-command template creation (~2 minutes)
- ✅ Instant VM cloning (0-1 second) with RAM/CPU resize options
- ✅ Dual SSH authentication (key + password for flexibility)
- ✅ Language-detecting provisioning (Rust, Go, Python, Node)
- ✅ Complete documentation (2,500+ lines)
- ✅ Real-world troubleshooting (bugs we actually hit)
- ✅ 7 Rust testing workflow examples
Scripts:
- `create-alpine-template.sh` - Automated template from ISO (450 lines)
- `clone-vm.sh` - Clone with --ram/--cpu options (285 lines)
- `provision-for-testing.sh` - Auto-detecting provisioning (270 lines)
- `destroy-vm.sh` - Clean VM destruction (195 lines)
- `list-templates.sh` - VM inventory (125 lines)
Documentation:
- Complete macOS setup guide (420 lines)
- Technical deep dive on template creation (540 lines)
- Troubleshooting with real bugs and solutions (570 lines)
- Rust testing examples (7 complete workflows, 450 lines)
This gist provides Alpine-specific knowledge; the repository provides production-ready automation.
- Fixed: SSH key persistence ⚡ CRITICAL FIX
  - Added `sync` command to flush filesystem buffers before verification
  - SSH keys now persist reliably on templates and clones
  - Both SSH key and password authentication validated working
  - Root cause: Filesystem writes weren't being synced before verification
- Improved: Performance
  - Template creation time: ~5 minutes → ~2 minutes (consistent)
  - Instant VM cloning (0-1 second) validated
  - Fast boot verified (~25 seconds to SSH-ready)
- Validated: Production readiness
  - 100% success rate across 3 complete end-to-end tests from scratch
  - All warnings identified as harmless (cosmetic Alpine artifacts)
  - Dual authentication (key + password) working reliably
- Updated: Documentation and timing claims
  - All timing references updated to reflect current performance
  - utm-alpine-kit repository upgraded to v1.0.0
- Status: Production (extensively tested, production-ready)
- Added: Automated template creation option ⚡
  - One-command ISO-to-VM automation (~2 minutes)
  - Answer file with SSH key deployment
  - Serial console + expect script automation
  - Complete working example with repo link
- Added: Answer file automation section
  - What works via answer file (ROOTSSHKEY, DISKOPTS, etc.)
  - Critical limitations discovered (ROOTPASS, DISKOPTS execution)
  - Complete expect script patterns
  - HTTP server setup for answer file delivery
- Updated: Clone scripts with production features
  - --ram and --cpu resize options documented
  - UTM configuration caching workflow explained
  - Robust error handling patterns
  - Deploy-test-destroy timing (<2 minutes)
- Added: Cross-reference to utm-alpine-kit repository
  - Production-ready scripts (1,700+ lines)
  - Comprehensive documentation (2,500+ lines)
  - Real-world troubleshooting
- Documented: Key automation discoveries
  - AppleScript required for VM creation (utmctl create doesn't exist)
  - UTM must be quit/restarted after config.plist changes
  - setup-disk must run in same expect session as setup-alpine
  - Password prompts appear even with ROOTSSHKEY
- Status: Production (proven automation in real-world use)
- Initial release - Alpine-specific automation guide
  - Extracted from generic UTM_AUTOMATION_GUIDE.md
  - Complete Alpine installation workflow
  - Alpine GRUB module loading fix documented
  - apk and OpenRC management
  - Clone-based workflow scripts
  - P2P testing setup
  - Cloud images evaluation
  - Comprehensive troubleshooting
Last Updated: 2025-10-19 Alpine Version: 3.22 UTM Version: 4.x Tested Environment: macOS 26.x, ARM64 (M-series)