SWA Multipass Deployment Scripts - Automated VM deployment and SWA setup for CyberArk Secure Workload Access

SWA Multipass VM Scripts

This directory contains automated scripts for managing Multipass VMs and deploying the SWA (Secure Workload Access) solution.

Prerequisites

  1. Multipass installed: Download from https://multipass.run/install
  2. Ansible installed: Required for deployment automation
  3. Terraform installed: Version 1.0 or higher (https://www.terraform.io/downloads)
  4. CyberArk Certificate Manager SAAS Account: With Workload Identity Manager (Firefly) activated
  5. Environment Variables: Set before running scripts:
    export TF_VAR_apikey="your-cyberark-api-key"
    export TF_VAR_team_owner_email="[email protected]"
    Or add them to a .env file in the project root (see the example after this list)
  6. SSH key pair for SWA VMs: Generate a dedicated SSH key:
    ssh-keygen -t ed25519 -f ~/.ssh/swa_multipass -N "" -C "swa-multipass-vms"
    This creates ~/.ssh/swa_multipass (private key) and ~/.ssh/swa_multipass.pub (public key)
  7. SWA Binaries: Download the swa-server and swa-agent binaries from the releases page (https://github.com/Venafi/swa-design-partners/releases) and place them in the binaries/ directory
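
A minimal .env file for prerequisite 5 might look like this (a sketch; 02_setup-terraform.sh sources it with set -a, so plain KEY=value lines work as well as export statements):

export TF_VAR_apikey="your-cyberark-api-key"
export TF_VAR_team_owner_email="[email protected]"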

Quick Start

For a fully automated setup:

  1. Set environment variables:

    export TF_VAR_apikey="your-cyberark-api-key"
    export TF_VAR_team_owner_email="[email protected]"
  2. Run the complete setup:

    ./scripts/00_full-setup.sh

This will execute all steps automatically: deploy VMs, setup Terraform control-plane, configure inventory, test connectivity, deploy SWA, and verify the installation.

Scripts Overview

00_full-setup.sh - Complete Automated Setup

Orchestrates the entire deployment pipeline from start to finish.

What it does:

  • Runs all setup scripts in sequence (01 → 02 → 03 → 04 → 05 → 06)
  • Handles errors gracefully and provides detailed progress output
  • Sets up CyberArk control-plane via Terraform
  • Deploys VMs, configures inventory, deploys SWA, and verifies
  • Displays final summary and next steps

Usage:

./scripts/00_full-setup.sh

When to use: First-time setup or complete redeployment


01_deploy-vms.sh - Deploy Multipass VMs

Deploys three Ubuntu 22.04 VMs using Multipass:

  • swa-server-01: SWA server (2 CPUs, 2GB RAM, 10GB disk)
  • swa-agent-01: SWA agent (2 CPUs, 2GB RAM, 10GB disk)
  • swa-agent-02: SWA agent (2 CPUs, 2GB RAM, 10GB disk)

What it does:

  • Creates VMs with cloud-init (Python 3, SSH keys)
  • Generates vm-config.txt with VM details
  • Generates ansible-inventory-snippet.yml for Ansible
  • Validates SSH key exists before deployment
  • Idempotent: skips existing VMs

Usage:

./scripts/01_deploy-vms.sh

Output Files:

  • scripts/vm-config.txt - Human-readable VM configuration
  • scripts/ansible-inventory-snippet.yml - Ansible inventory format
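
The generated snippet has this shape (IPs shown are illustrative; the script fills in real addresses from multipass info, writes one entry per agent VM, and records the private key as an absolute path):

all:
  children:
    swa_servers:
      hosts:
        swa-server-01:
          ansible_host: 192.168.64.10
          ansible_user: ubuntu
          ansible_ssh_private_key_file: ~/.ssh/swa_multipass
    swa_agents:
      hosts:
        swa-agent-01:
          ansible_host: 192.168.64.11
          ansible_user: ubuntu
          ansible_ssh_private_key_file: ~/.ssh/swa_multipass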

02_setup-terraform.sh - Setup Terraform Control-Plane

Configures CyberArk Workload Identity Manager (formerly Firefly) using Terraform.

What it does:

  • Validates required environment variables (TF_VAR_apikey, TF_VAR_team_owner_email)
  • Optionally loads variables from .env file
  • Runs terraform init, plan, and apply
  • Creates the Firefly configuration, Sub CA, issuance policy, and service account
  • Saves terraform_outputs.json for Ansible consumption
  • Generates service account credentials in terraform/firefly-deployment/serviceaccount/

Usage:

# Set environment variables first
export TF_VAR_apikey="your-api-key"
export TF_VAR_team_owner_email="[email protected]"

# Then run the script
./scripts/02_setup-terraform.sh

Requirements:

  • Terraform 1.0+ installed
  • CyberArk Certificate Manager SAAS account
  • Valid API key and email address set as environment variables
  • Internet connectivity

Note: This script uses only environment variables (TF_VAR_*), not terraform.tfvars files.

Outputs:

  • terraform/firefly-deployment/terraform_outputs.json - Terraform outputs for Ansible
  • terraform/firefly-deployment/serviceaccount/*.pem - Service account credentials
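
To inspect what was captured, jq works well. The output key names depend on the Terraform configuration, so list them first (terraform output -json wraps each output in an object with a value field):

# List the output names available to Ansible
jq -r 'keys[]' terraform/firefly-deployment/terraform_outputs.json

# Then read a specific value by name
jq '.["<output_name>"].value' terraform/firefly-deployment/terraform_outputs.json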

03_update-inventory.sh - Update Ansible Inventory

Copies the generated inventory snippet to the Ansible inventory location.

What it does:

  • Backs up existing ansible/inventory/hosts.yml
  • Copies ansible-inventory-snippet.yml to ansible/inventory/hosts.yml
  • Displays the new inventory configuration

Usage:

./scripts/03_update-inventory.sh

Backup location: ansible/inventory/hosts.yml.backup.YYYYMMDD-HHMMSS
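
If the new inventory turns out to be wrong, restore the most recent backup by hand:

cp "$(ls -t ansible/inventory/hosts.yml.backup.* | head -1)" ansible/inventory/hosts.yml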


04_test-connectivity.sh - Test SSH Connectivity

Tests SSH connectivity to all VMs defined in the Ansible inventory.

What it does:

  • Uses Ansible ping module to test all hosts
  • Tests individual SSH connections to each VM
  • Provides troubleshooting tips if connections fail

Usage:

./scripts/04_test-connectivity.sh

Validates:

  • Ansible can reach all VMs
  • SSH keys are properly configured
  • VMs are ready for deployment
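
The same check can be run by hand; this matches what the script executes (host key checking is disabled because the VMs are freshly created):

ANSIBLE_HOST_KEY_CHECKING=False ansible all -i ansible/inventory/hosts.yml -m ping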

05_deploy-swa.sh - Deploy SWA Services

Runs the Ansible playbook to deploy SWA server and agents.

What it does:

  • Validates binaries exist in binaries/ directory
  • Executes ansible/deploy-swa.yml playbook
  • Deploys SWA server to swa_servers group
  • Deploys SWA agents to swa_agents group
  • Runs with verbose output for debugging

Usage:

./scripts/05_deploy-swa.sh

Known Issue: The playbook may report errors that /tmp/trust_bundles/*.pem was not found. These can be safely ignored; the deployment still succeeds.
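
The script wraps a single playbook run; the equivalent manual invocation is:

ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i ansible/inventory/hosts.yml ansible/deploy-swa.yml -v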


06_verify-deployment.sh - Verify SWA Deployment

Checks that SWA services are running on all VMs.

What it does:

  • Checks SWA server systemd service status
  • Checks SWA agent systemd service status
  • Verifies agent socket exists at /run/swa-agent/api.sock
  • Attempts to fetch a test JWT token
  • Displays detailed service status and logs

Usage:

./scripts/06_verify-deployment.sh

Checks:

  • Services are active and running
  • Socket files are present
  • JWT token fetch works (if workload is registered)
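
Individual checks can also be run ad hoc, for example the agent socket check:

ansible swa_agents -i ansible/inventory/hosts.yml -a "ls -la /run/swa-agent/api.sock"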

07_destroy-vms.sh - Destroy VMs

Destroys all SWA-related Multipass VMs and cleans up configuration files.

What it does:

  • Stops and deletes all SWA VMs
  • Purges VMs from Multipass
  • Removes generated configuration files
  • Requires confirmation before deletion

Usage:

./scripts/07_destroy-vms.sh

When to use: Cleanup or starting fresh


08_destroy-terraform.sh - Destroy Terraform Resources

Destroys all CyberArk Workload Identity Manager (Firefly) resources created by Terraform.

What it does:

  • Runs terraform destroy in the firefly-deployment directory
  • Removes Firefly configuration, Sub CA, policy, and service account
  • Optionally cleans up service account credential files
  • Removes terraform_outputs.json

Usage:

./scripts/08_destroy-terraform.sh

Warning: This is destructive and cannot be undone. Make sure no active workloads are using these resources.
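
To preview exactly what would be removed before committing, run a destroy plan first:

cd terraform/firefly-deployment
terraform plan -destroy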

When to use:

  • Complete cleanup after testing
  • Resetting the control-plane configuration
  • Before recreating with different settings

Manual Workflow

If you prefer to run steps individually:

Step 1: Deploy VMs

./scripts/01_deploy-vms.sh

Review generated configuration:

cat scripts/vm-config.txt
cat scripts/ansible-inventory-snippet.yml

Step 2: Setup Terraform Control-Plane

Set environment variables:

export TF_VAR_apikey="your-api-key"
export TF_VAR_team_owner_email="[email protected]"

Run Terraform setup:

./scripts/02_setup-terraform.sh

This will create the CyberArk Workload Identity Manager configuration.

Step 3: Update Ansible Inventory

./scripts/03_update-inventory.sh

Step 4: Test Connectivity

./scripts/04_test-connectivity.sh

If this fails, troubleshoot before proceeding:

# Check VM status
multipass list

# Test manual SSH
ssh -i ~/.ssh/swa_multipass ubuntu@<VM_IP>

Step 5: Deploy SWA

Ensure binaries are in place:

ls -lh binaries/

Deploy:

./scripts/05_deploy-swa.sh

Step 6: Verify Deployment

./scripts/06_verify-deployment.sh

Step 7: Clean Up (when done)

Destroy VMs:

./scripts/07_destroy-vms.sh

Destroy Terraform resources:

./scripts/08_destroy-terraform.sh

Configuration

Modify VM Resources

Edit scripts/01_deploy-vms.sh and change these variables:

SERVER_CPUS=2
SERVER_MEM="2G"
SERVER_DISK="10G"
AGENT_CPUS=2
AGENT_MEM="2G"
AGENT_DISK="10G"

Add More Agent VMs

Edit scripts/01_deploy-vms.sh and modify the array:

AGENT_VMS=("swa-agent-01" "swa-agent-02" "swa-agent-03" "swa-agent-04")
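
Note that scripts/07_destroy-vms.sh keeps its own copy of this list; update it to match so teardown removes every VM:

# In scripts/07_destroy-vms.sh
AGENT_VMS=("swa-agent-01" "swa-agent-02" "swa-agent-03" "swa-agent-04")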

Change SSH Key Location

Edit both scripts/01_deploy-vms.sh and scripts/04_test-connectivity.sh (which hardcodes the same private key path) and update:

SSH_PUBLIC_KEY="$HOME/.ssh/your_custom_key.pub"
SSH_PRIVATE_KEY="$HOME/.ssh/your_custom_key"

Useful Commands

Multipass Commands

# List all VMs
multipass list

# Get VM details
multipass info swa-server-01

# Shell into a VM
multipass shell swa-server-01

# Stop/Start VMs
multipass stop swa-server-01
multipass start swa-server-01

# Delete and purge
multipass delete swa-server-01
multipass purge

Ansible Commands

# Check service status
ansible swa_servers -i ansible/inventory/hosts.yml -a "sudo systemctl status swa-server"
ansible swa_agents -i ansible/inventory/hosts.yml -a "sudo systemctl status swa-agent"

# View logs
ansible swa_servers -i ansible/inventory/hosts.yml -a "sudo journalctl -u swa-server -n 50"
ansible swa_agents -i ansible/inventory/hosts.yml -a "sudo journalctl -u swa-agent -n 50"

# Restart services
ansible swa_servers -i ansible/inventory/hosts.yml -a "sudo systemctl restart swa-server"
ansible swa_agents -i ansible/inventory/hosts.yml -a "sudo systemctl restart swa-agent"

SWA Commands

# Fetch JWT token from agent (run on agent VM)
/opt/swa/bin/spire-agent api fetch jwt -audience test -socketPath /run/swa-agent/api.sock

# Check agent socket
ls -la /run/swa-agent/api.sock

Troubleshooting

SSH Key Not Found

Generate the SSH key:

ssh-keygen -t ed25519 -f ~/.ssh/swa_multipass -N "" -C "swa-multipass-vms"

The deployment script will validate the key exists.

VMs Already Exist

The deploy script is idempotent. To recreate:

./scripts/07_destroy-vms.sh
./scripts/01_deploy-vms.sh

Cannot Connect to VM

  1. Check VM status: multipass list
  2. Check IP address: multipass info swa-server-01
  3. Test multipass shell: multipass shell swa-server-01
  4. Verify SSH key permissions:
    ls -la ~/.ssh/swa_multipass*
    chmod 600 ~/.ssh/swa_multipass
  5. Test SSH manually:
    ssh -i ~/.ssh/swa_multipass ubuntu@<VM_IP>

Terraform Errors

Environment Variables Not Set:

# Check if variables are set
echo $TF_VAR_apikey
echo $TF_VAR_team_owner_email

# Set them if missing
export TF_VAR_apikey="your-api-key"
export TF_VAR_team_owner_email="[email protected]"

Terraform State Issues:

cd terraform/firefly-deployment
terraform init -reconfigure
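
If reconfiguring does not help, inspect what the local state currently tracks:

terraform state list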

API Key Invalid:

  • Verify the API key is valid and not expired
  • Check the permissions granted to the key
  • Generate a new API key at https://ui.venafi.cloud if needed

Ansible Playbook Fails

  1. Ensure Terraform setup completed: ls terraform/firefly-deployment/terraform_outputs.json
  2. Ensure connectivity: ./scripts/04_test-connectivity.sh
  3. Check binaries exist: ls -lh binaries/
  4. Verify group_vars: cat ansible/group_vars/swa_servers.yml
  5. Check for specific error in Ansible output
  6. Known issue: /tmp/trust_bundles/*.pem errors can be ignored

Service Not Running

Check service status and logs:

# On the VM directly
multipass shell swa-server-01
sudo systemctl status swa-server
sudo journalctl -u swa-server -n 100

# Or via Ansible
ansible swa_servers -i ansible/inventory/hosts.yml -a "sudo systemctl status swa-server --no-pager"
ansible swa_servers -i ansible/inventory/hosts.yml -a "sudo journalctl -u swa-server -n 100 --no-pager"

Multipass Not Installed

macOS:

brew install multipass

Linux:

sudo snap install multipass

Windows: Download installer from https://multipass.run/install

Ansible Not Installed

macOS:

brew install ansible

Linux:

sudo apt-get install ansible  # Debian/Ubuntu
sudo yum install ansible       # RHEL/CentOS

Terraform Not Installed

macOS:

brew install terraform

Linux:

# Download latest from https://www.terraform.io/downloads
wget https://releases.hashicorp.com/terraform/1.6.0/terraform_1.6.0_linux_amd64.zip
unzip terraform_1.6.0_linux_amd64.zip
sudo mv terraform /usr/local/bin/

Verify:

terraform version

Notes

  • VMs are configured with Ubuntu 22.04 LTS
  • Python 3 is pre-installed for Ansible compatibility
  • SSH keys are automatically injected via cloud-init
  • VMs use Multipass default network (typically 192.168.64.0/24 on macOS)
  • First run downloads the Ubuntu image (~500MB)
  • The complete setup takes approximately 10-15 minutes
  • Binaries must be downloaded separately and placed in binaries/

Next Steps After Deployment

  1. Test JWT token fetch:

    multipass shell swa-agent-01
    /opt/swa/bin/spire-agent api fetch jwt -audience test -socketPath /run/swa-agent/api.sock
  2. Explore integration examples:

    ls -la examples/
  3. Configure OIDC federation:

    • Follow instructions in docs/ for AWS, Azure, or Conjur integration
  4. Read the documentation:

    • CLAUDE.md - Project overview and commands
    • DEMO.md - Demo scenarios
    • TROUBLESHOOTING.md - Common issues

Support

For issues or questions:

  • Check TROUBLESHOOTING.md in the project root
  • Review Ansible output for specific errors
  • Check service logs via journalctl
  • Verify configuration in ansible/group_vars/

#!/bin/bash
# scripts/00_full-setup.sh
# Full SWA Setup Script
# Runs all setup steps in sequence: deploy VMs, update inventory, test connectivity, deploy SWA, verify
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
echo -e "${BLUE}╔════════════════════════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ SWA Complete Setup - Automated Deployment Pipeline ║${NC}"
echo -e "${BLUE}╚════════════════════════════════════════════════════════════╝${NC}\n"
# Track overall success
OVERALL_SUCCESS=true
# Function to run a script step
run_step() {
local step_number=$1
local step_name=$2
local script_name=$3
echo -e "\n${BLUE}╔════════════════════════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ Step ${step_number}: ${step_name}${NC}"
echo -e "${BLUE}╚════════════════════════════════════════════════════════════╝${NC}\n"
if [ -f "${SCRIPT_DIR}/${script_name}" ]; then
if bash "${SCRIPT_DIR}/${script_name}"; then
echo -e "\n${GREEN}✓ Step ${step_number} completed successfully${NC}"
else
echo -e "\n${RED}✗ Step ${step_number} failed${NC}"
OVERALL_SUCCESS=false
return 1
fi
else
echo -e "${RED}Error: Script ${script_name} not found${NC}"
OVERALL_SUCCESS=false
return 1
fi
}
# Step 1: Deploy VMs
if ! run_step 1 "Deploy Multipass VMs" "01_deploy-vms.sh"; then
echo -e "\n${RED}Setup failed at Step 1. Exiting.${NC}"
exit 1
fi
# Step 2: Setup Terraform Control-Plane
if ! run_step 2 "Setup Terraform Control-Plane" "02_setup-terraform.sh"; then
echo -e "\n${RED}Setup failed at Step 2. Exiting.${NC}"
echo -e "${YELLOW}Note: Ensure TF_VAR_apikey and TF_VAR_team_owner_email are set${NC}"
exit 1
fi
# Step 3: Update Inventory
if ! run_step 3 "Update Ansible Inventory" "03_update-inventory.sh"; then
echo -e "\n${RED}Setup failed at Step 3. Exiting.${NC}"
exit 1
fi
# Step 4: Test Connectivity
if ! run_step 4 "Test SSH Connectivity" "04_test-connectivity.sh"; then
echo -e "\n${RED}Setup failed at Step 4. Exiting.${NC}"
exit 1
fi
# Step 5: Deploy SWA
if ! run_step 5 "Deploy SWA Services" "05_deploy-swa.sh"; then
echo -e "\n${YELLOW}Note: Deployment may have partial failures but could still be functional.${NC}"
echo -e "${YELLOW}Continuing to verification step...${NC}"
fi
# Step 6: Verify Deployment
if ! run_step 6 "Verify SWA Deployment" "06_verify-deployment.sh"; then
echo -e "\n${YELLOW}Verification encountered issues. Check the output above.${NC}"
fi
# Final summary
echo -e "\n${BLUE}╔════════════════════════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ Setup Complete! ║${NC}"
echo -e "${BLUE}╚════════════════════════════════════════════════════════════╝${NC}\n"
if [ "$OVERALL_SUCCESS" = true ]; then
echo -e "${GREEN}All steps completed successfully!${NC}\n"
else
echo -e "${YELLOW}Setup completed with some warnings. Please review the output above.${NC}\n"
fi
echo -e "${YELLOW}What's Next:${NC}"
echo "1. Test JWT token fetch from an agent:"
echo " multipass shell swa-agent-01"
echo " /opt/swa/bin/spire-agent api fetch jwt -audience test -socketPath /run/swa-agent/api.sock"
echo ""
echo "2. Check service logs:"
echo " ansible swa_servers -i ansible/inventory/hosts.yml -a \"sudo journalctl -u swa-server -n 50\""
echo " ansible swa_agents -i ansible/inventory/hosts.yml -a \"sudo journalctl -u swa-agent -n 50\""
echo ""
echo "3. Explore integration examples in the examples/ directory"
echo ""
echo "4. To tear down everything:"
echo " ./scripts/07_destroy-vms.sh # Destroy VMs"
echo " ./scripts/08_destroy-terraform.sh # Destroy Terraform resources"
echo -e "\n${GREEN}Happy coding!${NC}"

#!/bin/bash
# scripts/01_deploy-vms.sh
# SWA Multipass VM Deployment Script
# This script deploys Ubuntu VMs using Multipass for SWA server and agents
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Configuration
SERVER_VM="swa-server-01"
AGENT_VMS=("swa-agent-01" "swa-agent-02")
UBUNTU_VERSION="22.04"
SERVER_CPUS=2
SERVER_MEM="2G"
SERVER_DISK="10G"
AGENT_CPUS=2
AGENT_MEM="2G"
AGENT_DISK="10G"
# SSH Key Configuration
SSH_PUBLIC_KEY="$HOME/.ssh/swa_multipass.pub"
SSH_PRIVATE_KEY="$HOME/.ssh/swa_multipass"
# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CONFIG_FILE="${SCRIPT_DIR}/vm-config.txt"
echo -e "${GREEN}Starting SWA VM deployment with Multipass...${NC}\n"
# Check if multipass is installed
if ! command -v multipass &> /dev/null; then
echo -e "${RED}Error: Multipass is not installed. Please install it first.${NC}"
echo "Visit: https://multipass.run/install"
exit 1
fi
# Check if SSH key exists
if [ ! -f "${SSH_PUBLIC_KEY}" ]; then
echo -e "${RED}Error: SSH public key not found at ${SSH_PUBLIC_KEY}${NC}"
echo "Please generate the SSH key first:"
echo " ssh-keygen -t ed25519 -f ${SSH_PRIVATE_KEY} -N \"\" -C \"swa-multipass-vms\""
exit 1
fi
echo -e "${GREEN}Using SSH key: ${SSH_PUBLIC_KEY}${NC}"
# Function to deploy a VM
deploy_vm() {
local vm_name=$1
local cpus=$2
local mem=$3
local disk=$4
echo -e "${YELLOW}Deploying ${vm_name}...${NC}"
# Check if VM already exists
if multipass list | grep -q "^${vm_name}"; then
echo -e "${YELLOW}VM ${vm_name} already exists. Skipping...${NC}"
return 0
fi
# Launch the VM (cloud-init user data must begin with #cloud-config)
if multipass launch ${UBUNTU_VERSION} \
--name "${vm_name}" \
--cpus ${cpus} \
--memory ${mem} \
--disk ${disk} \
--cloud-init - <<EOF
#cloud-config
package_update: true
package_upgrade: true
packages:
- python3
- python3-pip
ssh_authorized_keys:
- $(cat "${SSH_PUBLIC_KEY}")
EOF
then
echo -e "${GREEN}✓ ${vm_name} deployed successfully${NC}"
else
echo -e "${RED}✗ Failed to deploy ${vm_name}${NC}"
return 1
fi
}
# Deploy server VM
deploy_vm "${SERVER_VM}" ${SERVER_CPUS} ${SERVER_MEM} ${SERVER_DISK}
# Deploy agent VMs
for agent in "${AGENT_VMS[@]}"; do
deploy_vm "${agent}" ${AGENT_CPUS} ${AGENT_MEM} ${AGENT_DISK}
done
# Wait for VMs to be ready
echo -e "\n${YELLOW}Waiting for VMs to be ready...${NC}"
sleep 10
# Generate configuration file
echo -e "\n${YELLOW}Generating VM configuration file...${NC}"
echo "# SWA Multipass VM Configuration" > "${CONFIG_FILE}"
echo "# Generated on $(date)" >> "${CONFIG_FILE}"
echo "" >> "${CONFIG_FILE}"
# Function to get VM info
get_vm_info() {
local vm_name=$1
local ip_address=$(multipass info "${vm_name}" | grep IPv4 | awk '{print $2}')
local state=$(multipass info "${vm_name}" | grep State | awk '{print $2}')
echo "${vm_name}:"
echo " IP Address: ${ip_address}"
echo " State: ${state}"
echo " User: ubuntu"
echo " SSH Key: ${SSH_PRIVATE_KEY}"
echo ""
}
# Collect server info
echo "=== SWA Server ===" >> "${CONFIG_FILE}"
get_vm_info "${SERVER_VM}" >> "${CONFIG_FILE}"
# Collect agent info
echo "=== SWA Agents ===" >> "${CONFIG_FILE}"
for agent in "${AGENT_VMS[@]}"; do
get_vm_info "${agent}" >> "${CONFIG_FILE}"
done
# Display the configuration
echo -e "${GREEN}VM Deployment Complete!${NC}\n"
cat "${CONFIG_FILE}"
# Generate Ansible inventory format
echo -e "\n${YELLOW}Generating Ansible inventory snippet...${NC}"
INVENTORY_SNIPPET="${SCRIPT_DIR}/ansible-inventory-snippet.yml"
echo "# Ansible Inventory Snippet for SWA VMs" > "${INVENTORY_SNIPPET}"
echo "# Copy this to ansible/inventory/hosts.yml" >> "${INVENTORY_SNIPPET}"
echo "# Generated on $(date)" >> "${INVENTORY_SNIPPET}"
echo "" >> "${INVENTORY_SNIPPET}"
echo "all:" >> "${INVENTORY_SNIPPET}"
echo " children:" >> "${INVENTORY_SNIPPET}"
echo " swa_servers:" >> "${INVENTORY_SNIPPET}"
echo " hosts:" >> "${INVENTORY_SNIPPET}"
# Server entry
SERVER_IP=$(multipass info "${SERVER_VM}" | grep IPv4 | awk '{print $2}')
echo " ${SERVER_VM}:" >> "${INVENTORY_SNIPPET}"
echo " ansible_host: ${SERVER_IP}" >> "${INVENTORY_SNIPPET}"
echo " ansible_user: ubuntu" >> "${INVENTORY_SNIPPET}"
echo " ansible_ssh_private_key_file: ${SSH_PRIVATE_KEY}" >> "${INVENTORY_SNIPPET}"
# Agents section
echo " swa_agents:" >> "${INVENTORY_SNIPPET}"
echo " hosts:" >> "${INVENTORY_SNIPPET}"
for agent in "${AGENT_VMS[@]}"; do
AGENT_IP=$(multipass info "${agent}" | grep IPv4 | awk '{print $2}')
echo " ${agent}:" >> "${INVENTORY_SNIPPET}"
echo " ansible_host: ${AGENT_IP}" >> "${INVENTORY_SNIPPET}"
echo " ansible_user: ubuntu" >> "${INVENTORY_SNIPPET}"
echo " ansible_ssh_private_key_file: ${SSH_PRIVATE_KEY}" >> "${INVENTORY_SNIPPET}"
done
echo -e "\n${GREEN}Ansible inventory snippet saved to: ${INVENTORY_SNIPPET}${NC}"
# Display quick access commands
echo -e "\n${YELLOW}Quick Access Commands:${NC}"
echo " SSH to server: multipass shell ${SERVER_VM}"
for agent in "${AGENT_VMS[@]}"; do
echo " SSH to agent: multipass shell ${agent}"
done
echo -e "\n${YELLOW}Next Steps:${NC}"
echo "1. Update your Ansible inventory: ansible/inventory/hosts.yml"
echo "2. Copy the content from: ${INVENTORY_SNIPPET}"
echo "3. Ensure SSH access is working:"
echo " ssh -i ${SSH_PRIVATE_KEY} ubuntu@${SERVER_IP}"
echo "4. Run the Ansible playbook:"
echo " ansible-playbook -i ansible/inventory/hosts.yml ansible/deploy-swa.yml"
echo -e "\n${GREEN}All VMs deployed successfully!${NC}"

#!/bin/bash
# scripts/02_setup-terraform.sh
# Setup Terraform Control-Plane Script
# Configures CyberArk Workload Identity Manager (Firefly) via Terraform
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(dirname "${SCRIPT_DIR}")"
TERRAFORM_DIR="${PROJECT_DIR}/terraform/firefly-deployment"
ENV_FILE="${PROJECT_DIR}/.env"
echo -e "${YELLOW}Setting up Terraform Control-Plane${NC}\n"
# Check if terraform is installed
if ! command -v terraform &> /dev/null; then
echo -e "${RED}Error: Terraform is not installed.${NC}"
echo "Please install Terraform first:"
echo " brew install terraform # macOS"
echo " https://www.terraform.io/downloads"
exit 1
fi
echo -e "${GREEN}✓ Terraform is installed: $(terraform version | head -1)${NC}\n"
# Check if .env file exists and offer to load it
if [ -f "${ENV_FILE}" ]; then
echo -e "${YELLOW}Found .env file. Would you like to load environment variables from it? (y/n)${NC}"
read -r LOAD_ENV
if [[ $LOAD_ENV =~ ^[Yy]$ ]]; then
echo -e "${YELLOW}Loading environment variables from .env...${NC}"
set -a
source "${ENV_FILE}"
set +a
echo -e "${GREEN}✓ Environment variables loaded${NC}\n"
fi
fi
# Check for required environment variables
if [ -z "${TF_VAR_apikey}" ]; then
echo -e "${RED}Error: TF_VAR_apikey environment variable is not set.${NC}"
echo ""
echo "Please set your CyberArk Certificate Manager SAAS API Key:"
echo " export TF_VAR_apikey=\"your-api-key-here\""
echo ""
echo "To obtain an API key:"
echo " 1. Log in to https://ui.venafi.cloud"
echo " 2. Go to Settings > API Keys"
echo " 3. Create a new API key"
echo ""
echo "Alternatively, add it to .env file:"
echo " echo 'export TF_VAR_apikey=\"your-api-key-here\"' >> .env"
exit 1
fi
if [ -z "${TF_VAR_team_owner_email}" ]; then
echo -e "${RED}Error: TF_VAR_team_owner_email environment variable is not set.${NC}"
echo ""
echo "Please set your registered CyberArk account email address:"
echo " export TF_VAR_team_owner_email=\"[email protected]\""
echo ""
echo "This should be the email address you used to register your CyberArk account."
echo ""
echo "Alternatively, add it to .env file:"
echo " echo 'export TF_VAR_team_owner_email=\"[email protected]\"' >> .env"
exit 1
fi
echo -e "${GREEN}✓ Required environment variables are set:${NC}"
echo " TF_VAR_apikey: ${TF_VAR_apikey:0:10}..."
echo " TF_VAR_team_owner_email: ${TF_VAR_team_owner_email}"
echo ""
# Navigate to Terraform directory
if [ ! -d "${TERRAFORM_DIR}" ]; then
echo -e "${RED}Error: Terraform directory not found at ${TERRAFORM_DIR}${NC}"
exit 1
fi
cd "${TERRAFORM_DIR}"
# Initialize Terraform
echo -e "${YELLOW}Initializing Terraform...${NC}"
if terraform init; then
echo -e "${GREEN}✓ Terraform initialized${NC}\n"
else
echo -e "${RED}✗ Terraform initialization failed${NC}"
exit 1
fi
# Run Terraform plan
echo -e "${YELLOW}Running Terraform plan...${NC}"
if terraform plan; then
echo -e "${GREEN}✓ Terraform plan completed${NC}\n"
else
EXIT_CODE=$?
echo -e "\n${RED}✗ Terraform plan failed${NC}"
echo -e "\n${YELLOW}Common issues:${NC}"
echo "1. Email address not found in CyberArk:"
echo " - Verify email matches your CyberArk account: ${TF_VAR_team_owner_email}"
echo " - Check spelling and domain name"
echo " - Ensure user exists at https://ui.venafi.cloud"
echo ""
echo "2. Invalid API key:"
echo " - Verify API key is valid and not expired"
echo " - Check permissions for the API key"
echo " - Generate new API key if needed at https://ui.venafi.cloud"
echo ""
echo "3. Network/connectivity issues:"
echo " - Ensure you can reach https://api.venafi.cloud"
echo " - Check proxy settings if behind corporate firewall"
exit ${EXIT_CODE}
fi
# Prompt for apply
echo -e "${YELLOW}Ready to apply Terraform configuration.${NC}"
echo "This will create resources in CyberArk Workload Identity Manager (Firefly)."
read -p "Do you want to proceed with 'terraform apply'? (yes/no): " -r
echo ""
if [[ ! $REPLY =~ ^[Yy][Ee][Ss]$ ]]; then
echo -e "${YELLOW}Aborted. You can manually run:${NC}"
echo " cd ${TERRAFORM_DIR}"
echo " terraform apply"
exit 0
fi
# Apply Terraform configuration
echo -e "${YELLOW}Applying Terraform configuration...${NC}"
if terraform apply -auto-approve; then
echo -e "${GREEN}✓ Terraform apply completed${NC}\n"
else
echo -e "${RED}✗ Terraform apply failed${NC}"
exit 1
fi
# Save outputs to JSON
echo -e "${YELLOW}Saving Terraform outputs...${NC}"
if terraform output -json > terraform_outputs.json; then
echo -e "${GREEN}✓ Outputs saved to: ${TERRAFORM_DIR}/terraform_outputs.json${NC}"
else
echo -e "${RED}✗ Failed to save outputs${NC}"
exit 1
fi
# Display important outputs
echo -e "\n${YELLOW}Terraform Outputs:${NC}"
terraform output
# Return to project root
cd "${PROJECT_DIR}"
echo -e "\n${GREEN}Terraform control-plane setup complete!${NC}"
echo -e "\n${YELLOW}Important:${NC}"
echo " - Service account credentials are in: ${TERRAFORM_DIR}/serviceaccount/"
echo " - Terraform outputs saved to: ${TERRAFORM_DIR}/terraform_outputs.json"
echo " - Keep these files secure and do not commit to version control!"
echo ""
echo -e "${YELLOW}Next step:${NC} Run 03_update-inventory.sh to configure Ansible inventory"

#!/bin/bash
# scripts/03_update-inventory.sh
# Update Ansible Inventory Script
# Copies the generated inventory snippet to the actual Ansible inventory location
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(dirname "${SCRIPT_DIR}")"
INVENTORY_SNIPPET="${SCRIPT_DIR}/ansible-inventory-snippet.yml"
ANSIBLE_INVENTORY="${PROJECT_DIR}/ansible/inventory/hosts.yml"
echo -e "${YELLOW}Updating Ansible Inventory${NC}\n"
# Check if snippet exists
if [ ! -f "${INVENTORY_SNIPPET}" ]; then
echo -e "${RED}Error: Inventory snippet not found at ${INVENTORY_SNIPPET}${NC}"
echo "Please run 01_deploy-vms.sh first to generate the inventory snippet."
exit 1
fi
# Backup existing inventory if it exists
if [ -f "${ANSIBLE_INVENTORY}" ]; then
BACKUP_FILE="${ANSIBLE_INVENTORY}.backup.$(date +%Y%m%d-%H%M%S)"
echo -e "${YELLOW}Backing up existing inventory to: ${BACKUP_FILE}${NC}"
cp "${ANSIBLE_INVENTORY}" "${BACKUP_FILE}"
echo -e "${GREEN}✓ Backup created${NC}\n"
fi
# Copy the snippet to the inventory location
echo -e "${YELLOW}Copying inventory snippet to ${ANSIBLE_INVENTORY}${NC}"
cp "${INVENTORY_SNIPPET}" "${ANSIBLE_INVENTORY}"
if [ $? -eq 0 ]; then
echo -e "${GREEN}✓ Inventory updated successfully${NC}\n"
else
echo -e "${RED}✗ Failed to update inventory${NC}"
exit 1
fi
# Display the new inventory
echo -e "${YELLOW}New Ansible Inventory:${NC}"
cat "${ANSIBLE_INVENTORY}"
echo -e "\n${GREEN}Inventory update complete!${NC}"
echo -e "\n${YELLOW}Next step:${NC} Run 04_test-connectivity.sh to verify SSH access"

#!/bin/bash
# scripts/04_test-connectivity.sh
# Test SSH Connectivity Script
# Tests SSH connectivity to all VMs defined in the Ansible inventory
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(dirname "${SCRIPT_DIR}")"
ANSIBLE_INVENTORY="${PROJECT_DIR}/ansible/inventory/hosts.yml"
SSH_PRIVATE_KEY="$HOME/.ssh/swa_multipass"
echo -e "${YELLOW}Testing SSH Connectivity to SWA VMs${NC}\n"
# Check if inventory exists
if [ ! -f "${ANSIBLE_INVENTORY}" ]; then
echo -e "${RED}Error: Ansible inventory not found at ${ANSIBLE_INVENTORY}${NC}"
echo "Please run 03_update-inventory.sh first."
exit 1
fi
# Check if SSH key exists
if [ ! -f "${SSH_PRIVATE_KEY}" ]; then
echo -e "${RED}Error: SSH private key not found at ${SSH_PRIVATE_KEY}${NC}"
exit 1
fi
# Test connectivity using Ansible ping module
echo -e "${YELLOW}Testing connectivity with Ansible ping module...${NC}\n"
cd "${PROJECT_DIR}"
# Use ANSIBLE_HOST_KEY_CHECKING=False to skip host key verification for new VMs
if ANSIBLE_HOST_KEY_CHECKING=False ansible all -i "${ANSIBLE_INVENTORY}" -m ping; then
echo -e "\n${GREEN}✓ All VMs are accessible via SSH!${NC}"
else
echo -e "\n${RED}✗ Some VMs are not accessible${NC}"
echo -e "\n${YELLOW}Troubleshooting tips:${NC}"
echo "1. Verify VMs are running: multipass list"
echo "2. Check SSH key permissions: chmod 600 ${SSH_PRIVATE_KEY}"
echo "3. Test manual SSH: ssh -i ${SSH_PRIVATE_KEY} -o StrictHostKeyChecking=no ubuntu@<VM_IP>"
exit 1
fi
# Test individual VM connectivity and add to known_hosts
echo -e "\n${YELLOW}Testing individual SSH connections and adding to known_hosts...${NC}\n"
# Extract IPs from vm-config.txt if it exists
VM_CONFIG="${SCRIPT_DIR}/vm-config.txt"
if [ -f "${VM_CONFIG}" ]; then
while IFS= read -r line; do
if [[ $line =~ IP\ Address:\ ([0-9.]+) ]]; then
IP="${BASH_REMATCH[1]}"
echo -e "${YELLOW}Testing SSH to ${IP}...${NC}"
# Remove old host key if exists and add new one
ssh-keygen -R ${IP} 2>/dev/null || true
if ssh -i "${SSH_PRIVATE_KEY}" -o ConnectTimeout=5 -o StrictHostKeyChecking=no -o UserKnownHostsFile=~/.ssh/known_hosts ubuntu@${IP} "echo 'Connection successful'" 2>/dev/null; then
echo -e "${GREEN}✓ SSH connection to ${IP} successful${NC}"
else
echo -e "${RED}✗ SSH connection to ${IP} failed${NC}"
fi
fi
done < "${VM_CONFIG}"
fi
echo -e "\n${GREEN}Connectivity test complete!${NC}"
echo -e "\n${YELLOW}Next step:${NC} Run 05_deploy-swa.sh to deploy SWA services"

#!/bin/bash
# scripts/05_deploy-swa.sh
# Deploy SWA Services Script
# Runs the Ansible playbook to deploy SWA server and agents
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(dirname "${SCRIPT_DIR}")"
ANSIBLE_INVENTORY="${PROJECT_DIR}/ansible/inventory/hosts.yml"
ANSIBLE_PLAYBOOK="${PROJECT_DIR}/ansible/deploy-swa.yml"
BINARIES_DIR="${PROJECT_DIR}/binaries"
echo -e "${YELLOW}Deploying SWA Services${NC}\n"
# Check if inventory exists
if [ ! -f "${ANSIBLE_INVENTORY}" ]; then
echo -e "${RED}Error: Ansible inventory not found at ${ANSIBLE_INVENTORY}${NC}"
echo "Please run 03_update-inventory.sh first."
exit 1
fi
# Check if playbook exists
if [ ! -f "${ANSIBLE_PLAYBOOK}" ]; then
echo -e "${RED}Error: Ansible playbook not found at ${ANSIBLE_PLAYBOOK}${NC}"
exit 1
fi
# Check if binaries exist
echo -e "${YELLOW}Checking for SWA binaries...${NC}"
if [ ! -d "${BINARIES_DIR}" ] || [ -z "$(ls -A ${BINARIES_DIR})" ]; then
echo -e "${RED}Error: SWA binaries not found in ${BINARIES_DIR}${NC}"
echo "Please download swa-server and swa-agent binaries and place them in the binaries/ directory."
echo "Download from: https://github.com/Venafi/swa-design-partners/releases"
exit 1
fi
echo -e "${GREEN}✓ Found binaries in ${BINARIES_DIR}${NC}"
ls -lh "${BINARIES_DIR}"
echo -e "\n${YELLOW}Running Ansible playbook...${NC}\n"
cd "${PROJECT_DIR}"
# Run the Ansible playbook with verbose output
# Disable host key checking for new VMs
if ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i "${ANSIBLE_INVENTORY}" "${ANSIBLE_PLAYBOOK}" -v; then
echo -e "\n${GREEN}✓ SWA deployment completed successfully!${NC}"
# Note about known issue
echo -e "\n${YELLOW}Note:${NC} If you see errors about '/tmp/trust_bundles/*.pem' not found,"
echo "this is a known issue and can be ignored. The deployment should still succeed."
else
EXIT_CODE=$?
echo -e "\n${RED}✗ SWA deployment failed${NC}"
echo -e "\n${YELLOW}Common issues:${NC}"
echo "1. Ensure VMs are accessible (run 04_test-connectivity.sh)"
echo "2. Check that binaries are present in binaries/ directory"
echo "3. Verify group_vars configurations in ansible/group_vars/"
echo "4. Check Ansible output above for specific error messages"
exit ${EXIT_CODE}
fi
echo -e "\n${YELLOW}Next step:${NC} Run 06_verify-deployment.sh to verify SWA services are running"

#!/bin/bash
# scripts/06_verify-deployment.sh
# Verify SWA Deployment Script
# Checks that SWA services are running on all VMs
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(dirname "${SCRIPT_DIR}")"
ANSIBLE_INVENTORY="${PROJECT_DIR}/ansible/inventory/hosts.yml"
echo -e "${YELLOW}Verifying SWA Deployment${NC}\n"
# Check if inventory exists
if [ ! -f "${ANSIBLE_INVENTORY}" ]; then
echo -e "${RED}Error: Ansible inventory not found at ${ANSIBLE_INVENTORY}${NC}"
exit 1
fi
cd "${PROJECT_DIR}"
# Disable host key checking for Ansible commands
export ANSIBLE_HOST_KEY_CHECKING=False
# Check SWA server status
echo -e "${YELLOW}Checking SWA Server status...${NC}"
if ansible swa_servers -i "${ANSIBLE_INVENTORY}" -a "sudo systemctl status swa-server" -o 2>&1 | grep -q "active (running)"; then
echo -e "${GREEN}✓ SWA Server is running${NC}\n"
else
echo -e "${RED}✗ SWA Server is not running${NC}\n"
fi
# Check SWA agents status
echo -e "${YELLOW}Checking SWA Agents status...${NC}"
if ansible swa_agents -i "${ANSIBLE_INVENTORY}" -a "sudo systemctl status swa-agent" -o 2>&1 | grep -q "active (running)"; then
echo -e "${GREEN}✓ SWA Agents are running${NC}\n"
else
echo -e "${RED}✗ Some SWA Agents are not running${NC}\n"
fi
# Check for agent socket
echo -e "${YELLOW}Checking for agent sockets...${NC}"
# || true keeps set -e from aborting the script when the socket is missing
ansible swa_agents -i "${ANSIBLE_INVENTORY}" -a "ls -la /run/swa-agent/api.sock" -o 2>&1 || true
# Display service status in detail
echo -e "\n${YELLOW}=== Detailed Service Status ===${NC}\n"
echo -e "${YELLOW}SWA Server:${NC}"
ansible swa_servers -i "${ANSIBLE_INVENTORY}" -a "sudo systemctl status swa-server --no-pager" -o || true
echo -e "\n${YELLOW}SWA Agents:${NC}"
ansible swa_agents -i "${ANSIBLE_INVENTORY}" -a "sudo systemctl status swa-agent --no-pager" -o || true
# Test JWT fetch from an agent
echo -e "\n${YELLOW}=== Testing JWT Token Fetch ===${NC}\n"
echo -e "${YELLOW}Attempting to fetch a test JWT from first agent...${NC}"
if ansible 'swa_agents[0]' -i "${ANSIBLE_INVENTORY}" -a "/opt/swa/bin/spire-agent api fetch jwt -audience test -socketPath /run/swa-agent/api.sock" -o 2>&1 | grep -q "token"; then
echo -e "${GREEN}✓ Successfully fetched JWT token!${NC}"
else
echo -e "${YELLOW}Note: JWT fetch may require workload registration first.${NC}"
fi
echo -e "\n${GREEN}Verification complete!${NC}"
echo -e "\n${YELLOW}Quick Reference Commands:${NC}"
echo " Check server logs: ansible swa_servers -i ansible/inventory/hosts.yml -a \"sudo journalctl -u swa-server -n 50\""
echo " Check agent logs: ansible swa_agents -i ansible/inventory/hosts.yml -a \"sudo journalctl -u swa-agent -n 50\""
echo " Restart server: ansible swa_servers -i ansible/inventory/hosts.yml -a \"sudo systemctl restart swa-server\""
echo " Restart agents: ansible swa_agents -i ansible/inventory/hosts.yml -a \"sudo systemctl restart swa-agent\""

#!/bin/bash
# scripts/07_destroy-vms.sh
# SWA Multipass VM Destruction Script
# This script destroys all SWA-related Multipass VMs
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Configuration
SERVER_VM="swa-server-01"
AGENT_VMS=("swa-agent-01" "swa-agent-02")
# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CONFIG_FILE="${SCRIPT_DIR}/vm-config.txt"
INVENTORY_SNIPPET="${SCRIPT_DIR}/ansible-inventory-snippet.yml"
echo -e "${YELLOW}SWA VM Destruction Script${NC}\n"
# Check if multipass is installed
if ! command -v multipass &> /dev/null; then
echo -e "${RED}Error: Multipass is not installed.${NC}"
exit 1
fi
# Function to check if VM exists
vm_exists() {
local vm_name=$1
multipass list | grep -q "^${vm_name}"
}
# Function to destroy a VM
destroy_vm() {
local vm_name=$1
if vm_exists "${vm_name}"; then
echo -e "${YELLOW}Stopping and deleting ${vm_name}...${NC}"
multipass stop "${vm_name}" 2>/dev/null || true
multipass delete "${vm_name}"
echo -e "${GREEN}✓ ${vm_name} deleted${NC}"
else
echo -e "${YELLOW}VM ${vm_name} does not exist. Skipping...${NC}"
fi
}
# Warning prompt
echo -e "${RED}WARNING: This will destroy the following VMs:${NC}"
echo " - ${SERVER_VM}"
for agent in "${AGENT_VMS[@]}"; do
echo " - ${agent}"
done
echo ""
read -p "Are you sure you want to continue? (yes/no): " -r
echo ""
if [[ ! $REPLY =~ ^[Yy][Ee][Ss]$ ]]; then
echo -e "${YELLOW}Aborted.${NC}"
exit 0
fi
# Destroy server VM
destroy_vm "${SERVER_VM}"
# Destroy agent VMs
for agent in "${AGENT_VMS[@]}"; do
destroy_vm "${agent}"
done
# Purge deleted VMs
echo -e "\n${YELLOW}Purging deleted VMs...${NC}"
multipass purge
echo -e "${GREEN}✓ All VMs purged${NC}"
# Clean up configuration files
echo -e "\n${YELLOW}Cleaning up configuration files...${NC}"
if [ -f "${CONFIG_FILE}" ]; then
rm -f "${CONFIG_FILE}"
echo -e "${GREEN}✓ Removed ${CONFIG_FILE}${NC}"
fi
if [ -f "${INVENTORY_SNIPPET}" ]; then
rm -f "${INVENTORY_SNIPPET}"
echo -e "${GREEN}✓ Removed ${INVENTORY_SNIPPET}${NC}"
fi
echo -e "\n${GREEN}All SWA VMs destroyed successfully!${NC}"
# Show remaining VMs if any
echo -e "\n${YELLOW}Remaining Multipass VMs:${NC}"
multipass list || echo "No VMs remaining"

#!/bin/bash
# scripts/08_destroy-terraform.sh
# Destroy Terraform Resources Script
# Destroys CyberArk Workload Identity Manager (Firefly) resources
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Script directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_DIR="$(dirname "${SCRIPT_DIR}")"
TERRAFORM_DIR="${PROJECT_DIR}/terraform/firefly-deployment"
echo -e "${YELLOW}Terraform Resource Destruction Script${NC}\n"
# Check if terraform is installed
if ! command -v terraform &> /dev/null; then
echo -e "${RED}Error: Terraform is not installed.${NC}"
exit 1
fi
# Check if Terraform directory exists
if [ ! -d "${TERRAFORM_DIR}" ]; then
echo -e "${RED}Error: Terraform directory not found at ${TERRAFORM_DIR}${NC}"
exit 1
fi
cd "${TERRAFORM_DIR}"
# Check if Terraform state exists
if [ ! -f "terraform.tfstate" ] && [ ! -f ".terraform/terraform.tfstate" ]; then
echo -e "${YELLOW}No Terraform state found. Nothing to destroy.${NC}"
exit 0
fi
# Warning prompt
echo -e "${RED}WARNING: This will destroy the following resources:${NC}"
echo " - CyberArk Workload Identity Manager (Firefly) configuration"
echo " - Sub CA Provider"
echo " - Issuance Policy"
echo " - Service Account"
echo " - Team (if created by Terraform)"
echo ""
echo -e "${RED}This action cannot be undone!${NC}"
echo ""
read -p "Are you sure you want to continue? Type 'yes' to confirm: " -r
echo ""
if [[ ! $REPLY == "yes" ]]; then
echo -e "${YELLOW}Aborted.${NC}"
exit 0
fi
# Run terraform destroy
echo -e "${YELLOW}Running terraform destroy...${NC}"
if terraform destroy -auto-approve; then
echo -e "${GREEN}✓ Terraform resources destroyed${NC}\n"
else
echo -e "${RED}✗ Terraform destroy failed${NC}"
echo "You may need to manually clean up resources in CyberArk."
exit 1
fi
# Clean up generated files
echo -e "${YELLOW}Cleaning up generated files...${NC}"
if [ -f "terraform_outputs.json" ]; then
rm -f terraform_outputs.json
echo -e "${GREEN}✓ Removed terraform_outputs.json${NC}"
fi
if [ -d "serviceaccount" ] && [ -n "$(ls -A serviceaccount 2>/dev/null)" ]; then
echo -e "${YELLOW}Found service account files. Remove them? (yes/no)${NC}"
read -r REMOVE_SA
if [[ $REMOVE_SA == "yes" ]]; then
rm -rf serviceaccount/*
echo -e "${GREEN}✓ Removed service account files${NC}"
else
echo -e "${YELLOW}Skipped removing service account files${NC}"
fi
fi
cd "${PROJECT_DIR}"
echo -e "\n${GREEN}Terraform resources destroyed successfully!${NC}"
echo -e "\n${YELLOW}Note:${NC} If you want to completely reset:"
echo " 1. Run ./scripts/07_destroy-vms.sh to destroy VMs"
echo " 2. Remove Terraform state files manually if needed"