
@potatoqualitee
Created October 6, 2025 11:31
Pester v4 to v5 to v6

The Problem We Avoided for Years

dbatools has 3,500+ Pester tests spread across 707 test files. For years, we put off migrating from Pester v4 to v5 because the sheer volume felt impossible. The thought of manually converting hundreds of test files, each with their own quirks, edge cases, and complex SQL Server integration scenarios, was overwhelming.

But avoiding it wasn't making it go away. Pester v4 was deprecated, and we knew we'd have to migrate eventually. The question wasn't if, but when and how.


Why August 2025 Was Different

I love working with AI, and I'd been hearing great things about Claude Code's refactoring capabilities. When Claude Code CLI came out, I decided to give it a shot. I combined it with PowerShell automation and built a repeatable pattern for test migration.

The goal: Automate 80% of the migration work, leaving only the truly complex tests for manual cleanup.

The timeline: August 2-30, 2025
The result: Migration complete in one month


The Setup: Building the AI Assembly Line

Before touching a single test file, I set up the infrastructure:

1. AI Tool Configuration (PR #9754, Aug 8)

Created .aitools/ directory with:

  • Conventions.md: 382-line document with strict Pester v5 standards

    • Static command naming ($CommandName variable pattern)
    • OTBS formatting (One True Brace Style)
    • BeforeAll/AfterAll block structure
    • Critical: Exact comment preservation rules
    • Mandatory verification checklist
  • Template.md: Migration instructions for Claude Code

  • Aider configuration: GPT-5 model settings, edit formats, streaming

  • Claude permissions: Tool access configuration

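To make the conventions concrete: the sketch below shows the kind of test header the Conventions.md rules prescribe - a static `$CommandName`, a `param` block, and setup hoisted into `BeforeAll`. The command and parameter names are illustrative, not copied from an actual dbatools test.

```powershell
#Requires -Module Pester
param(
    $ModuleName  = "dbatools",
    $CommandName = "Get-DbaDatabase"  # static command name, per the conventions
)

Describe $CommandName -Tag UnitTests {
    Context "Parameter validation" {
        BeforeAll {
            # In Pester v5, shared setup runs once per block, not inline in Describe
            $command = Get-Command -Name $CommandName
        }

        It "Has the expected parameters" {
            $command | Should -HaveParameter SqlInstance
        }

        AfterAll {
            # Teardown mirrors BeforeAll; cleanup lives here, not inline in tests
        }
    }
}
```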
2. PowerShell Automation

I wrote Invoke-AITool to:

  • Loop through test files in batches
  • Call Claude Code CLI with consistent prompts
  • Track what was converted vs. what failed
  • Manage git commits

The beauty? Claude Code could see the entire test file, understand the patterns, and apply transformations systematically.
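The talk notes don't include the source of Invoke-AITool, but a simplified sketch of the batching loop might look like this (function shape, prompt file path, and tracking logic are illustrative, not the actual dbatools tooling):

```powershell
function Invoke-AITool {
    [CmdletBinding()]
    param(
        [string]$Path       = ".\tests",
        [int]$BatchSize     = 10,
        [string]$PromptFile = ".aitools\Template.md"
    )
    $results = @()
    $files = Get-ChildItem -Path $Path -Filter "*.Tests.ps1"

    for ($i = 0; $i -lt $files.Count; $i += $BatchSize) {
        $batch = $files[$i..([Math]::Min($i + $BatchSize - 1, $files.Count - 1))]
        foreach ($file in $batch) {
            # Call Claude Code CLI non-interactively with a consistent prompt
            claude -p (Get-Content -Path $PromptFile -Raw) $file.FullName

            # Track converted vs. failed by re-running the migrated test
            $passed = (Invoke-Pester -Path $file.FullName -PassThru).FailedCount -eq 0
            $results += [pscustomobject]@{ File = $file.Name; Converted = $passed }
        }
        # Commit per batch so a bad conversion is easy to bisect and revert
        git add $Path
        git commit -m "Refactor and enhance Pester tests batch"
    }
    $results
}
```

Committing per batch rather than per file keeps the PR history matching the "batch" PRs described below.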


The Migration: By the Numbers

My Work (potatoqualitee - AI-assisted)

  • 28 PRs merged in August
  • 871 distinct files modified
  • 691 test files migrated
  • 21 "Refactor and enhance Pester tests batch" PRs in rapid succession

Pattern: Multiple PRs submitted within minutes (Aug 9, Aug 12-14), each touching 30-80+ test files

Andreas's Work (andreasjordan - Manual cleanup)

  • 29 PRs merged in August
  • 415 distinct files modified
  • 399 test files fixed/enhanced
  • 20 test-focused PRs with precision fixes

The Timeline: A Collaboration Story

Phase 1: The AI Blitz (Aug 2-8)

I kicked off with infrastructure setup and began batch conversions:

  • PR #9754 (Aug 8): Set up AI tooling
  • PRs #9755, 9758-9762 (Aug 8-9): First wave of batch migrations

Meanwhile, Andreas started working in parallel (same day!), tackling the hardest tests first:

  • PR #9744 (Aug 3): Rewrite Backup-DbaDatabase test (too complex for AI)

Phase 2: Parallel Execution (Aug 9-15)

I continued with automated batches:

  • PRs #9768-9779 (Aug 11-14): "Refactor and enhance Pester tests batch"
    • Multiple PRs submitted in rapid succession
    • Each PR: 30-80+ test files
    • Pattern-based conversions (BeforeAll/AfterAll, param blocks, assertions)
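As a concrete example of these pattern-based conversions, a typical v4 test with setup inline in `Describe` becomes v5 code with the setup hoisted into `BeforeAll` and dashed assertion operators (simplified, illustrative test):

```powershell
# Pester v4: setup code could live directly inside Describe
Describe "Get-DbaDatabase" {
    $server = Connect-DbaInstance -SqlInstance localhost
    It "returns databases" {
        (Get-DbaDatabase -SqlInstance $server).Count | Should BeGreaterThan 0
    }
}

# Pester v5: discovery and run are separate phases, so setup moves to BeforeAll,
# and Should uses parameter syntax (-BeGreaterThan instead of BeGreaterThan)
Describe "Get-DbaDatabase" {
    BeforeAll {
        $server = Connect-DbaInstance -SqlInstance localhost
    }
    It "returns databases" {
        (Get-DbaDatabase -SqlInstance $server).Count | Should -BeGreaterThan 0
    }
}
```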

Andreas was already fixing what the AI converted:

  • PR #9780 (Aug 15): "Fix tests that needed extra care after migration"

Phase 3: The Big Cleanup (Aug 15-23)

Andreas identified systematic issues the AI created:

PR #9795 (Aug 20): EnableException Pattern Fixes

  • 100 test files modified
  • The problem: Pester v5 changed context isolation behavior
  • AI-converted tests weren't accounting for this, causing test pollution
  • Andreas systematically added EnableException resets in every relevant block
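The reset pattern, roughly sketched below: because v5 isolates setup and teardown into their own scopes, a setting enabled in one block has to be explicitly removed so it cannot leak into the next container. The `$PSDefaultParameterValues` approach and the wildcard key shown here are an assumption about how the fix was applied, not lifted from the PR.

```powershell
Describe "Backup-DbaDatabase" {
    BeforeAll {
        # Fail loudly inside setup so broken fixtures surface immediately
        $PSDefaultParameterValues["*-Dba*:EnableException"] = $true
    }

    It "backs up the database" {
        # test body elided
    }

    AfterAll {
        # Reset so the setting cannot pollute the next Describe block
        $PSDefaultParameterValues.Remove("*-Dba*:EnableException")
    }
}
```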

This is the kind of nuanced understanding AI doesn't have.

PR #9798 (Aug 20): Move the last tests from Pester 4 to Pester 5

  • 99 test files modified
  • Quote from Andreas: "A lot of changes, but now every test uses pester 5. Still a lot of TODO comments in the tests - but we will take care of that later."
  • These were the stragglers, the complex tests, the ones with weird dependencies

PR #9806 (Aug 23): Rewrite Restore-DbaDatabase

  • Quote: "That was hard work..." [screenshot showing all tests passing]
  • One of the most critical and complex functions in dbatools
  • Multiple backup types, complex restore scenarios, SQL Server version variations
  • I tried to convert this one - I couldn't fix it. Andreas succeeded.

PR #9807 (Aug 23): A very big pester 5 cleanup

  • 99 test files modified
  • Removed code patterns no longer needed in Pester v5
  • Fixed formatting inconsistencies introduced by AI conversion
  • Standardized structures across tests
  • The difference between "it works" and "it's maintainable"

My last major Pester work:

  • PR #9781 (Aug 16): BeforeAll/AfterAll standardization
  • PR #9803 (Aug 22): Documentation enhancement (Synopsis/Description improvements)
  • PR #9810 (Aug 23): Parameter help standardization (100+ command files)

Phase 4: Perfection (Aug 24-30)

Andreas continued enhancing:

PR #9825 (Aug 29): Enhance tests

  • 75 test files modified
  • Performance optimization: Create master key once instead of repeatedly
  • Reliability: Random numbers to prevent test interference
  • Maintainability: Comments explaining WHY tests are skipped
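Those reliability patterns can be sketched in one setup block - a random suffix to keep re-runs and parallel tests from colliding on object names, and expensive fixtures created once instead of per test (cmdlet parameters and names here are illustrative):

```powershell
BeforeAll {
    # A random suffix prevents name collisions between test runs
    $random = Get-Random
    $dbName = "dbatoolsci_restore_$random"

    # Create expensive fixtures once per container, not once per It -
    # e.g. the master key that the encrypted databases below depend on
    $null = New-DbaDbMasterKey -SqlInstance $server -SecurePassword $password -Confirm:$false
}

AfterAll {
    # Always clean up the uniquely named objects this run created
    Remove-DbaDatabase -SqlInstance $server -Database $dbName -Confirm:$false
}
```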

This is the work of someone who knows these tests will be maintained for years.


What Claude Excelled At

✅ Pattern Recognition and Transformation

  • Converting It blocks to proper Pester v5 syntax
  • Adding BeforeAll/AfterAll structure
  • Standardizing parameter validation patterns
  • Creating $CommandName variables
  • Adding param blocks to test headers

✅ Batch Processing at Scale

  • 691 test files converted in days
  • Consistent application of OTBS formatting
  • Systematic changes across hundreds of files
  • Work that would have taken weeks manually

✅ Structure and Standardization

  • BeforeAll/AfterAll cleanup (PR #9781)
  • Parameter help improvements (PR #9810 - 100+ files)
  • Documentation enhancement (PR #9803)

Where Claude Struggled (And Humans Saved the Day)

❌ Complex Test Logic

  • Backup-DbaDatabase (PR #9744 - Andreas rewrote it)
  • Restore-DbaDatabase (PR #9806 - Andreas: "That was hard work...")
  • These required complete rewrites, not syntax conversion

❌ Pester v5 Context Behavior Changes

  • PR #9795: EnableException pattern fixes (100 files)
  • AI didn't understand that Pester v5 changed context isolation
  • Required deep understanding of test pollution implications

❌ Edge Cases and Interdependencies

  • PR #9798: 99 "straggler" files the AI couldn't handle
  • Weird dependencies, unusual patterns, SQL Server-specific scenarios

❌ Performance and Reliability Patterns

  • PR #9825: Master key optimization, random number generation for test isolation
  • This requires understanding why tests fail intermittently, not just what the syntax should be

❌ Code Quality and Maintainability

  • PR #9807: 99-file cleanup of formatting and redundancy
  • AI converted it, but humans made it maintainable

The Reusable Pattern

Here's what made this successful:

1. Clear Conventions Document

382 lines of specific, enforceable rules. Not vague guidelines - precise transformation instructions.

2. PowerShell Automation Layer

Invoke-AITool to batch process files, manage git commits, and track success/failure.

3. Human-AI Collaboration Strategy

  • AI handles pattern-based conversions (80% of work)
  • Humans handle complexity, edge cases, and quality (20% of work)
  • Both are essential

4. Systematic Cleanup

  • Don't expect AI to be perfect
  • Plan for human review and enhancement
  • Track what needs fixing (Andreas's TODO comments approach)

Cost Analysis

Claude Code CLI (Sonnet 3.5)

Input Tokens: ~120M tokens (estimate based on 691 files × average 5000 tokens × 3 iterations)
Output Tokens: ~15M tokens (estimate based on 691 files × average 7000 tokens output)

Approximate cost:

  • Input: $3.00 per million tokens × 120M = $360
  • Output: $15.00 per million tokens × 15M = $225
  • Total: ~$585 for AI-assisted migration

Lessons Learned

What Worked Brilliantly

  1. AI as a force multiplier: 691 files converted in days vs. weeks
  2. Systematic approach: Clear conventions + automation = consistency
  3. Parallel collaboration: AI batch conversion + human expertise in parallel
  4. Reusable pattern: When Pester v6 comes out, we won't avoid it for years

What Surprised Me

  1. AI limitations are predictable: Complex logic, context changes, performance patterns
  2. Human expertise is irreplaceable: Andreas fixed things I didn't even know were broken
  3. Quality matters as much as speed: PR #9807 (99-file cleanup) made it maintainable

What I'd Do Differently

  1. Nothing, that was awesome.

The Future: When Pester v6 Arrives

We won't avoid it. We have a proven pattern:

  1. Update conventions document with Pester v6 changes
  2. Run Invoke-AITool batch conversions
  3. Humans handle edge cases and quality
  4. Ship in weeks, not years

The real win: We transformed "impossible" into "systematic."


Key Takeaway for PSConf.EU

AI doesn't replace humans. It makes humans more effective.

Claude Code gave us velocity - 691 files converted in days. Andreas gave us quality - context isolation fixes, performance optimization, maintainability.

Together, we went from Pester v4 to v5 in one month - a migration we avoided for years.

The best part? It's a repeatable pattern for any large-scale refactoring project.


Resources

GitHub PRs Referenced

  • PR #9754 - AI Automation Setup
  • PR #9795 - EnableException Fixes (100 files)
  • PR #9798 - Last Pester 4 → 5 Migration (99 files)
  • PR #9806 - Restore-DbaDatabase Rewrite
  • PR #9807 - Big Cleanup (99 files)
  • PR #9825 - Test Enhancements (75 files)

Full PR list: 28 PRs by potatoqualitee + 29 PRs by andreasjordan in August 2025

Tools Used

  • Claude Code CLI (Anthropic)
  • Aider (for some initial testing)
  • PowerShell (automation layer)
  • Git (version control, PR workflow)

Q&A Preparation

Q: Could AI have done it alone? A: No. 100-file EnableException fixes, complex test rewrites, performance optimizations - all required human expertise.

Q: Could you have done it without AI? A: Technically yes, but it would have taken months and we'd been avoiding it for years.

Q: What percentage was really automated? A: About 80% of the volume (691 files converted by AI), but the remaining 20% carried most of the complexity (199 files needed significant human cleanup).

Q: Would you use AI for other refactoring tasks? A: Absolutely. We already did - I used Claude Code for parameter help standardization (PR #9810, 100+ files) and documentation improvements (PR #9803).

Q: What's the one thing you wish you'd known before starting? A: That Andreas would end up fixing 199 files after AI conversion. I would have categorized tests upfront to identify "human-first" candidates.


Session presented at PSConf.EU 2025 Chrissy LeMaire (@potatoqualitee) With special thanks to Andreas Jordan (@andreasjordan) for making the migration actually work
