dbatools has 3,500+ Pester tests spread across 707 test files. For years, we put off migrating from Pester v4 to v5 because the sheer volume felt impossible. The thought of manually converting hundreds of test files, each with their own quirks, edge cases, and complex SQL Server integration scenarios, was overwhelming.
But avoiding it wasn't making it go away. Pester v4 was deprecated, and we knew we'd have to migrate eventually. The question wasn't if, but when and how.
I love working with AI, and I'd been hearing great things about Claude Code's refactoring capabilities. When Claude Code CLI came out, I decided to give it a shot. I combined it with PowerShell automation and built a repeatable pattern for test migration.
The goal: Automate 80% of the migration work, leaving only the truly complex tests for manual cleanup.
The timeline: August 2-30, 2025
The result: Migration complete in one month
Before touching a single test file, I set up the infrastructure:
Created a `.aitools/` directory with:
- Conventions.md: 382-line document with strict Pester v5 standards
  - Static command naming (`$CommandName` variable pattern)
  - OTBS formatting (One True Brace Style)
  - BeforeAll/AfterAll block structure
  - Critical: Exact comment preservation rules
  - Mandatory verification checklist
- Template.md: Migration instructions for Claude Code
- Aider configuration: GPT-5 model settings, edit formats, streaming
- Claude permissions: Tool access configuration
I wrote `Invoke-AITool` to:
- Loop through test files in batches
- Call Claude Code CLI with consistent prompts
- Track what was converted vs. what failed
- Manage git commits
The beauty? Claude Code could see the entire test file, understand the patterns, and apply transformations systematically.
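Conceptually, the wrapper is just a loop around the CLI. Here is a minimal sketch of that idea - the function body, parameter names, prompt handling, and CLI flags are illustrative assumptions, not the real dbatools implementation:

```powershell
# Conceptual sketch only: parameters, prompt handling, and CLI flags are
# assumptions, not the actual dbatools Invoke-AITool.
function Invoke-AIToolSketch {
    param(
        [string[]]$TestFile,                          # test files to migrate
        [string]$PromptPath = ".aitools/Template.md"  # migration instructions
    )

    $prompt = Get-Content -Path $PromptPath -Raw

    $results = foreach ($file in $TestFile) {
        # Hand one file to the Claude Code CLI in non-interactive (print) mode;
        # exact flags may differ between CLI versions
        & claude -p "$prompt`n`nMigrate $file from Pester v4 to v5."

        [pscustomobject]@{
            File      = $file
            Succeeded = ($LASTEXITCODE -eq 0)
        }
    }

    # Commit what converted cleanly so every batch lands as a reviewable unit
    $converted = @($results | Where-Object Succeeded | ForEach-Object File)
    if ($converted.Count -gt 0) {
        git add $converted
        git commit -m "Refactor and enhance Pester tests batch"
    }

    # Return the ledger so failures can be routed to a human
    $results
}
```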
My side (potatoqualitee):
- 28 PRs merged in August
- 871 distinct files modified
- 691 test files migrated
- 21 "Refactor and enhance Pester tests batch" PRs in rapid succession
Pattern: Multiple PRs submitted within minutes (Aug 9, Aug 12-14), each touching 30-80+ test files
Andreas's side (andreasjordan):
- 29 PRs merged in August
- 415 distinct files modified
- 399 test files fixed/enhanced
- 20 test-focused PRs with precision fixes
I kicked off with infrastructure setup and began batch conversions:
- PR #9754 (Aug 8): Set up AI tooling
- PRs #9755, 9758-9762 (Aug 8-9): First wave of batch migrations
Meanwhile, Andreas had been working in parallel from the first days, tackling the hardest tests first:
- PR #9744 (Aug 3): Rewrite Backup-DbaDatabase test (too complex for AI)
I continued with automated batches:
- PRs #9768-9779 (Aug 11-14): "Refactor and enhance Pester tests batch"
- Multiple PRs submitted in rapid succession
- Each PR: 30-80+ test files
- Pattern-based conversions (BeforeAll/AfterAll, param blocks, assertions)
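To make those pattern-based conversions concrete, here is a hedged, simplified before/after of the kind of header rewrite involved (not an actual dbatools file):

```powershell
# Before - typical Pester v4 header: command name derived from the file name,
# setup code sitting at script scope, legacy assertion syntax
$CommandName = $MyInvocation.MyCommand.Name.Replace(".Tests.ps1", "")
Describe "$CommandName Unit Tests" -Tag "UnitTests" {
    Context "Validate parameters" {
        $params = (Get-Command Get-DbaDatabase).Parameters.Keys
        It "Should contain SqlInstance" {
            $params -contains "SqlInstance" | Should Be $true
        }
    }
}
```

And roughly what the converted Pester v5 version looks like - a param block with a static command name, setup moved into BeforeAll, dash-style assertions:

```powershell
# After - Pester v5: static command name in a param block, setup in BeforeAll
param(
    $ModuleName  = "dbatools",
    $CommandName = "Get-DbaDatabase"   # static, no $MyInvocation gymnastics
)

Describe $CommandName -Tag "UnitTests" {
    Context "Validate parameters" {
        BeforeAll {
            # Runs in the run phase; discovery-time variables are not reliably
            # visible here, so derive what the test needs inside the block
            $params = (Get-Command Get-DbaDatabase).Parameters.Keys
        }
        It "Should contain SqlInstance" {
            $params | Should -Contain "SqlInstance"
        }
    }
}
```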
Andreas was already fixing what the AI converted:
- PR #9780 (Aug 15): "Fix tests that needed extra care after migration"
Andreas identified systematic issues the AI created:
- 100 test files modified
- The problem: Pester v5 changed context isolation behavior
- AI-converted tests weren't accounting for this, causing test pollution
- Andreas systematically added EnableException resets in every relevant block
This is the kind of nuanced understanding AI doesn't have.
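For illustration only (the exact dbatools preference pattern and instance variables may differ), the fix boils down to re-establishing the default inside the blocks Pester v5 actually isolates:

```powershell
Describe "Get-DbaDatabase" -Tag "IntegrationTests" {
    BeforeAll {
        # Pester v5 separates discovery from run, so a default set at the top
        # of the file is not reliably in effect where the tests execute.
        # Re-apply it in the block that needs it (pattern is illustrative).
        $PSDefaultParameterValues["*-Dba*:EnableException"] = $true
    }

    AfterAll {
        # Remove the default again so it cannot pollute other test files
        $PSDefaultParameterValues.Remove("*-Dba*:EnableException")
    }

    Context "When connecting to an instance" {
        It "returns the system databases" {
            $results = Get-DbaDatabase -SqlInstance $TestConfig.instance1
            $results.Name | Should -Contain "master"
        }
    }
}
```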
- 99 test files modified
- Quote from Andreas: "A lot of changes, but now every test uses pester 5. Still a lot of TODO comments in the tests - but we will take care of that later."
- These were the stragglers, the complex tests, the ones with weird dependencies
- Quote: "That was hard work..." [screenshot showing all tests passing]
- One of the most critical and complex functions in dbatools
- Multiple backup types, complex restore scenarios, SQL Server version variations
- I tried to convert this one - I couldn't fix it. Andreas succeeded.
- 99 test files modified
- Removed code patterns no longer needed in Pester v5
- Fixed formatting inconsistencies introduced by AI conversion
- Standardized structures across tests
- The difference between "it works" and "it's maintainable"
My last major Pester work:
- PR #9781 (Aug 16): BeforeAll/AfterAll standardization
- PR #9803 (Aug 22): Documentation enhancement (Synopsis/Description improvements)
- PR #9810 (Aug 23): Parameter help standardization (100+ command files)
Andreas continued enhancing:
- 75 test files modified
- Performance optimization: Create master key once instead of repeatedly
- Reliability: Random numbers to prevent test interference
- Maintainability: Comments explaining WHY tests are skipped
This is the work of someone who knows these tests will be maintained for years.
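A hedged sketch of what those ideas look like in a test file (cmdlet choices, names, and instance variables are illustrative, not Andreas's actual changes):

```powershell
Describe "Some-DbaCommand" -Tag "IntegrationTests" {
    BeforeAll {
        # Reliability: a random suffix keeps re-runs and parallel runs from
        # colliding on the same object names
        $random = Get-Random
        $dbName = "dbatoolsci_testdb_$random"

        # Performance: do expensive one-time setup here - the same idea as
        # creating a master key once per file instead of in every It block
        $null = New-DbaDatabase -SqlInstance $TestConfig.instance1 -Name $dbName
    }

    AfterAll {
        # Always tear down what the file created, even after failed assertions
        $null = Remove-DbaDatabase -SqlInstance $TestConfig.instance1 -Database $dbName -Confirm:$false
    }

    It "works against the isolated test database" {
        # Maintainability: if this ever has to be skipped, the skip reason
        # belongs in a comment right here, not in tribal knowledge
        (Get-DbaDatabase -SqlInstance $TestConfig.instance1 -Database $dbName).Name |
            Should -Be $dbName
    }
}
```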
- Converting `It` blocks to proper Pester v5 syntax
- Adding BeforeAll/AfterAll structure
- Standardizing parameter validation patterns
- Creating `$CommandName` variables
- Adding param blocks to test headers
- 691 test files converted in days
- Consistent application of OTBS formatting
- Systematic changes across hundreds of files
- Work that would have taken weeks manually
- BeforeAll/AfterAll cleanup (PR #9781)
- Parameter help improvements (PR #9810 - 100+ files)
- Documentation enhancement (PR #9803)
- Backup-DbaDatabase (PR #9744 - Andreas rewrote it)
- Restore-DbaDatabase (PR #9806 - Andreas: "That was hard work...")
- These required complete rewrites, not syntax conversion
- PR #9795: EnableException pattern fixes (100 files)
- AI didn't understand that Pester v5 changed context isolation
- Required deep understanding of test pollution implications
- PR #9798: 99 "straggler" files the AI couldn't handle
- Weird dependencies, unusual patterns, SQL Server-specific scenarios
- PR #9825: Master key optimization, random number generation for test isolation
- This requires understanding why tests fail intermittently, not just what the syntax should be
- PR #9807: 99-file cleanup of formatting and redundancy
- AI converted it, but humans made it maintainable
Here's what made this successful:
A strict conventions document: 382 lines of specific, enforceable rules. Not vague guidelines - precise transformation instructions.
A PowerShell automation layer: `Invoke-AITool` to batch process files, manage git commits, and track success/failure.
A clear division of labor:
- AI handles pattern-based conversions (80% of work)
- Humans handle complexity, edge cases, and quality (20% of work)
- Both are essential
Realistic expectations:
- Don't expect AI to be perfect
- Plan for human review and enhancement
- Track what needs fixing (Andreas's TODO comments approach)
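For the "track what needs fixing" part, even a one-liner that counts the leftover migration TODO markers is enough to drive the burn-down (path and marker pattern here are assumptions):

```powershell
# List the test files with the most leftover TODO markers from the migration
Get-ChildItem -Path ./tests -Filter *.Tests.ps1 -Recurse |
    Select-String -Pattern 'TODO' |
    Group-Object -Property Path |
    Sort-Object -Property Count -Descending |
    Select-Object -Property Count, Name
```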
Input tokens: ~120M (estimate based on 691 files × average 5,000 tokens × 3 iterations)
Output tokens: ~15M (estimate based on 691 files × average 7,000 tokens output)
Approximate cost:
- Input: $3.00 per million tokens × 120M = $360
- Output: $15.00 per million tokens × 15M = $225
- Total: ~$585 for AI-assisted migration
- AI as a force multiplier: 691 files converted in days vs. weeks
- Systematic approach: Clear conventions + automation = consistency
- Parallel collaboration: AI batch conversion + human expertise in parallel
- Reusable pattern: When Pester v6 comes out, we won't avoid it for years
- AI limitations are predictable: Complex logic, context changes, performance patterns
- Human expertise is irreplaceable: Andreas fixed things I didn't even know were broken
- Quality matters as much as speed: PR #9807 (99-file cleanup) made it maintainable
- Nothing, that was awesome.
When Pester v6 arrives, we won't avoid it. We have a proven pattern:
- Update conventions document with Pester v6 changes
- Run `Invoke-AITool` batch conversions
- Humans handle edge cases and quality
- Ship in weeks, not years
The real win: We transformed "impossible" into "systematic."
AI doesn't replace humans. It makes humans more effective.
Claude Code gave us velocity - 691 files converted in days. Andreas gave us quality - context isolation fixes, performance optimization, maintainability.
Together, we went from Pester v4 to v5 in one month - a migration we avoided for years.
The best part? It's a repeatable pattern for any large-scale refactoring project.
- PR #9754 - AI Automation Setup
- PR #9795 - EnableException Fixes (100 files)
- PR #9798 - Last Pester 4 → 5 Migration (99 files)
- PR #9806 - Restore-DbaDatabase Rewrite
- PR #9807 - Big Cleanup (99 files)
- PR #9825 - Test Enhancements (75 files)
Full PR list: 28 PRs by potatoqualitee + 29 PRs by andreasjordan in August 2025
- Claude Code CLI (Anthropic)
- Aider (for some initial testing)
- PowerShell (automation layer)
- Git (version control, PR workflow)
Q: Could AI have done it alone?
A: No. 100-file EnableException fixes, complex test rewrites, performance optimizations - all required human expertise.

Q: Could you have done it without AI?
A: Technically yes, but it would have taken months and we'd been avoiding it for years.

Q: What percentage was really automated?
A: 80% of the volume (691 files converted), but 20% of the complexity (199 files needing significant cleanup).

Q: Would you use AI for other refactoring tasks?
A: Absolutely. We already did - I used Claude Code for parameter help standardization (PR #9810, 100+ files) and documentation improvements (PR #9803).

Q: What's the one thing you wish you'd known before starting?
A: That Andreas would end up fixing 199 files after AI conversion. I would have categorized tests upfront to identify "human-first" candidates.
Session presented at PSConf.EU 2025
Chrissy LeMaire (@potatoqualitee)
With special thanks to Andreas Jordan (@andreasjordan) for making the migration actually work