For larger refactors with lots of file changes, we can use the Auto Split feature, which uses AI to group changes into logical commits ("100 file changes → 12 commits"). Under the hood it combines recursive algorithms with AI to find the best possible split for a given set of changes.
Because local models like Apple Foundation Models (AFM) are constrained to tiny context windows (~4K tokens), the system is built as a highly parallel, adaptive "Divide and Conquer" pipeline.
Phase 1: Static Graph Pre-Clustering (Zero AI Cost)
Before invoking AI, Omni builds a deterministic relationship graph. By parsing ASTs locally and regex-matching import statements (e.g., detecting that UserProfile.tsx imports useUserData.ts), it groups obvious dependencies and automatically pairs test files with their implementations (e.g., grouping UserService.ts with UserService.test.ts). This near-instant step resolves roughly 80% of file relationships for free.
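The pre-clustering step described above can be sketched as a union-find pass over two kinds of edges: local import references and test-to-implementation pairing. This is a minimal illustration, not Omni's actual parser; the regex, file names, and helper functions are all assumptions.

```python
# Sketch of static graph pre-clustering: union-find over import edges
# and test/implementation pairs. Illustrative only -- a real version
# would parse ASTs rather than rely on a single regex.
import re
from collections import defaultdict

# Matches relative imports like: import { x } from './useUserData'
IMPORT_RE = re.compile(r"""import\s+.*?from\s+['"](\.{1,2}/[\w/.-]+)['"]""")

def find(parent, x):
    # Find the cluster root, with path compression.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(parent, a, b):
    parent[find(parent, a)] = find(parent, b)

def precluster(files):
    """files: dict of path -> source text. Returns clusters of paths."""
    parent = {p: p for p in files}
    for path, src in files.items():
        # Edge type 1: local import statements.
        for target in IMPORT_RE.findall(src):
            stem = target.rsplit("/", 1)[-1]
            for other in files:
                if other != path and stem in other:
                    union(parent, path, other)
        # Edge type 2: pair test files with their implementations.
        if ".test." in path:
            impl = path.replace(".test.", ".")
            if impl in files:
                union(parent, path, impl)
    clusters = defaultdict(list)
    for p in files:
        clusters[find(parent, p)].append(p)
    return [sorted(c) for c in clusters.values()]
```

Because no model is invoked, this pass costs microseconds even for hundreds of files, which is what makes it worth running before any AI step.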
Phase 2: AI Refinement (The Micro-Pipeline)
For the ambiguous files that remain, Omni creates a lightweight "File Dossier" for each (path, diff summary, change type, domain) and feeds them to the LLM. To bypass the 4K-token limit of local models, Omni runs an Ultra-Lean Pipeline:
Classify: Run many parallel micro-calls to classify each file (feature, fix, chore) using heavily structured, non-natural-language prompts (e.g., feat user cache-user-fetch).
Cluster: Batch 25 tiny summaries per call and let the AI determine which files belong together semantically.
Merge: Reconcile the resulting groups across batches.
Adaptive Scaling for Cloud Models
While built to survive on 4K-token local models, the architecture detects capabilities dynamically. If a user provides an API key for a massive-context model like Gemini 3.1 Pro or GPT-5.2, Omni pivots from the "Micro-Pipeline" strategy to a "Single-Pass" strategy — dumping all 47 file dossiers into a single call for superior, instantaneous relationship mapping.
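A capability check like the one described might look like the sketch below. The token-budget arithmetic and the per-dossier estimate are illustrative assumptions; only the two strategy names come from the text above.

```python
# Sketch of the strategy switch: single-pass when every dossier fits
# in one prompt, micro-pipeline otherwise. Numbers are assumptions.
def choose_strategy(context_window_tokens: int, dossier_count: int,
                    tokens_per_dossier: int = 80) -> str:
    # Reserve headroom for the instruction prompt and the model's reply.
    overhead = 1_000
    needed = dossier_count * tokens_per_dossier + overhead
    # A single pass lets the model see all relationships at once;
    # otherwise fall back to the batched micro-pipeline.
    return "single-pass" if needed <= context_window_tokens else "micro-pipeline"
```

With these example numbers, 47 dossiers overflow a 4K-token local model but fit trivially into a large-context cloud model, which is exactly the pivot described above.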