When dealing with Dependabot PRs in CI/CD pipelines, teams face a choice: should automated dependency update PRs trigger deployments, or should they skip deployment and only run tests?
This document compares two approaches:
- Approach A: Skip Deployment - Run build/lint/test checks but skip deployment for Dependabot PRs
- Approach B: Deploy-and-Verify - Deploy to preview environment and run E2E tests before auto-merge
A workflow for Approach A runs checks on every PR but gates only the deploy step, so Dependabot PRs still build, lint, and test:

```yaml
name: Deploy to Cloudflare Pages

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
      - run: npm ci
      - run: npm run lint
      - run: npm run build
      - name: Deploy
        # Skip only the deployment step for Dependabot PRs;
        # the lint and build steps above still run for them
        if: github.actor != 'dependabot[bot]'
        uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          command: pages deploy dist --project-name=my-project
```

Pros:
- Simple to implement - No additional infrastructure needed
- Fast CI runs - Skips deployment step entirely
- Lower costs - Fewer deployments = reduced resource usage
- No secret management complexity - Avoids Dependabot secret access issues
- Sufficient for many projects - Build + unit tests catch most issues
Cons:
- No runtime verification - Can't catch issues that only manifest in deployed environment
- Misses integration issues - Dependencies might break API integrations, external services
- No production-like testing - Environment differences might hide bugs
- Less confidence in auto-merge - Relying solely on build-time checks
Approach A fits best for:
- Small to medium projects
- Applications with comprehensive unit/integration test coverage
- Teams with limited CI/CD resources
- Projects where deployment preview costs are prohibitive
- Libraries/packages (no deployment needed)
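For the library case in the last bullet, Approach A reduces to a plain checks-only workflow with no deploy job at all. A minimal sketch (script names are assumptions):

```yaml
# Approach A for a library: run checks on every PR, nothing to deploy,
# so no Dependabot conditionals are needed at all
name: CI

on:
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npm test
```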
A workflow for Approach B deploys every PR to a preview environment, verifies the deployment, and only then lets eligible Dependabot PRs auto-merge:

```yaml
name: Deploy and Test

on:
  pull_request:
    branches: [main]

jobs:
  deploy-preview:
    runs-on: ubuntu-latest
    outputs:
      deployment-url: ${{ steps.deploy.outputs.deployment-url }}
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
      - run: npm ci
      - run: npm run build
      - name: Deploy to Preview
        id: deploy
        uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          command: pages deploy dist --project-name=my-project --branch=${{ github.head_ref }}

  e2e-tests:
    needs: deploy-preview
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
      - run: npm ci
      - name: Run E2E tests
        run: npm run test:e2e
        env:
          BASE_URL: ${{ needs.deploy-preview.outputs.deployment-url }}
      - name: Run smoke tests
        run: npm run test:smoke
        env:
          BASE_URL: ${{ needs.deploy-preview.outputs.deployment-url }}

  verify-deployment:
    needs: deploy-preview
    runs-on: ubuntu-latest
    steps:
      - name: Check deployment health
        run: |
          URL="${{ needs.deploy-preview.outputs.deployment-url }}"
          for i in {1..30}; do
            STATUS=$(curl -o /dev/null -s -w "%{http_code}" "$URL")
            if [ "$STATUS" -eq 200 ]; then
              echo "Deployment is healthy!"
              exit 0
            fi
            echo "Attempt $i: Got status $STATUS, retrying..."
            sleep 10
          done
          echo "Deployment failed health check"
          exit 1

  auto-merge:
    needs: [deploy-preview, e2e-tests, verify-deployment]
    runs-on: ubuntu-latest
    if: github.actor == 'dependabot[bot]'
    permissions:
      contents: write
      pull-requests: write
    steps:
      - name: Dependabot metadata
        id: metadata
        uses: dependabot/fetch-metadata@v2
      - name: Enable auto-merge
        # Only auto-merge patch and minor bumps; majors wait for human review
        if: |
          steps.metadata.outputs.update-type == 'version-update:semver-patch' ||
          steps.metadata.outputs.update-type == 'version-update:semver-minor'
        run: gh pr merge --auto --merge "${{ github.event.pull_request.html_url }}"
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

Pros:
- Catches runtime issues - Verifies actual deployment works before merge
- Production-like testing - E2E tests run against deployed preview
- Integration verification - Tests external APIs, databases, services
- Deployment validation - Confirms deployment process itself works
- Higher confidence - More assurance before auto-merging
- Catches environment-specific bugs - Issues that only appear when deployed
- Tests dependency changes holistically - Not just build-time compatibility
Cons:
- Complexity - Requires preview deployment infrastructure
- Longer CI times - Deployment + E2E tests take significantly longer
- Higher costs - Every PR creates a deployment (compute + bandwidth)
- Secret management - Need to handle Dependabot secret access (see below)
- Cleanup required - Preview deployments might need cleanup
- More maintenance - E2E tests need upkeep, can be flaky
- Requires good test coverage - Only valuable if E2E tests are comprehensive
For Approach B, you need to grant Dependabot access to the deployment secrets. Workflows triggered by Dependabot read from a separate Dependabot secret store, so the secret must be added there as well:

```sh
# Add the secret to the Dependabot secret store
gh secret set CLOUDFLARE_API_TOKEN --app dependabot --body "your-token"
```

Or in the GitHub UI: Settings → Secrets and variables → Dependabot.

Security Note: Dependabot has limited permissions by design. Only grant the secrets Dependabot actually needs, and consider using scoped, least-privilege tokens.
Approach B fits best for:
- Production-critical applications
- Applications with complex integrations
- Teams with mature CI/CD pipelines
- Projects with comprehensive E2E test suites
- Applications where downtime is costly
- Organizations practicing continuous deployment
- SaaS products with paying customers
| Factor | Skip Deployment | Deploy-and-Verify |
|---|---|---|
| Setup Complexity | Low | High |
| CI Duration | 2-5 min | 10-30 min |
| Cost | Low | Medium-High |
| Confidence Level | Medium | High |
| Runtime Issue Detection | No | Yes |
| Maintenance Overhead | Low | Medium-High |
| Secret Management | Simple | Complex |
| False Positives | Low | Medium (flaky E2E) |
| Best for Team Size | Small-Medium | Medium-Large |
Industry practice varies by organization type:
- Startups/Small Teams: Typically use Approach A (skip deployment)
- Medium Companies: Mix of both, often start with A and graduate to B
- Large Enterprises: Increasingly adopt Approach B for critical services
- Open Source: Primarily Approach A due to cost constraints
Modern platforms make Approach B increasingly accessible:
- Vercel: Automatic preview deployments for all PRs (including Dependabot)
- Netlify: Deploy Previews with E2E test integration
- Cloudflare Pages: Branch previews support automated testing
- GitHub Actions: Built-in support for deployment workflows
The two approaches can also be combined:
- Hybrid Approach: Skip deployment for patch updates, deploy-and-verify for minor updates
- Selective Testing: Deploy for all PRs, but run full E2E only for framework updates
- Staged Rollout: Approach A initially, migrate to B as project matures
- Risk-Based: Use B for backend/API, A for frontend-only projects
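The hybrid strategy above can be sketched as a metadata job that gates the expensive deploy-and-verify path on the update type. A minimal sketch, assuming the same Cloudflare Pages setup as the Approach B workflow (job names are illustrative):

```yaml
# Hybrid sketch: fetch the Dependabot update type first, then run the
# deploy-and-verify path only for non-patch updates.
jobs:
  metadata:
    runs-on: ubuntu-latest
    if: github.actor == 'dependabot[bot]'
    outputs:
      update-type: ${{ steps.meta.outputs.update-type }}
    steps:
      - name: Dependabot metadata
        id: meta
        uses: dependabot/fetch-metadata@v2

  deploy-preview:
    needs: metadata
    # Patch bumps skip deployment and rely on build/lint/test alone
    if: needs.metadata.outputs.update-type != 'version-update:semver-patch'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ...build and deploy as in the Approach B workflow...
```

One caveat if you use branch protection: jobs skipped via `if:` generally still satisfy required status checks, so verify the merge gating behaves as you expect.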
Choose Approach A if:
- Just starting out or small team
- Limited CI/CD budget
- Strong unit/integration test coverage
- Low-risk application
- Infrequent deployments
- Primarily frontend changes
Choose Approach B if:
- Production-critical application
- Complex external integrations
- Auto-merging dependencies
- Large user base
- Budget for CI/CD infrastructure
- Team capacity for E2E test maintenance
- Previous incidents from dependency updates
A pragmatic migration path from A to B:
1. Start with Approach A - Get basic CI working
2. Add unit tests - Increase coverage to 80%+
3. Add integration tests - Test API contracts
4. Implement preview deployments - Manual verification first
5. Add basic E2E tests - Critical user flows only
6. Automate verification - Move to Approach B
7. Expand E2E coverage - Add more test scenarios
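Stage 4 of this path can be sketched as a standalone workflow that publishes a preview URL for manual review without gating the PR on automated E2E checks (project name and build output directory are assumptions):

```yaml
# Stage 4 sketch: deploy a preview for manual review; no E2E gating yet
name: Preview Deploy

on:
  pull_request:
    branches: [main]

jobs:
  preview:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: npm ci
      - run: npm run build
      - name: Deploy to Preview
        id: deploy
        uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
          command: pages deploy dist --project-name=my-project --branch=${{ github.head_ref }}
      - name: Print preview URL
        run: echo "Preview ready for manual review: ${{ steps.deploy.outputs.deployment-url }}"
```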
Both approaches are valid depending on your context:
- Approach A is pragmatic and sufficient for most projects. It's what many successful companies use, and it's not "cutting corners"; it's making a reasonable trade-off.
- Approach B is industry best practice for high-stakes applications. If you're building critical infrastructure, handling sensitive data, or have thousands of users, the extra investment pays dividends.
The key is matching your strategy to your risk profile, resources, and team maturity. Many teams successfully operate with Approach A and only adopt Approach B after experiencing issues or as the product grows.
Most importantly: Either approach is better than no automation at all. The perfect CI/CD setup that never ships is worse than a good-enough setup that ships daily.