feat: L'Ami Fiduciaire V1.0.0 — full codebase with Story 0.1 complete

Initial commit of the L'Ami Fiduciaire SaaS platform built on Laravel 12,
Vue 3, Inertia.js 2, and Tailwind CSS 4.

Story 0.1 (rename folders to declarations in database) is implemented and
code-reviewed: migration, rollback, and 6 Pest tests all passing.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
commit 35545c2a8f (2026-03-11 23:33:10 +00:00)
1517 changed files with 246774 additions and 0 deletions

@@ -0,0 +1,464 @@
# Test Design and Risk Assessment - Validation Checklist
## Prerequisites (Mode-Dependent)
**System-Level Mode (Phase 3):**
- [ ] PRD exists with functional and non-functional requirements
- [ ] ADR (Architecture Decision Record) exists
- [ ] Architecture document available (architecture.md or tech-spec)
- [ ] Requirements are testable and unambiguous
**Epic-Level Mode (Phase 4):**
- [ ] Story markdown with clear acceptance criteria exists
- [ ] PRD or epic documentation available
- [ ] Architecture documents available (test-design-architecture.md + test-design-qa.md from Phase 3, if they exist)
- [ ] Requirements are testable and unambiguous
## Process Steps
### Step 1: Context Loading
- [ ] PRD.md read and requirements extracted
- [ ] Epics.md or specific epic documentation loaded
- [ ] Story markdown with acceptance criteria analyzed
- [ ] Architecture documents reviewed (if available)
- [ ] Existing test coverage analyzed
- [ ] Knowledge base fragments loaded (risk-governance, probability-impact, test-levels, test-priorities)
### Step 2: Risk Assessment
- [ ] Genuine risks identified (not just features)
- [ ] Risks classified by category (TECH/SEC/PERF/DATA/BUS/OPS)
- [ ] Probability scored (1-3 for each risk)
- [ ] Impact scored (1-3 for each risk)
- [ ] Risk scores calculated (probability × impact; see the sketch after this list)
- [ ] High-priority risks (score ≥6) flagged
- [ ] Mitigation plans defined for high-priority risks
- [ ] Owners assigned for each mitigation
- [ ] Timelines set for mitigations
- [ ] Residual risk documented
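
The scoring rules above reduce to a few lines of code. A minimal sketch in TypeScript; the type and field names are illustrative assumptions, only the 1-3 scales and the ≥6 threshold come from this checklist:

```typescript
// Illustrative sketch only: field names are assumptions, not part of the workflow spec.
type RiskCategory = 'TECH' | 'SEC' | 'PERF' | 'DATA' | 'BUS' | 'OPS';

interface Risk {
  id: string;             // e.g., "R-001"
  category: RiskCategory;
  probability: 1 | 2 | 3; // likelihood rating
  impact: 1 | 2 | 3;      // severity rating
}

// Score = probability × impact; a score of 6 or higher is high priority
// and requires a mitigation plan with an owner and timeline.
function scoreRisk(risk: Risk): { score: number; highPriority: boolean } {
  const score = risk.probability * risk.impact;
  return { score, highPriority: score >= 6 };
}
```
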
### Step 3: Coverage Design
- [ ] Acceptance criteria broken into atomic scenarios
- [ ] Test levels selected (E2E/API/Component/Unit)
- [ ] No duplicate coverage across levels
- [ ] Priority levels assigned (P0/P1/P2/P3)
- [ ] P0 scenarios meet strict criteria (blocks core + high risk + no workaround)
- [ ] Data prerequisites identified
- [ ] Tooling/access requirements documented when applicable
- [ ] Execution order defined (keep it simple: PR / nightly / weekly; see Execution Strategy)
### Step 4: Deliverables Generation
- [ ] Risk assessment matrix created
- [ ] Coverage matrix created
- [ ] Execution order documented
- [ ] Resource estimates calculated
- [ ] Quality gate criteria defined
- [ ] Output file written to correct location
- [ ] Output file uses template structure
## Output Validation
### Risk Assessment Matrix
- [ ] All risks have unique IDs (R-001, R-002, etc.)
- [ ] Each risk has category assigned
- [ ] Probability values are 1, 2, or 3
- [ ] Impact values are 1, 2, or 3
- [ ] Scores calculated correctly (P × I)
- [ ] High-priority risks (≥6) clearly marked
- [ ] Mitigation strategies specific and actionable
### Coverage Matrix
- [ ] All requirements mapped to test levels
- [ ] Priorities assigned to all scenarios
- [ ] Risk linkage documented
- [ ] Test counts realistic
- [ ] Owners assigned where applicable
- [ ] No duplicate coverage (same behavior at multiple levels)
### Execution Strategy
**CRITICAL: Keep execution strategy simple, avoid redundancy**
- [ ] **Simple structure**: PR / Nightly / Weekly (NOT complex smoke/P0/P1/P2 tiers)
- [ ] **PR execution**: All functional tests unless significant infrastructure overhead
- [ ] **Nightly/Weekly**: Only performance, chaos, long-running, manual tests
- [ ] **No redundancy**: Don't re-list all tests (already in coverage plan)
- [ ] **Philosophy stated**: "Run everything in PRs if <15 min, defer only if expensive/long"
- [ ] **Playwright parallelization noted**: 100s of tests in 10-15 min
### Resource Estimates
**CRITICAL: Use intervals/ranges, NOT exact numbers**
- [ ] P0 effort provided as interval range (e.g., "~25-40 hours" NOT "36 hours")
- [ ] P1 effort provided as interval range (e.g., "~20-35 hours" NOT "27 hours")
- [ ] P2 effort provided as interval range (e.g., "~10-30 hours" NOT "15.5 hours")
- [ ] P3 effort provided as interval range (e.g., "~2-5 hours" NOT "2.5 hours")
- [ ] Total effort provided as interval range (e.g., "~55-110 hours" NOT "81 hours")
- [ ] Timeline provided as week range (e.g., "~1.5-3 weeks" NOT "11 days")
- [ ] Estimates include setup time and account for complexity variations
- [ ] **No false precision**: Avoid exact calculations like "18 tests × 2 hours = 36 hours"
### Quality Gate Criteria
- [ ] P0 pass rate threshold defined (should be 100%)
- [ ] P1 pass rate threshold defined (typically ≥95%)
- [ ] High-risk mitigation completion required
- [ ] Coverage targets specified (≥80% recommended; see the sketch after this list)
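
For illustration, a hedged sketch of a mechanical gate check; the thresholds mirror the checklist defaults above, while the shape of the input is an assumption:

```typescript
// Sketch only: thresholds mirror the checklist defaults (P0 = 100%, P1 >= 95%, coverage >= 80%).
interface GateInput {
  p0PassRate: number;             // fraction 0..1
  p1PassRate: number;             // fraction 0..1
  coverage: number;               // fraction 0..1
  highRiskMitigationsComplete: boolean;
}

function gatePasses(g: GateInput): boolean {
  return (
    g.p0PassRate === 1.0 &&       // P0 pass rate must be 100%
    g.p1PassRate >= 0.95 &&       // P1 pass rate typically >=95%
    g.coverage >= 0.8 &&          // coverage target >=80% recommended
    g.highRiskMitigationsComplete // all score->=6 mitigations done
  );
}
```
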
## Quality Checks
### Evidence-Based Assessment
- [ ] Risk assessment based on documented evidence
- [ ] No speculation on business impact
- [ ] Assumptions clearly documented
- [ ] Clarifications requested where needed
- [ ] Historical data referenced where available
### Risk Classification Accuracy
- [ ] TECH risks are architecture/integration issues
- [ ] SEC risks are security vulnerabilities
- [ ] PERF risks are performance/scalability concerns
- [ ] DATA risks are data integrity issues
- [ ] BUS risks are business/revenue impacts
- [ ] OPS risks are deployment/operational issues
### Priority Assignment Accuracy
**CRITICAL: Priority classification is separate from execution timing**
- [ ] **Priority sections (P0/P1/P2/P3) do NOT include execution context** (e.g., no "Run on every commit" in headers)
- [ ] **Priority sections have only "Criteria" and "Purpose"** (no "Execution:" field)
- [ ] **Execution Strategy section** is separate and handles timing based on infrastructure overhead
- [ ] P0: Truly blocks core functionality + High-risk (≥6) + No workaround
- [ ] P1: Important features + Medium-risk (3-4) + Common workflows
- [ ] P2: Secondary features + Low-risk (1-2) + Edge cases
- [ ] P3: Nice-to-have + Exploratory + Benchmarks
- [ ] **Note at top of Test Coverage Plan**: Clarifies P0/P1/P2/P3 = priority/risk, NOT execution timing
### Test Level Selection
- [ ] E2E used only for critical paths
- [ ] API tests cover complex business logic
- [ ] Component tests for UI interactions
- [ ] Unit tests for edge cases and algorithms
- [ ] No redundant coverage
## Integration Points
### Knowledge Base Integration
- [ ] risk-governance.md consulted
- [ ] probability-impact.md applied
- [ ] test-levels-framework.md referenced
- [ ] test-priorities-matrix.md used
- [ ] Additional fragments loaded as needed
### Status File Integration
- [ ] Test design logged in Quality & Testing Progress
- [ ] Epic number and scope documented
- [ ] Completion timestamp recorded
### Workflow Dependencies
- [ ] Can proceed to `*atdd` workflow with P0 scenarios
- [ ] `*atdd` is a separate workflow and must be run explicitly (not auto-run)
- [ ] Can proceed to `automate` workflow with full coverage plan
- [ ] Risk assessment informs `gate` workflow criteria
- [ ] Integrates with `ci` workflow execution order
## Accountability & Logistics
### Not in Scope
- [ ] Out-of-scope items explicitly listed with reasoning
- [ ] Mitigation noted for each excluded item
- [ ] Exclusions reviewed and accepted by stakeholders
### Entry Criteria
- [ ] Prerequisites for testing start are clearly defined
- [ ] Environment readiness included
- [ ] Test data readiness included
- [ ] Pre-implementation blocker resolution referenced
### Exit Criteria
- [ ] Pass/fail thresholds defined for each priority level
- [ ] Bug severity gate defined (e.g., no open P0/P1 bugs)
- [ ] Coverage sufficiency criteria specified
### Project Team (Optional)
- [ ] If included, key roles identified (QA Lead, Dev Lead, PM, Architect minimum)
- [ ] If included, testing responsibilities mapped to roles
- [ ] If included, names populated where available (placeholders acceptable for draft)
### Tooling & Access (System-Level Only, If Applicable)
- [ ] If non-standard tools or access requests exist, list them
- [ ] Access requirements identified for each tool/service
- [ ] Status tracked (Ready/Pending) when applicable
### Interworking & Regression
- [ ] Impacted services/components identified
- [ ] Regression scope defined per impacted service
- [ ] Cross-team coordination noted where needed
## System-Level Mode: Two-Document Validation
**When in system-level mode (PRD + ADR input), validate BOTH documents:**
### test-design-architecture.md
- [ ] **Purpose statement** at top (serves as contract with Architecture team)
- [ ] **Executive Summary** with scope, business context, architecture decisions, risk summary
- [ ] **Quick Guide** section with three tiers:
- [ ] 🚨 BLOCKERS - Team Must Decide (pre-implementation critical path items)
- [ ] ⚠️ HIGH PRIORITY - Team Should Validate (recommendations for approval)
- [ ] 📋 INFO ONLY - Solutions Provided (no decisions needed)
- [ ] **Risk Assessment** section - **ACTIONABLE**
- [ ] Total risks identified count
- [ ] High-priority risks table (score ≥6) with all columns: Risk ID, Category, Description, Probability, Impact, Score, Mitigation, Owner, Timeline
- [ ] Medium and low-priority risks tables
- [ ] Risk category legend included
- [ ] **Testability Concerns and Architectural Gaps** section - **ACTIONABLE**
- [ ] **Sub-section: 🚨 ACTIONABLE CONCERNS** at TOP
- [ ] Blockers to Fast Feedback table (WHAT architecture must provide)
- [ ] Architectural Improvements Needed (WHAT must be changed)
- [ ] Each concern has: Owner, Timeline, Impact
- [ ] **Sub-section: Testability Assessment Summary** at BOTTOM (FYI)
- [ ] What Works Well (passing items)
- [ ] Accepted Trade-offs (no action required)
- [ ] This section only included if worth mentioning; otherwise omitted
- [ ] **Risk Mitigation Plans** for all high-priority risks (≥6)
- [ ] Each plan has: Strategy (numbered steps), Owner, Timeline, Status, Verification
- [ ] **Only Backend/DevOps/Arch/Security mitigations** (production code changes)
- [ ] QA-owned mitigations belong in QA doc instead
- [ ] **Assumptions and Dependencies** section
- [ ] **Architectural assumptions only** (SLO targets, replication lag, system design)
- [ ] Assumptions list (numbered)
- [ ] Dependencies list with required dates
- [ ] Risks to plan with impact and contingency
- [ ] QA execution assumptions belong in QA doc instead
- [ ] **NO test implementation code** (long examples belong in QA doc)
- [ ] **NO test scripts** (no Playwright test(...) blocks, no assertions, no test setup code)
- [ ] **NO NFR test examples** (NFR sections describe WHAT to test, not HOW to test)
- [ ] **NO test scenario checklists** (belong in QA doc)
- [ ] **NO bloat or repetition** (consolidate repeated notes, avoid over-explanation)
- [ ] **Cross-references to QA doc** where appropriate (instead of duplication)
- [ ] **RECIPE SECTIONS NOT IN ARCHITECTURE DOC:**
- [ ] NO "Test Levels Strategy" section (unit/integration/E2E split belongs in QA doc only)
- [ ] NO "NFR Testing Approach" section with detailed test procedures (belongs in QA doc only)
- [ ] NO "Test Environment Requirements" section (belongs in QA doc only)
- [ ] NO "Recommendations for pre-implementation" section with test framework setup (belongs in QA doc only)
- [ ] NO "Quality Gate Criteria" section (pass rates, coverage targets belong in QA doc only)
- [ ] NO "Tool Selection" section (Playwright, k6, etc. belongs in QA doc only)
### test-design-qa.md
**REQUIRED SECTIONS:**
- [ ] **Purpose statement** at top (test execution recipe)
- [ ] **Executive Summary** with risk summary and coverage summary
- [ ] **Dependencies & Test Blockers** section appears near the top (immediately after Executive Summary, or after Not in Scope)
- [ ] Backend/Architecture dependencies listed (what QA needs from other teams)
- [ ] QA infrastructure setup listed (factories, fixtures, environments)
- [ ] Code example with playwright-utils if `config.tea_use_playwright_utils` is true (see the sketch after this checklist)
- [ ] `test` imported from '@seontechnologies/playwright-utils/api-request/fixtures'
- [ ] `expect` imported from '@playwright/test' (playwright-utils does not re-export `expect`)
- [ ] Code examples include assertions (no unused imports)
- [ ] **Risk Assessment** section (brief, references Architecture doc)
- [ ] High-priority risks table
- [ ] Medium/low-priority risks table
- [ ] Each risk shows "QA Test Coverage" column (how QA validates)
- [ ] **Test Coverage Plan** with P0/P1/P2/P3 sections
- [ ] Priority sections have ONLY "Criteria" (no execution context)
- [ ] Note at top: "P0/P1/P2/P3 = priority, NOT execution timing"
- [ ] Test tables with columns: Test ID | Requirement | Test Level | Risk Link | Notes
- [ ] **Execution Strategy** section (organized by TOOL TYPE)
- [ ] Every PR: Playwright tests (~10-15 min)
- [ ] Nightly: k6 performance tests (~30-60 min)
- [ ] Weekly: Chaos & long-running (~hours)
- [ ] Philosophy: "Run everything in PRs unless expensive/long-running"
- [ ] **QA Effort Estimate** section (QA effort ONLY)
- [ ] Interval-based estimates (e.g., "~1-2 weeks" NOT "36 hours")
- [ ] NO DevOps, Backend, Data Eng, Finance effort
- [ ] No per-milestone effort breakdowns in this section
- [ ] **Implementation Planning Handoff** section (optional)
- [ ] Only include if implementation tasks must be scheduled
- [ ] Owners assigned (QA/Dev/Platform/etc)
- [ ] Target milestone may be noted, but avoid detailed per-milestone breakdowns
- [ ] **Appendix A: Code Examples & Tagging**
- [ ] **Appendix B: Knowledge Base References**
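
For reference, a minimal sketch of the kind of playwright-utils code example the Dependencies section calls for. Only the two import paths are prescribed by this checklist; the fixture name `apiRequest`, its call shape, and the endpoint are assumptions to be replaced with the real playwright-utils API and project routes:

```typescript
// Hedged sketch: only the import paths below are prescribed by this checklist.
// The `apiRequest` fixture name and call signature are assumptions; consult the
// playwright-utils documentation for the actual API.
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { expect } from '@playwright/test'; // playwright-utils does not re-export expect

test('GET /api/declarations returns 200', async ({ apiRequest }) => {
  // Hypothetical endpoint, for illustration only.
  const response = await apiRequest({ method: 'GET', url: '/api/declarations' });
  expect(response.status).toBe(200); // assertion included, so no unused imports
});
```
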
**DON'T INCLUDE (bloat):**
- [ ] ❌ NO Quick Reference section
- [ ] ❌ NO System Architecture Summary
- [ ] ❌ NO Test Environment Requirements as separate section (integrate into Dependencies)
- [ ] ❌ NO Testability Assessment section (covered in Dependencies)
- [ ] ❌ NO Test Levels Strategy section (obvious from test scenarios)
- [ ] ❌ NO NFR Readiness Summary
- [ ] ❌ NO Quality Gate Criteria section (teams decide for themselves)
- [ ] ❌ NO Follow-on Workflows section (BMAD commands self-explanatory)
- [ ] ❌ NO Approval section
- [ ] ❌ NO Infrastructure/DevOps/Finance effort tables (out of scope)
- [ ] ❌ NO detailed milestone-by-milestone breakdown tables (use Implementation Planning Handoff if needed)
- [ ] ❌ NO generic Next Steps section (use Implementation Planning Handoff if needed)
### Cross-Document Consistency
- [ ] Both documents reference same risks by ID (R-001, R-002, etc.)
- [ ] Both documents use consistent priority levels (P0, P1, P2, P3)
- [ ] Both documents reference same pre-implementation blockers
- [ ] No duplicate content (cross-reference instead)
- [ ] Dates and authors match across documents
- [ ] ADR and PRD references consistent
### Document Quality (Anti-Bloat Check)
**CRITICAL: Check for bloat and repetition across BOTH documents**
- [ ] **No repeated notes 10+ times** (e.g., "Timing is pessimistic until R-005 fixed" on every section)
- [ ] **Repeated information consolidated** (write once at top, reference briefly if needed)
- [ ] **No excessive detail** that doesn't add value (obvious concepts, redundant examples)
- [ ] **Focus on unique/critical info** (only document what's different from standard practice)
- [ ] **Architecture doc**: Concerns-focused, NOT implementation-focused
- [ ] **QA doc**: Implementation-focused, NOT theory-focused
- [ ] **Clear separation**: Architecture = WHAT and WHY, QA = HOW
- [ ] **Professional tone**: No AI slop markers
- [ ] Avoid excessive ✅/❌ emojis (use sparingly, only when adding clarity)
- [ ] Avoid "absolutely", "excellent", "fantastic", overly enthusiastic language
- [ ] Write professionally and directly
- [ ] **Architecture doc length**: Target ~150-200 lines max (focus on actionable concerns only)
- [ ] **QA doc length**: Keep concise, remove bloat sections
### Architecture Doc Structure (Actionable-First Principle)
**CRITICAL: Validate structure follows actionable-first, FYI-last principle**
- [ ] **Actionable sections at TOP:**
- [ ] Quick Guide (🚨 BLOCKERS first, then ⚠️ HIGH PRIORITY, then 📋 INFO ONLY last)
- [ ] Risk Assessment (high-priority risks ≥6 at top)
- [ ] Testability Concerns (concerns/blockers at top, passing items at bottom)
- [ ] Risk Mitigation Plans (for high-priority risks ≥6)
- [ ] **FYI sections at BOTTOM:**
- [ ] Testability Assessment Summary (what works well - only if worth mentioning)
- [ ] Assumptions and Dependencies
- [ ] **ASRs categorized correctly:**
- [ ] Actionable ASRs included in 🚨 or ⚠️ sections
- [ ] FYI ASRs included in 📋 section or omitted if obvious
## BMAD Handoff Validation (System-Level Mode Only)
- [ ] Handoff document generated at `{test_artifacts}/test-design/{project_name}-handoff.md`
- [ ] TEA Artifacts Inventory table populated with actual paths
- [ ] Epic-Level Integration Guidance populated with P0/P1 risks
- [ ] Story-Level Integration Guidance populated with critical test scenarios
- [ ] Risk-to-Story Mapping table populated from risk register
- [ ] Recommended workflow sequence is accurate
- [ ] Phase transition quality gates are defined
## Completion Criteria
**All must be true:**
- [ ] All prerequisites met
- [ ] All process steps completed
- [ ] All output validations passed
- [ ] All quality checks passed
- [ ] All integration points verified
- [ ] Output file(s) complete and well-formatted
- [ ] **System-level mode:** Both documents validated (if applicable)
- [ ] **System-level mode:** Handoff document validated (if applicable)
- [ ] **Epic-level mode:** Single document validated (if applicable)
- [ ] Team review scheduled (if required)
## Post-Workflow Actions
**User must complete:**
1. [ ] Review risk assessment with team
2. [ ] Prioritize mitigation for high-priority risks (score ≥6)
3. [ ] Allocate resources per estimates
4. [ ] Run `*atdd` workflow to generate P0 tests (separate workflow; not auto-run)
5. [ ] Set up test data factories and fixtures
6. [ ] Schedule team review of test design document
**Recommended next workflows:**
1. [ ] Run `atdd` workflow for P0 test generation
2. [ ] Run `framework` workflow if not already done
3. [ ] Run `ci` workflow to configure pipeline stages
## Rollback Procedure
If workflow fails:
1. [ ] Delete output file
2. [ ] Review error logs
3. [ ] Fix missing context (PRD, architecture docs)
4. [ ] Clarify ambiguous requirements
5. [ ] Retry workflow
## Notes
### Common Issues
**Issue**: Too many P0 tests
- **Solution**: Apply strict P0 criteria - must block core AND high risk AND no workaround
**Issue**: Risk scores all high
- **Solution**: Differentiate between critical (3) and merely degraded (2) impact ratings
**Issue**: Duplicate coverage across levels
- **Solution**: Use test pyramid - E2E for critical paths only
**Issue**: Resource estimates too high or too precise
- **Solution**:
- Invest in fixtures/factories to reduce per-test setup time
- Use interval ranges (e.g., "~55-110 hours") instead of exact numbers (e.g., "81 hours")
- Widen intervals if high uncertainty exists
**Issue**: Execution order section too complex or redundant
- **Solution**:
- Default: Run everything in PRs (<15 min with Playwright parallelization)
- Only defer to nightly/weekly if expensive (k6, chaos, 4+ hour tests)
- Don't create smoke/P0/P1/P2/P3 tier structure
- Don't re-list all tests (already in coverage plan)
### Best Practices
- Base risk assessment on evidence, not assumptions
- High-priority risks (≥6) require immediate mitigation
- P0 tests should cover <10% of total scenarios
- Avoid testing same behavior at multiple levels
- **Use interval-based estimates** (e.g., "~25-40 hours") instead of exact numbers to avoid false precision and provide flexibility
- **Keep execution strategy simple**: Default to "run everything in PRs" (<15 min with Playwright), only defer if expensive/long-running
- **Avoid execution order redundancy**: Don't create complex tier structures or re-list tests
---
**Checklist Complete**: Sign off when all items validated.
**Completed by:** {name}
**Date:** {date}
**Epic:** {epic title}
**Notes:** {additional notes}


@@ -0,0 +1,105 @@
<!-- Powered by BMAD-CORE™ -->
# Test Design and Risk Assessment
**Workflow ID**: `_bmad/tea/testarch/test-design`
**Version**: 5.0 (Step-File Architecture)
---
## Overview
Plans comprehensive test coverage strategy with risk assessment, priority classification, and execution ordering. This workflow operates in **two modes**:
- **System-Level Mode (Phase 3)**: Testability review of architecture before solutioning gate check
- **Epic-Level Mode (Phase 4)**: Per-epic test planning with risk assessment
The workflow auto-detects which mode to use based on project phase and user intent.
---
## WORKFLOW ARCHITECTURE
This workflow uses **step-file architecture** for disciplined execution:
### Core Principles
- **Micro-file Design**: Each step is a self-contained instruction file
- **Just-In-Time Loading**: Only the current step file is in memory
- **Sequential Enforcement**: Execute steps in order without skipping
- **State Tracking**: Write outputs only when instructed, then proceed
### Step Processing Rules (Non-Negotiable)
1. **READ COMPLETELY**: Read the entire step file before taking any action
2. **FOLLOW SEQUENCE**: Execute all numbered sections in order
3. **WAIT FOR INPUT**: Halt when user input is required
4. **LOAD NEXT**: Only load the next step file when directed
---
## INITIALIZATION SEQUENCE
### 1. Configuration Loading
From `workflow.yaml`, resolve:
- `config_source`, `test_artifacts`, `user_name`, `communication_language`, `document_output_language`, `date`
### 2. First Step
Load, read completely, and execute:
`{project-root}/_bmad/tea/workflows/testarch/test-design/steps-c/step-01-detect-mode.md`
### 3. Resume Support
If the user selects **Resume** mode, load, read completely, and execute:
`{project-root}/_bmad/tea/workflows/testarch/test-design/steps-c/step-01b-resume.md`
This checks the output document for progress tracking frontmatter and routes to the next incomplete step.
---
## OUTPUT GENERATION GUIDANCE
When populating templates in step 5, apply the following guidance for these sections:
### Not in Scope
- Identify components, third-party services, or subsystems NOT covered by this test plan
- For each excluded item, provide reasoning (why excluded) and mitigation (how risk is addressed elsewhere)
- Common exclusions: external vendor APIs tested by upstream teams, legacy modules outside the current phase scope, infrastructure already covered by platform team monitoring
### Entry and Exit Criteria
- **Entry criteria**: Derive from Dependencies and Test Blockers -- what must be resolved before QA can start testing
- **Exit criteria**: Derive from Quality Gate Criteria -- what constitutes "done" for the testing phase
- Include project-specific criteria based on context (e.g., "feature flag enabled in staging", "seed data loaded", "pre-implementation blockers resolved")
### Project Team (Optional)
- Include only if roles/names are known or responsibility mapping is needed
- Extract names and roles from PRD, ADR, or project context if available
- If names are unknown, either omit or use role placeholders for drafts
- Map testing responsibilities to each role (e.g., who owns E2E tests, who signs off)
### Tooling and Access (System-Level QA Document Only)
- Include only if non-standard tools or access requests are required
- List notable tools/services needed for test execution and any access approvals
- Avoid assuming specific vendors unless the project context names them
- Mark each item's status as Ready or Pending based on available information
- This section applies only to `test-design-qa-template.md` output
### Implementation Planning Handoff (Optional)
- Include only if test design produces implementation tasks that must be scheduled
- Derive items from Dependencies & Test Blockers, tooling/access needs, and QA infra setup
- If no dedicated QA, assign ownership to Dev/Platform as appropriate
- Keep the list short; avoid per-milestone breakdown tables
### Interworking & Regression
- Identify services and components that interact with or are affected by the feature under test
- For each, define what existing regression tests must pass before release
- Note any cross-team coordination needed for regression validation (e.g., shared staging environments, upstream API contracts)


@@ -0,0 +1,134 @@
---
name: 'step-01-detect-mode'
description: 'Determine system-level vs epic-level mode and validate prerequisites'
nextStepFile: './step-02-load-context.md'
outputFile: '{test_artifacts}/test-design-progress.md'
---
# Step 1: Detect Mode & Prerequisites
## STEP GOAL
Determine whether to run **System-Level** or **Epic-Level** test design, and confirm required inputs are available.
## MANDATORY EXECUTION RULES
### Universal Rules
- 📖 Read this entire step file before taking any action
- ✅ Speak in `{communication_language}`
- 🚫 Do not load the next step until this step is complete
### Role Reinforcement
- ✅ You are the **Master Test Architect**
- ✅ You prioritize risk-based, evidence-backed decisions
---
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed
## CONTEXT BOUNDARIES:
- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
## 1. Mode Detection (Priority Order)
### A) User Intent (Highest Priority)
Use explicit intent if the user already indicates scope:
- **PRD + ADR (no epic/stories)** → **System-Level Mode**
- **Epic + Stories (no PRD/ADR)** → **Epic-Level Mode**
- **Both PRD/ADR + Epic/Stories** → Prefer **System-Level Mode** first
If intent is unclear, ask:
> "Should I create (A) **System-level** test design (PRD + ADR → Architecture + QA docs), or (B) **Epic-level** test design (Epic → single test plan)?"
### B) File-Based Detection (BMad-Integrated)
If user intent is unclear (see the sketch after this list):
- If `{implementation_artifacts}/sprint-status.yaml` exists → **Epic-Level Mode**
- Otherwise → **System-Level Mode**
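
A minimal sketch of this check (Node/TypeScript; resolution of `{implementation_artifacts}` is assumed to happen elsewhere):

```typescript
import { existsSync } from 'node:fs';
import { join } from 'node:path';

// Sketch of the file-based rule above; the caller resolves implementationArtifacts.
function detectMode(implementationArtifacts: string): 'epic-level' | 'system-level' {
  return existsSync(join(implementationArtifacts, 'sprint-status.yaml'))
    ? 'epic-level'    // sprint-status.yaml present → Epic-Level Mode
    : 'system-level'; // otherwise → System-Level Mode
}
```
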
### C) Ambiguous → Ask
If mode still unclear, ask the user to choose (A) or (B) and **halt** until they respond.
---
## 2. Prerequisite Check (Mode-Specific)
### System-Level Mode Requires:
- PRD (functional + non-functional requirements)
- ADR or architecture decision records
- Architecture or tech-spec document
### Epic-Level Mode Requires:
- Epic and/or story requirements with acceptance criteria
- Architecture context (if available)
### HALT CONDITIONS
If required inputs are missing **and** the user cannot provide them:
- **System-Level**: "Please provide PRD + ADR/architecture docs to proceed."
- **Epic-Level**: "Please provide epic/story requirements or acceptance criteria to proceed."
---
## 3. Confirm Mode
State which mode you will use and why. Then proceed.
---
## 4. Save Progress
**Save this step's accumulated work to `{outputFile}`.**
- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:
```yaml
---
stepsCompleted: ['step-01-detect-mode']
lastStep: 'step-01-detect-mode'
lastSaved: '{date}'
---
```
Then write this step's output below the frontmatter.
- **If `{outputFile}` already exists**, update (example after this list):
- Add `'step-01-detect-mode'` to `stepsCompleted` array (only if not already present)
- Set `lastStep: 'step-01-detect-mode'`
- Set `lastSaved: '{date}'`
- Append this step's output to the appropriate section of the document.
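For illustration, after the second step completes, the frontmatter would look like this (step name taken from `{nextStepFile}`; a sketch, not a required literal):
```yaml
---
stepsCompleted: ['step-01-detect-mode', 'step-02-load-context']
lastStep: 'step-02-load-context'
lastSaved: '{date}'
---
```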
Load next step: `{nextStepFile}`
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
### ✅ SUCCESS:
- Step completed in full with required outputs
### ❌ SYSTEM FAILURE:
- Skipped sequence steps or missing outputs
**Master Rule:** Skipping steps is FORBIDDEN.


@@ -0,0 +1,102 @@
---
name: 'step-01b-resume'
description: 'Resume interrupted workflow from last completed step'
outputFile: '{test_artifacts}/test-design-progress.md'
---
# Step 1b: Resume Workflow
## STEP GOAL
Resume an interrupted workflow by loading the existing output document, displaying progress, and routing to the next incomplete step.
## MANDATORY EXECUTION RULES
- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
---
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
- 📖 Load the next step only when instructed
## CONTEXT BOUNDARIES:
- Available context: Output document with progress frontmatter
- Focus: Load progress and route to next step
- Limits: Do not re-execute completed steps
- Dependencies: Output document must exist from a previous run
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
### 1. Load Output Document
Read `{outputFile}` and parse YAML frontmatter for:
- `stepsCompleted` — array of completed step names
- `lastStep` — last completed step name
- `lastSaved` — timestamp of last save
**If `{outputFile}` does not exist**, display:
"⚠️ **No previous progress found.** There is no output document to resume from. Please use **[C] Create** to start a fresh workflow run."
**THEN:** Halt. Do not proceed.
---
### 2. Display Progress Dashboard
Display:
"📋 **Workflow Resume — Test Design and Risk Assessment**
**Last saved:** {lastSaved}
**Steps completed:** {stepsCompleted.length} of 5
1. ✅/⬜ Detect Mode (step-01-detect-mode)
2. ✅/⬜ Load Context (step-02-load-context)
3. ✅/⬜ Risk & Testability (step-03-risk-and-testability)
4. ✅/⬜ Coverage Plan (step-04-coverage-plan)
5. ✅/⬜ Generate Output (step-05-generate-output)"
---
### 3. Route to Next Step
Based on `lastStep`, load the next incomplete step:
- `'step-01-detect-mode'` → `./step-02-load-context.md`
- `'step-02-load-context'` → `./step-03-risk-and-testability.md`
- `'step-03-risk-and-testability'` → `./step-04-coverage-plan.md`
- `'step-04-coverage-plan'` → `./step-05-generate-output.md`
- `'step-05-generate-output'` → **Workflow already complete.** Display: "✅ **All steps completed.** Use **[V] Validate** to review outputs or **[E] Edit** to make revisions." Then halt.
**If `lastStep` does not match any value above**, display: "⚠️ **Unknown progress state** (`lastStep`: {lastStep}). Please use **[C] Create** to start fresh." Then halt.
**Otherwise**, load the identified step file, read completely, and execute.
The existing content in `{outputFile}` provides context from previously completed steps.
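A minimal sketch of this routing, assuming the frontmatter has already been parsed into an object (the helper shape is illustrative, not a fixed API):
```javascript
// Maps the last completed step to the next step file, mirroring the list above.
const NEXT_STEP = {
  'step-01-detect-mode': './step-02-load-context.md',
  'step-02-load-context': './step-03-risk-and-testability.md',
  'step-03-risk-and-testability': './step-04-coverage-plan.md',
  'step-04-coverage-plan': './step-05-generate-output.md',
};

function routeNext(frontmatter) {
  const { lastStep } = frontmatter;
  if (lastStep === 'step-05-generate-output') return { done: true }; // workflow complete
  const next = NEXT_STEP[lastStep];
  if (!next) return { error: `Unknown progress state: ${lastStep}` }; // start fresh
  return { next };
}
```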
---
## 🚨 SYSTEM SUCCESS/FAILURE METRICS
### ✅ SUCCESS:
- Output document loaded and parsed correctly
- Progress dashboard displayed accurately
- Routed to correct next step
### ❌ SYSTEM FAILURE:
- Not loading output document
- Incorrect progress display
- Routing to wrong step
- Re-executing completed steps
**Master Rule:** Resume MUST route to the exact next incomplete step. Never re-execute completed steps.

View File

@@ -0,0 +1,242 @@
---
name: 'step-02-load-context'
description: 'Load documents, configuration, and knowledge fragments for the chosen mode'
nextStepFile: './step-03-risk-and-testability.md'
knowledgeIndex: '{project-root}/_bmad/tea/testarch/tea-index.csv'
outputFile: '{test_artifacts}/test-design-progress.md'
---
# Step 2: Load Context & Knowledge Base
## STEP GOAL
Load the required documents, config flags, and knowledge fragments needed to produce accurate test design outputs.
## MANDATORY EXECUTION RULES
- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- 🎯 Only load artifacts required for the selected mode
---
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed
## CONTEXT BOUNDARIES:
- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
## 1. Load Configuration
From `{config_source}`:
- Read `tea_use_playwright_utils`
- Read `tea_use_pactjs_utils`
- Read `tea_pact_mcp`
- Read `tea_browser_automation`
- Read `test_stack_type` (if not set, default to `"auto"`)
- Note `test_artifacts`
**Stack Detection** (for context-aware loading):
If `test_stack_type` is `"auto"` or not configured, infer `{detected_stack}` by scanning `{project-root}`:
- **Frontend indicators**: `playwright.config.*`, `cypress.config.*`, `package.json` with react/vue/angular
- **Backend indicators**: `pyproject.toml`, `pom.xml`/`build.gradle`, `go.mod`, `*.csproj`, `Gemfile`, `Cargo.toml`
- **Both present** → `fullstack`; only frontend → `frontend`; only backend → `backend`
- Explicit `test_stack_type` overrides auto-detection
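A minimal detection sketch under the rules above; the concrete indicator filenames and the neither-found fallback are assumptions:
```javascript
const fs = require('fs');
const path = require('path');

function detectStack(projectRoot, explicitType) {
  if (explicitType && explicitType !== 'auto') return explicitType; // explicit config wins
  const exists = (p) => fs.existsSync(path.join(projectRoot, p));

  let frontend = ['playwright.config.ts', 'playwright.config.js',
    'cypress.config.ts', 'cypress.config.js'].some(exists);
  if (!frontend && exists('package.json')) {
    const pkg = JSON.parse(fs.readFileSync(path.join(projectRoot, 'package.json'), 'utf8'));
    const deps = { ...pkg.dependencies, ...pkg.devDependencies };
    frontend = ['react', 'vue', '@angular/core'].some((d) => d in deps);
  }

  const backend = ['pyproject.toml', 'pom.xml', 'build.gradle', 'go.mod',
    'Gemfile', 'Cargo.toml'].some(exists) ||
    fs.readdirSync(projectRoot).some((f) => f.endsWith('.csproj'));

  if (frontend && backend) return 'fullstack';
  if (frontend) return 'frontend';
  if (backend) return 'backend';
  return 'fullstack'; // neither found: assume fullstack so nothing is skipped
}
```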
---
## 2. Load Project Artifacts (Mode-Specific)
### System-Level Mode (Phase 3)
Load:
- PRD (FRs + NFRs)
- ADRs or architecture decisions
- Architecture / tech-spec document
- Epics (for scope)
Extract:
- Tech stack & dependencies
- Integration points
- NFRs (performance, security, reliability, compliance)
### Epic-Level Mode (Phase 4)
Load:
- Epic and story docs with acceptance criteria
- PRD (if available)
- Architecture / tech-spec (if available)
- Prior system-level test-design outputs (if available)
Extract:
- Testable requirements
- Integration points
- Known coverage gaps
---
## 3. Analyze Existing Test Coverage (Epic-Level)
If epic-level:
- Scan the repository for existing tests (search for `tests/`, `spec`, `e2e`, `api` folders)
- Identify coverage gaps and flaky areas
- Note existing fixture and test patterns
### Browser Exploration (if `tea_browser_automation` is `cli` or `auto`)
> **Fallback:** If CLI is not installed, fall back to MCP (if available) or skip browser exploration and rely on code/doc analysis.
**CLI Exploration Steps:**
All commands use the same named session to target the correct browser:
1. `playwright-cli -s=tea-explore open <target_url>`
2. `playwright-cli -s=tea-explore snapshot` → capture page structure and element refs
3. `playwright-cli -s=tea-explore screenshot --filename={test_artifacts}/exploration/explore-<page>.png`
4. Analyze snapshot output to identify testable elements and flows
5. `playwright-cli -s=tea-explore close`
Store artifacts under `{test_artifacts}/exploration/`
> **Session Hygiene:** Always close sessions using `playwright-cli -s=tea-explore close`. Do NOT use `close-all` — it kills every session on the machine and breaks parallel execution.
---
### Tiered Knowledge Loading
Load fragments based on their `tier` classification in `tea-index.csv`:
1. **Core tier** (always load): Foundational fragments required for this workflow
2. **Extended tier** (load on-demand): Load when deeper analysis is needed or when the user's context requires it
3. **Specialized tier** (load only when relevant): Load only when the specific use case matches (e.g., contract-testing only for microservices, email-auth only for email flows)
> **Context Efficiency**: Loading only core fragments reduces context usage by 40-50% compared to loading all fragments.
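As a sketch of how tier filtering might work, assuming `tea-index.csv` has `file,tier,use_case` columns (the column layout is an assumption):
```javascript
const fs = require('fs');

// Returns the fragment files to load for the current run.
function selectFragments(indexPath, { loadExtended = false, useCases = [] } = {}) {
  const rows = fs.readFileSync(indexPath, 'utf8').trim().split('\n').slice(1) // skip header
    .map((line) => {
      const [file, tier, useCase] = line.split(',');
      return { file, tier, useCase };
    });
  return rows
    .filter((r) =>
      r.tier === 'core' ||                                        // always load
      (r.tier === 'extended' && loadExtended) ||                  // on-demand
      (r.tier === 'specialized' && useCases.includes(r.useCase))) // use-case match
    .map((r) => r.file);
}
```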
### Playwright Utils Loading Profiles
**If `tea_use_playwright_utils` is enabled**, select the appropriate loading profile:
- **API-only profile** (when `{detected_stack}` is `backend` or no `page.goto`/`page.locator` found in test files):
Load: `overview`, `api-request`, `auth-session`, `recurse` (~1,800 lines)
- **Full UI+API profile** (when `{detected_stack}` is `frontend`/`fullstack` or browser tests detected):
Load: all Playwright Utils core fragments (~4,500 lines)
**Detection**: Scan `{test_dir}` for files containing `page.goto` or `page.locator`. If none found, use API-only profile.
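A hedged sketch of that decision, assuming the test files have already been read into strings:
```javascript
// Picks a loading profile from the detected stack and test-file contents.
function pickPlaywrightProfile(detectedStack, testFileContents) {
  const hasBrowserTests = testFileContents.some(
    (src) => src.includes('page.goto') || src.includes('page.locator'));
  if (detectedStack === 'backend' || !hasBrowserTests) {
    return 'api-only';   // overview, api-request, auth-session, recurse (~1,800 lines)
  }
  return 'full-ui-api';  // all Playwright Utils core fragments (~4,500 lines)
}
```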
### Pact.js Utils Loading
**If `tea_use_pactjs_utils` is enabled** (and `{detected_stack}` is `backend` or `fullstack`, or microservices indicators detected):
Load: `pactjs-utils-overview.md`, `pactjs-utils-consumer-helpers.md`, `pactjs-utils-provider-verifier.md`, `pactjs-utils-request-filter.md`
**If `tea_use_pactjs_utils` is disabled** but contract testing is relevant:
Load: `contract-testing.md`
### Pact MCP Loading
**If `tea_pact_mcp` is `"mcp"`:**
Load: `pact-mcp.md` — enables the agent to use the SmartBear MCP "Fetch Provider States" and "Matrix" tools to understand the existing contract landscape during test design.
## 4. Load Knowledge Base Fragments
Use `{knowledgeIndex}` to select and load only relevant fragments.
### System-Level Mode (Required)
- `adr-quality-readiness-checklist.md`
- `test-levels-framework.md`
- `risk-governance.md`
- `test-quality.md`
### Epic-Level Mode (Required)
- `risk-governance.md`
- `probability-impact.md`
- `test-levels-framework.md`
- `test-priorities-matrix.md`
**Playwright CLI (if `tea_browser_automation` is "cli" or "auto"):**
- `playwright-cli.md`
**MCP Patterns (if `tea_browser_automation` is "mcp" or "auto"):**
- (existing MCP-related fragments, if any are added in future)
**Pact.js Utils (if enabled — both System-Level and Epic-Level):**
- `pactjs-utils-overview.md`, `pactjs-utils-consumer-helpers.md`, `pactjs-utils-provider-verifier.md`, `pactjs-utils-request-filter.md`
**Contract Testing (if pactjs-utils disabled but relevant):**
- `contract-testing.md`
**Pact MCP (if tea_pact_mcp is "mcp"):**
- `pact-mcp.md`
---
## 5. Confirm Loaded Inputs
Summarize what was loaded and confirm with the user if anything is missing.
---
### 6. Save Progress
**Save this step's accumulated work to `{outputFile}`.**
- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:
```yaml
---
stepsCompleted: ['step-02-load-context']
lastStep: 'step-02-load-context'
lastSaved: '{date}'
---
```
Then write this step's output below the frontmatter.
- **If `{outputFile}` already exists**, update:
- Add `'step-02-load-context'` to `stepsCompleted` array (only if not already present)
- Set `lastStep: 'step-02-load-context'`
- Set `lastSaved: '{date}'`
- Append this step's output to the appropriate section of the document.
**Update `inputDocuments`**: Set `inputDocuments` in the output template frontmatter to the list of artifact paths loaded in this step (e.g., knowledge fragments, test design documents, configuration files).
Load next step: `{nextStepFile}`
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
### ✅ SUCCESS:
- Step completed in full with required outputs
### ❌ SYSTEM FAILURE:
- Skipped sequence steps or missing outputs
**Master Rule:** Skipping steps is FORBIDDEN.

View File

@@ -0,0 +1,110 @@
---
name: 'step-03-risk-and-testability'
description: 'Perform testability review (system-level) and risk assessment'
nextStepFile: './step-04-coverage-plan.md'
outputFile: '{test_artifacts}/test-design-progress.md'
---
# Step 3: Testability & Risk Assessment
## STEP GOAL
Produce a defensible testability review (system-level) and a risk assessment matrix (all modes).
## MANDATORY EXECUTION RULES
- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- 🎯 Base conclusions on evidence from loaded artifacts
---
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed
## CONTEXT BOUNDARIES:
- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
## 1. System-Level Mode: Testability Review
If **system-level**, evaluate architecture for:
- **Controllability** (state seeding, mockability, fault injection)
- **Observability** (logs, metrics, traces, deterministic assertions)
- **Reliability** (isolation, reproducibility, parallel safety)
**Structure output as:**
1. **🚨 Testability Concerns** (actionable issues first)
2. **✅ Testability Assessment Summary** (what is already strong)
Also identify **ASRs** (Architecturally Significant Requirements):
- Mark each as **ACTIONABLE** or **FYI**
---
## 2. All Modes: Risk Assessment
Using `risk-governance.md` and `probability-impact.md` (if loaded):
- Identify real risks (not just features)
- Classify by category: TECH / SEC / PERF / DATA / BUS / OPS
- Score Probability (13) and Impact (13)
- Calculate Risk Score (P × I)
- Flag high risks (score ≥ 6)
- Define mitigation, owner, and timeline
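A minimal sketch of the scoring rule (the record shape is illustrative):
```javascript
// Score = probability x impact, both on a 1-3 scale; >= 6 is high priority.
function scoreRisk(risk) {
  const score = risk.probability * risk.impact;
  return { ...risk, score, highPriority: score >= 6 };
}

const scored = [
  { id: 'R-001', category: 'SEC', probability: 3, impact: 3 },
  { id: 'R-002', category: 'PERF', probability: 2, impact: 2 },
].map(scoreRisk);
// R-001 scores 9 (flag; needs mitigation, owner, timeline); R-002 scores 4.
```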
---
## 3. Summarize Risk Findings
Summarize the highest risks and their mitigation priorities.
---
### 4. Save Progress
**Save this step's accumulated work to `{outputFile}`.**
- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:
```yaml
---
stepsCompleted: ['step-03-risk-and-testability']
lastStep: 'step-03-risk-and-testability'
lastSaved: '{date}'
---
```
Then write this step's output below the frontmatter.
- **If `{outputFile}` already exists**, update:
- Add `'step-03-risk-and-testability'` to `stepsCompleted` array (only if not already present)
- Set `lastStep: 'step-03-risk-and-testability'`
- Set `lastSaved: '{date}'`
- Append this step's output to the appropriate section of the document.
Load next step: `{nextStepFile}`
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
### ✅ SUCCESS:
- Step completed in full with required outputs
### ❌ SYSTEM FAILURE:
- Skipped sequence steps or missing outputs
**Master Rule:** Skipping steps is FORBIDDEN.

View File

@@ -0,0 +1,123 @@
---
name: 'step-04-coverage-plan'
description: 'Design test coverage, priorities, execution strategy, and estimates'
nextStepFile: './step-05-generate-output.md'
outputFile: '{test_artifacts}/test-design-progress.md'
---
# Step 4: Coverage Plan & Execution Strategy
## STEP GOAL
Create the test coverage matrix, prioritize scenarios, and define execution strategy, resource estimates, and quality gates.
## MANDATORY EXECUTION RULES
- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- 🚫 Avoid redundant coverage across test levels
---
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed
## CONTEXT BOUNDARIES:
- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
## 1. Coverage Matrix
For each requirement or risk-driven scenario:
- Decompose into atomic test scenarios
- Select **test level** (E2E / API / Component / Unit) using `test-levels-framework.md`
- Ensure no duplicate coverage across levels
- Assign priorities (P0–P3) using `test-priorities-matrix.md`
**Priority rules:**
- P0: Blocks core functionality + high risk + no workaround
- P1: Critical paths + medium/high risk
- P2: Secondary flows + low/medium risk
- P3: Nice-to-have, exploratory, benchmarks
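As a sketch, the rules could be expressed as predicates over a scenario record (the field names are assumptions about how scenarios might be annotated):
```javascript
// Applies the P0-P3 rules above, most restrictive first.
function assignPriority(s) {
  if (s.blocksCore && s.riskScore >= 6 && !s.hasWorkaround) return 'P0';
  if (s.criticalPath && s.riskScore >= 3) return 'P1';
  if (s.secondaryFlow) return 'P2';
  return 'P3'; // nice-to-have, exploratory, benchmarks
}
```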
---
## 2. Execution Strategy (Keep Simple)
Use a **PR / Nightly / Weekly** model:
- **PR**: All functional tests if <15 minutes
- **Nightly/Weekly**: Long-running or expensive suites (perf, chaos, large datasets)
- Avoid re-listing all tests (refer to coverage plan)
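One possible sketch of the tier assignment, assuming suites carry kind and duration estimates (the P0-only fallback when the PR budget is exceeded is an assumption):
```javascript
// Splits suites across PR / Nightly / Weekly under a 15-minute PR budget.
function planExecution(suites) {
  const functional = suites.filter((s) => s.kind === 'functional');
  const totalMinutes = functional.reduce((sum, s) => sum + s.estimatedMinutes, 0);
  return {
    pr: totalMinutes < 15 ? functional : functional.filter((s) => s.priority === 'P0'),
    nightly: suites.filter((s) => s.kind === 'perf' || s.kind === 'chaos'),
    weekly: suites.filter((s) => s.largeDataset === true),
  };
}
```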
---
## 3. Resource Estimates (Ranges Only)
Provide intervals (no false precision):
- P0: e.g., "~25–40 hours"
- P1: e.g., "~20–35 hours"
- P2: e.g., "~10–30 hours"
- P3: e.g., "~2–5 hours"
- Total and timeline as ranges
---
## 4. Quality Gates
Define thresholds:
- P0 pass rate = 100%
- P1 pass rate ≥ 95%
- High-risk mitigations complete before release
- Coverage target ≥ 80% (adjust if justified)
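A hedged sketch of the gate check over aggregated results (field names are assumptions):
```javascript
// True means the gate passes; all gates must pass for release.
function evaluateGates(r) {
  return {
    p0Gate: r.p0Passed === r.p0Total,                          // 100%
    p1Gate: r.p1Total === 0 || r.p1Passed / r.p1Total >= 0.95, // >= 95%
    riskGate: r.openHighRiskMitigations === 0,                 // score >= 6 mitigated
    coverageGate: r.coverage >= 0.8,                           // adjust if justified
  };
}
```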
---
### 5. Save Progress
**Save this step's accumulated work to `{outputFile}`.**
- **If `{outputFile}` does not exist** (first save), create it with YAML frontmatter:
```yaml
---
stepsCompleted: ['step-04-coverage-plan']
lastStep: 'step-04-coverage-plan'
lastSaved: '{date}'
---
```
Then write this step's output below the frontmatter.
- **If `{outputFile}` already exists**, update:
- Add `'step-04-coverage-plan'` to `stepsCompleted` array (only if not already present)
- Set `lastStep: 'step-04-coverage-plan'`
- Set `lastSaved: '{date}'`
- Append this step's output to the appropriate section of the document.
Load next step: `{nextStepFile}`
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
### ✅ SUCCESS:
- Step completed in full with required outputs
### ❌ SYSTEM FAILURE:
- Skipped sequence steps or missing outputs
**Master Rule:** Skipping steps is FORBIDDEN.

View File

@@ -0,0 +1,222 @@
---
name: 'step-05-generate-output'
description: 'Generate output documents with adaptive orchestration (agent-team, subagent, or sequential)'
outputFile: '{test_artifacts}/test-design-epic-{epic_num}.md'
progressFile: '{test_artifacts}/test-design-progress.md'
---
# Step 5: Generate Outputs & Validate
## STEP GOAL
Write the final test-design document(s) using the correct template(s), then validate against the checklist.
## MANDATORY EXECUTION RULES
- 📖 Read the entire step file before acting
- ✅ Speak in `{communication_language}`
- ✅ Use the provided templates and output paths
- ✅ Resolve execution mode from explicit user request first, then config
- ✅ Apply fallback rules deterministically when requested mode is unsupported
---
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Record outputs before proceeding
- 📖 Load the next step only when instructed
## CONTEXT BOUNDARIES:
- Available context: config, loaded artifacts, and knowledge fragments
- Focus: this step's goal only
- Limits: do not execute future steps
- Dependencies: prior steps' outputs (if any)
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly. Do not skip, reorder, or improvise.
## 0. Resolve Execution Mode (User Override First)
```javascript
const orchestrationContext = {
config: {
execution_mode: config.tea_execution_mode || 'auto', // "auto" | "subagent" | "agent-team" | "sequential"
capability_probe: config.tea_capability_probe !== false, // true by default
},
timestamp: new Date().toISOString().replace(/[:.]/g, '-'),
};
const normalizeUserExecutionMode = (mode) => {
if (typeof mode !== 'string') return null;
const normalized = mode.trim().toLowerCase().replace(/[-_]/g, ' ').replace(/\s+/g, ' ');
if (normalized === 'auto') return 'auto';
if (normalized === 'sequential') return 'sequential';
if (normalized === 'subagent' || normalized === 'sub agent' || normalized === 'subagents' || normalized === 'sub agents') {
return 'subagent';
}
if (normalized === 'agent team' || normalized === 'agent teams' || normalized === 'agentteam') {
return 'agent-team';
}
return null;
};
const normalizeConfigExecutionMode = (mode) => {
if (mode === 'auto' || mode === 'sequential' || mode === 'subagent' || mode === 'agent-team') {
return mode;
}
return null;
};
// Explicit user instruction in the active run takes priority over config.
const explicitModeFromUser = normalizeUserExecutionMode(runtime.getExplicitExecutionModeHint?.() || null);
const requestedMode = explicitModeFromUser || normalizeConfigExecutionMode(orchestrationContext.config.execution_mode) || 'auto';
const probeEnabled = orchestrationContext.config.capability_probe;
const supports = { subagent: false, agentTeam: false };
if (probeEnabled) {
supports.subagent = runtime.canLaunchSubagents?.() === true;
supports.agentTeam = runtime.canLaunchAgentTeams?.() === true;
}
let resolvedMode = requestedMode;
if (requestedMode === 'auto') {
if (supports.agentTeam) resolvedMode = 'agent-team';
else if (supports.subagent) resolvedMode = 'subagent';
else resolvedMode = 'sequential';
} else if (probeEnabled && requestedMode === 'agent-team' && !supports.agentTeam) {
resolvedMode = supports.subagent ? 'subagent' : 'sequential';
} else if (probeEnabled && requestedMode === 'subagent' && !supports.subagent) {
resolvedMode = 'sequential';
}
```
Resolution precedence:
1. Explicit user request in this run (`agent team` => `agent-team`; `subagent` => `subagent`; `sequential`; `auto`)
2. `tea_execution_mode` from config
3. Runtime capability fallback (when probing enabled)
## 1. Select Output Template(s)
### System-Level Mode (Phase 3)
Generate **two** documents:
- `{test_artifacts}/test-design-architecture.md` using `test-design-architecture-template.md`
- `{test_artifacts}/test-design-qa.md` using `test-design-qa-template.md`
If `resolvedMode` is `agent-team` or `subagent`, these two documents can be generated in parallel as independent workers, then reconciled for consistency.
### Epic-Level Mode (Phase 4)
Generate **one** document:
- `{outputFile}` using `test-design-template.md`
- If `epic_num` is unclear, ask the user
Epic-level mode remains single-worker by default (one output artifact).
---
## 2. Populate Templates
Ensure the outputs include:
- Risk assessment matrix
- Coverage matrix and priorities
- Execution strategy
- Resource estimates (ranges)
- Quality gate criteria
- Any mode-specific sections required by the template
---
## 3. Validation
Validate the output(s) against:
- `checklist.md` in this workflow folder
- [ ] CLI sessions cleaned up (no orphaned browsers)
- [ ] Temp artifacts stored in `{test_artifacts}/`, not in arbitrary locations
If any checklist criteria are missing, fix before completion.
---
## 4. Generate BMAD Handoff Document (System-Level Mode Only)
**If this is a system-level test design** (not component/feature level):
1. Copy `test-design-handoff-template.md` to `{test_artifacts}/test-design/{project_name}-handoff.md`
2. Populate all sections from the test design output:
- Fill TEA Artifacts Inventory with actual paths
- Extract P0/P1 risks into Epic-Level guidance
- Map critical test scenarios to Story-Level guidance
- Build risk-to-story mapping table from risk register
3. Save alongside the test design document
> **Note**: The handoff document is designed for consumption by BMAD's `create-epics-and-stories` workflow. It is only generated for system-level test designs where epic/story decomposition is relevant.
---
## 5. Polish Output
Before finalizing, review the complete output document for quality:
1. **Remove duplication**: The progressive-append workflow may have created repeated sections; consolidate them
2. **Verify consistency**: Ensure terminology, risk scores, and references are consistent throughout
3. **Check completeness**: All template sections should be populated or explicitly marked N/A
4. **Format cleanup**: Ensure markdown formatting is clean (tables aligned, headers consistent, no orphaned references)
---
## 6. Completion Report
Summarize:
- Mode used
- Output file paths
- Key risks and gate thresholds
- Any open assumptions
---
### 7. Save Progress
**Save this step's accumulated work to `{progressFile}`.**
- **If `{progressFile}` does not exist** (first save), create it with YAML frontmatter:
```yaml
---
stepsCompleted: ['step-05-generate-output']
lastStep: 'step-05-generate-output'
lastSaved: '{date}'
---
```
Then write this step's output below the frontmatter.
- **If `{progressFile}` already exists**, update:
- Add `'step-05-generate-output'` to `stepsCompleted` array (only if not already present)
- Set `lastStep: 'step-05-generate-output'`
- Set `lastSaved: '{date}'`
- Append this step's output to the appropriate section of the document.
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
### ✅ SUCCESS:
- Step completed in full with required outputs
### ❌ SYSTEM FAILURE:
- Skipped sequence steps or missing outputs
**Master Rule:** Skipping steps is FORBIDDEN.

View File

@@ -0,0 +1,65 @@
---
name: 'step-01-assess'
description: 'Load an existing output for editing'
nextStepFile: './step-02-apply-edit.md'
---
# Step 1: Assess Edit Target
## STEP GOAL:
Identify which output should be edited and load it.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 📖 Read the complete step file before taking any action
- ✅ Speak in `{communication_language}`
### Role Reinforcement:
- ✅ You are the Master Test Architect
### Step-Specific Rules:
- 🎯 Ask the user which output file to edit
- 🚫 Do not edit until target is confirmed
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
## CONTEXT BOUNDARIES:
- Available context: existing outputs
- Focus: select edit target
- Limits: no edits yet
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly.
### 1. Identify Target
Ask the user to provide the output file path or select from known outputs.
### 2. Load Target
Read the provided output file in full.
### 3. Confirm
Confirm the target and proceed to edit.
Load next step: `{nextStepFile}`
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
### ✅ SUCCESS:
- Target identified and loaded
### ❌ SYSTEM FAILURE:
- Proceeding without a confirmed target

View File

@@ -0,0 +1,60 @@
---
name: 'step-02-apply-edit'
description: 'Apply edits to the selected output'
---
# Step 2: Apply Edits
## STEP GOAL:
Apply the requested edits to the selected output and confirm changes.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 📖 Read the complete step file before taking any action
- ✅ Speak in `{communication_language}`
### Role Reinforcement:
- ✅ You are the Master Test Architect
### Step-Specific Rules:
- 🎯 Only apply edits explicitly requested by the user
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
## CONTEXT BOUNDARIES:
- Available context: selected output and user changes
- Focus: apply edits only
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly.
### 1. Confirm Requested Changes
Restate what will be changed and confirm.
### 2. Apply Changes
Update the output file accordingly.
### 3. Report
Summarize the edits applied.
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
### ✅ SUCCESS:
- Changes applied and confirmed
### ❌ SYSTEM FAILURE:
- Unconfirmed edits or missing update

View File

@@ -0,0 +1,67 @@
---
name: 'step-01-validate'
description: 'Validate workflow outputs against checklist'
outputFile: '{test_artifacts}/test-design-validation-report.md'
validationChecklist: '../checklist.md'
---
# Step 1: Validate Outputs
## STEP GOAL:
Validate outputs using the workflow checklist and record findings.
## MANDATORY EXECUTION RULES (READ FIRST):
### Universal Rules:
- 📖 Read the complete step file before taking any action
- ✅ Speak in `{communication_language}`
### Role Reinforcement:
- ✅ You are the Master Test Architect
### Step-Specific Rules:
- 🎯 Validate against `{validationChecklist}`
- 🚫 Do not skip checks
## EXECUTION PROTOCOLS:
- 🎯 Follow the MANDATORY SEQUENCE exactly
- 💾 Write findings to `{outputFile}`
## CONTEXT BOUNDARIES:
- Available context: workflow outputs and checklist
- Focus: validation only
- Limits: do not modify outputs in this step
## MANDATORY SEQUENCE
**CRITICAL:** Follow this sequence exactly.
### 1. Load Checklist
Read `{validationChecklist}` and list all criteria.
### 2. Validate Outputs
Evaluate outputs against each checklist item.
### 3. Write Report
Write a validation report to `{outputFile}` with PASS/WARN/FAIL per section.
## 🚨 SYSTEM SUCCESS/FAILURE METRICS:
### ✅ SUCCESS:
- Validation report written
- All checklist items evaluated
### ❌ SYSTEM FAILURE:
- Skipped checklist items
- No report produced

View File

@@ -0,0 +1,230 @@
---
stepsCompleted: []
lastStep: ''
lastSaved: ''
workflowType: 'testarch-test-design'
inputDocuments: []
---
# Test Design for Architecture: {Feature Name}
**Purpose:** Architectural concerns, testability gaps, and NFR requirements for review by Architecture/Dev teams. Serves as a contract between QA and Engineering on what must be addressed before test development begins.
**Date:** {date}
**Author:** {author}
**Status:** Architecture Review Pending
**Project:** {project_name}
**PRD Reference:** {prd_link}
**ADR Reference:** {adr_link}
---
## Executive Summary
**Scope:** {Brief description of feature scope}
**Business Context** (from PRD):
- **Revenue/Impact:** {Business metrics if applicable}
- **Problem:** {Problem being solved}
- **GA Launch:** {Target date or timeline}
**Architecture** (from ADR {adr_number}):
- **Key Decision 1:** {e.g., OAuth 2.1 authentication}
- **Key Decision 2:** {e.g., Centralized MCP Server pattern}
- **Key Decision 3:** {e.g., Stack: TypeScript, SDK v1.x}
**Expected Scale** (from ADR):
- {RPS, volume, users, etc.}
**Risk Summary:**
- **Total risks**: {N}
- **High-priority (≥6)**: {N} risks requiring immediate mitigation
- **Test effort**: ~{N} tests (~{X} weeks for 1 QA, ~{Y} weeks for 2 QAs)
---
## Quick Guide
### 🚨 BLOCKERS - Team Must Decide (Can't Proceed Without)
**Pre-Implementation Critical Path** - These MUST be completed before QA can write integration tests:
1. **{Blocker ID}: {Blocker Title}** - {What architecture must provide} (recommended owner: {Team/Role})
2. **{Blocker ID}: {Blocker Title}** - {What architecture must provide} (recommended owner: {Team/Role})
3. **{Blocker ID}: {Blocker Title}** - {What architecture must provide} (recommended owner: {Team/Role})
**What we need from team:** Complete these {N} items pre-implementation or test development is blocked.
---
### ⚠️ HIGH PRIORITY - Team Should Validate (We Provide Recommendation, You Approve)
1. **{Risk ID}: {Title}** - {Recommendation + who should approve} (implementation phase)
2. **{Risk ID}: {Title}** - {Recommendation + who should approve} (implementation phase)
3. **{Risk ID}: {Title}** - {Recommendation + who should approve} (implementation phase)
**What we need from team:** Review recommendations and approve (or suggest changes).
---
### 📋 INFO ONLY - Solutions Provided (Review, No Decisions Needed)
1. **Test strategy**: {Test level split} ({Rationale})
2. **Tooling**: {Test frameworks and utilities}
3. **Tiered CI/CD**: {Execution tiers with timing}
4. **Coverage**: ~{N} test scenarios prioritized P0-P3 with risk-based classification
5. **Quality gates**: {Pass criteria}
**What we need from team:** Just review and acknowledge (we already have the solution).
---
## For Architects and Devs - Open Topics 👷
### Risk Assessment
**Total risks identified**: {N} ({X} high-priority score ≥6, {Y} medium, {Z} low)
#### High-Priority Risks (Score ≥6) - IMMEDIATE ATTENTION
| Risk ID | Category | Description | Probability | Impact | Score | Mitigation | Owner | Timeline |
| ---------- | --------- | ------------- | ----------- | ------ | ----------- | --------------------- | ------- | -------- |
| **{R-ID}** | **{CAT}** | {Description} | {1-3} | {1-3} | **{Score}** | {Mitigation strategy} | {Owner} | {Date} |
#### Medium-Priority Risks (Score 3-5)
| Risk ID | Category | Description | Probability | Impact | Score | Mitigation | Owner |
| ------- | -------- | ------------- | ----------- | ------ | ------- | ------------ | ------- |
| {R-ID} | {CAT} | {Description} | {1-3} | {1-3} | {Score} | {Mitigation} | {Owner} |
#### Low-Priority Risks (Score 1-2)
| Risk ID | Category | Description | Probability | Impact | Score | Action |
| ------- | -------- | ------------- | ----------- | ------ | ------- | ------- |
| {R-ID} | {CAT} | {Description} | {1-3} | {1-3} | {Score} | Monitor |
#### Risk Category Legend
- **TECH**: Technical/Architecture (flaws, integration, scalability)
- **SEC**: Security (access controls, auth, data exposure)
- **PERF**: Performance (SLA violations, degradation, resource limits)
- **DATA**: Data Integrity (loss, corruption, inconsistency)
- **BUS**: Business Impact (UX harm, logic errors, revenue)
- **OPS**: Operations (deployment, config, monitoring)
---
### Testability Concerns and Architectural Gaps
**🚨 ACTIONABLE CONCERNS - Architecture Team Must Address**
{If the system has critical testability concerns, list them here. If the architecture supports testing well, state "No critical testability concerns identified" and skip to the Testability Assessment Summary}
#### 1. Blockers to Fast Feedback (WHAT WE NEED FROM ARCHITECTURE)
| Concern | Impact | What Architecture Must Provide | Owner | Timeline |
| ------------------ | ------------------- | -------------------------------------- | ------ | ----------- |
| **{Concern name}** | {Impact on testing} | {Specific architectural change needed} | {Team} | {Milestone} |
**Example:**
- **No API for test data seeding** → Cannot parallelize tests → Provide POST /test/seed endpoint (Backend, pre-implementation)
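A minimal sketch of what such an endpoint unblocks, assuming the `apiRequest` fixture used elsewhere in this suite and a hypothetical payload shape:
```typescript
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { expect } from '@playwright/test';

// Sketch: with a seeding endpoint, each parallel worker provisions its own
// isolated data instead of competing for a shared, pre-seeded database.
test('worker-scoped seeding', async ({ apiRequest }) => {
  const { status } = await apiRequest({
    method: 'POST',
    path: '/test/seed',                     // endpoint shape from the example above
    body: { entity: 'resource', count: 5 }, // hypothetical payload
  });
  expect(status).toBe(201);
});
```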
#### 2. Architectural Improvements Needed (WHAT SHOULD BE CHANGED)
{List specific improvements that would make the system more testable}
1. **{Improvement name}**
- **Current problem**: {What's wrong}
- **Required change**: {What architecture must do}
- **Impact if not fixed**: {Consequences}
- **Owner**: {Team}
- **Timeline**: {Milestone}
---
### Testability Assessment Summary
**📊 CURRENT STATE - FYI**
{Only include this section if there are passing items worth mentioning. Otherwise omit.}
#### What Works Well
- ✅ {Passing item 1} (e.g., "API-first design supports parallel test execution")
- ✅ {Passing item 2} (e.g., "Feature flags enable test isolation")
- ✅ {Passing item 3}
#### Accepted Trade-offs (No Action Required)
For {Feature} Phase 1, the following trade-offs are acceptable:
- **{Trade-off 1}** - {Why acceptable for now}
- **{Trade-off 2}** - {Why acceptable for now}
{State whether this is technical debt to revisit post-GA, or an accepted Phase 1 state to maintain as-is}
---
### Risk Mitigation Plans (High-Priority Risks ≥6)
**Purpose**: Detailed mitigation strategies for all {N} high-priority risks (score ≥6). These risks MUST be addressed before {GA launch date or milestone}.
#### {R-ID}: {Risk Description} (Score: {Score}) - {CRITICALITY LEVEL}
**Mitigation Strategy:**
1. {Step 1}
2. {Step 2}
3. {Step 3}
**Owner:** {Owner}
**Timeline:** {Milestone or date}
**Status:** Planned / In Progress / Complete
**Verification:** {How to verify mitigation is effective}
---
{Repeat for all high-priority risks}
---
### Assumptions and Dependencies
#### Assumptions
1. {Assumption about architecture or requirements}
2. {Assumption about team or timeline}
3. {Assumption about scope or constraints}
#### Dependencies
1. {Dependency} - Required by {date/milestone}
2. {Dependency} - Required by {date/milestone}
#### Risks to Plan
- **Risk**: {Risk to the test plan itself}
- **Impact**: {How it affects testing}
- **Contingency**: {Backup plan}
---
**End of Architecture Document**
**Next Steps for Architecture Team:**
1. Review Quick Guide (🚨/⚠️/📋) and prioritize blockers
2. Assign owners and timelines for high-priority risks (≥6)
3. Validate assumptions and dependencies
4. Provide feedback to QA on testability gaps
**Next Steps for QA Team:**
1. Wait for pre-implementation blockers to be resolved
2. Refer to companion QA doc (test-design-qa.md) for test scenarios
3. Begin test infrastructure setup (factories, fixtures, environments)

View File

@@ -0,0 +1,230 @@
---
stepsCompleted: []
lastStep: ''
lastSaved: ''
workflowType: 'testarch-test-design'
inputDocuments: []
---
# Test Design for Architecture: {Feature Name}
**Purpose:** Architectural concerns, testability gaps, and NFR requirements for review by Architecture/Dev teams. Serves as a contract between QA and Engineering on what must be addressed before test development begins.
**Date:** {date}
**Author:** {author}
**Status:** Architecture Review Pending
**Project:** {project_name}
**PRD Reference:** {prd_link}
**ADR Reference:** {adr_link}
---
## Executive Summary
**Scope:** {Brief description of feature scope}
**Business Context** (from PRD):
- **Revenue/Impact:** {Business metrics if applicable}
- **Problem:** {Problem being solved}
- **GA Launch:** {Target date or timeline}
**Architecture** (from ADR {adr_number}):
- **Key Decision 1:** {e.g., OAuth 2.1 authentication}
- **Key Decision 2:** {e.g., Centralized MCP Server pattern}
- **Key Decision 3:** {e.g., Stack: TypeScript, SDK v1.x}
**Expected Scale** (from ADR):
- {RPS, volume, users, etc.}
**Risk Summary:**
- **Total risks**: {N}
- **High-priority (≥6)**: {N} risks requiring immediate mitigation
- **Test effort**: ~{N} tests (~{X} weeks for 1 QA, ~{Y} weeks for 2 QAs)
---
## Quick Guide
### 🚨 BLOCKERS - Team Must Decide (Can't Proceed Without)
**Pre-Implementation Critical Path** - These MUST be completed before QA can write integration tests:
1. **{Blocker ID}: {Blocker Title}** - {What architecture must provide} (recommended owner: {Team/Role})
2. **{Blocker ID}: {Blocker Title}** - {What architecture must provide} (recommended owner: {Team/Role})
3. **{Blocker ID}: {Blocker Title}** - {What architecture must provide} (recommended owner: {Team/Role})
**What we need from team:** Complete these {N} items pre-implementation or test development is blocked.
---
### ⚠️ HIGH PRIORITY - Team Should Validate (We Provide Recommendation, You Approve)
1. **{Risk ID}: {Title}** - {Recommendation + who should approve} (implementation phase)
2. **{Risk ID}: {Title}** - {Recommendation + who should approve} (implementation phase)
3. **{Risk ID}: {Title}** - {Recommendation + who should approve} (implementation phase)
**What we need from team:** Review recommendations and approve (or suggest changes).
---
### 📋 INFO ONLY - Solutions Provided (Review, No Decisions Needed)
1. **Test strategy**: {Test level split} ({Rationale})
2. **Tooling**: {Test frameworks and utilities}
3. **Tiered CI/CD**: {Execution tiers with timing}
4. **Coverage**: ~{N} test scenarios prioritized P0-P3 with risk-based classification
5. **Quality gates**: {Pass criteria}
**What we need from team:** Just review and acknowledge (we already have the solution).
---
## For Architects and Devs - Open Topics 👷
### Risk Assessment
**Total risks identified**: {N} ({X} high-priority score ≥6, {Y} medium, {Z} low)
#### High-Priority Risks (Score ≥6) - IMMEDIATE ATTENTION
| Risk ID | Category | Description | Probability | Impact | Score | Mitigation | Owner | Timeline |
| ---------- | --------- | ------------- | ----------- | ------ | ----------- | --------------------- | ------- | -------- |
| **{R-ID}** | **{CAT}** | {Description} | {1-3} | {1-3} | **{Score}** | {Mitigation strategy} | {Owner} | {Date} |
#### Medium-Priority Risks (Score 3-4)
| Risk ID | Category | Description | Probability | Impact | Score | Mitigation | Owner |
| ------- | -------- | ------------- | ----------- | ------ | ------- | ------------ | ------- |
| {R-ID} | {CAT} | {Description} | {1-3} | {1-3} | {Score} | {Mitigation} | {Owner} |
#### Low-Priority Risks (Score 1-2)
| Risk ID | Category | Description | Probability | Impact | Score | Action |
| ------- | -------- | ------------- | ----------- | ------ | ------- | ------- |
| {R-ID} | {CAT} | {Description} | {1-3} | {1-3} | {Score} | Monitor |
#### Risk Category Legend
- **TECH**: Technical/Architecture (flaws, integration, scalability)
- **SEC**: Security (access controls, auth, data exposure)
- **PERF**: Performance (SLA violations, degradation, resource limits)
- **DATA**: Data Integrity (loss, corruption, inconsistency)
- **BUS**: Business Impact (UX harm, logic errors, revenue)
- **OPS**: Operations (deployment, config, monitoring)
---
### Testability Concerns and Architectural Gaps
**🚨 ACTIONABLE CONCERNS - Architecture Team Must Address**
{If the system has critical testability concerns, list them here. If the architecture supports testing well, state "No critical testability concerns identified" and skip to the Testability Assessment Summary}
#### 1. Blockers to Fast Feedback (WHAT WE NEED FROM ARCHITECTURE)
| Concern | Impact | What Architecture Must Provide | Owner | Timeline |
| ------------------ | ------------------- | -------------------------------------- | ------ | ----------- |
| **{Concern name}** | {Impact on testing} | {Specific architectural change needed} | {Team} | {Milestone} |
**Example:**
- **No API for test data seeding** → Cannot parallelize tests → Provide POST /test/seed endpoint (Backend, pre-implementation)
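A minimal sketch of what such an endpoint unblocks, assuming the `apiRequest` fixture used elsewhere in this suite and a hypothetical payload shape:
```typescript
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { expect } from '@playwright/test';

// Sketch: with a seeding endpoint, each parallel worker provisions its own
// isolated data instead of competing for a shared, pre-seeded database.
test('worker-scoped seeding', async ({ apiRequest }) => {
  const { status } = await apiRequest({
    method: 'POST',
    path: '/test/seed',                     // endpoint shape from the example above
    body: { entity: 'resource', count: 5 }, // hypothetical payload
  });
  expect(status).toBe(201);
});
```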
#### 2. Architectural Improvements Needed (WHAT SHOULD BE CHANGED)
{List specific improvements that would make the system more testable}
1. **{Improvement name}**
- **Current problem**: {What's wrong}
- **Required change**: {What architecture must do}
- **Impact if not fixed**: {Consequences}
- **Owner**: {Team}
- **Timeline**: {Milestone}
---
### Testability Assessment Summary
**📊 CURRENT STATE - FYI**
{Only include this section if there are passing items worth mentioning. Otherwise omit.}
#### What Works Well
- ✅ {Passing item 1} (e.g., "API-first design supports parallel test execution")
- ✅ {Passing item 2} (e.g., "Feature flags enable test isolation")
- ✅ {Passing item 3}
#### Accepted Trade-offs (No Action Required)
For {Feature} Phase 1, the following trade-offs are acceptable:
- **{Trade-off 1}** - {Why acceptable for now}
- **{Trade-off 2}** - {Why acceptable for now}
{State whether this is technical debt to revisit post-GA, or an accepted Phase 1 state to maintain as-is}
---
### Risk Mitigation Plans (High-Priority Risks ≥6)
**Purpose**: Detailed mitigation strategies for all {N} high-priority risks (score ≥6). These risks MUST be addressed before {GA launch date or milestone}.
#### {R-ID}: {Risk Description} (Score: {Score}) - {CRITICALITY LEVEL}
**Mitigation Strategy:**
1. {Step 1}
2. {Step 2}
3. {Step 3}
**Owner:** {Owner}
**Timeline:** {Milestone or date}
**Status:** Planned / In Progress / Complete
**Verification:** {How to verify mitigation is effective}
---
{Repeat for all high-priority risks}
---
### Assumptions and Dependencies
#### Assumptions
1. {Assumption about architecture or requirements}
2. {Assumption about team or timeline}
3. {Assumption about scope or constraints}
#### Dependencies
1. {Dependency} - Required by {date/milestone}
2. {Dependency} - Required by {date/milestone}
#### Risks to Plan
- **Risk**: {Risk to the test plan itself}
- **Impact**: {How it affects testing}
- **Contingency**: {Backup plan}
---
**End of Architecture Document**
**Next Steps for Architecture Team:**
1. Review Quick Guide (🚨/⚠️/📋) and prioritize blockers
2. Assign owners and timelines for high-priority risks (≥6)
3. Validate assumptions and dependencies
4. Provide feedback to QA on testability gaps
**Next Steps for QA Team:**
1. Wait for pre-implementation blockers to be resolved
2. Refer to companion QA doc (test-design-qa.md) for test scenarios
3. Begin test infrastructure setup (factories, fixtures, environments)

View File

@@ -0,0 +1,70 @@
---
title: 'TEA Test Design → BMAD Handoff Document'
version: '1.0'
workflowType: 'testarch-test-design-handoff'
inputDocuments: []
sourceWorkflow: 'testarch-test-design'
generatedBy: 'TEA Master Test Architect'
generatedAt: '{timestamp}'
projectName: '{project_name}'
---
# TEA → BMAD Integration Handoff
## Purpose
This document bridges TEA's test design outputs with BMAD's epic/story decomposition workflow (`create-epics-and-stories`). It provides structured integration guidance so that quality requirements, risk assessments, and test strategies flow into implementation planning.
## TEA Artifacts Inventory
| Artifact | Path | BMAD Integration Point |
| -------------------- | ------------------------- | ---------------------------------------------------- |
| Test Design Document | `{test_design_path}` | Epic quality requirements, story acceptance criteria |
| Risk Assessment | (embedded in test design) | Epic risk classification, story priority |
| Coverage Strategy | (embedded in test design) | Story test requirements |
## Epic-Level Integration Guidance
### Risk References
<!-- TEA will populate: P0/P1 risks that should appear as epic-level quality gates -->
### Quality Gates
<!-- TEA will populate: recommended quality gates per epic based on risk assessment -->
## Story-Level Integration Guidance
### P0/P1 Test Scenarios → Story Acceptance Criteria
<!-- TEA will populate: critical test scenarios that MUST be acceptance criteria -->
### Data-TestId Requirements
<!-- TEA will populate: recommended data-testid attributes for testability -->
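For instance (route and attribute names are illustrative, not mandated):
```typescript
import { test, expect } from '@playwright/test';

// Stable data-testid hooks keep E2E selectors resilient to markup changes.
test('submit flow uses data-testid selectors', async ({ page }) => {
  await page.goto('/feature/new');                   // illustrative route
  await page.getByTestId('feature-submit').click();  // <button data-testid="feature-submit">
  await expect(page.getByTestId('feature-status')).toHaveText('Submitted');
});
```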
## Risk-to-Story Mapping
| Risk ID | Category | P×I | Recommended Story/Epic | Test Level |
| ------- | -------- | --- | ---------------------- | ---------- |
<!-- TEA will populate from risk assessment -->
## Recommended BMAD → TEA Workflow Sequence
1. **TEA Test Design** (`TD`) → produces this handoff document
2. **BMAD Create Epics & Stories** → consumes this handoff, embeds quality requirements
3. **TEA ATDD** (`AT`) → generates acceptance tests per story
4. **BMAD Implementation** → developers implement with test-first guidance
5. **TEA Automate** (`TA`) → generates full test suite
6. **TEA Trace** (`TR`) → validates coverage completeness
## Phase Transition Quality Gates
| From Phase | To Phase | Gate Criteria |
| ------------------- | ------------------- | ------------------------------------------------------ |
| Test Design | Epic/Story Creation | All P0 risks have mitigation strategy |
| Epic/Story Creation | ATDD | Stories have acceptance criteria from test design |
| ATDD | Implementation | Failing acceptance tests exist for all P0/P1 scenarios |
| Implementation | Test Automation | All acceptance tests pass |
| Test Automation | Release | Trace matrix shows ≥80% coverage of P0/P1 requirements |

View File

@@ -0,0 +1,70 @@
---
title: 'TEA Test Design → BMAD Handoff Document'
version: '1.0'
workflowType: 'testarch-test-design-handoff'
inputDocuments: []
sourceWorkflow: 'testarch-test-design'
generatedBy: 'TEA Master Test Architect'
generatedAt: '{timestamp}'
projectName: '{project_name}'
---
# TEA → BMAD Integration Handoff
## Purpose
This document bridges TEA's test design outputs with BMAD's epic/story decomposition workflow (`create-epics-and-stories`). It provides structured integration guidance so that quality requirements, risk assessments, and test strategies flow into implementation planning.
## TEA Artifacts Inventory
| Artifact | Path | BMAD Integration Point |
| -------------------- | ------------------------- | ---------------------------------------------------- |
| Test Design Document | `{test_design_path}` | Epic quality requirements, story acceptance criteria |
| Risk Assessment | (embedded in test design) | Epic risk classification, story priority |
| Coverage Strategy | (embedded in test design) | Story test requirements |
## Epic-Level Integration Guidance
### Risk References
<!-- TEA will populate: P0/P1 risks that should appear as epic-level quality gates -->
### Quality Gates
<!-- TEA will populate: recommended quality gates per epic based on risk assessment -->
## Story-Level Integration Guidance
### P0/P1 Test Scenarios → Story Acceptance Criteria
<!-- TEA will populate: critical test scenarios that MUST be acceptance criteria -->
### Data-TestId Requirements
<!-- TEA will populate: recommended data-testid attributes for testability -->
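For instance (route and attribute names are illustrative, not mandated):
```typescript
import { test, expect } from '@playwright/test';

// Stable data-testid hooks keep E2E selectors resilient to markup changes.
test('submit flow uses data-testid selectors', async ({ page }) => {
  await page.goto('/feature/new');                   // illustrative route
  await page.getByTestId('feature-submit').click();  // <button data-testid="feature-submit">
  await expect(page.getByTestId('feature-status')).toHaveText('Submitted');
});
```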
## Risk-to-Story Mapping
| Risk ID | Category | P×I | Recommended Story/Epic | Test Level |
| ------- | -------- | --- | ---------------------- | ---------- |
<!-- TEA will populate from risk assessment -->
## Recommended BMAD → TEA Workflow Sequence
1. **TEA Test Design** (`TD`) → produces this handoff document
2. **BMAD Create Epics & Stories** → consumes this handoff, embeds quality requirements
3. **TEA ATDD** (`AT`) → generates acceptance tests per story
4. **BMAD Implementation** → developers implement with test-first guidance
5. **TEA Automate** (`TA`) → generates full test suite
6. **TEA Trace** (`TR`) → validates coverage completeness
## Phase Transition Quality Gates
| From Phase | To Phase | Gate Criteria |
| ------------------- | ------------------- | ------------------------------------------------------ |
| Test Design | Epic/Story Creation | All P0 risks have mitigation strategy |
| Epic/Story Creation | ATDD | Stories have acceptance criteria from test design |
| ATDD | Implementation | Failing acceptance tests exist for all P0/P1 scenarios |
| Implementation | Test Automation | All acceptance tests pass |
| Test Automation | Release | Trace matrix shows ≥80% coverage of P0/P1 requirements |

View File

@@ -0,0 +1,396 @@
---
stepsCompleted: []
lastStep: ''
lastSaved: ''
workflowType: 'testarch-test-design'
inputDocuments: []
---
# Test Design for QA: {Feature Name}
**Purpose:** Test execution recipe for QA team. Defines what to test, how to test it, and what QA needs from other teams.
**Date:** {date}
**Author:** {author}
**Status:** Draft
**Project:** {project_name}
**Related:** See Architecture doc (test-design-architecture.md) for testability concerns and architectural blockers.
---
## Executive Summary
**Scope:** {Brief description of testing scope}
**Risk Summary:**
- Total Risks: {N} ({X} high-priority score ≥6, {Y} medium, {Z} low)
- Critical Categories: {Categories with most high-priority risks}
**Coverage Summary:**
- P0 tests: ~{N} (critical paths, security)
- P1 tests: ~{N} (important features, integration)
- P2 tests: ~{N} (edge cases, regression)
- P3 tests: ~{N} (exploratory, benchmarks)
- **Total**: ~{N} tests (~{X}-{Y} weeks with 1 QA)
---
## Not in Scope
**Components or systems explicitly excluded from this test plan:**
| Item | Reasoning | Mitigation |
| ---------- | --------------------------- | ------------------------------------------------------------------------------- |
| **{Item}** | {Why excluded from testing} | {How risk is mitigated, e.g., "validated manually", "covered by upstream team"} |
**Note:** Items listed here have been reviewed and accepted as out-of-scope by QA, Dev, and PM.
---
## Dependencies & Test Blockers
**CRITICAL:** QA cannot proceed without these items from other teams.
### Backend/Architecture Dependencies (Pre-Implementation)
**Source:** See Architecture doc "Quick Guide" for detailed mitigation plans
1. **{Dependency 1}** - {Team} - {Timeline}
- {What QA needs}
- {Why it blocks testing}
2. **{Dependency 2}** - {Team} - {Timeline}
- {What QA needs}
- {Why it blocks testing}
### QA Infrastructure Setup (Pre-Implementation)
1. **Test Data Factories** - QA
- {Entity} factory with faker-based randomization
- Auto-cleanup fixtures for parallel safety
2. **Test Environments** - QA
- Local: {Setup details}
- CI/CD: {Setup details}
- Staging: {Setup details}
**Example factory pattern:**
```typescript
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { expect } from '@playwright/test';
import { faker } from '@faker-js/faker';

// Factory: every call returns fresh, randomized data, so parallel workers
// never collide on ids or emails.
const resourceFactory = (overrides: Record<string, unknown> = {}) => ({
  id: `test-${faker.string.uuid()}`,
  email: faker.internet.email(),
  ...overrides,
});

test('example test @P0', async ({ apiRequest }) => {
  const { status } = await apiRequest({
    method: 'POST',
    path: '/api/resource',
    body: resourceFactory(),
  });
  expect(status).toBe(201);
});
```
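A minimal auto-cleanup sketch, assuming the utils' `test` object supports Playwright's standard `extend` and that a matching DELETE route exists (hypothetical path):
```typescript
import { test as base } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { faker } from '@faker-js/faker';

// Fixture creates a resource before the test and removes it afterwards,
// so parallel runs leave no shared state behind.
export const test = base.extend<{ seededResource: { id: string } }>({
  seededResource: async ({ apiRequest }, use) => {
    const id = `test-${faker.string.uuid()}`;
    await apiRequest({ method: 'POST', path: '/api/resource', body: { id } });
    await use({ id });
    await apiRequest({ method: 'DELETE', path: `/api/resource/${id}` }); // hypothetical route
  },
});
```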
---
## Risk Assessment
**Note:** Full risk details in Architecture doc. This section summarizes risks relevant to QA test planning.
### High-Priority Risks (Score ≥6)
| Risk ID | Category | Description | Score | QA Test Coverage |
| ---------- | -------- | ------------------- | ----------- | ---------------------------- |
| **{R-ID}** | {CAT} | {Brief description} | **{Score}** | {How QA validates this risk} |
### Medium/Low-Priority Risks
| Risk ID | Category | Description | Score | QA Test Coverage |
| ------- | -------- | ------------------- | ------- | ---------------------------- |
| {R-ID} | {CAT} | {Brief description} | {Score} | {How QA validates this risk} |
---
## Entry Criteria
**QA testing cannot begin until ALL of the following are met:**
- [ ] All requirements and assumptions agreed upon by QA, Dev, PM
- [ ] Test environments provisioned and accessible
- [ ] Test data factories ready or seed data available
- [ ] Pre-implementation blockers resolved (see Dependencies section)
- [ ] Feature deployed to test environment
- [ ] {Additional project-specific entry criteria}
## Exit Criteria
**Testing phase is complete when ALL of the following are met:**
- [ ] All P0 tests passing
- [ ] All P1 tests passing (or failures triaged and accepted)
- [ ] No open high-priority / high-severity bugs
- [ ] Test coverage agreed as sufficient by QA Lead and Dev Lead
- [ ] Performance baselines met (if applicable)
- [ ] {Additional project-specific exit criteria}
---
## Project Team (Optional)
**Include only if roles/names are known or responsibility mapping is needed; otherwise omit.**
| Name | Role | Testing Responsibilities |
| ------ | --------- | ------------------------------------------------------------- |
| {Name} | QA Lead | Test strategy, E2E/API test implementation, test review |
| {Name} | Dev Lead | Unit tests, integration test support, testability hooks |
| {Name} | PM | Requirements clarification, acceptance criteria, UAT sign-off |
| {Name} | Architect | Testability review, NFR guidance, environment provisioning |
---
## Test Coverage Plan
**IMPORTANT:** P0/P1/P2/P3 = **priority and risk level** (what to focus on if time-constrained), NOT execution timing. See "Execution Strategy" for when tests run.
### P0 (Critical)
**Criteria:** Blocks core functionality + High risk (≥6) + No workaround + Affects majority of users
| Test ID | Requirement | Test Level | Risk Link | Notes |
| ---------- | ------------- | ---------- | --------- | ------- |
| **P0-001** | {Requirement} | {Level} | {R-ID} | {Notes} |
| **P0-002** | {Requirement} | {Level} | {R-ID} | {Notes} |
**Total P0:** ~{N} tests
---
### P1 (High)
**Criteria:** Important features + Medium risk (3-4) + Common workflows + Workaround exists but difficult
| Test ID | Requirement | Test Level | Risk Link | Notes |
| ---------- | ------------- | ---------- | --------- | ------- |
| **P1-001** | {Requirement} | {Level} | {R-ID} | {Notes} |
| **P1-002** | {Requirement} | {Level} | {R-ID} | {Notes} |
**Total P1:** ~{N} tests
---
### P2 (Medium)
**Criteria:** Secondary features + Low risk (1-2) + Edge cases + Regression prevention
| Test ID | Requirement | Test Level | Risk Link | Notes |
| ---------- | ------------- | ---------- | --------- | ------- |
| **P2-001** | {Requirement} | {Level} | {R-ID} | {Notes} |
**Total P2:** ~{N} tests
---
### P3 (Low)
**Criteria:** Nice-to-have + Exploratory + Performance benchmarks + Documentation validation
| Test ID | Requirement | Test Level | Notes |
| ---------- | ------------- | ---------- | ------- |
| **P3-001** | {Requirement} | {Level} | {Notes} |
**Total P3:** ~{N} tests
---
## Execution Strategy
**Philosophy:** Run everything in PRs unless there's significant infrastructure overhead. Playwright with parallelization is extremely fast (100s of tests in ~10-15 min).
**Organized by TOOL TYPE:**
### Every PR: Playwright Tests (~10-15 min)
**All functional tests** (from any priority level):
- All E2E, API, integration, unit tests using Playwright
- Parallelized across {N} shards
- Total: ~{N} Playwright tests (includes P0, P1, P2, P3)
**Why run in PRs:** Fast feedback, no expensive infrastructure
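A minimal `playwright.config.ts` sketch for this tier (worker count and the `@manual` exclusion tag are assumptions, not project settings):
```typescript
import { defineConfig } from '@playwright/test';

export default defineConfig({
  fullyParallel: true,                      // parallelize test files within each shard
  workers: process.env.CI ? 4 : undefined,  // assumed CI worker count
  timeout: 90_000,                          // keep single tests under ~1.5 min (see test-quality.md)
  grepInvert: /@manual/,                    // assumed tag for checks excluded from automation
});
```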
### Nightly: k6 Performance Tests (~30-60 min)
**All performance tests** (from any priority level):
- Load, stress, spike, endurance tests
- Total: ~{N} k6 tests (may include P0, P1, P2)
**Why defer to nightly:** Expensive infrastructure (k6 Cloud), long-running (10-40 min per test)
### Weekly: Chaos & Long-Running (~hours)
**Special infrastructure tests** (from any priority level):
- Multi-region failover (requires AWS Fault Injection Simulator)
- Disaster recovery (backup restore, 4+ hours)
- Endurance tests (4+ hours runtime)
**Why defer to weekly:** Very expensive infrastructure, very long runtimes, and infrequent validation is sufficient
**Manual tests** (excluded from automation):
- DevOps validation (deployment, monitoring)
- Finance validation (cost alerts)
- Documentation validation
---
## QA Effort Estimate
**QA test development effort only** (excludes DevOps, Backend, Data Eng, Finance work):
| Priority | Count | Effort Range | Notes |
| --------- | ----- | ------------------ | ------------------------------------------------- |
| P0 | ~{N} | ~{X}-{Y} weeks | Complex setup (security, performance, multi-step) |
| P1 | ~{N} | ~{X}-{Y} weeks | Standard coverage (integration, API tests) |
| P2 | ~{N} | ~{X}-{Y} days | Edge cases, simple validation |
| P3 | ~{N} | ~{X}-{Y} days | Exploratory, benchmarks |
| **Total** | ~{N} | **~{X}-{Y} weeks** | **1 QA engineer, full-time** |
**Assumptions:**
- Includes test design, implementation, debugging, CI integration
- Excludes ongoing maintenance (~10% effort)
- Assumes test infrastructure (factories, fixtures) ready
**Dependencies from other teams:**
- See "Dependencies & Test Blockers" section for what QA needs from Backend, DevOps, Data Eng
---
## Implementation Planning Handoff (Optional)
**Include only if this test design produces implementation tasks that must be scheduled.**
**Use this to inform implementation planning; if no dedicated QA, assign to Dev owners.**
| Work Item | Owner | Target Milestone (Optional) | Dependencies/Notes |
| ----------- | ------------ | --------------------------- | ------------------ |
| {Work item} | {QA/Dev/etc} | {Milestone or date} | {Notes} |
| {Work item} | {QA/Dev/etc} | {Milestone or date} | {Notes} |
---
## Tooling & Access
**Include only if non-standard tools or access requests are required.**
| Tool or Service | Purpose | Access Required | Status |
| ----------------- | --------- | --------------- | ----------------- |
| {Tool or Service} | {Purpose} | {Access needed} | {Ready / Pending} |
| {Tool or Service} | {Purpose} | {Access needed} | {Ready / Pending} |
**Access requests needed (if any):**
- [ ] {Access to request}
---
## Interworking & Regression
**Services and components impacted by this feature:**
| Service/Component | Impact | Regression Scope | Validation Steps |
| ----------------- | ------------------- | ------------------------------- | ----------------------------- |
| **{Service}** | {How it's affected} | {What existing tests must pass} | {How to verify no regression} |
**Regression test strategy:**
- {Describe which existing test suites must pass before release}
- {Note any cross-team coordination needed for regression validation}
---
## Appendix A: Code Examples & Tagging
**Playwright Tags for Selective Execution:**
```typescript
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { expect } from '@playwright/test';
// P0 critical test
test('@P0 @API @Security unauthenticated request returns 401', async ({ apiRequest }) => {
const { status, body } = await apiRequest({
method: 'POST',
path: '/api/endpoint',
body: { data: 'test' },
skipAuth: true,
});
expect(status).toBe(401);
expect(body.error).toContain('unauthorized');
});
// P1 integration test
test('@P1 @Integration data syncs correctly', async ({ apiRequest }) => {
// Seed data
await apiRequest({
method: 'POST',
path: '/api/seed',
body: {
/* test data */
},
});
// Validate
const { status, body } = await apiRequest({
method: 'GET',
path: '/api/resource',
});
expect(status).toBe(200);
expect(body).toHaveProperty('data');
});
```
**Run specific tags:**
```bash
# Run only P0 tests
npx playwright test --grep @P0
# Run P0 + P1 tests
npx playwright test --grep "@P0|@P1"
# Run only security tests
npx playwright test --grep @Security
# Run all Playwright tests in PR (default)
npx playwright test
```
---
## Appendix B: Knowledge Base References
- **Risk Governance**: `risk-governance.md` - Risk scoring methodology
- **Test Priorities Matrix**: `test-priorities-matrix.md` - P0-P3 criteria
- **Test Levels Framework**: `test-levels-framework.md` - E2E vs API vs Unit selection
- **Test Quality**: `test-quality.md` - Definition of Done (no hard waits, <300 lines, <1.5 min)
---
**Generated by:** BMad TEA Agent
**Workflow:** `_bmad/tea/testarch/test-design`
**Version:** 4.0 (BMad v6)

View File

@@ -0,0 +1,396 @@
---
stepsCompleted: []
lastStep: ''
lastSaved: ''
workflowType: 'testarch-test-design'
inputDocuments: []
---
# Test Design for QA: {Feature Name}
**Purpose:** Test execution recipe for QA team. Defines what to test, how to test it, and what QA needs from other teams.
**Date:** {date}
**Author:** {author}
**Status:** Draft
**Project:** {project_name}
**Related:** See Architecture doc (test-design-architecture.md) for testability concerns and architectural blockers.
---
## Executive Summary
**Scope:** {Brief description of testing scope}
**Risk Summary:**
- Total Risks: {N} ({X} high-priority score ≥6, {Y} medium, {Z} low)
- Critical Categories: {Categories with most high-priority risks}
**Coverage Summary:**
- P0 tests: ~{N} (critical paths, security)
- P1 tests: ~{N} (important features, integration)
- P2 tests: ~{N} (edge cases, regression)
- P3 tests: ~{N} (exploratory, benchmarks)
- **Total**: ~{N} tests (~{X}-{Y} weeks with 1 QA)
---
## Not in Scope
**Components or systems explicitly excluded from this test plan:**
| Item | Reasoning | Mitigation |
| ---------- | --------------------------- | ------------------------------------------------------------------------------- |
| **{Item}** | {Why excluded from testing} | {How risk is mitigated, e.g., "validated manually", "covered by upstream team"} |
**Note:** Items listed here have been reviewed and accepted as out-of-scope by QA, Dev, and PM.
---
## Dependencies & Test Blockers
**CRITICAL:** QA cannot proceed without these items from other teams.
### Backend/Architecture Dependencies (Pre-Implementation)
**Source:** See Architecture doc "Quick Guide" for detailed mitigation plans
1. **{Dependency 1}** - {Team} - {Timeline}
- {What QA needs}
- {Why it blocks testing}
2. **{Dependency 2}** - {Team} - {Timeline}
- {What QA needs}
- {Why it blocks testing}
### QA Infrastructure Setup (Pre-Implementation)
1. **Test Data Factories** - QA
- {Entity} factory with faker-based randomization
- Auto-cleanup fixtures for parallel safety
2. **Test Environments** - QA
- Local: {Setup details}
- CI/CD: {Setup details}
- Staging: {Setup details}
**Example factory pattern:**
```typescript
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { expect } from '@playwright/test';
import { faker } from '@faker-js/faker';

// Factory: every call returns fresh, randomized data, so parallel workers
// never collide on ids or emails.
const resourceFactory = (overrides: Record<string, unknown> = {}) => ({
  id: `test-${faker.string.uuid()}`,
  email: faker.internet.email(),
  ...overrides,
});

test('example test @P0', async ({ apiRequest }) => {
  const { status } = await apiRequest({
    method: 'POST',
    path: '/api/resource',
    body: resourceFactory(),
  });
  expect(status).toBe(201);
});
```
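A minimal auto-cleanup sketch, assuming the utils' `test` object supports Playwright's standard `extend` and that a matching DELETE route exists (hypothetical path):
```typescript
import { test as base } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { faker } from '@faker-js/faker';

// Fixture creates a resource before the test and removes it afterwards,
// so parallel runs leave no shared state behind.
export const test = base.extend<{ seededResource: { id: string } }>({
  seededResource: async ({ apiRequest }, use) => {
    const id = `test-${faker.string.uuid()}`;
    await apiRequest({ method: 'POST', path: '/api/resource', body: { id } });
    await use({ id });
    await apiRequest({ method: 'DELETE', path: `/api/resource/${id}` }); // hypothetical route
  },
});
```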
---
## Risk Assessment
**Note:** Full risk details in Architecture doc. This section summarizes risks relevant to QA test planning.
### High-Priority Risks (Score ≥6)
| Risk ID | Category | Description | Score | QA Test Coverage |
| ---------- | -------- | ------------------- | ----------- | ---------------------------- |
| **{R-ID}** | {CAT} | {Brief description} | **{Score}** | {How QA validates this risk} |
### Medium/Low-Priority Risks
| Risk ID | Category | Description | Score | QA Test Coverage |
| ------- | -------- | ------------------- | ------- | ---------------------------- |
| {R-ID} | {CAT} | {Brief description} | {Score} | {How QA validates this risk} |
---
## Entry Criteria
**QA testing cannot begin until ALL of the following are met:**
- [ ] All requirements and assumptions agreed upon by QA, Dev, PM
- [ ] Test environments provisioned and accessible
- [ ] Test data factories ready or seed data available
- [ ] Pre-implementation blockers resolved (see Dependencies section)
- [ ] Feature deployed to test environment
- [ ] {Additional project-specific entry criteria}
## Exit Criteria
**Testing phase is complete when ALL of the following are met:**
- [ ] All P0 tests passing
- [ ] All P1 tests passing (or failures triaged and accepted)
- [ ] No open high-priority / high-severity bugs
- [ ] Test coverage agreed as sufficient by QA Lead and Dev Lead
- [ ] Performance baselines met (if applicable)
- [ ] {Additional project-specific exit criteria}
---
## Project Team (Optional)
**Include only if roles/names are known or responsibility mapping is needed; otherwise omit.**
| Name | Role | Testing Responsibilities |
| ------ | --------- | ------------------------------------------------------------- |
| {Name} | QA Lead | Test strategy, E2E/API test implementation, test review |
| {Name} | Dev Lead | Unit tests, integration test support, testability hooks |
| {Name} | PM | Requirements clarification, acceptance criteria, UAT sign-off |
| {Name} | Architect | Testability review, NFR guidance, environment provisioning |
---
## Test Coverage Plan
**IMPORTANT:** P0/P1/P2/P3 = **priority and risk level** (what to focus on if time-constrained), NOT execution timing. See "Execution Strategy" for when tests run.
### P0 (Critical)
**Criteria:** Blocks core functionality + High risk (≥6) + No workaround + Affects majority of users
| Test ID | Requirement | Test Level | Risk Link | Notes |
| ---------- | ------------- | ---------- | --------- | ------- |
| **P0-001** | {Requirement} | {Level} | {R-ID} | {Notes} |
| **P0-002** | {Requirement} | {Level} | {R-ID} | {Notes} |
**Total P0:** ~{N} tests
---
### P1 (High)
**Criteria:** Important features + Medium risk (3-4) + Common workflows + Workaround exists but difficult
| Test ID | Requirement | Test Level | Risk Link | Notes |
| ---------- | ------------- | ---------- | --------- | ------- |
| **P1-001** | {Requirement} | {Level} | {R-ID} | {Notes} |
| **P1-002** | {Requirement} | {Level} | {R-ID} | {Notes} |
**Total P1:** ~{N} tests
---
### P2 (Medium)
**Criteria:** Secondary features + Low risk (1-2) + Edge cases + Regression prevention
| Test ID | Requirement | Test Level | Risk Link | Notes |
| ---------- | ------------- | ---------- | --------- | ------- |
| **P2-001** | {Requirement} | {Level} | {R-ID} | {Notes} |
**Total P2:** ~{N} tests
---
### P3 (Low)
**Criteria:** Nice-to-have + Exploratory + Performance benchmarks + Documentation validation
| Test ID | Requirement | Test Level | Notes |
| ---------- | ------------- | ---------- | ------- |
| **P3-001** | {Requirement} | {Level} | {Notes} |
**Total P3:** ~{N} tests
---
## Execution Strategy
**Philosophy:** Run everything in PRs unless there's significant infrastructure overhead. Playwright with parallelization is extremely fast (100s of tests in ~10-15 min).
**Organized by TOOL TYPE:**
### Every PR: Playwright Tests (~10-15 min)
**All functional tests** (from any priority level):
- All E2E, API, integration, unit tests using Playwright
- Parallelized across {N} shards
- Total: ~{N} Playwright tests (includes P0, P1, P2, P3)
**Why run in PRs:** Fast feedback, no expensive infrastructure
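A minimal `playwright.config.ts` sketch for this tier (worker count and the `@manual` exclusion tag are assumptions, not project settings):
```typescript
import { defineConfig } from '@playwright/test';

export default defineConfig({
  fullyParallel: true,                      // parallelize test files within each shard
  workers: process.env.CI ? 4 : undefined,  // assumed CI worker count
  timeout: 90_000,                          // keep single tests under ~1.5 min (see test-quality.md)
  grepInvert: /@manual/,                    // assumed tag for checks excluded from automation
});
```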
### Nightly: k6 Performance Tests (~30-60 min)
**All performance tests** (from any priority level):
- Load, stress, spike, endurance tests
- Total: ~{N} k6 tests (may include P0, P1, P2)
**Why defer to nightly:** Expensive infrastructure (k6 Cloud), long-running (10-40 min per test)
### Weekly: Chaos & Long-Running (~hours)
**Special infrastructure tests** (from any priority level):
- Multi-region failover (requires AWS Fault Injection Simulator)
- Disaster recovery (backup restore, 4+ hours)
- Endurance tests (4+ hours runtime)
**Why defer to weekly:** Very expensive infrastructure, very long runtimes, and infrequent validation is sufficient
**Manual tests** (excluded from automation):
- DevOps validation (deployment, monitoring)
- Finance validation (cost alerts)
- Documentation validation
---
## QA Effort Estimate
**QA test development effort only** (excludes DevOps, Backend, Data Eng, Finance work):
| Priority | Count | Effort Range | Notes |
| --------- | ----- | ------------------ | ------------------------------------------------- |
| P0 | ~{N} | ~{X}-{Y} weeks | Complex setup (security, performance, multi-step) |
| P1 | ~{N} | ~{X}-{Y} weeks | Standard coverage (integration, API tests) |
| P2 | ~{N} | ~{X}-{Y} days | Edge cases, simple validation |
| P3 | ~{N} | ~{X}-{Y} days | Exploratory, benchmarks |
| **Total** | ~{N} | **~{X}-{Y} weeks** | **1 QA engineer, full-time** |
**Assumptions:**
- Includes test design, implementation, debugging, CI integration
- Excludes ongoing maintenance (~10% effort)
- Assumes test infrastructure (factories, fixtures) ready
**Dependencies from other teams:**
- See "Dependencies & Test Blockers" section for what QA needs from Backend, DevOps, Data Eng
---
## Implementation Planning Handoff (Optional)
**Include only if this test design produces implementation tasks that must be scheduled.**
**Use this to inform implementation planning; if no dedicated QA, assign to Dev owners.**
| Work Item | Owner | Target Milestone (Optional) | Dependencies/Notes |
| ----------- | ------------ | --------------------------- | ------------------ |
| {Work item} | {QA/Dev/etc} | {Milestone or date} | {Notes} |
| {Work item} | {QA/Dev/etc} | {Milestone or date} | {Notes} |
---
## Tooling & Access
**Include only if non-standard tools or access requests are required.**
| Tool or Service | Purpose | Access Required | Status |
| ----------------- | --------- | --------------- | ----------------- |
| {Tool or Service} | {Purpose} | {Access needed} | {Ready / Pending} |
| {Tool or Service} | {Purpose} | {Access needed} | {Ready / Pending} |
**Access requests needed (if any):**
- [ ] {Access to request}
---
## Interworking & Regression
**Services and components impacted by this feature:**
| Service/Component | Impact | Regression Scope | Validation Steps |
| ----------------- | ------------------- | ------------------------------- | ----------------------------- |
| **{Service}** | {How it's affected} | {What existing tests must pass} | {How to verify no regression} |
**Regression test strategy:**
- {Describe which existing test suites must pass before release}
- {Note any cross-team coordination needed for regression validation}
---
## Appendix A: Code Examples & Tagging
**Playwright Tags for Selective Execution:**
```typescript
import { test } from '@seontechnologies/playwright-utils/api-request/fixtures';
import { expect } from '@playwright/test';
// P0 critical test
test('@P0 @API @Security unauthenticated request returns 401', async ({ apiRequest }) => {
const { status, body } = await apiRequest({
method: 'POST',
path: '/api/endpoint',
body: { data: 'test' },
skipAuth: true,
});
expect(status).toBe(401);
expect(body.error).toContain('unauthorized');
});
// P1 integration test
test('@P1 @Integration data syncs correctly', async ({ apiRequest }) => {
// Seed data
await apiRequest({
method: 'POST',
path: '/api/seed',
body: {
/* test data */
},
});
// Validate
const { status, body } = await apiRequest({
method: 'GET',
path: '/api/resource',
});
expect(status).toBe(200);
expect(body).toHaveProperty('data');
});
```
**Run specific tags:**
```bash
# Run only P0 tests
npx playwright test --grep @P0
# Run P0 + P1 tests
npx playwright test --grep "@P0|@P1"
# Run only security tests
npx playwright test --grep @Security
# Run all Playwright tests in PR (default)
npx playwright test
```
---
## Appendix B: Knowledge Base References
- **Risk Governance**: `risk-governance.md` - Risk scoring methodology
- **Test Priorities Matrix**: `test-priorities-matrix.md` - P0-P3 criteria
- **Test Levels Framework**: `test-levels-framework.md` - E2E vs API vs Unit selection
- **Test Quality**: `test-quality.md` - Definition of Done (no hard waits, <300 lines, <1.5 min)
---
**Generated by:** BMad TEA Agent
**Workflow:** `_bmad/tea/testarch/test-design`
**Version:** 4.0 (BMad v6)

View File

@@ -0,0 +1,344 @@
---
stepsCompleted: []
lastStep: ''
lastSaved: ''
---
# Test Design: Epic {epic_num} - {epic_title}
**Date:** {date}
**Author:** {user_name}
**Status:** Draft / Approved
---
## Executive Summary
**Scope:** {design_level} test design for Epic {epic_num}
**Risk Summary:**
- Total risks identified: {total_risks}
- High-priority risks (≥6): {high_priority_count}
- Critical categories: {top_categories}
**Coverage Summary:**
- P0 scenarios: {p0_count} ({p0_hours} hours)
- P1 scenarios: {p1_count} ({p1_hours} hours)
- P2/P3 scenarios: {p2p3_count} ({p2p3_hours} hours)
- **Total effort**: {total_hours} hours (~{total_days} days)
---
## Not in Scope
| Item | Reasoning | Mitigation |
| ---------- | -------------- | --------------------- |
| **{Item}** | {Why excluded} | {How risk is handled} |
---
## Risk Assessment
### High-Priority Risks (Score ≥6)
| Risk ID | Category | Description | Probability | Impact | Score | Mitigation | Owner | Timeline |
| ------- | -------- | ------------- | ----------- | ------ | ----- | ------------ | ------- | -------- |
| R-001 | SEC | {description} | 2 | 3 | 6 | {mitigation} | {owner} | {date} |
| R-002 | PERF | {description} | 3 | 2 | 6 | {mitigation} | {owner} | {date} |
### Medium-Priority Risks (Score 3-4)
| Risk ID | Category | Description | Probability | Impact | Score | Mitigation | Owner |
| ------- | -------- | ------------- | ----------- | ------ | ----- | ------------ | ------- |
| R-003 | TECH | {description} | 2 | 2 | 4 | {mitigation} | {owner} |
| R-004 | DATA | {description} | 1 | 3 | 3 | {mitigation} | {owner} |
### Low-Priority Risks (Score 1-2)
| Risk ID | Category | Description | Probability | Impact | Score | Action |
| ------- | -------- | ------------- | ----------- | ------ | ----- | ------- |
| R-005 | OPS | {description} | 1 | 2 | 2 | Monitor |
| R-006 | BUS | {description} | 1 | 1 | 1 | Monitor |
### Risk Category Legend
- **TECH**: Technical/Architecture (flaws, integration, scalability)
- **SEC**: Security (access controls, auth, data exposure)
- **PERF**: Performance (SLA violations, degradation, resource limits)
- **DATA**: Data Integrity (loss, corruption, inconsistency)
- **BUS**: Business Impact (UX harm, logic errors, revenue)
- **OPS**: Operations (deployment, config, monitoring)
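A worked sketch of the scoring used above, with the thresholds this template assumes (≥6 high, 3-4 medium, 1-2 low):
```typescript
type Level = 1 | 2 | 3;

// Score = probability × impact; with 1-3 inputs the only possible scores
// are 1, 2, 3, 4, 6, and 9.
function riskBand(probability: Level, impact: Level): 'high' | 'medium' | 'low' {
  const score = probability * impact;
  if (score >= 6) return 'high';   // immediate mitigation, owner + timeline
  if (score >= 3) return 'medium'; // mitigation with owner
  return 'low';                    // monitor only
}

console.log(riskBand(2, 3)); // "high": matches R-001 above (2 × 3 = 6)
```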
---
## Entry Criteria
- [ ] Requirements and assumptions agreed upon by QA, Dev, PM
- [ ] Test environment provisioned and accessible
- [ ] Test data available or factories ready
- [ ] Feature deployed to test environment
- [ ] {Epic-specific entry criteria}
## Exit Criteria
- [ ] All P0 tests passing
- [ ] All P1 tests passing (or failures triaged)
- [ ] No open high-priority / high-severity bugs
- [ ] Test coverage agreed as sufficient
- [ ] {Epic-specific exit criteria}
## Project Team (Optional)
**Include only if roles/names are known or responsibility mapping is needed; otherwise omit.**
| Name | Role | Testing Responsibilities |
| ------ | -------- | ------------------------ |
| {Name} | QA Lead | {Responsibilities} |
| {Name} | Dev Lead | {Responsibilities} |
| {Name} | PM | {Responsibilities} |
---
## Test Coverage Plan
### P0 (Critical) - Run on every commit
**Criteria**: Blocks core journey + High risk (≥6) + No workaround
| Requirement | Test Level | Risk Link | Test Count | Owner | Notes |
| ------------- | ---------- | --------- | ---------- | ----- | ------- |
| {requirement} | E2E | R-001 | 3 | QA | {notes} |
| {requirement} | API | R-002 | 5 | QA | {notes} |
**Total P0**: {p0_count} tests, {p0_hours} hours
### P1 (High) - Run on PR to main
**Criteria**: Important features + Medium risk (3-4) + Common workflows
| Requirement | Test Level | Risk Link | Test Count | Owner | Notes |
| ------------- | ---------- | --------- | ---------- | ----- | ------- |
| {requirement} | API | R-003 | 4 | QA | {notes} |
| {requirement} | Component | - | 6 | DEV | {notes} |
**Total P1**: {p1_count} tests, {p1_hours} hours
### P2 (Medium) - Run nightly/weekly
**Criteria**: Secondary features + Low risk (1-2) + Edge cases
| Requirement | Test Level | Risk Link | Test Count | Owner | Notes |
| ------------- | ---------- | --------- | ---------- | ----- | ------- |
| {requirement} | API | R-004 | 8 | QA | {notes} |
| {requirement} | Unit | - | 15 | DEV | {notes} |
**Total P2**: {p2_count} tests, {p2_hours} hours
### P3 (Low) - Run on-demand
**Criteria**: Nice-to-have + Exploratory + Performance benchmarks
| Requirement | Test Level | Test Count | Owner | Notes |
| ------------- | ---------- | ---------- | ----- | ------- |
| {requirement} | E2E | 2 | QA | {notes} |
| {requirement} | Unit | 8 | DEV | {notes} |
**Total P3**: {p3_count} tests, {p3_hours} hours
---
## Execution Order
### Smoke Tests (<5 min)
**Purpose**: Fast feedback, catch build-breaking issues
- [ ] {scenario} (30s)
- [ ] {scenario} (45s)
- [ ] {scenario} (1min)
**Total**: {smoke_count} scenarios
### P0 Tests (<10 min)
**Purpose**: Critical path validation
- [ ] {scenario} (E2E)
- [ ] {scenario} (API)
- [ ] {scenario} (API)
**Total**: {p0_count} scenarios
### P1 Tests (<30 min)
**Purpose**: Important feature coverage
- [ ] {scenario} (API)
- [ ] {scenario} (Component)
**Total**: {p1_count} scenarios
### P2/P3 Tests (<60 min)
**Purpose**: Full regression coverage
- [ ] {scenario} (Unit)
- [ ] {scenario} (API)
**Total**: {p2p3_count} scenarios
---
## Resource Estimates
### Test Development Effort
| Priority | Count | Hours/Test | Total Hours | Notes |
| --------- | ----------------- | ---------- | ----------------- | ----------------------- |
| P0 | {p0_count} | 2.0 | {p0_hours} | Complex setup, security |
| P1 | {p1_count} | 1.0 | {p1_hours} | Standard coverage |
| P2 | {p2_count} | 0.5 | {p2_hours} | Simple scenarios |
| P3 | {p3_count} | 0.25 | {p3_hours} | Exploratory |
| **Total** | **{total_count}** | **-** | **{total_hours}** | **~{total_days} days** |
### Prerequisites
**Test Data:**
- {factory_name} factory (faker-based, auto-cleanup)
- {fixture_name} fixture (setup/teardown)
**Tooling:**
- {tool} for {purpose}
- {tool} for {purpose}
**Environment:**
- {env_requirement}
- {env_requirement}
---
## Quality Gate Criteria
### Pass/Fail Thresholds
- **P0 pass rate**: 100% (no exceptions)
- **P1 pass rate**: ≥95% (waivers required for failures)
- **P2/P3 pass rate**: ≥90% (informational)
- **High-risk mitigations**: 100% complete or approved waivers
### Coverage Targets
- **Critical paths**: ≥80%
- **Security scenarios**: 100%
- **Business logic**: ≥70%
- **Edge cases**: ≥50%
### Non-Negotiable Requirements
- [ ] All P0 tests pass
- [ ] No high-risk (≥6) items unmitigated
- [ ] Security tests (SEC category) pass 100%
- [ ] Performance targets met (PERF category)
---
## Mitigation Plans
### R-001: {Risk Description} (Score: 6)
**Mitigation Strategy:** {detailed_mitigation}
**Owner:** {owner}
**Timeline:** {date}
**Status:** Planned / In Progress / Complete
**Verification:** {how_to_verify}
### R-002: {Risk Description} (Score: 6)
**Mitigation Strategy:** {detailed_mitigation}
**Owner:** {owner}
**Timeline:** {date}
**Status:** Planned / In Progress / Complete
**Verification:** {how_to_verify}
---
## Assumptions and Dependencies
### Assumptions
1. {assumption}
2. {assumption}
3. {assumption}
### Dependencies
1. {dependency} - Required by {date}
2. {dependency} - Required by {date}
### Risks to Plan
- **Risk**: {risk_to_plan}
- **Impact**: {impact}
- **Contingency**: {contingency}
---
## Follow-on Workflows (Manual)
- Run `*atdd` to generate failing P0 tests (separate workflow; not auto-run).
- Run `*automate` for broader coverage once implementation exists.
---
## Approval
**Test Design Approved By:**
- [ ] Product Manager: {name} Date: {date}
- [ ] Tech Lead: {name} Date: {date}
- [ ] QA Lead: {name} Date: {date}
**Comments:**
---
---
| Service/Component | Impact | Regression Scope |
| ----------------- | -------------- | ------------------------------- |
| **{Service}** | {How affected} | {Existing tests that must pass} |
---
## Appendix
### Knowledge Base References
- `risk-governance.md` - Risk classification framework
- `probability-impact.md` - Risk scoring methodology
- `test-levels-framework.md` - Test level selection
- `test-priorities-matrix.md` - P0-P3 prioritization
### Related Documents
- PRD: {prd_link}
- Epic: {epic_link}
- Architecture: {arch_link}
- Tech Spec: {tech_spec_link}
---
**Generated by**: BMad TEA Agent - Test Architect Module
**Workflow**: `_bmad/tea/testarch/test-design`
**Version**: 4.0 (BMad v6)

View File

@@ -0,0 +1,344 @@
---
stepsCompleted: []
lastStep: ''
lastSaved: ''
---
# Test Design: Epic {epic_num} - {epic_title}
**Date:** {date}
**Author:** {user_name}
**Status:** Draft / Approved
---
## Executive Summary
**Scope:** {design_level} test design for Epic {epic_num}
**Risk Summary:**
- Total risks identified: {total_risks}
- High-priority risks (≥6): {high_priority_count}
- Critical categories: {top_categories}
**Coverage Summary:**
- P0 scenarios: {p0_count} ({p0_hours} hours)
- P1 scenarios: {p1_count} ({p1_hours} hours)
- P2/P3 scenarios: {p2p3_count} ({p2p3_hours} hours)
- **Total effort**: {total_hours} hours (~{total_days} days)
---
## Not in Scope
| Item | Reasoning | Mitigation |
| ---------- | -------------- | --------------------- |
| **{Item}** | {Why excluded} | {How risk is handled} |
---
## Risk Assessment
### High-Priority Risks (Score ≥6)
| Risk ID | Category | Description | Probability | Impact | Score | Mitigation | Owner | Timeline |
| ------- | -------- | ------------- | ----------- | ------ | ----- | ------------ | ------- | -------- |
| R-001 | SEC | {description} | 2 | 3 | 6 | {mitigation} | {owner} | {date} |
| R-002 | PERF | {description} | 3 | 2 | 6 | {mitigation} | {owner} | {date} |
### Medium-Priority Risks (Score 3-4)
| Risk ID | Category | Description | Probability | Impact | Score | Mitigation | Owner |
| ------- | -------- | ------------- | ----------- | ------ | ----- | ------------ | ------- |
| R-003 | TECH | {description} | 2 | 2 | 4 | {mitigation} | {owner} |
| R-004 | DATA | {description} | 1 | 3 | 3 | {mitigation} | {owner} |
### Low-Priority Risks (Score 1-2)
| Risk ID | Category | Description | Probability | Impact | Score | Action |
| ------- | -------- | ------------- | ----------- | ------ | ----- | ------- |
| R-005 | OPS | {description} | 1 | 2 | 2 | Monitor |
| R-006 | BUS | {description} | 1 | 1 | 1 | Monitor |
### Risk Category Legend
- **TECH**: Technical/Architecture (flaws, integration, scalability)
- **SEC**: Security (access controls, auth, data exposure)
- **PERF**: Performance (SLA violations, degradation, resource limits)
- **DATA**: Data Integrity (loss, corruption, inconsistency)
- **BUS**: Business Impact (UX harm, logic errors, revenue)
- **OPS**: Operations (deployment, config, monitoring)
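A worked sketch of the scoring used above, with the thresholds this template assumes (≥6 high, 3-4 medium, 1-2 low):
```typescript
type Level = 1 | 2 | 3;

// Score = probability × impact; with 1-3 inputs the only possible scores
// are 1, 2, 3, 4, 6, and 9.
function riskBand(probability: Level, impact: Level): 'high' | 'medium' | 'low' {
  const score = probability * impact;
  if (score >= 6) return 'high';   // immediate mitigation, owner + timeline
  if (score >= 3) return 'medium'; // mitigation with owner
  return 'low';                    // monitor only
}

console.log(riskBand(2, 3)); // "high": matches R-001 above (2 × 3 = 6)
```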
---
## Entry Criteria
- [ ] Requirements and assumptions agreed upon by QA, Dev, PM
- [ ] Test environment provisioned and accessible
- [ ] Test data available or factories ready
- [ ] Feature deployed to test environment
- [ ] {Epic-specific entry criteria}
## Exit Criteria
- [ ] All P0 tests passing
- [ ] All P1 tests passing (or failures triaged)
- [ ] No open high-priority / high-severity bugs
- [ ] Test coverage agreed as sufficient
- [ ] {Epic-specific exit criteria}
## Project Team (Optional)
**Include only if roles/names are known or responsibility mapping is needed; otherwise omit.**
| Name | Role | Testing Responsibilities |
| ------ | -------- | ------------------------ |
| {Name} | QA Lead | {Responsibilities} |
| {Name} | Dev Lead | {Responsibilities} |
| {Name} | PM | {Responsibilities} |
---
## Test Coverage Plan
### P0 (Critical) - Run on every commit
**Criteria**: Blocks core journey + High risk (≥6) + No workaround
| Requirement | Test Level | Risk Link | Test Count | Owner | Notes |
| ------------- | ---------- | --------- | ---------- | ----- | ------- |
| {requirement} | E2E | R-001 | 3 | QA | {notes} |
| {requirement} | API | R-002 | 5 | QA | {notes} |
**Total P0**: {p0_count} tests, {p0_hours} hours
### P1 (High) - Run on PR to main
**Criteria**: Important features + Medium risk (3-4) + Common workflows
| Requirement | Test Level | Risk Link | Test Count | Owner | Notes |
| ------------- | ---------- | --------- | ---------- | ----- | ------- |
| {requirement} | API | R-003 | 4 | QA | {notes} |
| {requirement} | Component | - | 6 | DEV | {notes} |
**Total P1**: {p1_count} tests, {p1_hours} hours
### P2 (Medium) - Run nightly/weekly
**Criteria**: Secondary features + Low risk (1-2) + Edge cases
| Requirement | Test Level | Risk Link | Test Count | Owner | Notes |
| ------------- | ---------- | --------- | ---------- | ----- | ------- |
| {requirement} | API | R-004 | 8 | QA | {notes} |
| {requirement} | Unit | - | 15 | DEV | {notes} |
**Total P2**: {p2_count} tests, {p2_hours} hours
### P3 (Low) - Run on-demand
**Criteria**: Nice-to-have + Exploratory + Performance benchmarks
| Requirement | Test Level | Test Count | Owner | Notes |
| ------------- | ---------- | ---------- | ----- | ------- |
| {requirement} | E2E | 2 | QA | {notes} |
| {requirement} | Unit | 8 | DEV | {notes} |
**Total P3**: {p3_count} tests, {p3_hours} hours
---
## Execution Order
### Smoke Tests (<5 min)
**Purpose**: Fast feedback, catch build-breaking issues
- [ ] {scenario} (30s)
- [ ] {scenario} (45s)
- [ ] {scenario} (1min)
**Total**: {smoke_count} scenarios
### P0 Tests (<10 min)
**Purpose**: Critical path validation
- [ ] {scenario} (E2E)
- [ ] {scenario} (API)
- [ ] {scenario} (API)
**Total**: {p0_count} scenarios
### P1 Tests (<30 min)
**Purpose**: Important feature coverage
- [ ] {scenario} (API)
- [ ] {scenario} (Component)
**Total**: {p1_count} scenarios
### P2/P3 Tests (<60 min)
**Purpose**: Full regression coverage
- [ ] {scenario} (Unit)
- [ ] {scenario} (API)
**Total**: {p2p3_count} scenarios
---
## Resource Estimates
### Test Development Effort
| Priority | Count | Hours/Test | Total Hours | Notes |
| --------- | ----------------- | ---------- | ----------------- | ----------------------- |
| P0 | {p0_count} | 2.0 | {p0_hours} | Complex setup, security |
| P1 | {p1_count} | 1.0 | {p1_hours} | Standard coverage |
| P2 | {p2_count} | 0.5 | {p2_hours} | Simple scenarios |
| P3 | {p3_count} | 0.25 | {p3_hours} | Exploratory |
| **Total** | **{total_count}** | **-** | **{total_hours}** | **~{total_days} days** |
### Prerequisites
**Test Data:**
- {factory_name} factory (faker-based, auto-cleanup)
- {fixture_name} fixture (setup/teardown)
**Tooling:**
- {tool} for {purpose}
- {tool} for {purpose}
**Environment:**
- {env_requirement}
- {env_requirement}
---
## Quality Gate Criteria
### Pass/Fail Thresholds
- **P0 pass rate**: 100% (no exceptions)
- **P1 pass rate**: ≥95% (waivers required for failures)
- **P2/P3 pass rate**: ≥90% (informational)
- **High-risk mitigations**: 100% complete or approved waivers
### Coverage Targets
- **Critical paths**: ≥80%
- **Security scenarios**: 100%
- **Business logic**: ≥70%
- **Edge cases**: ≥50%
### Non-Negotiable Requirements
- [ ] All P0 tests pass
- [ ] No high-risk (≥6) items unmitigated
- [ ] Security tests (SEC category) pass 100%
- [ ] Performance targets met (PERF category)
---
## Mitigation Plans
### R-001: {Risk Description} (Score: 6)
**Mitigation Strategy:** {detailed_mitigation}
**Owner:** {owner}
**Timeline:** {date}
**Status:** Planned / In Progress / Complete
**Verification:** {how_to_verify}
### R-002: {Risk Description} (Score: 6)
**Mitigation Strategy:** {detailed_mitigation}
**Owner:** {owner}
**Timeline:** {date}
**Status:** Planned / In Progress / Complete
**Verification:** {how_to_verify}
---
## Assumptions and Dependencies
### Assumptions
1. {assumption}
2. {assumption}
3. {assumption}
### Dependencies
1. {dependency} - Required by {date}
2. {dependency} - Required by {date}
### Risks to Plan
- **Risk**: {risk_to_plan}
- **Impact**: {impact}
- **Contingency**: {contingency}
---
## Follow-on Workflows (Manual)
- Run `*atdd` to generate failing P0 tests (separate workflow; not auto-run).
- Run `*automate` for broader coverage once implementation exists.
---
## Approval
**Test Design Approved By:**
- [ ] Product Manager: {name} Date: {date}
- [ ] Tech Lead: {name} Date: {date}
- [ ] QA Lead: {name} Date: {date}
**Comments:**
---
## Interworking & Regression
| Service/Component | Impact | Regression Scope |
| ----------------- | -------------- | ------------------------------- |
| **{Service}** | {How affected} | {Existing tests that must pass} |
---
## Appendix
### Knowledge Base References
- `risk-governance.md` - Risk classification framework
- `probability-impact.md` - Risk scoring methodology
- `test-levels-framework.md` - Test level selection
- `test-priorities-matrix.md` - P0-P3 prioritization
### Related Documents
- PRD: {prd_link}
- Epic: {epic_link}
- Architecture: {arch_link}
- Tech Spec: {tech_spec_link}
---
**Generated by**: BMad TEA Agent - Test Architect Module
**Workflow**: `_bmad/tea/testarch/test-design`
**Version**: 4.0 (BMad v6)

View File

@@ -0,0 +1,73 @@
---
validationDate: 2026-01-27
workflowName: testarch-test-design
workflowPath: {project-root}/src/workflows/testarch/test-design
validationStatus: COMPLETE
completionDate: 2026-01-27 10:03:10
---
# Validation Report: testarch-test-design
**Validation Started:** 2026-01-27 09:50:21
**Validator:** BMAD Workflow Validation System (Codex)
**Standards Version:** BMAD Workflow Standards
## File Structure & Size
- workflow.md present: YES
- instructions.md present: YES
- workflow.yaml present: YES
- step files found: 8
**Step File Sizes:**
- steps-c/step-01-detect-mode.md: 93 lines [GOOD]
- steps-c/step-02-load-context.md: 112 lines [GOOD]
- steps-c/step-03-risk-and-testability.md: 76 lines [GOOD]
- steps-c/step-04-coverage-plan.md: 88 lines [GOOD]
- steps-c/step-05-generate-output.md: 85 lines [GOOD]
- steps-e/step-01-assess.md: 51 lines [GOOD]
- steps-e/step-02-apply-edit.md: 46 lines [GOOD]
- steps-v/step-01-validate.md: 53 lines [GOOD]
- workflow-plan.md present: YES
## Frontmatter Validation
- No frontmatter violations found
## Critical Path Violations
- No {project-root} hardcoded paths detected in body
- No dead relative links detected
## Menu Handling Validation
- No menu structures detected (linear step flow) [N/A]
## Step Type Validation
- Last step steps-v/step-01-validate.md has no nextStepFile (final step OK)
- Step type validation assumes linear sequence (no branching/menu). Workflow-plan.md present for reference. [INFO]
## Output Format Validation
- Templates present: test-design-architecture-template.md, test-design-qa-template.md, test-design-template.md
- Steps with outputFile in frontmatter:
- steps-c/step-05-generate-output.md
- steps-v/step-01-validate.md
## Validation Design Check
- checklist.md present: YES
- Validation steps folder (steps-v) present: YES
## Instruction Style Check
- All steps include STEP GOAL, MANDATORY EXECUTION RULES, EXECUTION PROTOCOLS, CONTEXT BOUNDARIES, and SUCCESS/FAILURE metrics
## Summary
- Validation completed: 2026-01-27 10:03:10
- Critical issues: 0
- Warnings: 0 (informational notes only)
- Readiness: READY (manual review optional)

View File

@@ -0,0 +1,116 @@
---
validationDate: 2026-01-27
workflowName: testarch-test-design
workflowPath: {project-root}/src/workflows/testarch/test-design
validationStatus: COMPLETE
completionDate: 2026-01-27 10:24:01
---
# Validation Report: testarch-test-design
**Validation Started:** 2026-01-27 10:24:01
**Validator:** BMAD Workflow Validation System (Codex)
**Standards Version:** BMAD Workflow Standards
## File Structure & Size
- workflow.md present: YES
- instructions.md present: YES
- workflow.yaml present: YES
- step files found: 8
**Step File Sizes:**
- steps-c/step-01-detect-mode.md: 92 lines [GOOD]
- steps-c/step-02-load-context.md: 111 lines [GOOD]
- steps-c/step-03-risk-and-testability.md: 75 lines [GOOD]
- steps-c/step-04-coverage-plan.md: 87 lines [GOOD]
- steps-c/step-05-generate-output.md: 84 lines [GOOD]
- steps-e/step-01-assess.md: 50 lines [GOOD]
- steps-e/step-02-apply-edit.md: 45 lines [GOOD]
- steps-v/step-01-validate.md: 52 lines [GOOD]
- workflow-plan.md present: YES
## Frontmatter Validation
- No frontmatter violations found
## Critical Path Violations
### Config Variables (Exceptions)
Standard BMAD config variables treated as valid exceptions: bmb_creations_output_folder, communication_language, document_output_language, output_folder, planning_artifacts, project-root, project_name, test_artifacts, user_name
- No {project-root} hardcoded paths detected in body
- No dead relative links detected
- No module path assumptions detected
**Status:** ✅ PASS - No critical violations
## Menu Handling Validation
- No menu structures detected (linear step flow) [N/A]
## Step Type Validation
- steps-c/step-01-detect-mode.md: Init [PASS]
- steps-c/step-02-load-context.md: Middle [PASS]
- steps-c/step-03-risk-and-testability.md: Middle [PASS]
- steps-c/step-04-coverage-plan.md: Middle [PASS]
- steps-c/step-05-generate-output.md: Final [PASS]
- Step type validation assumes linear sequence (no branching/menu). Workflow-plan.md present for reference. [INFO]
## Output Format Validation
- Templates present: test-design-architecture-template.md, test-design-qa-template.md, test-design-template.md
- Steps with outputFile in frontmatter:
- steps-c/step-05-generate-output.md
- steps-v/step-01-validate.md
- checklist.md present: YES
## Validation Design Check
- Validation steps folder (steps-v) present: YES
- Validation step(s) present: step-01-validate.md
- Validation steps reference checklist data and auto-proceed
## Instruction Style Check
- Instruction style: Prescriptive (appropriate for TEA quality/compliance workflows)
- Steps emphasize mandatory sequence, explicit success/failure metrics, and risk-based guidance
## Collaborative Experience Check
- Overall facilitation quality: GOOD
- Steps use progressive prompts and clear role reinforcement; no laundry-list interrogation detected
- Flow progression is clear and aligned to workflow goals
## Subagent Optimization Opportunities
- No high-priority subagent optimizations identified; workflow already uses step-file architecture
- Pattern 1 (grep/regex): N/A for most steps
- Pattern 2 (per-file analysis): already aligned to validation structure
- Pattern 3 (data ops): minimal data file loads
- Pattern 4 (parallel): optional for validation only
## Cohesive Review
- Overall assessment: GOOD
- Flow is linear, goals are clear, and outputs map to TEA artifacts
- Voice and tone consistent with Test Architect persona
- Recommendation: READY (minor refinements optional)
## Plan Quality Validation
- Plan file present: workflow-plan.md
- Planned steps found: 8 (all implemented)
- Plan implementation status: Fully Implemented
## Summary
- Validation completed: 2026-01-27 10:24:01
- Critical issues: 0
- Warnings: 0 (informational notes only)
- Readiness: READY (manual review optional)

View File

@@ -0,0 +1,22 @@
# Workflow Plan: testarch-test-design
## Create Mode (steps-c)
- step-01-detect-mode.md
- step-02-load-context.md
- step-03-risk-and-testability.md
- step-04-coverage-plan.md
- step-05-generate-output.md
## Validate Mode (steps-v)
- step-01-validate.md
## Edit Mode (steps-e)
- step-01-assess.md
- step-02-apply-edit.md
## Outputs
- {test_artifacts}/test-design-architecture.md (system-level)
- {test_artifacts}/test-design-qa.md (system-level)
- {test_artifacts}/test-design-epic-{epic_num}.md (epic-level)

View File

@@ -0,0 +1,41 @@
---
name: testarch-test-design
description: Create system-level or epic-level test plans. Use when the user says 'let's design test plan' or 'I want to create test strategy'
web_bundle: true
---
# Test Design and Risk Assessment
**Goal:** System-level (Phase 3) or epic-level (Phase 4) test plan
**Role:** You are the Master Test Architect.
---
## WORKFLOW ARCHITECTURE
This workflow uses **tri-modal step-file architecture**:
- **Create mode (steps-c/)**: primary execution flow
- **Validate mode (steps-v/)**: validation against checklist
- **Edit mode (steps-e/)**: revise existing outputs
---
## INITIALIZATION SEQUENCE
### 1. Mode Determination
"Welcome to the workflow. What would you like to do?"
- **[C] Create** — Run the workflow
- **[R] Resume** — Resume an interrupted workflow
- **[V] Validate** — Validate existing outputs
- **[E] Edit** — Edit existing outputs
### 2. Route to First Step
- **If C:** Load `steps-c/step-01-detect-mode.md`
- **If R:** Load `steps-c/step-01b-resume.md`
- **If V:** Load `steps-v/step-01-validate.md`
- **If E:** Load `steps-e/step-01-assess.md`

View File

@@ -0,0 +1,77 @@
# Test Architect workflow: test-design
name: testarch-test-design
# prettier-ignore
description: 'Create system-level or epic-level test plans. Use when the user says "let''s design test plan" or "I want to create test strategy"'
# Critical variables from config
config_source: "{project-root}/_bmad/tea/config.yaml"
output_folder: "{config_source}:output_folder"
test_artifacts: "{config_source}:test_artifacts"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
document_output_language: "{config_source}:document_output_language"
date: system-generated
# Workflow components
installed_path: "{project-root}/_bmad/tea/workflows/testarch/test-design"
instructions: "{installed_path}/instructions.md"
validation: "{installed_path}/checklist.md"
# Note: Template selection is mode-based (see instructions.md Step 1.5):
# - System-level: test-design-architecture-template.md + test-design-qa-template.md
# - Epic-level: test-design-template.md (unchanged)
template: "{installed_path}/test-design-template.md"
# Variables and inputs
variables:
design_level: "full" # full, targeted, minimal - scope of design effort
mode: "auto-detect" # auto-detect (default), system-level, epic-level
test_stack_type: "auto" # auto, frontend, backend, fullstack - from config or auto-detected
# Output configuration
# Note: Actual output file determined dynamically based on mode detection
# Declared outputs for new workflow format
outputs:
# System-Level Mode (Phase 3) - TWO documents
- id: test-design-architecture
description: "System-level test architecture: Architectural concerns, testability gaps, NFR requirements for Architecture/Dev teams"
path: "{test_artifacts}/test-design-architecture.md"
mode: system-level
audience: architecture
- id: test-design-qa
description: "System-level test design: Test execution recipe, coverage plan, pre-implementation setup for QA team"
path: "{test_artifacts}/test-design-qa.md"
mode: system-level
audience: qa
- id: test-design-handoff
description: "TEA → BMAD handoff document: Bridges test design outputs with epic/story decomposition"
path: "{test_artifacts}/test-design/{project_name}-handoff.md"
mode: system-level
audience: bmad-integration
# Epic-Level Mode (Phase 4) - ONE document (unchanged)
- id: epic-level
description: "Epic-level test plan (Phase 4)"
path: "{test_artifacts}/test-design-epic-{epic_num}.md"
mode: epic-level
# Note: No default_output_file - mode detection determines which outputs to write
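# Illustrative resolution (assumed values, not defined in this config): with
# test_artifacts resolved to "docs/test-artifacts" and epic_num set to 2, the
# epic-level output above resolves to:
#   docs/test-artifacts/test-design-epic-2.md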
# Required tools
required_tools:
- read_file # Read PRD, epics, stories, architecture docs
- write_file # Create test design document
- list_files # Find related documentation
- search_repo # Search for existing tests and patterns
tags:
- qa
- planning
- test-architect
- risk-assessment
- coverage
execution_hints:
interactive: false # Minimize prompts
autonomous: true # Proceed without user input unless blocked
iterative: true
