12 changes: 6 additions & 6 deletions .github/prompts/analyze.prompt.md
@@ -61,9 +61,9 @@ Execution steps:
 
 5. Severity assignment heuristic:
 - CRITICAL: Violates constitution MUST, missing core spec artifact, or requirement with zero coverage that blocks baseline functionality.
-- HIGH: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion.
-- MEDIUM: Terminology drift, missing non-functional task coverage, underspecified edge case.
-- LOW: Style/wording improvements, minor redundancy not affecting execution order.
+- HIGH: Duplicate or conflicting requirement, ambiguous security/performance attribute, or untestable acceptance criterion.
+- MEDIUM: Terminology drift, missing non-functional task coverage, or underspecified edge case.
+- LOW: Style/wording improvements, or minor redundancy not affecting execution order.
 
 6. Produce a Markdown report (no file writes) with sections:
 
@@ -95,9 +95,9 @@ Execution steps:
 * Critical Issues Count
 
 7. At end of report, output a concise Next Actions block:
-- If CRITICAL issues exist: Recommend resolving before `/implement`.
-- If only LOW/MEDIUM: User may proceed, but provide improvement suggestions.
-- Provide explicit command suggestions: e.g., "Run /specify with refinement", "Run /plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'".
+- If CRITICAL issues exist: Recommend resolving them before `/implement`.
+- If only LOW/MEDIUM issues: User may proceed, but provide improvement suggestions.
+- Provide explicit command suggestions: e.g., "Run /specify with refinement", "Run /plan to adjust architecture", or "Manually edit tasks.md to add coverage for 'performance-metrics'".
 
 8. Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.)
 
8 changes: 4 additions & 4 deletions .github/prompts/clarify.prompt.md
@@ -79,10 +79,10 @@ Execution steps:
 - Information is better deferred to planning phase (note internally)
 
 3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
-- Maximum of 5 total questions across the whole session.
+- Maximum of 5 total questions across the entire session.
 - Each question must be answerable with EITHER:
 * A short multiple‑choice selection (2–5 distinct, mutually exclusive options), OR
-* A one-word / short‑phrase answer (explicitly constrain: "Answer in <=5 words").
+* A one-word / short‑phrase answer (explicitly constrain: "Answer in ≀5 words").
 - Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
 - Ensure category coverage balance: attempt to cover the highest impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
 - Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
@@ -102,9 +102,9 @@ Execution steps:
 | Short | Provide a different short answer (<=5 words) (Include only if free-form alternative is appropriate) |
 ```
 
-- For short‑answer style (no meaningful discrete options), output a single line after the question: `Format: Short answer (<=5 words)`.
+- For short‑answer style (no meaningful discrete options), output a single line after the question: `Format: Short answer (≀5 words)`.
 - After the user answers:
-* Validate the answer maps to one option or fits the <=5 word constraint.
+* Validate the answer maps to one option or fits the ≀5 word constraint.
 * If ambiguous, ask for a quick disambiguation (count still belongs to same question; do not advance).
 * Once satisfactory, record it in working memory (do not yet write to disk) and move to the next queued question.
 - Stop asking further questions when:
4 changes: 2 additions & 2 deletions .github/prompts/constitution.prompt.md
@@ -16,7 +16,7 @@ Follow this execution flow:
 
 1. Load the existing constitution template at `.specify/memory/constitution.md`.
 - Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`.
-**IMPORTANT**: The user might require less or more principles than the ones used in the template. If a number is specified, respect that - follow the general template. You will update the doc accordingly.
+**IMPORTANT**: The user might require fewer or more principles than the ones used in the template. If a number is specified, respect that - follow the general template. You will update the document accordingly.
 
 2. Collect/derive values for placeholders:
 - If user input (conversation) supplies a value, use it.
@@ -52,7 +52,7 @@ Follow this execution flow:
 6. Validation before final output:
 - No remaining unexplained bracket tokens.
 - Version line matches report.
-- Dates ISO format YYYY-MM-DD.
+- Dates in ISO format (YYYY-MM-DD).
 - Principles are declarative, testable, and free of vague language ("should" β†’ replace with MUST/SHOULD rationale where appropriate).
 
 7. Write the completed constitution back to `.specify/memory/constitution.md` (overwrite).
12 changes: 6 additions & 6 deletions .github/prompts/implement.prompt.md
@@ -34,19 +34,19 @@ $ARGUMENTS
 - **Validation checkpoints**: Verify each phase completion before proceeding
 
 5. Implementation execution rules:
-- **Setup first**: Initialize project structure, dependencies, configuration
+- **Setup first**: Initialize project structure, dependencies, and configuration
 - **Tests before code**: If you need to write tests for contracts, entities, and integration scenarios
-- **Core development**: Implement models, services, CLI commands, endpoints
-- **Integration work**: Database connections, middleware, logging, external services
-- **Polish and validation**: Unit tests, performance optimization, documentation
+- **Core development**: Implement models, services, CLI commands, and endpoints
+- **Integration work**: Database connections, middleware, logging, and external services
+- **Polish and validation**: Unit tests, performance optimization, and documentation
 
 6. Progress tracking and error handling:
 - Report progress after each completed task
 - Halt execution if any non-parallel task fails
-- For parallel tasks [P], continue with successful tasks, report failed ones
+- For parallel tasks [P], continue with successful tasks and report failed ones
 - Provide clear error messages with context for debugging
 - Suggest next steps if implementation cannot proceed
-- **IMPORTANT** For completed tasks, make sure to mark the task off as [X] in the tasks file.
+- **IMPORTANT**: For completed tasks, make sure to mark the task as [X] in the tasks file.
 
 7. Completion validation:
 - Verify all required tasks are completed
4 changes: 2 additions & 2 deletions .github/prompts/tasks.prompt.md
@@ -35,7 +35,7 @@ $ARGUMENTS
 4. Task generation rules:
 - Each contract file β†’ contract test task marked [P]
 - Each entity in data-model β†’ model creation task marked [P]
-- Each endpoint β†’ implementation task (not parallel if shared files)
+- Each endpoint β†’ implementation task (not parallel if files are shared)
 - Each user story β†’ integration test marked [P]
 - Different files = can be parallel [P]
 - Same file = sequential (no [P])
@@ -61,4 +61,4 @@ $ARGUMENTS
 
 Context for task generation: $ARGUMENTS
 
-The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context.
+The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without requiring additional context.
4 changes: 2 additions & 2 deletions .specify/memory/constitution.md
@@ -120,9 +120,9 @@ The `src/` folder contains the module source code that Build-PSModule compiles i
 - `data/` - Configuration data files (`.psd1`) loaded as module variables
 - `formats/` - Format definition files (`.ps1xml`) for object display
 - `functions/private/` - Private functions (internal implementation)
-- Supports subdirectories for grouping (e.g., `functions/public/CoponentA/`, `functions/public/ComponentB/`)
+- Supports subdirectories for grouping (e.g., `functions/public/ComponentA/`, `functions/public/ComponentB/`)
 - `functions/public/` - Public functions (exported to module consumers)
-- Supports subdirectories for grouping (e.g., `functions/public/CoponentA/`, `functions/public/ComponentB/`)
+- Supports subdirectories for grouping (e.g., `functions/public/ComponentA/`, `functions/public/ComponentB/`)
 - Optional category documentation files (e.g., `functions/public/PSModule/PSModule.md`)
 - `init/` - Initialization scripts (executed first during module load)
 - `modules/` - Nested PowerShell modules (`.psm1`) or additional assemblies
20 changes: 10 additions & 10 deletions .specify/templates/plan-template.md
@@ -12,8 +12,8 @@
 β†’ Set Structure Decision based on project type
 3. Fill the Constitution Check section based on the content of the constitution document.
 4. Evaluate Constitution Check section below
-β†’ If violations exist: Document in Complexity Tracking
-β†’ If no justification possible: ERROR "Simplify approach first"
+β†’ If violations exist: Document them in Complexity Tracking
+β†’ If no justification is possible: ERROR "Simplify approach first"
 β†’ Update Progress Tracking: Initial Constitution Check
 5. Execute Phase 0 β†’ research.md
 β†’ If NEEDS CLARIFICATION remain: ERROR "Resolve unknowns"
@@ -35,15 +35,15 @@
 
 ## Technical Context
 
-**Language/Version**: [e.g., Python 3.11, Swift 5.9, Rust 1.75 or NEEDS CLARIFICATION]
-**Primary Dependencies**: [e.g., FastAPI, UIKit, LLVM or NEEDS CLARIFICATION]
-**Storage**: [if applicable, e.g., PostgreSQL, CoreData, files or N/A]
-**Testing**: [e.g., pytest, XCTest, cargo test or NEEDS CLARIFICATION]
-**Target Platform**: [e.g., Linux server, iOS 15+, Wasm or NEEDS CLARIFICATION]
+**Language/Version**: [e.g., Python 3.11, Swift 5.9, Rust 1.75, or NEEDS CLARIFICATION]
+**Primary Dependencies**: [e.g., FastAPI, UIKit, LLVM, or NEEDS CLARIFICATION]
+**Storage**: [if applicable, e.g., PostgreSQL, CoreData, files, or N/A]
+**Testing**: [e.g., pytest, XCTest, cargo test, or NEEDS CLARIFICATION]
+**Target Platform**: [e.g., Linux server, iOS 15+, Wasm, or NEEDS CLARIFICATION]
 **Project Type**: [single/web/mobile - determines source structure]
-**Performance Goals**: [domain-specific, e.g., 1000 req/s, 10k lines/sec, 60 fps or NEEDS CLARIFICATION]
-**Constraints**: [domain-specific, e.g., <200ms p95, <100MB memory, offline-capable or NEEDS CLARIFICATION]
-**Scale/Scope**: [domain-specific, e.g., 10k users, 1M LOC, 50 screens or NEEDS CLARIFICATION]
+**Performance Goals**: [domain-specific, e.g., 1000 req/s, 10k lines/sec, 60 fps, or NEEDS CLARIFICATION]
+**Constraints**: [domain-specific, e.g., <200ms p95, <100MB memory, offline-capable, or NEEDS CLARIFICATION]
+**Scale/Scope**: [domain-specific, e.g., 10k users, 1M LOC, 50 screens, or NEEDS CLARIFICATION]
 
 ## Constitution Check
 
4 changes: 2 additions & 2 deletions .specify/templates/spec-template.md
@@ -18,7 +18,7 @@
 5. Generate Functional Requirements
 β†’ Each requirement must be testable
 β†’ Mark ambiguous requirements
-6. Identify Key Entities (if data involved)
+6. Identify Key Entities (if data is involved)
 7. Run Review Checklist
 β†’ If any [NEEDS CLARIFICATION]: WARN "Spec has uncertainties"
 β†’ If implementation details found: ERROR "Remove tech details"
@@ -88,7 +88,7 @@ Example of marking unclear requirements:
 
 ### Key Entities *(include if feature involves data)*
 
-- **[Entity 1]**: [What it represents, key attributes without implementation]
+- **[Entity 1]**: [What it represents, key attributes without implementation details]
 - **[Entity 2]**: [What it represents, relationships to other entities]
 
 ---