Fix release workflow to work with repository rules #2
Conversation
localden commented on Aug 25, 2025
- Remove problematic direct push to main branch
- Keep version updates only for release artifacts
- Add pull-requests permission for future flexibility
- Releases/tags created via API don't require branch pushes
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Pull Request Overview
This PR updates the GitHub release workflow to comply with repository rules by removing direct pushes to the main branch. The version updates are now limited to release artifacts only, eliminating the need to commit version changes back to the repository.
Key changes:
- Removed the git commit and push step that was directly updating the main branch
- Added pull-requests write permission for future workflow flexibility
- Updated comments to clarify that version updates are only for release artifacts
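The permissions change described above might look roughly like the following in the workflow file. This is an illustrative sketch only: the workflow filename and the `contents: write` line are assumptions, since the PR page does not show the actual file contents.

```yaml
# .github/workflows/release.yml (hypothetical path)
permissions:
  contents: write        # create releases/tags via the API; no push to main needed
  pull-requests: write   # added for future workflow flexibility
```

Because releases and tags are created through the API, the job no longer needs to commit version bumps back to the protected main branch.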
Use forked repo for template download
Update template path for spec file creation
Fix release workflow to work with repository rules
Update template path for spec file creation
- Rename and refactor project creation scripts for better clarity:
  - scripts/bash/create-github-project.sh -> scripts/bash/github-project.sh
  - scripts/powershell/create-github-project.ps1 -> scripts/powershell/github-project.ps1
  - Remove old backup files
- Update .gitignore to exclude development files:
  - Add .windsurf/ directory
  - Exclude scripts/README.md
  - Ignore scripts/bash/backup/
  - Add Python cache directories
  - Exclude uv.lock
- Enhance projectize.md template with:
  - Improved documentation for project board creation
  - Better template options (kanban, basic, bug-triage)
  - Clearer usage instructions
  - Updated requirements and notes
  - Better cross-platform support
- Update feature README with latest changes and improvements

This refactor improves maintainability and provides better user guidance for GitHub project management tasks. Related: github#2-gh-project-creation
merge from speckit upstream
BREAKING FIX: Commands were incorrectly named speckit.* instead of researchkit.*

## Issue

After installation, users saw commands like:
- /speckit.define (wrong)
- /speckit.methodology (wrong)

Instead of the documented:
- /researchkit.define (correct)
- /researchkit.methodology (correct)

## Root Cause

The release script was using a hardcoded 'speckit' prefix when generating command files from templates.

## Fix

Updated create-release-packages.sh lines 95-99:
- speckit.$name.$ext → researchkit.$name.$ext

## Commands Now Generated Correctly

- researchkit.principles
- researchkit.define
- researchkit.refine
- researchkit.methodology
- researchkit.validate
- researchkit.tasks
- researchkit.execute
- researchkit.quality

This aligns with the documentation and Research Kit branding. Fixes github#2
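The renaming fix in create-release-packages.sh can be pictured with a small sketch. The `command_file` helper and the hardcoded list of names below are illustrative, not the actual script contents:

```shell
#!/bin/sh
# Hypothetical sketch of the filename-generation step that was fixed.
# Before the fix the prefix was hardcoded as "speckit".
command_file() {
  prefix="researchkit"                     # corrected prefix
  name="$1"                                # e.g. "define"
  ext="${2:-md}"                           # agent-specific extension, defaults to md
  printf '%s.%s.%s\n' "$prefix" "$name" "$ext"
}

# Generate the command files listed in the commit message:
for name in principles define refine methodology validate tasks execute quality; do
  command_file "$name"
done
```

With this change, an installed package exposes `/researchkit.define` rather than the incorrectly branded `/speckit.define`.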
…r feedback)

Implements fixes for the workflow-reviewer agent's identified critical issues:

1. Fix State Management Gap (Critical github#1)
   - Added spec-metadata.json generation in /speckit.quick Phase 1
   - Enables integration with /speckit.status and /speckit.pm (v2.3 compatibility)
   - Metadata tracks: workflow_type, phase, approvals, risk_level
   - Updates metadata after each phase (pre-flight, implementation, quality gate, complete)
   - File: .specify/quick-tasks/quick-task-[timestamp]-metadata.json

2. Clarify Token Budget Calculation (Critical github#2)
   - Phase 3 now explicitly states: "30-50K total (includes tactical context loading + implementation execution)"
   - Removed ambiguity about whether the 20-30KB tactical context is additional or included
   - Confirmed total budget: 57-94K tokens (~$1.10-$1.80)

3. Verify Documentation Consistency (Critical github#3)
   - Verified command counts are correct: 18 core + 3 epic = 21 total
   - Confirmed /speckit.quick is in all relevant tables (CLAUDE.md, README.md)
   - No changes needed; documentation was already accurate

4. Add Risk Scoring to Step 0.5 (Major github#4)
   - Added heuristic risk assessment BEFORE complexity analysis
   - HIGH-RISK indicators: payment, auth, multi-tenant, compliance (GDPR/HIPAA/PCI), database migration
   - MEDIUM-RISK indicators: database, schema change, API endpoint, real-time, bulk operations
   - Decision logic:
     - ANY HIGH-RISK keyword → block quick workflow, require full workflow
     - ≥2 MEDIUM-RISK keywords → block quick workflow, recommend full workflow
     - ELSE → LOW-RISK (0-3) → continue to complexity analysis
   - Prevents users from accidentally using /speckit.quick on HIGH-risk tasks

Benefits:
- State management enables workflow tracking and status visibility
- Token budget clarity prevents cost estimation errors
- Risk scoring prevents inappropriate use of the quick workflow for security-critical/high-risk features
- Maintains constitutional enforcement and quality gates

Files Modified:
- src/.claude/commands/speckit.quick.md:
  - Added metadata generation in Phase 1 (lines 167-215)
  - Added metadata updates in Phases 2-5 (pre-flight, implementation, quality gate, complete)
  - Clarified Phase 3 token budget (line 377: "30-50K total includes tactical context")
- src/.claude/commands/speckit.specify.md:
  - Added Quick Risk Assessment to Step 0.5 (lines 110-141)
  - HIGH-RISK/MEDIUM-RISK keyword detection
  - Blocks quick workflow for risky features

Overall Assessment: Addresses all critical issues identified by workflow-reviewer. Estimated improvement: 8.5/10 (was 7.2/10)

Version: v2.9.1 (patch)
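The Step 0.5 decision logic described in the commit could be sketched as a small keyword heuristic. Note that `assess_risk`, the exact keyword lists, and the output strings below are illustrative assumptions; the actual command is a markdown prompt, not a script:

```shell
#!/bin/sh
# Hypothetical sketch of the Step 0.5 risk heuristic (keywords abbreviated).
HIGH_RISK="payment auth multi-tenant compliance gdpr hipaa pci migration"
MEDIUM_RISK="database schema api endpoint real-time bulk"

assess_risk() {
  # Case-insensitive substring match against the feature description.
  desc=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')

  # ANY high-risk keyword blocks the quick workflow outright.
  for kw in $HIGH_RISK; do
    case "$desc" in *"$kw"*) echo "HIGH: block quick workflow"; return ;; esac
  done

  # Two or more medium-risk keywords → recommend the full workflow.
  medium=0
  for kw in $MEDIUM_RISK; do
    case "$desc" in *"$kw"*) medium=$((medium + 1)) ;; esac
  done
  if [ "$medium" -ge 2 ]; then
    echo "MEDIUM: recommend full workflow"
  else
    echo "LOW: continue to complexity analysis"
  fi
}
```

For example, "add payment retry logic" would be blocked immediately, while "new API endpoint with schema change" would trip two medium-risk keywords and route to the full workflow.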
This commit adds a complete verification-driven approach to code analysis, addressing a critical question: how do we know AI analysis results are correct?

Key innovation: Analysis Verification Loop
- AI analyzes code → generates hypotheses → creates verification tests → runs tests → verifies hypotheses → outputs only verified findings

Core additions:

1. Comprehensive design doc: docs/code-review-with-verification.md
   - Explains why verification is essential
   - Six-step verification workflow (hypothesis → test → verify → report)
   - Solves AI hallucination through executable tests
   - Inspired by spec-kit's verification-driven approach

2. Complete spec example: examples/verified-code-review.spec.yaml
   - Detailed workflow with verification steps
   - Behavioral tests (verify code does what it should)
   - Exploit tests (verify security vulnerabilities are real)
   - Benchmark tests (verify performance issues exist)
   - GitHub Issue-style output format

3. Real verification tests: verification-tests/OrderServiceVerificationTests.java
   - 7 complete JUnit tests validating business logic
   - Tests for transaction consistency, concurrency, data flow
   - Self-documenting: tests that fail = bugs found
   - Includes expected failures (H2, H5) proving bugs exist

4. Example output: 04-verified-business-logic-issues.md
   - GitHub Issue-style report
   - Issue github#1: transaction consistency bug (with test proof)
   - Issue github#2: concurrent overselling bug (with test proof)
   - Each issue includes code evidence, test code, test output, impact analysis, reproduction steps, and fix suggestions
   - Only outputs verified findings (no speculation)

Why this matters:
- Traditional AI analysis: AI says "there might be a bug" → the user must verify manually
- Verification-driven: AI says "bug confirmed by test" → the user sees executable proof
- Reduces false positives, increases confidence
- Makes code review results actionable and trustworthy

Alignment with spec-kit philosophy:
- spec-kit: write code → run tests → verify → iterate
- code-review: analyze code → generate tests → verify → report
- Both use verification as the source of truth