Agentic coding standards (cross-cutting)
Purpose
This guide defines prescriptive coding and review standards for teams using AI-assisted implementation (IDE assistants, codegen tools, review bots) alongside humans. It applies with any lifecycle or methodology (Scrum, Kanban, phased, etc.). Teams running Forge SDLC use the same rules and map them to Charge, Ember Log, Versona sessions, and Assay evidence as described in the Forge overlay below.
Companion: principles and risks — Agentic SDLC (cross-cutting). Specs and durable intent: Spec-driven development.
Scope and layering
| Layer | What this guide governs |
|---|---|
| Human | Owns intent, acceptance, release, and policy exceptions. |
| AI tools | Generate or refactor code, tests, and drafts under team standards, review, and CI. |
| Repository | System of record for what shipped — commits, PRs, tests, and linked work units. |
Forge teams additionally align sessions and logs with the Versona framework — kinds, interfaces, processes, sessions (session layout; §5 structured output when used).
Intent and repository as source of truth
- No chat-only specifications — acceptance criteria, IDs, and constraints live in tracked artifacts (issues, specs, docs/, SDD inputs) that a reviewer can open without scrolling a thread.
- Link work to intent — commits or PR descriptions reference backlog / requirement / Spark IDs where your process expects traceability (Agentic SDLC (cross-cutting) engineering-tracking table).
- Prefer spec-first for large AI edits — for non-trivial scope, written intent leads implementation (Spec-driven development); reduces rework and silent scope creep.
Generation discipline
- Minimal, scoped diffs — change only what the work unit requires; avoid drive-by refactors, formatting sweeps, or unrelated file churn.
- Match house style — naming, structure, and patterns consistent with the surrounding module and team directives (e.g. .cursor/rules/, CONTRIBUTING.md).
- Split when review would suffer — if agent throughput exceeds review capacity, smaller PRs and lower WIP beat huge batches (Agentic SDLC (cross-cutting) risks).
- Recoverable steps — prefer commits that are easy to bisect or revert; avoid squashing unrelated concerns into one change.
Attribution and identity
- PR transparency — state AI-assisted work when applicable (tool name optional unless policy requires it); summarize what the human verified.
- Bot and service accounts — if automation opens PRs or commits, policy should define labels, CODEOWNERS, and how they appear in history (Agentic SDLC (cross-cutting) Contributor row).
- Audit trail — reviewers must be able to see what changed and why without private chat context.
Verification
- CI and local gates — agreed checks (build, lint, tests) pass before merge unless a recorded, time-bounded exception exists (same bar as non-AI work; see Software development lifecycle Verify phase).
- Tests are not optional by default — new behavior needs tests or an explicit, reviewed justification in the work record.
- High-risk areas — auth, crypto, PII, payments, concurrency, and security-sensitive paths require human review and often extra discipline passes (e.g. Security Versona, threat-informed checklist); AI review does not replace that.
- Regulated contexts — follow organizational sign-off and evidence rules; automation supplements, not replaces, compliance gates.
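The "agreed checks pass before merge" rule can be sketched as a fail-fast gate runner. This is purely illustrative: the commands below are stand-ins for whatever build, lint, and test checks your team has agreed on, and CI should run the same set.

```python
import subprocess
import sys

# Placeholder gates: substitute your team's agreed build/lint/test commands.
GATES = [
    ("lint",  [sys.executable, "-c", "print('lint ok')"]),
    ("tests", [sys.executable, "-c", "print('tests ok')"]),
]

def run_gates(gates):
    """Run each gate in order; return the name of the first failure, else None."""
    for name, cmd in gates:
        if subprocess.run(cmd).returncode != 0:
            return name
    return None

if __name__ == "__main__":
    failed = run_gates(GATES)
    print("all gates passed" if failed is None else f"gate failed: {failed}")
```

Running the same script locally and in CI keeps the local gate and the merge gate from drifting apart.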
Security
- LLM application risks — use the OWASP Top 10 for LLM Applications as a baseline when models touch prompts, tools, data, or generated code.
- Secure SDLC overlay — depth for secure design, review, and testing lives under Security / Cybersecurity and related practice guides.
- Secrets — never commit keys, tokens, or production data into prompts, repos, or recipe configs; use secret stores and ephemeral review environments.
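A minimal sketch of the secrets rule in practice: read credentials from the environment (populated by a secret store or CI injection), never from source. The variable name `API_TOKEN` is a placeholder, not a convention this handbook defines.

```python
import os

def get_api_token() -> str:
    """Fetch a credential injected by the secret store / CI environment.

    API_TOKEN is a placeholder name. The value must never appear in
    code, prompts, repos, or recipe configs.
    """
    token = os.environ.get("API_TOKEN")
    if not token:
        raise RuntimeError("API_TOKEN is not set; inject it from your secret store")
    return token
```

Failing loudly when the variable is absent beats falling back to a hardcoded default, which is how placeholder keys end up in history.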
AI-assisted code review flow (summary)
- Bound the change — diff or PR scope, SDLC phase, and risk class.
- Automated first — linters, tests, SAST/SCA as applicable.
- Structured discipline pass — optional §5-shaped reviews via Engineering-family Versonas (e.g. Software Engineering, Security, Testing); see forge/versona/catalog/discipline/engineering/versona-se.mdc.template and the Versona contract.
- Human decision — merge, request changes, or escalate; record material trade-offs in Ember Log or ADRs when Forge or your process requires it.
IDE: teams may install the blueprint Cursor skill from run-engineering-ai-code-review/SKILL.md on GitHub. CI / container: optionally use the `llm-diff-review` template recipe (copy to agents/recipes/ per Orchestration — new agent / recipe).
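The flow above can be sketched as a toy decision rule. Everything here is invented for illustration — the field names and risk classes are not part of any Forge or tool contract; they stand in for whatever your policy records.

```python
from enum import Enum

class Decision(Enum):
    MERGE = "merge"
    REQUEST_CHANGES = "request_changes"
    ESCALATE = "escalate"

def review(change: dict) -> Decision:
    """Toy rule mirroring the summary: automated checks first, then
    extra scrutiny for high-risk paths, then the human call."""
    if not change.get("automated_checks_passed"):
        return Decision.REQUEST_CHANGES   # linters/tests/SAST failed
    if change.get("risk_class") == "high" and not change.get("human_reviewed"):
        return Decision.ESCALATE          # auth/crypto/PII paths need a human
    return Decision.MERGE                 # human decision to merge
```

The point is the ordering: automation filters first, and a high-risk change never merges on automated signals alone.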
Forge overlay (optional)
Use this mapping when Forge SDLC is the team’s methodology (Forge — deep-dive package (blueprint)).
| Standard topic | Forge mapping |
|---|---|
| Daily execution | Charge lists Sparks; keep AI work visible in the same pull/PR stream as human work. |
| Decisions and waivers | Ember Log (ember-logs/) for trade-offs, risk acceptance, or scope shifts surfaced during AI-assisted work (Daily operations). |
| Discipline challenge | Versona session under forge-logs/versona/<actor>/<session-id>/ when running a formal lens pass (Versona framework — kinds, interfaces, processes, sessions §7–8). |
| Iteration quality | Review meeting: discipline review aligns with C4-shaped quality intent (Forge — ceremonies & events (prescriptive)). |
| Release evidence | Assay Gate checklists include tests, security, and decision hygiene as your gate defines (Forge — meeting model (operational)). |
Local visibility (forge-lenses)
The forge-lenses workspace dashboard (python3 -m lenses, default http://127.0.0.1:8080) shows Standards and agentic hygiene on the Overview and each Project page: a heuristic 0–100 score, per-check table, and Suggestions from repository signals (CI, CONTRIBUTING/docs, sdlc/ or blueprints/, .cursor rules or skills, Forge-related paths, lockfiles, optional commit-message sampling). The same structure is returned on each workspace child as standards_compliance in GET /api/workspace-state. This is not a compliance audit — see the forge-lenses dashboard reference and standards_compliance.py for check ids and registry overrides.
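A minimal sketch of pulling those scores programmatically. The endpoint path and the `standards_compliance` field come from the text above; the `children` key and the exact payload shape are assumptions for this example — check the forge-lenses dashboard reference for the real schema.

```python
def standards_scores(state: dict) -> dict:
    """Map each workspace child to its heuristic 0-100 standards score.

    The payload structure (a `children` mapping whose values carry
    `standards_compliance.score`) is assumed, not documented here.
    """
    return {
        name: child.get("standards_compliance", {}).get("score")
        for name, child in state.get("children", {}).items()
    }

# Example payload shaped like the assumed response of
# GET http://127.0.0.1:8080/api/workspace-state:
sample = {"children": {"proj-a": {"standards_compliance": {"score": 72}}}}
print(standards_scores(sample))
```

In practice you would fetch the state with any HTTP client and feed the parsed JSON to `standards_scores`; remember the score is a heuristic, not a compliance audit.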
Related blueprint guides
- Agentic SDLC (cross-cutting) — agentic principles, ceremonies, risks.
- Spec-driven development — durable specs for agentic workflows.
- Forge SDLC — Forge methodology hub.
- Roles, archetypes & methodology titles — accountability and Contributor identity.
- Agents blueprint — structure & layers — containerized recipes and optional LLM steps.