Step 1: Set Up Your AI-Assisted Workflow
By the end of this step, you will have a repeatable AI workflow that produces consistent, reviewable outputs and slots cleanly into your existing development practices (branching, PRs, CI, code review).
1.1 Define the “contract” for AI use
Treat AI like a service with a clear interface.
Deliverable: a short “AI Use Policy” section in your repo README or engineering handbook.
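An illustrative policy section you might adapt (every rule here is a placeholder, not a prescription):

```markdown
## AI Use Policy

- Allowed: drafting code, tests, docs, and PR summaries.
- Required: every AI-assisted change goes through normal PR review and CI.
- Required: prompts follow the team prompt template; assumptions are listed explicitly.
- Prohibited: pasting secrets, credentials, or customer data into prompts.
- Accountability: the human who opens the PR owns the change.
```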
1.2 Create a standard prompt structure (your “prompt template”)
Use the same headings every time so outputs are predictable and comparable.
Prompt Template
- Goal: what you want (single sentence)
- Context: relevant code/design constraints, definitions, domain rules
- Inputs: files/snippets/data (only what’s needed)
- Constraints: libraries, style guides, performance/security requirements
- Output format: exact structure (diff, checklist, test plan, ADR, etc.)
- Quality bar: tests required, linting, complexity limits, edge cases
- Assumptions & questions: what to do if information is missing
Guardrail rule: If missing info prevents correctness, the AI must list assumptions explicitly instead of guessing.
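The template can also be enforced mechanically before a prompt is sent. Below is a minimal Python sketch (the function name `build_prompt` and the `##` heading syntax are assumptions, not part of the template itself) that refuses to build a prompt with missing sections instead of letting the AI guess:

```python
# Fixed heading order taken from the prompt template above.
TEMPLATE_SECTIONS = [
    "Goal", "Context", "Inputs", "Constraints",
    "Output format", "Quality bar", "Assumptions & questions",
]

def build_prompt(sections: dict[str, str]) -> str:
    """Render a prompt with every template heading, in a fixed order.

    Raises ValueError if a required section is missing, so gaps are
    caught before the prompt is sent rather than papered over.
    """
    missing = [s for s in TEMPLATE_SECTIONS if s not in sections]
    if missing:
        raise ValueError(f"Missing sections: {missing}")
    return "\n\n".join(f"## {s}\n{sections[s]}" for s in TEMPLATE_SECTIONS)
```

Using the same heading order every time is what makes outputs comparable across tasks: a reviewer always knows where to look for constraints or assumptions.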
1.3 Add “reviewability” guardrails
Make every response easy to inspect.
Require the AI to produce:
- A small, bounded change set (no “rewrite everything”)
- Rationale per change (1–2 lines each)
- Risk notes (what might break)
- Test impact (new/updated tests, how to run)
- Checklist for reviewers
Example output formats
- “Provide a unified diff”
- “Return a PR description: Summary / Changes / Tests / Risks”
- “Return an acceptance test plan in Gherkin”
- “Return a table: Edge case | Expected behavior | Test approach”
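For instance, a response using the PR-description format might look like this (the file names and contents are hypothetical):

```markdown
## Summary
Add input validation to the signup form to reject empty and overlong names.

## Changes
- forms/signup.py: validate name length before save (bounded to this file)

## Tests
- tests/test_signup.py: new cases for empty and 256-character names
- Run with: pytest tests/test_signup.py

## Risks
- Callers that relied on silent truncation will now see a validation error.
```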
1.4 Integrate into the normal dev flow (PR-first)
Keep AI outputs inside the same governance you already trust.
Recommended workflow:
1. Create a branch (human-owned)
2. Use AI to draft code/tests/docs
3. Run tests and linters locally
4. Open a PR with the AI-generated summary + your review notes
5. CI gates + human review
6. Merge
Key principle: AI can propose; humans approve.
1.5 Build your “context pack” (reusable, minimal)
A context pack is the small set of material you feed repeatedly.
Include:
- Architecture summary (1 page)
- Coding standards (lint rules, formatting)
- Domain glossary (terms, invariants)
- Test conventions (naming, fixtures, patterns)
- Security constraints (red lines)
Keep it short enough to paste or reference reliably.
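One way to keep the pack within a pasteable size is to assemble it programmatically against a hard budget. A sketch, assuming plain-text sections and an arbitrary 8,000-character limit (both the function name and the budget are illustrative):

```python
# Illustrative size budget; tune to what your AI tool accepts reliably.
BUDGET_CHARS = 8_000

def assemble_context_pack(texts: dict[str, str], budget: int = BUDGET_CHARS) -> str:
    """Concatenate named pack sections and fail loudly if the pack grows too big.

    Failing at assembly time keeps the pack minimal by forcing you to trim
    sections, rather than silently feeding the AI a truncated context.
    """
    pack = "\n\n".join(f"# {name}\n{body}" for name, body in texts.items())
    if len(pack) > budget:
        raise ValueError(f"Context pack is {len(pack)} chars; budget is {budget}")
    return pack
```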
1.6 Step completion checklist
You’re done with Step 1 when you have:
- An AI Use Policy in your repo README or handbook (1.1)
- A standard prompt template (1.2)
- Reviewability guardrails for AI outputs (1.3)
- A PR-first workflow for AI-drafted changes (1.4)
- A minimal, reusable context pack (1.5)
Step 1 “artifact” you can reuse (copy/paste)
Definition of Done for AI outputs
- Must list assumptions explicitly
- Must provide bounded changes (no unscoped rewrites)
- Must include rationale + risks
- Must include tests and how to run them
- Must be suitable for PR review
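The Definition of Done can double as an automated pre-review gate. A minimal sketch (the section names and the plain substring matching are simplifying assumptions; a real check might parse headings):

```python
# Sections a compliant AI response must mention, mirroring the checklist above.
REQUIRED_SECTIONS = ["Assumptions", "Changes", "Rationale", "Risks", "Tests"]

def check_definition_of_done(response: str) -> list[str]:
    """Return the required sections missing from an AI response.

    An empty list means the response is ready for PR review;
    anything else should be sent back to the prompt, not to a reviewer.
    """
    return [s for s in REQUIRED_SECTIONS if s not in response]
```

A check like this fits naturally as a pre-commit hook or a CI step on PRs that carry AI-drafted content.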