Full-Stack Review Pack

The all-in-one review pack. Run these six prompts on any significant PR to catch what human reviewers miss in AI-generated code: hidden dependencies, implicit choices, unhandled edge cases, missing tests, observability gaps, and naming inconsistencies.

Best used on PRs with 200+ lines of AI-generated changes, or any PR that touches core business logic.

Step 1 | v1.0

Blast Radius Analyzer

Maps every system, service, and data store affected by a proposed change to surface hidden risks before writing code.
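The core of this analysis can be sketched as a walk over a reverse-dependency graph. A minimal illustration, assuming a precomputed map from each module to the modules that import it (all module names here are hypothetical):

```python
from collections import deque

# Hypothetical reverse-dependency map: module -> modules that depend on it.
REVERSE_DEPS = {
    "billing/core.py": ["billing/api.py", "reports/invoices.py"],
    "billing/api.py": ["web/routes.py"],
    "reports/invoices.py": [],
    "web/routes.py": [],
}

def blast_radius(changed: str) -> set[str]:
    """Return every module transitively affected by a change to `changed`."""
    seen, queue = set(), deque([changed])
    while queue:
        module = queue.popleft()
        for dependent in REVERSE_DEPS.get(module, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

print(sorted(blast_radius("billing/core.py")))
# → ['billing/api.py', 'reports/invoices.py', 'web/routes.py']
```

In practice the map would come from import analysis or a build graph; the prompt extends the same idea past code to services, queues, and data stores.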

Step 2 | v1.0

Implicit Decision Auditor

Surfaces every implicit decision the AI made in generated code — library picks, patterns, error strategies — so you can accept or override them deliberately.
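The output of this audit is easiest to act on as a structured decision log. A small sketch of that shape, with invented example decisions:

```python
from dataclasses import dataclass

@dataclass
class ImplicitDecision:
    category: str               # e.g. "library", "pattern", "error strategy"
    choice: str                 # what the AI silently picked
    alternative: str            # a plausible option it skipped
    verdict: str = "undecided"  # set to "accept" or "override" during review

# Hypothetical decisions surfaced from one AI-generated PR:
decisions = [
    ImplicitDecision("library", "requests", "httpx (async support)"),
    ImplicitDecision("error strategy", "retry 3x with fixed delay",
                     "exponential backoff with jitter"),
]

for d in decisions:
    print(f"[{d.category}] chose {d.choice!r} over {d.alternative!r}: {d.verdict}")
```

Every decision starts as "undecided" so nothing gets accepted by default, which is the point of the prompt.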

Step 3 | v1.0

Edge Case Sweep

Systematically identifies unhandled edge cases in AI-generated code across null values, concurrency, boundaries, encoding, and partial failures.
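To make the categories concrete, here is a toy helper (hypothetical, not from any real PR) with one probe per applicable category; concurrency, encoding, and partial failures don't arise for a pure function like this one:

```python
def percent_of(part, whole):
    """Percent of `whole` represented by `part` (hypothetical helper)."""
    if part is None or whole is None:   # null values: fail fast with a clear error
        raise ValueError("inputs must not be None")
    if whole == 0:                      # boundary: zero denominator
        return 0.0
    return 100.0 * part / whole

assert percent_of(5, 0) == 0.0          # boundary probe
assert percent_of(-5, 10) == -50.0      # negative-input probe
try:
    percent_of(None, 10)                # null-value probe
except ValueError:
    pass
```

AI-generated versions of helpers like this typically handle only the happy path; the sweep walks each category and asks what the code does there.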

Step 4 | v1.0

Test Coverage Gap Finder

Identifies untested code paths, missing test cases, weak assertions, and coverage blind spots in AI-generated code and its accompanying tests.
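The "weak assertions" failure mode is worth seeing side by side. A sketch using a made-up `dedupe` helper:

```python
def dedupe(items):
    """Remove duplicates while preserving first-seen order (hypothetical helper)."""
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# Weak assertion: passes even if the result has the wrong items or order.
result = dedupe([3, 1, 3, 2])
assert result  # only checks truthiness — a coverage blind spot

# Strong assertions: pin down value, order, and an untested path (empty input).
assert dedupe([3, 1, 3, 2]) == [3, 1, 2]
assert dedupe([]) == []
```

AI-generated test suites often look thorough while asserting almost nothing; this prompt flags exactly that gap.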

Step 5 | v1.0

Observability Gap Finder

Identifies missing logging, metrics, traces, alerts, and error classification in AI-generated code so you can debug production issues before they happen.
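A minimal sketch of what "present" looks like, using the standard `logging` module; `_gateway_charge` is a stand-in stub, and the field names are assumptions:

```python
import logging

logger = logging.getLogger("payments")

def _gateway_charge(order_id: str, amount_cents: int) -> bool:
    return True  # stand-in for a real payment-gateway call

def charge(order_id: str, amount_cents: int) -> bool:
    try:
        ok = _gateway_charge(order_id, amount_cents)
    except TimeoutError:
        # Error classification: lets alerts separate retryable from fatal.
        logger.error("charge timeout",
                     extra={"order_id": order_id, "error_class": "retryable"})
        raise
    # Structured success log: fields, not prose, so dashboards can aggregate.
    logger.info("charge succeeded",
                extra={"order_id": order_id, "amount_cents": amount_cents})
    return ok
```

AI-generated code tends to produce the `try`/`except` skeleton without the log lines; the prompt checks each failure path for a signal you could actually alert on.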

Step 6 | v1.0

Naming Consistency Audit

Checks naming conventions across AI-generated code for inconsistent casing, abbreviation drift, domain term misuse, and style guide violations.
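Casing drift in particular is mechanical enough to sketch: classify each identifier's style, find the dominant one, and flag outliers. A minimal version (the identifier list is invented):

```python
import re
from collections import Counter

def casing_style(name: str) -> str:
    """Classify an identifier as snake_case, camelCase, or other."""
    if re.fullmatch(r"[a-z]+(_[a-z0-9]+)*", name):
        return "snake_case"
    if re.fullmatch(r"[a-z]+([A-Z][a-z0-9]*)+", name):
        return "camelCase"
    return "other"

names = ["user_id", "userId", "order_total", "retry_count"]
styles = Counter(casing_style(n) for n in names)
dominant, _ = styles.most_common(1)[0]
outliers = [n for n in names if casing_style(n) != dominant]
print(outliers)  # → ['userId'] — casing drift against the snake_case majority
```

Abbreviation drift and domain-term misuse need the project's glossary and style guide, which is where the full prompt goes beyond what a regex can do.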