Review Workflows
Your team has a rule: every PR needs at least one review before merging. In practice, reviews pile up. The senior developer who reviews most PRs is in meetings until 3pm, so PRs sit for hours. When they finally review, they are rushing through a backlog of 6 PRs. They catch the obvious formatting issues but miss the subtle bug where a null check is in the wrong order. That bug ships to production on Friday afternoon.
The bottleneck is not the rule — it is the workflow. AI-assisted reviews do not replace human reviewers. They amplify them. The AI catches the mechanical issues (missing error handling, inconsistent patterns, potential null references) so the human reviewer can focus on the things AI cannot judge: architecture decisions, business logic correctness, and whether the approach is the right one.
What You’ll Walk Away With
- A self-review workflow that catches issues before you submit a PR
- Copy-paste prompts for reviewing your own code and others’ code
- Techniques for using Ask mode as a review assistant
- Strategies for writing review-ready PRs that get approved faster
The Self-Review Before Submission
The highest-leverage review workflow is reviewing your own code before anyone else sees it. This catches 80% of review comments before they happen.
Run this prompt in Ask mode before creating your PR. Fix what it finds, then submit. Your reviewers will spend their time on architecture and business logic instead of pointing out that you forgot to handle the empty array case.
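A self-review prompt along these lines works as a starting point (a sketch; the specific checks are assumptions, so tailor them to your stack):

```text
Review all uncommitted changes on this branch as if you were a strict
reviewer seeing this code for the first time. Check for:
- Missing error handling (unhandled promises, empty catch blocks)
- Unhandled edge cases (empty arrays, null/undefined inputs)
- Inconsistencies with existing patterns in this codebase
- Leftover debug code, TODOs, or commented-out blocks
List each issue with the file and line, and suggest a fix.
```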
Reviewing Other People’s Code
Getting Up to Speed Quickly
When you open a PR with 15 changed files and 400 lines of diff, the first challenge is understanding what is going on:
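One way to phrase an orientation prompt (a sketch; adjust the groupings to the PR in front of you):

```text
Summarize this PR for a reviewer. Group the changed files by purpose
(core logic, tests, config, refactoring), point me at the files that
carry the real behavior change, and flag anything that looks unrelated
to the stated goal of the PR.
```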
This saves 10 minutes of scrolling through the diff trying to figure out what the PR is actually doing.
Reviewing for Specific Concerns
Once you understand the PR, focus your review on specific concerns rather than trying to catch everything in one pass:
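A focused pass looks something like this (a security-only sketch; run analogous prompts for performance and error handling):

```text
Review the changes on this branch compared to main for security issues
only. Ignore style and performance. Look for unvalidated user input,
missing authorization checks, and secrets committed in code or config.
For each finding, give the file, line, and a concrete fix.
```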
Run separate focused reviews for security, performance, and error handling rather than one vague “review this code” prompt. Three focused passes catch more issues than one unfocused pass.
Generating PR Descriptions
Good PR descriptions speed up reviews. Let Cursor generate them from your actual changes:
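A description-generation prompt might read (a sketch; the three-section structure is an assumption you can adapt):

```text
Look at the diff between this branch and main and draft a PR description
with three sections: "What changed" (bullet list of the meaningful
changes), "Why" (leave placeholders where business context is needed),
and "What to pay attention to" (risky or non-obvious parts of the diff).
```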
A good PR description answers the reviewer’s first three questions before they even look at the code: what changed, why it changed, and what to pay attention to.
The Two-Pass Review Workflow
For thorough reviews, use a structured two-pass approach:
- Pass 1: AI-assisted scan. Use Ask mode to analyze the diff for mechanical issues (missing error handling, inconsistent patterns, potential bugs). Fix or note everything the AI finds.
- Pass 2: Human review. With the mechanical issues already handled, focus on the things AI cannot judge — is this the right approach? Does the architecture make sense? Will this be maintainable in six months?
The AI handles the checklist. You handle the judgment.
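Pass 1 can be driven by a single prompt; one possible phrasing (a sketch):

```text
Analyze the diff between this branch and main for mechanical issues only:
missing error handling, inconsistent patterns, potential null references,
and obvious bugs. Do not comment on the overall approach or architecture;
a human reviewer will cover that in a second pass.
```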
Review Rules for Consistency
Create a rule that standardizes how the AI reviews code across your team:
```
---
description: Standards for AI-assisted code review
alwaysApply: false
---

When reviewing code changes:

1. Check against our coding standards:
   - All public functions must have JSDoc comments
   - All API endpoints must validate inputs with Zod
   - All database queries must use parameterized queries
   - Error responses must follow the format in @src/lib/errors.ts

2. Check for common mistakes:
   - Promises without await or .catch()
   - Array methods on potentially undefined values
   - Missing null checks before property access
   - Hardcoded strings that should be constants or env vars

3. Do not flag:
   - Style preferences (formatting is handled by Prettier)
   - Import ordering (handled by ESLint)
   - Variable naming unless it is genuinely confusing
```

Store this in .cursor/rules/review-standards.mdc. When anyone on the team runs a review prompt, the agent applies the same standards.
Responding to Review Comments
When you receive review feedback, Cursor can help you address it efficiently:
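For example, paste the reviewer's comment into Ask mode (the file path and comment below are hypothetical, for illustration only):

```text
A reviewer left this comment on src/services/payment-service.ts:
"This retry loop can hammer the API if the service is down. Consider
exponential backoff."
Explain what the reviewer means, show how the current code behaves in
that scenario, and propose a change that addresses the concern.
```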
This is particularly useful for junior developers who receive review feedback they do not fully understand. The AI explains the reviewer’s reasoning and helps decide whether to accept or push back.
Custom Review Commands
For reviews you run repeatedly, save them as custom commands:
```
---
description: Run a security-focused review on the current changes
---

Review all changes on the current branch compared to main for security
issues.

Check for:
- User input that reaches database queries without parameterization
- Missing authentication or authorization checks
- Secrets or credentials in code or configuration
- User input rendered in HTML without sanitization
- File paths constructed from user input
- Deserialization of untrusted data

For each finding, rate it as Critical, High, Medium, or Low severity.
Provide the file, line, the vulnerability, and a specific fix.
```

Save this as .cursor/commands/security-review.md and invoke it with /security-review before every PR.
Reviewing AI-Generated Code
AI-generated code needs extra scrutiny because it often looks correct but has subtle issues:
```
I used Agent mode to generate the code in
@src/services/notification-service.ts.

Review it with extra scrutiny for:
1. Plausible-looking code that actually does nothing (e.g., error
   handlers that swallow errors)
2. Hardcoded values that should be configurable
3. Mock-like patterns that accidentally made it into production code
4. Functions that look complete but miss edge cases
5. Dependencies that are imported but do not exist in package.json

This code was AI-generated, so do not assume any of it is correct.
Verify everything.
```

AI-generated code has a specific failure pattern: it looks more correct than it is. A function may have a try/catch block that catches errors but does nothing with them. A validation function may check three of four required fields. These are easy to miss in review because the code structure looks right even though the behavior is wrong.
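The "plausible but wrong" pattern is easiest to see in miniature. Here is a hypothetical sketch (not from any real PR) of an error handler that looks like error handling but swallows the failure, next to the version a reviewer should push for:

```typescript
// Plausible-looking but wrong: the catch block swallows the error, so
// the caller sees "success" even when the operation failed.
function saveBroken(write: () => void): boolean {
  try {
    write();
    return true;
  } catch (err) {
    return true; // looks like handling; actually hides the failure
  }
}

// What a review should push for: log the failure and report it to
// the caller so the error is visible and actionable.
function saveFixed(write: () => void): boolean {
  try {
    write();
    return true;
  } catch (err) {
    console.error("save failed:", err);
    return false;
  }
}
```

The structure of both functions is identical, which is exactly why the broken version passes a quick skim.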
When This Breaks
AI review misses business logic errors. AI reviews catch structural issues (missing error handling, inconsistent patterns) but cannot verify that the business logic is correct. A function that calculates tax wrong will pass every AI review if the code structure is clean. Business logic validation is always a human responsibility.
Reviews become a rubber stamp. If you always accept AI review output without reading it, you miss the context-specific issues. Use the AI findings as a starting point for your own review, not as a replacement for it.
Too many false positives. The AI flags things that are intentional. Add a review rule (as shown above) that specifies what NOT to flag. This trains the AI to match your team’s standards.
PR descriptions are generic. The AI-generated description says “updated the service” instead of explaining why. Add the “Why” section explicitly in your prompt and fill in the business context that the AI cannot infer.
What’s Next
- Testing Patterns — Ensuring code quality before it reaches review
- Debugging Workflows — Investigating issues found during review
- Team Collaboration — Scaling review practices across a team