
Using Cursor as AI Pair Programmer

You are building a notification preferences system. The requirements are clear, but the implementation has tricky parts: timezone-aware scheduling, channel-specific delivery rules, user-level overrides on top of organization-level defaults, and a real-time preview of what notifications a user would receive. You could write it all yourself, but you know from experience that the edge cases will take three times longer than the happy path. You need someone to think through the design with you, catch your blind spots, and handle the repetitive parts while you focus on the tricky logic.

That is what pair programming with Cursor looks like. Not a tool that generates code on demand, but a collaborator that participates in the entire arc of a feature — from design discussion through implementation to debugging.

In this guide, you will learn:

  • A mental model for when to use Ask, Agent, and Plan modes during a pairing session
  • Techniques for maintaining context across a long development session
  • Copy-paste prompts for starting design discussions, implementing incrementally, and handling debugging detours
  • A workflow for reviewing AI-generated code in real time rather than after the fact
  • Strategies for keeping the AI on track when it drifts from your architecture

Effective pair programming in Cursor depends on switching between modes at the right moments. Think of it like pair programming with a human: sometimes you want to discuss an approach (Ask), sometimes you want your partner to drive (Agent), and sometimes you need a detailed plan before either of you touches the keyboard (Plan).

For any feature that touches more than two files or involves non-obvious design decisions, start in Plan mode. Press Shift+Tab from the chat input to cycle to Plan mode, or let Cursor suggest it when it detects a complex request.

Plan mode will research your codebase, ask clarifying questions (like “Do you already have a user settings table? What notification system are you using?”), and produce a step-by-step plan. You can edit this plan, push back on decisions, and refine it before a single line of code is written.

This is fundamentally different from jumping straight into Agent mode with “build a notification preferences system.” The plan gives you a shared understanding with the AI before implementation begins.

When you hit a design fork in the road, switch to Ask mode (Ctrl + . to cycle modes). Ask mode is read-only — it cannot modify files — which makes it safe for exploratory discussions.

@src/models/user.ts @src/models/organization.ts
I'm deciding between two approaches for org-level vs user-level notification preferences:
Option A: Single preferences table with a "scope" column (org or user) and a
precedence system that merges org defaults with user overrides at query time.
Option B: Separate tables for org_preferences and user_preferences, with a
merge function in the application layer.
Given our existing data models, which approach:
- Is easier to query for "what notifications will this user receive?"
- Handles the case where a user hasn't set any preferences (falls back to org defaults)?
- Is easier to extend when we add team-level preferences later?

Ask mode gives you a thoughtful analysis without touching your files. You make the decision, then switch to Agent mode for implementation.

Once you have a plan and have resolved design questions, Agent mode is your implementation partner. The key technique is working incrementally — one piece at a time, reviewing each change before moving on.

@src/models @src/lib/db/schema.ts
Implement step 1 from our plan: the notification preferences data model.
Create a preferences table with:
- id (uuid, primary key)
- scope_type (enum: 'organization', 'user')
- scope_id (uuid, references either org or user)
- channel (enum: 'email', 'sms', 'push', 'in_app')
- enabled (boolean, default true)
- delivery_window_start (time, nullable for "anytime")
- delivery_window_end (time, nullable)
- timezone (text, default 'UTC')
- created_at, updated_at timestamps
Follow the same Drizzle ORM patterns used in our existing schema.
Generate the migration file as well.
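For orientation during review, here is a sketch of the kind of Drizzle schema a prompt like this tends to produce. It assumes a Postgres setup with `drizzle-orm/pg-core`; the names and conventions are illustrative, not your project's actual code:

```typescript
import {
  pgTable,
  pgEnum,
  uuid,
  boolean,
  time,
  text,
  timestamp,
} from 'drizzle-orm/pg-core';

export const scopeTypeEnum = pgEnum('scope_type', ['organization', 'user']);
export const channelEnum = pgEnum('channel', ['email', 'sms', 'push', 'in_app']);

export const preferences = pgTable('preferences', {
  id: uuid('id').primaryKey().defaultRandom(),
  scopeType: scopeTypeEnum('scope_type').notNull(),
  // Points at an organization or a user, depending on scope_type.
  scopeId: uuid('scope_id').notNull(),
  channel: channelEnum('channel').notNull(),
  enabled: boolean('enabled').notNull().default(true),
  deliveryWindowStart: time('delivery_window_start'), // null means "anytime"
  deliveryWindowEnd: time('delivery_window_end'),
  timezone: text('timezone').notNull().default('UTC'),
  createdAt: timestamp('created_at').notNull().defaultNow(),
  updatedAt: timestamp('updated_at').notNull().defaultNow(),
});
```

Knowing roughly what you expect makes the next step — reviewing the diff — much faster.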

After Agent generates the code, review the diff carefully. This is the “pair” in pair programming — you are not just accepting code, you are evaluating it. If something is off, tell the agent:

The migration looks good, but delivery_window_start and delivery_window_end
should use the "time" type without timezone since we store timezone separately.
Also add a unique constraint on (scope_type, scope_id, channel) to prevent
duplicate preferences.

Agent will update the code based on your feedback. This back-and-forth is where the real productivity comes from — not in the first generation, but in the rapid iteration.

Long pairing sessions (building an entire feature over several hours) require context management. The AI does not remember your previous conversations automatically, but you can structure your workflow to maintain continuity.

When Agent is working on one step, you can queue follow-up instructions. Type your next request and press Enter — it waits in the queue and executes after the current task finishes. This lets you plan ahead while Agent works, mimicking how a human pair would think ahead while their partner types.

If you created a plan earlier, save it to your workspace (Plan mode offers a “Save to workspace” option). Then reference it in subsequent prompts:

@plans/notification-preferences.md
We've completed steps 1-3. Now implement step 4: the preference merge function
that combines org defaults with user overrides. Follow the approach we decided on
(single table with scope-based precedence).
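It helps to have the expected shape of the merge in mind before reviewing Agent's output. A minimal sketch of scope-based precedence, with a simplified `Preference` type standing in for the real model:

```typescript
type Channel = 'email' | 'sms' | 'push' | 'in_app';

// Simplified stand-in for the real model; field names are illustrative.
interface Preference {
  scopeType: 'organization' | 'user';
  channel: Channel;
  enabled: boolean;
  deliveryWindowStart: string | null; // "HH:MM", null = anytime
  deliveryWindowEnd: string | null;
  timezone: string;
}

// Single-table approach: user-level rows take precedence over org-level rows
// for the same channel. Channels with only an org row keep the org default.
function mergePreferences(prefs: Preference[]): Map<Channel, Preference> {
  const merged = new Map<Channel, Preference>();
  // Apply org defaults first...
  for (const p of prefs) {
    if (p.scopeType === 'organization') merged.set(p.channel, p);
  }
  // ...then overwrite with user overrides.
  for (const p of prefs) {
    if (p.scopeType === 'user') merged.set(p.channel, p);
  }
  return merged;
}
```

If Agent's version differs structurally (say, it merges per-field rather than per-row), that is a design question worth raising explicitly rather than silently accepting.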

If your conversation gets long enough that responses start losing coherence, use the /summarize command. This compresses the conversation history while preserving key decisions and context. Alternatively, start a new chat and reference your plan file plus the files you have already modified:

@plans/notification-preferences.md @src/models/preferences.ts @src/services/preferences.ts
Continuing from a previous session. Steps 1-4 are complete (data model, migration,
API endpoints, merge function). Now implement step 5: the real-time preview endpoint
that shows what notifications a user would receive with their current settings.

When something goes wrong during implementation, resist the urge to debug silently. Instead, bring Cursor into the debugging process the same way you would involve a human pair.

Switch to Debug mode (available in the mode picker) for tricky bugs. Debug mode is specifically designed for this: it generates hypotheses, adds instrumentation, asks you to reproduce the bug, then analyzes the collected logs to find the root cause.

For simpler issues, Ask mode works well. Describe the symptom and point it at the relevant files:

@src/services/preferences.ts
The preview endpoint returns org defaults even for users who have set their own
overrides. Trace the code path from the endpoint to the merge function and list
the most likely places the user-level preferences are being dropped.

The AI will analyze the code path and identify likely culprits. It might spot that the user’s organization ID is not being passed correctly to the merge function, or that the query is filtering on the wrong scope_type.

AI pair programmers drift. They introduce patterns you did not agree on, use libraries you do not want, or solve a different problem than the one you asked about. Here are strategies for keeping them aligned.

Implement the delivery window check. Do NOT:
- Use moment.js or any date library besides the built-in Temporal API
- Add a new database query -- use the preferences data already loaded
- Change the function signature of checkDeliveryWindow()

Negative constraints are surprisingly effective at preventing drift.
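When reviewing what comes back, it helps to have the expected logic in mind. A minimal, dependency-free sketch of a window check over minutes-of-day — illustrative only; the prompt's `checkDeliveryWindow()` signature and the timezone conversion via Temporal are left to your actual code:

```typescript
// Returns true when `nowMin` (minutes since midnight, already converted to the
// preference's timezone) falls inside the delivery window.
// A null boundary means no window is configured, i.e. deliver anytime.
function isInDeliveryWindow(
  startMin: number | null,
  endMin: number | null,
  nowMin: number,
): boolean {
  if (startMin === null || endMin === null) return true;
  if (startMin <= endMin) {
    // Normal window, e.g. 09:00-17:00.
    return nowMin >= startMin && nowMin < endMin;
  }
  // Wrap-around window, e.g. 22:00-06:00.
  return nowMin >= startMin || nowMin < endMin;
}
```

The wrap-around branch is exactly the kind of edge case worth checking in the generated diff: an agent that only handles `start <= end` will silently drop overnight quiet hours.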

Before Agent makes a large change, create a checkpoint manually or rely on Cursor’s automatic checkpoints. If the generated code goes in the wrong direction, restore the checkpoint and try a different prompt. Think of it as git stash for your pairing session.

Do not ask Agent to implement an entire feature in one prompt. Break it into steps and review each step’s diff before moving on. This is slower per-step but faster overall because you catch problems early instead of untangling them from 500 lines of generated code.

If you find yourself repeatedly correcting the same issue (“no, use our custom logger, not console.log”), create a project rule:

.cursor/rules/coding-standards.mdc
# Coding Standards
- Use the logger from src/lib/logger.ts, never console.log
- Use Temporal API for dates, not moment or dayjs
- Use Zod for request validation, not manual checks
- Error responses must use the AppError class from src/lib/errors.ts
- Database queries go through the repository layer in src/repositories/

This rule applies to every Agent interaction, so you stop repeating yourself.

Agent ignores your plan and improvises. This happens when the plan file is not in context. Always @reference the plan file in your prompt. If the plan is too long, excerpt the relevant step.

The AI generates more code than you asked for. You asked for the data model and it also built the API endpoints and tests. Be explicit about scope: “Implement ONLY the data model. Stop after generating the migration. Do not create any API routes or tests.”

Context window fills up in a long session. After many back-and-forth exchanges, responses get shorter and less coherent. Use /summarize or start a fresh chat with explicit file references to the work done so far.

The AI suggests a fundamentally wrong approach. If you are three prompts deep and realize the AI went down the wrong path, do not try to course-correct incrementally. Restore the checkpoint, revise your prompt to be more specific about the approach you want, and start that section over.

Generated code does not match your existing patterns. This is the most common issue in pair programming sessions. The fix is always the same: reference a concrete example of the pattern you want. @src/services/orders.ts "Follow the exact same error handling pattern used here" beats a paragraph of description.