The agent just refactored your authentication module. It moved functions between files, updated imports, and added a new validation layer. The tests pass — but the login page throws a white screen in the browser. The terminal shows no errors. The agent is confident its changes are correct. Welcome to the reality of AI-assisted development: the code looks right, passes lint, and breaks in ways that are subtle and frustrating. This article is your field guide to diagnosing and recovering from exactly these situations.
AI-generated bugs are different from human bugs. Understanding the difference changes how you debug:
| Human Bugs | AI-Generated Bugs |
|---|---|
| Typos and off-by-one errors | Structurally correct but semantically wrong |
| Forgotten edge cases | Confident about wrong assumptions |
| Copy-paste mistakes | Consistent pattern applied to wrong context |
| Logic errors in complex flows | Working code that solves the wrong problem |
The most dangerous AI bugs are the ones that look perfectly reasonable. The code compiles, the types check, the tests pass — but the behavior is subtly wrong because the agent misunderstood a requirement or applied a pattern from the wrong part of your codebase.
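A hypothetical sketch of this failure mode: the code compiles, type-checks, and reads cleanly, yet solves the wrong problem (the function and requirement are invented for illustration).

```typescript
// Hypothetical example: the requirement was "return the average latency",
// but the agent applied a summation pattern seen elsewhere in the codebase.
// It is structurally correct and passes the type checker, yet it is
// semantically wrong.
function averageLatencyMs(samples: number[]): number {
  // Clean, idiomatic reduce -- but it never divides by samples.length.
  return samples.reduce((total, sample) => total + sample, 0);
}

// What the requirement actually asked for:
function correctAverageLatencyMs(samples: number[]): number {
  return samples.reduce((total, sample) => total + sample, 0) / samples.length;
}

console.log(averageLatencyMs([10, 20, 30]));        // 60, not the average
console.log(correctAverageLatencyMs([10, 20, 30])); // 20
```

A test written against the same misunderstanding would pass, which is why this class of bug survives the usual checks.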
Cursor creates a checkpoint before every set of changes the agent makes. This is your most important recovery tool.
Every time the agent modifies files, Cursor saves the state of all affected files. You can see checkpoints in the conversation timeline as numbered markers.
For complex tasks, create explicit restoration points by committing to git between phases:
```
[Agent builds database schema] --> git commit
[Agent builds API endpoints]   --> git commit
[Agent builds frontend]        --> git commit
```

If the frontend work goes wrong, you can restore to the checkpoint and still have the API work safely committed in git.
Cursor offers a dedicated Debug mode for bugs that are hard to reproduce or understand from reading the code alone. Debug mode takes a different approach from standard Agent mode: instead of immediately writing fixes, it instruments your code, collects runtime evidence, and then makes targeted fixes.
Switch to Debug mode from the mode picker (Cmd+.).
1. Describe the bug:

   ```
   The login form submits successfully (200 response) but the user is not
   redirected to the dashboard. The session cookie appears to be set correctly.
   This started happening after the auth refactor yesterday.
   ```

2. The agent generates hypotheses. It explores the relevant files and proposes multiple possible causes.

3. The agent adds instrumentation. It inserts targeted log statements that send data to a local debug server running in the Cursor extension.

4. You reproduce the bug. The agent tells you exactly what steps to take; follow them precisely.

5. The agent analyzes the logs. After reproduction, it reviews the collected data and identifies the root cause.

6. The agent makes a targeted fix. Instead of guessing, it fixes the exact line causing the issue, based on runtime evidence.

7. Verify and clean up. Reproduce again to confirm the fix, and the agent removes all instrumentation.
Before submitting a PR, run the agent through a comprehensive cleanup pass. This catches issues that slip through individual task conversations.
With auto-run enabled, a single cleanup prompt handles the whole pass: the agent iterates on each failure until the build is clean, and the review step at the end catches semantic issues that linters miss.
The agent makes a change, the test fails, it reverts, tries a different approach, fails again, and starts repeating itself.
Fix: Press Escape to stop it. Start a new conversation with more specific context:
```
@src/auth/session.ts The test "should redirect after login" is failing
because the session middleware expects req.session to be populated,
but the test mock does not set it up. Fix the mock, not the production code.
```

The key insight: when the agent loops, it has lost track of the problem. A fresh conversation with a specific diagnosis breaks the loop.
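For context, here is a hypothetical sketch of the mismatch that prompt describes: production code reads `req.session`, and the fix belongs in the test mock, not in the middleware.

```typescript
// Hypothetical Express-style shape; in production, session middleware
// would populate req.session before this handler runs.
type Req = { session?: { userId?: string } };

function redirectAfterLogin(req: Req): string {
  // Production code: relies on middleware having set req.session.
  return req.session?.userId ? "/dashboard" : "/login";
}

// Broken test mock: the session is never set up, so the redirect
// assertion fails even though the production code is correct.
const brokenMock: Req = {};

// Fixed test mock: populate the session the way the middleware would.
const fixedMock: Req = { session: { userId: "user-1" } };

console.log(redirectAfterLogin(brokenMock)); // "/login"
console.log(redirectAfterLogin(fixedMock));  // "/dashboard"
```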
Everything passes in the terminal but the feature does not work in the browser or when manually tested.
Fix: Use Debug mode to add instrumentation, or ask the agent to add logging:
```
The form submission works in tests but fails in the browser.
Add console.log statements at each step of the form submission handler
in @src/handlers/form-submit.ts so I can see where it diverges
from expected behavior. I'll paste the console output back to you.
```

Run the application, reproduce the issue, and copy the console output back into the chat. The agent now has runtime evidence instead of static analysis.
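The kind of step-by-step logging that prompt asks for might look like this sketch (the handler and its steps are hypothetical):

```typescript
// Hypothetical form-submit handler instrumented at each step, so the
// console output pinpoints where browser behavior diverges from the tests.
type FormValues = { email: string };

function validate(data: FormValues): boolean {
  console.log("[submit] step 1: validating", data);
  return data.email.includes("@");
}

function buildPayload(data: FormValues): string {
  console.log("[submit] step 2: building payload");
  return JSON.stringify(data);
}

function handleSubmit(data: FormValues): string | null {
  console.log("[submit] step 0: handler entered");
  if (!validate(data)) {
    console.log("[submit] validation failed, aborting");
    return null;
  }
  const payload = buildPayload(data);
  console.log("[submit] step 3: payload ready:", payload);
  return payload;
}

handleSubmit({ email: "user@example.com" });
```

Pasting that console trail back into the chat tells the agent which step last ran before the divergence.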
You asked for a small change and the agent rewrote half the codebase.
Fix: Restore the checkpoint immediately. Then be more specific:
```
Only modify src/services/billing.ts. Do not change any other files.
Add a retry mechanism to the processPayment function. Use the
existing retry utility in src/utils/retry.ts.
```

Constraints like “only modify” and “do not change” are instructions the agent respects. The broader your prompt, the more files it considers fair game.
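For context, a retry utility like the one that prompt references (the `src/utils/retry.ts` path and `processPayment` are hypothetical) might look like this minimal sketch:

```typescript
// Hypothetical sketch of a retry helper: retry an async operation a
// fixed number of times, rethrowing the last error on exhaustion.
async function withRetry<T>(
  op: () => Promise<T>,
  attempts = 3,
  delayMs = 0,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await op();
    } catch (err) {
      lastError = err; // remember the failure and try again
      if (delayMs > 0) await new Promise((r) => setTimeout(r, delayMs));
    }
  }
  throw lastError; // all attempts exhausted
}

// A payment call that fails twice before succeeding, to exercise the helper.
function flakyPayment(failures: number): () => Promise<string> {
  let calls = 0;
  return async () => {
    calls += 1;
    if (calls <= failures) throw new Error("gateway timeout");
    return "ok";
  };
}

withRetry(flakyPayment(2)).then((result) => console.log(result)); // "ok"
```

Pointing the agent at an existing utility like this, instead of letting it invent one, is part of what keeps the change scoped to a single file.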
Tests that were passing now fail intermittently.
Fix: This often indicates the agent introduced timing dependencies or shared state:
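Concretely, shared state between tests often looks like this hypothetical sketch: a module-level cache that couples otherwise-independent tests.

```typescript
// Hypothetical module-level state shared across tests.
const cache = new Map<string, number>();

function getPrice(id: string): number {
  if (!cache.has(id)) cache.set(id, 100); // default price
  return cache.get(id)!;
}

function applyDiscount(id: string, pct: number): void {
  cache.set(id, getPrice(id) * (1 - pct));
}

// "Test A" mutates the shared cache...
applyDiscount("sku-1", 0.5);
// ..."Test B" now sees 50 instead of the expected default of 100
// when run after A, but passes when run alone.
const leaked = getPrice("sku-1"); // 50

// Isolation fix: reset shared state between tests (e.g. in beforeEach).
cache.clear();
const isolated = getPrice("sku-1"); // 100
```

Clearing the cache between runs restores isolation, which is exactly what the prompt asks the agent to add.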
```
@src/services/billing.test.ts These tests are now flaky -- they pass
individually but fail when run together. Check for shared state between
tests (database connections, module-level variables, un-cleared mocks)
and add proper isolation.
```

The agent moved code between files and got some imports wrong.
Fix: Let the agent use the TypeScript compiler to find and fix all import issues:
```
Run tsc and fix all import errors. Do not change any logic,
only fix the import paths and missing exports.
```

Sometimes the best debugging strategy is to throw away the broken changes and try again with a better prompt. This is not failure — it is a deliberate strategy.
Start over when:
Do not start over when:
To start over cleanly:
- Stash any changes you want to keep (`git stash`)

The best debugging is the debugging you never have to do. These practices dramatically reduce the frequency of AI-generated bugs:
- `@` references to existing files give the agent concrete examples to follow

Occasionally the issue is with Cursor rather than the generated code:
AI not responding: Check your subscription status and internet connection. Switch models if one API is down.
High CPU usage: Indexing may be running. Check Settings then Indexing and Docs. If it persists, try disabling extensions with cursor --disable-extensions.
MCP server errors: Check the Output panel (View then Output, select the MCP server). Restart Cursor if a server is stuck.
Lost changes after a crash: Check git stash, Cursor’s checkpoint history, and the backup files in ~/.config/Cursor/Backups/.
You have completed the Cursor Quick Start. From here:
Productivity Patterns
Deep-dive into keyboard shortcuts, prompt templates, and time-saving workflows for daily development.
Advanced Techniques
Explore agent mode deep dives, custom MCP servers, and large codebase strategies for complex projects.
Shared Workflows
Learn cross-tool workflows that work across Cursor, Claude Code, and Codex.
Tips and Tricks
Browse the tips collection for battle-tested patterns from experienced Cursor users.