
Frequently Asked Questions

You installed the tool, opened a project, and now you have questions. This page answers the ones developers actually ask — from “which model should I use?” to “why did I hit my message limit after ten prompts?”

What is the difference between Cursor, Claude Code, and Codex?


Cursor is an AI-native IDE (fork of VS Code) with inline editing, tab completions, and agent mode built in. Claude Code is a CLI-first agent that lives in your terminal and works alongside any editor. Codex is OpenAI’s multi-surface agent — a web app, CLI, IDE extension, and cloud service with GitHub/Slack/Linear integrations.

Pick Cursor if you want everything in one IDE. Pick Claude Code if you live in the terminal. Pick Codex if you want cloud-based automation and multi-platform access.

Which model should I use?

For complex coding tasks (default): Claude Opus 4.6 — top SWE-Bench scores, best agentic performance, strongest reasoning.

For everyday work on a budget: Claude Sonnet 4.5 — excellent performance at roughly one-fifth the cost of Opus 4.6, plus a 1M token context window.

For Codex users: GPT-5.3-Codex — the latest model powering all Codex surfaces. Strong at implementation, bug fixing, and UI generation.

For speed-critical work in Cursor: Cursor Composer 1 — 250 tokens/sec (4x faster). Great second choice after Opus 4.6.

For extreme context or multimodal: Gemini 3 Pro — 1M token context, best image/video analysis, Deep Think mode.

See the full Model Comparison for detailed specs and pricing.

How much do these tools cost?

Individual Plans

  • Cursor Pro: $20/month (500 fast requests)
  • Cursor Ultra: $200/month (unlimited)
  • Claude Code Pro: $20/month (10-40 msgs/5hrs)
  • Claude Code Max 5x: $100/month (50-200 msgs/5hrs)
  • Claude Code Max 20x: $200/month (200-800 msgs/5hrs)
  • Codex Pro: $20/month (included in ChatGPT Plus)
  • Codex Pro+: $200/month (higher limits)

Team Plans

  • Cursor Business: $40/user/month
  • Claude Code Team: $25-30/user/month
  • Codex Enterprise: Custom pricing
  • Enterprise (all tools): Custom quotes

See the full Pricing Calculator for detailed cost scenarios.

Can I use multiple tools together?

Absolutely. Many developers use Cursor for interactive editing and tab completions, Claude Code for terminal-based autonomous tasks, and Codex for cloud-based automation and GitHub issue handling. The tools are complementary, not mutually exclusive. Your project configuration files (.cursor/rules/, CLAUDE.md, AGENTS.md) can coexist in the same repository.
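As a minimal sketch, the per-tool configuration files can be scaffolded side by side in one repository (the directory names come from the answer above; the specific rule file name `style.mdc` is a hypothetical example):

```shell
# Scaffold config files for all three tools in the same repo.
mkdir -p .cursor/rules
touch .cursor/rules/style.mdc   # a Cursor rule file (hypothetical name)
touch CLAUDE.md                 # Claude Code project instructions
touch AGENTS.md                 # Codex project instructions
ls .cursor/rules CLAUDE.md AGENTS.md
```

Each tool reads only its own file, so keeping all three checked in costs nothing.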

Is my code private?

All three tools offer privacy guarantees at paid tiers:

  • Cursor: Privacy mode prevents code from being sent for training. SOC 2 certified.
  • Claude Code: Anthropic does not train on code submitted through Claude Code. Enterprise mode adds audit logging.
  • Codex: OpenAI does not train on API/Codex inputs. Enterprise plans include data residency options.

Always verify the current privacy policy for your tier. Free tiers may have different terms.

How do I switch between Agent and Ask mode?


Open the AI chat panel (Cmd/Ctrl+I), then click the mode selector dropdown at the top. Choose “Agent” for autonomous file editing or “Ask” for read-only exploration. You can also press Cmd/Ctrl+. to toggle modes.

Why are Tab completions not working in Cursor?

  1. Check your plan — Free tier is limited to 2,000 completions/month
  2. Verify indexing — Settings > Features > Codebase Indexing should show “Indexed”
  3. Check model status — Visit status.cursor.sh for outages
  4. File type support — Some file types have limited completion support
  5. Restart Cursor — Extensions or state corruption can block completions

Can I use my own API keys in Cursor?

Settings > Models > API Keys. Enter your Anthropic, OpenAI, or Google key, click Verify, then select “Use my API key” in the model dropdown. Your own API keys work for chat models but not Tab completion or specialized features.

What is the difference between fast and slow requests?

  • Fast requests: Priority processing, limited by plan tier (500/month for Pro, ~10,000 for Ultra)
  • Slow requests: Unlimited quantity but increasing delays during peak usage
  • Requests reset on your billing date. Check usage: Settings > Subscription > Usage
How do I set up BugBot for code review?

  1. Install the BugBot GitHub App from your Cursor team dashboard
  2. Create .cursor/bugbot.yml in your repository
  3. BugBot automatically reviews new PRs and posts inline comments
  4. Click “Fix in Cursor” on any BugBot comment to open the fix in your IDE

How do I install Claude Code?

```shell
npm install -g @anthropic-ai/claude-code
claude --version
```

Then run claude in any project directory. You will be prompted to authenticate on first run.
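Once authenticated, Claude Code can also answer one-shot prompts without opening the interactive session. A minimal sketch, with a guard so it degrades gracefully when the CLI is absent (the -p print flag for non-interactive prompts is an assumption to verify against claude --help):

```shell
# Run a one-shot prompt if Claude Code is installed; otherwise explain.
if command -v claude >/dev/null 2>&1; then
  claude -p "List the TODO comments in this repository"
  status="ran"
else
  echo "claude not found; install it with the npm command above" >&2
  status="skipped"
fi
```

This pattern is handy in scripts and CI, where an interactive prompt would hang.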

What is the difference between /think, /think hard, and /ultrathink?


Each level allocates progressively more computation for reasoning:

| Mode | Thinking budget | Best for | Cost impact |
| --- | --- | --- | --- |
| Normal | Standard | Simple tasks | Baseline |
| /think | ~2x | Moderate complexity | Slight increase |
| /think hard | ~5x | Architecture decisions | Notable increase |
| /think harder | ~10x | Complex debugging | Significant increase |
| /ultrathink | Maximum | System design, hard bugs | Highest |

Use deeper thinking only when the task genuinely requires it. Most coding tasks do fine with normal or /think.

How do I manage Claude Code’s permission prompts?

Granular approach (recommended): Configure allowed tools in your settings or use Claude Code’s permission system to pre-approve specific operations.

Full trust approach (use with caution):

```shell
claude --dangerously-skip-permissions
```

Best practice: Use --dangerously-skip-permissions only in sandboxed environments (Docker containers, CI pipelines).
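One way to make that best practice hard to violate is a small wrapper that refuses the flag outside a sandbox. This is a sketch: the wrapper name and the CLAUDE_SANDBOX variable are hypothetical conventions you would set yourself (for example in your Dockerfile or CI job), not part of Claude Code:

```shell
# claude_yolo: hypothetical wrapper that only skips permission prompts
# when CLAUDE_SANDBOX=1 has been set by your container or CI config.
claude_yolo() {
  if [ "${CLAUDE_SANDBOX:-0}" = "1" ]; then
    claude --dangerously-skip-permissions "$@"
  else
    echo "refusing: --dangerously-skip-permissions outside a sandbox" >&2
    return 1
  fi
}
```

On a developer laptop the wrapper fails fast instead of granting full trust by accident.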

Why did I hit my message limit so quickly?


Claude Code Pro allows roughly 10-40 messages per 5-hour rolling window (varies by complexity). Long conversations with big files consume more capacity. Strategies:

  1. Use /compact to summarize long conversations
  2. Use /clear between unrelated tasks
  3. Batch related questions into single prompts
  4. Upgrade to Max 5x ($100/month) or Max 20x ($200/month)
  5. Use API access for extremely heavy workloads

Can I use Claude Code inside VS Code?

Yes. Install the “Claude Code” extension from the VS Code marketplace. It launches a Claude Code terminal panel inside your editor. You get the full CLI experience without leaving your IDE.

What are the different ways to use Codex?

Codex is available in four forms:

  1. Codex App — Web-based interface at codex.openai.com. Full project management, task history, cloud execution.
  2. Codex CLI — Terminal tool (codex). Interactive and headless modes. Open-source.
  3. Codex IDE Extension — VS Code and JetBrains plugins. Inline agent panel.
  4. Codex Cloud — Background agents running on OpenAI infrastructure. Triggered from App, CLI, GitHub, Slack, or Linear.

How does Codex isolate tasks?

Every Codex task runs in an isolated Git worktree — a separate checkout of your repository. This means:

  • Multiple tasks can run in parallel without file conflicts
  • Each task has its own clean working directory
  • Completed tasks produce a diff that you review before merging
  • Failed tasks are discarded without affecting your main branch

You do not need to manage worktrees manually. Codex creates and cleans them up automatically.
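You can see the same isolation mechanism with plain Git. This throwaway-repo sketch is purely illustrative of how worktrees behave, not of how Codex invokes them internally (the branch name codex/task-1 is hypothetical):

```shell
# Create a throwaway repo, then add an isolated worktree on its own branch.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"
git worktree add "$repo-task1" -b codex/task-1  # separate checkout, own branch
git worktree list                               # both checkouts appear
```

Edits in `$repo-task1` never touch the files in `$repo`, which is what lets parallel tasks run without conflicts.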

How do Codex automations work?

Codex can respond to external events automatically:

  1. GitHub: Assign an issue to @codex or mention it in a PR comment
  2. Slack: Message Codex in a configured channel
  3. Linear: Assign a ticket to the Codex integration
  4. Scheduled: Set up cron-based tasks from the Codex App dashboard

Each automation spawns a cloud agent that executes the task and creates a PR or posts results back to the source.

What are the Codex approval modes?

| Mode | File reads | File writes | Command execution |
| --- | --- | --- | --- |
| on-request | Yes | Asks first | Asks first |
| on-failure | Yes | Yes | Auto-approves, asks on errors |
| never | Yes | Yes | Yes (full autonomy) |
| --full-auto flag | Yes | Yes | Yes (sandboxed) |

Start with on-request until you trust the agent’s output, then graduate to on-failure. Use --full-auto or approval_policy = "never" only with sandboxed environments or when you are confident in the task scope.
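As a configuration sketch, the policy can be pinned in the CLI config file. The ~/.codex/config.toml location is an assumption to check against the Codex CLI documentation; the approval_policy key and its values come from the modes above:

```toml
# ~/.codex/config.toml (hypothetical minimal example)
approval_policy = "on-request"   # or "on-failure", "never"
```

Setting it once in config avoids having to pass flags on every invocation.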

Do I need MCP servers?

MCP (Model Context Protocol) servers extend what your AI agent can do by connecting it to external tools — databases, APIs, browsers, and more. You do not strictly need them, but they unlock powerful workflows. For example, the GitHub MCP lets the agent create PRs directly, and the Postgres MCP lets it query your database.

All three tools support MCP. Start with one or two essential servers and add more as needed.

How do MCP servers differ from Agent Skills?


MCP servers maintain a persistent connection and expose multiple tools. They are heavier to set up but more powerful for deep integrations (databases, browser automation, cloud services).

Agent Skills are lightweight, single-purpose augmentations installed via npx skills add <owner/repo>. They work across 35+ agents and are easier to share. Think of Skills as focused recipes and MCP servers as full integration platforms.

When both exist for a workflow, Skills are faster to set up; MCP servers offer deeper, persistent access.

How do I add an MCP server in Cursor?

Settings > Features > MCP > “+ Add MCP Server”. Choose from the gallery or add a custom server configuration.
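For a custom server, the configuration is a small JSON fragment. The mcpServers shape below is the common convention across MCP clients, but the server package name and its arguments are hypothetical placeholders, not a real package:

```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@example/postgres-mcp", "--connection", "$DATABASE_URL"]
    }
  }
}
```

Each entry names a server and the command Cursor runs to start it; the agent then discovers the tools that server exposes.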

What is the PRD-to-Plan-to-Todo methodology?


A systematic approach for building complex features with AI:

  1. PRD: Write a clear specification with user stories and acceptance criteria
  2. Plan: Ask the AI to create a detailed implementation plan. Use /think or extended reasoning for complex plans
  3. Todo: Convert the plan into a checklist. The AI works through items systematically, checking them off as it goes

This works identically across all three tools. The key is giving the AI a structured starting point rather than a vague request.
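A todo produced in step 3 might look like the following, shown for a hypothetical password-reset feature (every item here is illustrative):

```markdown
## Todo: password reset flow (hypothetical example)
- [x] PRD reviewed; acceptance criteria agreed
- [ ] Add reset-token storage with expiry
- [ ] Implement request and confirm endpoints
- [ ] Add email template and rate limiting
- [ ] Write tests for expired and reused tokens
```

The AI checks items off as it completes them, which makes progress auditable and keeps long sessions on track.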

When should I use Agent mode vs. Ask mode?

Use Agent mode (or equivalent autonomous mode) when:

  • Making multi-file changes
  • Implementing features from a plan
  • Refactoring code
  • Running commands as part of a workflow

Use Ask/Chat/Suggest mode when:

  • Exploring and understanding code
  • Planning before implementing
  • Learning about the codebase
  • Reviewing options before committing to an approach
How do I keep costs down?

  1. Choose the right model: Do not default to Opus 4.6 for simple completions
  2. Clear context regularly: /clear in Claude Code, new chat in Cursor
  3. Be specific: Precise prompts consume fewer tokens than vague ones
  4. Use @ references: Include only the files the AI needs, not the entire codebase
  5. Monitor usage: Cursor Settings > Usage, Claude Code /cost, Codex dashboard
  6. Batch operations: Group related changes into a single conversation