Team Collaboration

Your team of eight engineers all use Cursor, but the output quality varies wildly. One developer’s AI-generated API routes include proper error handling, validation, and logging. Another developer’s routes are missing error handling entirely. A third developer keeps getting code that uses var instead of const because their personal rules override the project settings. There is no shared standard for how the team uses AI, and the inconsistency is creating more code review work, not less.

Scaling AI-assisted development across a team requires shared rules, coordinated workflows, and consistent configuration. This article covers the Cursor-specific features and practices that make team-wide AI adoption work.

  • A strategy for building and maintaining shared .cursor/rules/ across your team
  • Team Rules configuration for Cursor’s Team and Enterprise plans
  • Custom commands that standardize common team workflows
  • Code review practices adapted for AI-generated code

The most important step for team consistency: check your .cursor/rules/ directory into version control. Every rule in this directory is available to every developer who clones the repo.

Start with a few foundational rules that every team needs, then grow the set as code review surfaces new patterns.
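Which rules those are is team-specific, but the file format is the same. As a sketch, a project rule in .cursor/rules/ is a Markdown file with YAML frontmatter; the description, glob, and rule text below are illustrative (the AppError convention is the one referenced later in this article):

```markdown
---
description: Error handling conventions for service code
globs: ["src/**/*.ts"]
alwaysApply: false
---

- Wrap every async operation in try/catch
- Throw `AppError` with a machine-readable error code; never throw raw strings
- Log failures with the request ID before rethrowing
```

The glob means the rule is attached only when matching files are in context, so it costs no tokens in unrelated conversations.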

Treat rules like code: they need maintenance. A practical approach:

  • Rule reviews in PRs: When someone adds or modifies a rule, it gets the same review process as code
  • Rule update from mistakes: When code review catches a pattern the agent got wrong, update the relevant rule to prevent recurrence
  • Quarterly rule audit: Review all rules to remove outdated information and add new patterns

Cursor’s Team and Enterprise plans offer Team Rules, managed centrally from the Cursor dashboard. These rules apply to all team members across all projects.

| Use Case | Team Rules | Project Rules |
| --- | --- | --- |
| Organization-wide coding standards | Yes | No |
| Security and compliance requirements | Yes | No |
| Project-specific architecture patterns | No | Yes |
| Technology-specific conventions | No | Yes |
| Communication style preferences | Yes | No |
To create a Team Rule:

  1. Open the Cursor dashboard at cursor.com/dashboard
  2. Navigate to the team content tab
  3. Click “Add Rule” to create a new team rule
  4. Write the rule as plain text (Team Rules do not support globs or metadata)
  5. Choose whether to enforce the rule (prevents team members from disabling it)
  6. Enable the rule to make it active

Team Rules take the highest precedence: Team Rules > Project Rules > User Rules. This means an enforced Team Rule overrides any conflicting Project Rule or User Rule. Use this for standards that must be consistent, like security practices and error handling patterns.
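To make the precedence concrete, suppose the three levels conflict. The rule texts below are illustrative, not from Cursor’s documentation:

```
Team Rule (enforced):  Always use parameterized queries; never build SQL by concatenation.
Project Rule:          Database access goes through the helpers in src/db.
User Rule:             Prefer inline SQL strings for readability.
```

The agent applies the Team Rule and the Project Rule together (they do not conflict with each other), while the User Rule’s conflicting preference is ignored.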

Custom slash commands let you create reusable workflows triggered with a / prefix. Store them in .cursor/commands/ and they are available to the entire team.

Create .cursor/commands/review.md:

```markdown
---
description: Review code changes for quality, security, and consistency
---

Review the current changes (use git diff to see them) and check for:

1. **Security**: SQL injection, XSS, hardcoded secrets, missing auth checks
2. **Error handling**: All async operations have try/catch, errors use AppError format
3. **Testing**: New logic has corresponding tests, edge cases are covered
4. **Patterns**: Code follows existing patterns in the codebase (check similar files)
5. **Performance**: No N+1 queries, no unnecessary re-renders, no blocking operations

For each issue found, provide:

- The file and line number
- What the issue is
- A specific fix

Use only search tools -- do not make any edits.
```

Developers invoke this with /review in the chat input.

Create .cursor/commands/pr-description.md:

```markdown
---
description: Generate a PR description from current changes
---

Analyze the current branch changes (compare against main) and generate a PR description with:

1. **Summary**: 2-3 sentence overview of what changed and why
2. **Changes**: Bulleted list of specific changes, organized by area
3. **Testing**: How these changes were tested
4. **Migration notes**: Any database changes, environment variable additions, or breaking changes

Use git diff main...HEAD to see all changes. Do not make any edits.
```

Developers invoke this with /pr-description in the chat input.

Code Review Practices for AI-Generated Code


AI-generated code requires a different review lens than manually written code. The agent is good at following patterns but can miss subtle business logic, introduce security vulnerabilities, and over-engineer simple solutions.

  • Hallucinated imports: The agent may import packages that are not in your dependencies
  • Inconsistent patterns: The agent may follow a different pattern than what exists, especially if rules are incomplete
  • Missing edge cases: AI tends to handle the happy path well but miss error paths
  • Over-engineering: The agent may add abstractions, caching, or error handling that the feature does not need yet
  • Security blind spots: SQL injection through string concatenation, missing auth checks, hardcoded test credentials left in production code
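The security blind spots are the easiest to check mechanically in review. A minimal TypeScript sketch of the string-concatenation pattern to flag, versus the parameterized form most database drivers expect (findUserUnsafe and findUserSafe are hypothetical names for illustration):

```typescript
// Unsafe: user input is interpolated directly into the SQL text.
// A reviewer should flag any query built this way.
function findUserUnsafe(email: string): string {
  return `SELECT * FROM users WHERE email = '${email}'`;
}

// Safe: a placeholder in the SQL text, with the input passed separately
// as a parameter, so the driver never treats it as SQL.
function findUserSafe(email: string): { text: string; values: string[] } {
  return { text: "SELECT * FROM users WHERE email = $1", values: [email] };
}

const malicious = "x' OR '1'='1";
console.log(findUserUnsafe(malicious));      // the injected clause survives in the query text
console.log(findUserSafe(malicious).text);   // input never appears in the SQL text
```

A simple grep for template literals containing SELECT/INSERT/UPDATE catches most instances of the first pattern before they reach review.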

When a new developer joins the team, their Cursor experience should work well from day one:

  1. Have them clone the repo (which includes .cursor/rules/ and .cursor/commands/)
  2. Team Rules from the dashboard apply automatically
  3. Point them to the project README for architecture context
  4. Have them run through one feature task using Ask mode first to understand the codebase
  5. Pair on their first Agent mode task to demonstrate effective prompting

The combination of shared rules and team commands means a new developer gets the same AI quality as a veteran team member, without needing to build up personal prompt expertise first.

A team member’s personal User Rules appear to override shared rules. They should not: User Rules have the lowest precedence, so they cannot override Team Rules or Project Rules. If someone is getting different output, check whether they have local rule files that are not checked in.

Rules become outdated as the codebase evolves. Make rule updates part of your development process. When you refactor a pattern that a rule references, update the rule in the same PR.

Too many rules slow down the AI. Rules consume context tokens. If you have 30+ rules with alwaysApply: true, the agent starts every conversation with significant context overhead. Audit which rules truly need to be always-apply versus glob-scoped or agent-decided.
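Glob-scoping is the usual fix: the rule loads only when matching files are in context. A sketch of the frontmatter change, with illustrative paths:

```markdown
---
# Before: alwaysApply: true loaded this rule into every conversation.
# After: it attaches only when React component files are in context.
description: React component conventions
globs: ["src/components/**/*.tsx"]
alwaysApply: false
---
```

Rules that apply everywhere (security requirements, error handling) can stay always-apply; technology-specific conventions rarely need to.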

Team members do not know which commands are available. Document your custom commands in your project’s README or contributing guide. The / prefix in chat shows a list of available commands, but new team members may not know to look there.