
Microservices Development in Cursor

Your monolith just hit a wall. The order processing module takes 45 seconds to deploy because it shares a codebase with the user service, the inventory tracker, and the notification system. A bug in email rendering crashes the entire checkout flow. Your team of eight engineers keeps stepping on each other’s toes in the same repository. Leadership wants microservices. You need to decompose a running system into independently deployable services — without losing a single transaction.

Here's what this guide covers:

  • A Cursor rules setup that keeps the AI aware of cross-service contracts
  • A workflow for generating new services with consistent structure, shared types, and Docker configs
  • Copy-paste prompts for API contract design, service scaffolding, and inter-service communication
  • A strategy for managing multi-root workspaces in Cursor so Agent mode understands your entire distributed system
  • Techniques for debugging request flows that span multiple services

The biggest challenge in microservices development with AI assistance is context. Cursor needs to understand that your system is not one project but a constellation of services that talk to each other. Multi-root workspaces are the foundation.

Open all your service directories in a single Cursor window using File > Add Folder to Workspace. If you have orders-service/, users-service/, inventory-service/, and api-gateway/, add them all. Cursor indexes every root, so Agent mode can search across services when you ask about cross-cutting concerns.
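If you prefer a saved workspace over adding folders one at a time, you can commit a multi-root workspace file. This is a sketch; the folder names assume the example layout above, and you would save it as something like microservices.code-workspace:

```json
{
  "folders": [
    { "path": "api-gateway" },
    { "path": "orders-service" },
    { "path": "users-service" },
    { "path": "inventory-service" },
    { "path": "notification-service" }
  ]
}
```

Opening this file restores all roots at once, so every teammate gets the same cross-service indexing.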

Next, create project rules that encode your service topology. This is the single most impactful thing you can do for microservices work in Cursor.

.cursor/rules/microservices.mdc
# Microservices Architecture
## Service Topology
- api-gateway (Node.js/Express): Routes external requests, rate limiting, auth
- orders-service (Node.js/Express): Order CRUD, payment orchestration
- users-service (Python/FastAPI): User accounts, profiles, auth tokens
- inventory-service (Go): Stock management, warehouse allocation
- notification-service (Node.js): Email, SMS, push via event consumers
## Communication Patterns
- Synchronous: REST between gateway and services, gRPC between orders and inventory
- Asynchronous: RabbitMQ for events (order.created, payment.completed, stock.reserved)
## Shared Conventions
- All services expose /health and /ready endpoints
- All services use structured JSON logging with correlation IDs
- API versioning via URL path (/v1/, /v2/)
- Database-per-service: no shared databases

With this rule set to always apply, every prompt you send in Agent mode will carry your architecture as context. The AI stops suggesting shared database queries and starts thinking in terms of API calls and events.
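The shared logging convention from the rules file can be sketched in a few lines: structured JSON output that always carries a correlation ID. The function and field names here are illustrative, not prescribed by Cursor:

```typescript
// One log entry per line as a JSON object, always including a correlation ID
// so requests can be traced across services. Names are illustrative.
type LogLevel = "INFO" | "WARN" | "ERROR";

function logJson(
  level: LogLevel,
  service: string,
  message: string,
  correlationId: string,
): string {
  const entry = {
    timestamp: new Date().toISOString(),
    level,
    service,
    message,
    correlation_id: correlationId,
  };
  const line = JSON.stringify(entry);
  console.log(line); // one JSON object per line: easy to grep and to parse
  return line;
}

const line = logJson("INFO", "orders", "Received POST /v1/orders", "abc-123-def");
```

Because every service emits the same shape, the debugging workflow later in this guide (filtering logs by correlation ID) works uniformly across the fleet.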

When you need a new service, starting from scratch is slow. Starting from a generic template is dangerous because it misses your team’s conventions. The sweet spot is having Cursor generate a service that matches your existing patterns.
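A scaffolding prompt in that spirit might look like the following. The new service name, port, and referenced paths are illustrative; substitute your own:

```
@orders-service/src/app.ts @orders-service/Dockerfile @docker-compose.yml
Create a new payments-service in payments-service/ following the same
structure and conventions as orders-service: the same Express app setup,
error handling middleware, /health and /ready endpoints, and structured
JSON logging with correlation IDs. Generate a matching Dockerfile, expose
the service on port 3004, and add it to docker-compose.yml on the
existing network.
```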

The key technique here is referencing an existing service with @ symbols. By pointing Agent at orders-service/src/app.ts, you give it a concrete example of your patterns rather than relying on generic conventions. The generated service will use your error handling middleware, your logging format, your health check implementation — not some generic boilerplate.

After generation, review the diff carefully. Agent mode shows you every file it creates. Pay attention to:

  • Whether environment variable names follow your naming convention
  • Whether the Dockerfile matches your base image and build stages
  • Whether the event names follow your domain event naming pattern
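For reference during that review, a multi-stage Dockerfile in the style the checklist assumes might look like this. The base image, stage names, and port are illustrative; what matters is that generated Dockerfiles match whatever your team actually uses:

```dockerfile
# Build stage: install all dependencies and compile TypeScript
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: production dependencies and compiled output only
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
EXPOSE 3000
CMD ["node", "dist/app.js"]
```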

In microservices, the contract between services is more important than the implementation behind it. Cursor excels at contract-first development because you can generate OpenAPI specs, then implement against them.
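A spec-generation prompt for that first step might look like this. The endpoints listed are illustrative:

```
@orders-service/docs/openapi.yml
Write an OpenAPI 3.0 spec for a new payments-service with endpoints to
create a payment intent, confirm a payment, and fetch payment status by
order_id. Follow the same conventions as the orders-service spec: /v1/
path prefix, the same JSON error envelope, and the x-correlation-id
header. Save it to payments-service/docs/openapi.yml.
```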

Once you have the spec, you can use it as context for implementation:

@payments-service/docs/openapi.yml
Implement the route handlers for payments-service that match this OpenAPI spec exactly. Use the same validation middleware pattern as @orders-service/src/routes/orders.ts. For the Stripe integration, use the stripe npm package with webhook signature verification.

This two-step approach — spec first, then implement against the spec — prevents the AI from inventing its own API surface. The spec becomes a shared contract that both the implementing team and consuming teams can rely on.

The trickiest part of microservices is getting services to talk to each other reliably. Cursor Agent can help you implement both synchronous and asynchronous patterns, but you need to guide it toward the right pattern for each interaction.

For request-response patterns between the API gateway and backend services:

// In Agent mode, reference both services:
// @api-gateway/src/routes/checkout.ts @orders-service/src/routes/orders.ts
"The checkout endpoint in the API gateway needs to:
1. Call users-service to validate the auth token and get customer_id
2. Call inventory-service to check stock availability for all items
3. Call orders-service to create the order
4. Call payments-service to initiate payment
Implement this with proper error handling:
- If inventory check fails, return 409 with unavailable items
- If order creation fails, no payment should be initiated
- Include circuit breaker pattern for each downstream call
- Pass correlation ID through all requests via x-correlation-id header
- Set 5-second timeout on each downstream call"
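The circuit breaker and timeout behavior the prompt asks for can be sketched without any libraries. The thresholds and names below are illustrative; in production you might reach for a package such as opossum instead:

```typescript
// Tracks failures for one downstream dependency and fails fast while open.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(
    private readonly failureThreshold = 3,  // failures before opening
    private readonly resetAfterMs = 30_000, // how long to fail fast
  ) {}

  isOpen(now: number): boolean {
    return (
      this.failures >= this.failureThreshold &&
      now - this.openedAt < this.resetAfterMs
    );
  }

  recordSuccess(): void {
    this.failures = 0;
  }

  recordFailure(now: number): void {
    this.failures += 1;
    if (this.failures >= this.failureThreshold) this.openedAt = now;
  }

  // Wrap a downstream call: fail fast when open, race against a timeout.
  async call<T>(fn: () => Promise<T>, timeoutMs = 5000): Promise<T> {
    if (this.isOpen(Date.now())) throw new Error("circuit open: failing fast");
    let timer: ReturnType<typeof setTimeout> | undefined;
    const timeout = new Promise<never>((_, reject) => {
      timer = setTimeout(
        () => reject(new Error(`timeout after ${timeoutMs}ms`)),
        timeoutMs,
      );
    });
    try {
      const result = await Promise.race([fn(), timeout]);
      this.recordSuccess();
      return result;
    } catch (err) {
      this.recordFailure(Date.now());
      throw err;
    } finally {
      clearTimeout(timer);
    }
  }
}
```

The gateway would hold one breaker per downstream service, so an outage in inventory-service cannot exhaust threads waiting on orders-service calls.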

For event-driven patterns, the AI needs to understand your message broker setup:

// @orders-service/src/events/publisher.ts @notification-service/src/events/consumer.ts
"Create an event consumer in payments-service that listens for 'order.created'
events from RabbitMQ. When received, it should:
1. Extract order_id, customer_id, and total_amount from the event payload
2. Create a Stripe PaymentIntent
3. Publish 'payment.initiated' event with payment_intent_id
4. On Stripe webhook confirmation, publish 'payment.completed' or 'payment.failed'
Follow the same consumer pattern used in notification-service, including:
- Dead letter queue for failed processing
- Idempotency check using event_id
- Structured logging with correlation_id from the event metadata"
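The idempotency check in that list is the piece most often gotten wrong, so here is a minimal sketch of the idea: skip any event whose event_id has already been processed. A real consumer would back this with a database table or Redis rather than an in-memory Set, and the event shape below is illustrative:

```typescript
// Illustrative payload shape for the order.created event.
interface OrderCreatedEvent {
  event_id: string;
  order_id: string;
  customer_id: string;
  total_amount: number;
}

// In production this would be a persistent store shared across replicas.
const processed = new Set<string>();

function handleOrderCreated(event: OrderCreatedEvent): "processed" | "duplicate" {
  if (processed.has(event.event_id)) {
    return "duplicate"; // already handled: ack the message and skip the work
  }
  processed.add(event.event_id);
  // ...create the Stripe PaymentIntent, publish payment.initiated, etc.
  return "processed";
}
```

Because RabbitMQ delivers at-least-once, the same event can arrive twice; without this check a redelivery would create a second PaymentIntent.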

When a failing request has touched four services, finding the root cause is painful. Cursor’s Ask mode is surprisingly powerful here because you can feed it logs from multiple services and ask it to correlate them.

Start by collecting logs filtered by correlation ID:

I have logs from three services for correlation ID "abc-123-def":
API Gateway:
[2026-02-08T10:15:32Z] INFO gateway: Received POST /checkout, correlation_id=abc-123-def
[2026-02-08T10:15:32Z] INFO gateway: Calling inventory-service, correlation_id=abc-123-def
[2026-02-08T10:15:33Z] INFO gateway: inventory-service responded 200, correlation_id=abc-123-def
[2026-02-08T10:15:33Z] INFO gateway: Calling orders-service, correlation_id=abc-123-def
[2026-02-08T10:15:38Z] ERROR gateway: orders-service timeout after 5000ms, correlation_id=abc-123-def
Orders Service:
[2026-02-08T10:15:33Z] INFO orders: Received POST /v1/orders, correlation_id=abc-123-def
[2026-02-08T10:15:33Z] INFO orders: Validating order items
[2026-02-08T10:15:33Z] INFO orders: Acquiring database lock for customer cust_456
[2026-02-08T10:15:38Z] ERROR orders: Lock acquisition timeout after 5000ms
Analyze these logs. What is the root cause? What should I investigate next?

Ask mode can correlate timestamps, identify the bottleneck (database lock contention in orders-service), and suggest investigation paths — all without modifying any files.
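Collecting those logs is a grep-and-sort exercise when every service follows the correlation ID convention. This sketch creates sample files so it runs anywhere; point grep at wherever your services actually write their logs:

```shell
# Illustrative log paths and sample lines; adapt to your own log locations.
CID="correlation_id=abc-123-def"
mkdir -p /tmp/log-demo
printf '%s\n' \
  '[2026-02-08T10:15:32Z] INFO gateway: Received POST /checkout, correlation_id=abc-123-def' \
  '[2026-02-08T10:15:33Z] INFO gateway: Calling orders-service, correlation_id=abc-123-def' \
  > /tmp/log-demo/gateway.log
printf '%s\n' \
  '[2026-02-08T10:15:33Z] INFO orders: Received POST /v1/orders, correlation_id=abc-123-def' \
  '[2026-02-08T10:14:00Z] INFO orders: Unrelated request, correlation_id=zzz-999' \
  > /tmp/log-demo/orders.log
# Merge the matching lines from every service, ordered by timestamp
# (the ISO-8601 prefix makes a plain lexical sort chronological).
grep -h "$CID" /tmp/log-demo/*.log | sort > /tmp/log-demo/merged.log
cat /tmp/log-demo/merged.log
```

The merged, time-ordered output is exactly what you paste into Ask mode.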

Finally, a few failure modes come up repeatedly. Agent generates a shared database query across services. This happens when your rules don’t explicitly state “database per service.” The fix is adding that constraint to your project rules and being explicit in prompts: “This service only accesses its own database. To get user data, call the users-service API.”

The AI creates circular dependencies between services. If payments-service calls orders-service which calls payments-service, you have a loop. Use Ask mode first to map out the dependency graph: “Given these service interactions, draw me a dependency graph and identify any cycles.” Fix cycles by introducing events or a new coordinating service.
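The cycle check Ask mode performs can also be done deterministically with a depth-first search over a service dependency map. The graph below is illustrative; fill in your own services and calls:

```typescript
// Service name -> list of services it calls synchronously.
type Graph = Record<string, string[]>;

// Returns the first cycle found (as a path ending where it started),
// or null if the graph is acyclic.
function findCycle(graph: Graph): string[] | null {
  const visiting = new Set<string>(); // nodes on the current DFS path
  const done = new Set<string>();     // nodes fully explored, no cycle through them
  const path: string[] = [];

  function dfs(node: string): string[] | null {
    if (visiting.has(node)) return [...path.slice(path.indexOf(node)), node];
    if (done.has(node)) return null;
    visiting.add(node);
    path.push(node);
    for (const dep of graph[node] ?? []) {
      const cycle = dfs(dep);
      if (cycle) return cycle;
    }
    visiting.delete(node);
    done.add(node);
    path.pop();
    return null;
  }

  for (const node of Object.keys(graph)) {
    const cycle = dfs(node);
    if (cycle) return cycle;
  }
  return null;
}

const deps: Graph = {
  "api-gateway": ["orders-service", "users-service"],
  "orders-service": ["payments-service", "inventory-service"],
  "payments-service": ["orders-service"], // cycle: orders <-> payments
};
```

Running findCycle on this map surfaces the orders/payments loop, which you would then break with an event (payment.completed) instead of a synchronous callback.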

Generated Docker configs don’t work in your compose network. Agent often generates standalone Dockerfiles without considering your docker-compose networking. Always reference your existing docker-compose.yml when asking for new service configs, and specify the network name explicitly.
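When you do reference the compose file, the generated service entry should end up looking something like the sketch below. The network name, ports, and URLs are illustrative; the point is that the new service explicitly joins the network your existing services already share:

```yaml
services:
  payments-service:
    build: ./payments-service
    environment:
      - RABBITMQ_URL=amqp://rabbitmq:5672
      - DATABASE_URL=postgres://payments-db:5432/payments
    networks:
      - backend        # must match the network your other services join
    depends_on:
      - rabbitmq

networks:
  backend:
    driver: bridge
```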

Event schemas drift between publisher and consumer. When you update an event schema in one service, consumers break silently. Create a shared types package or a schema registry, and reference it in your prompts: “Use the event schemas defined in @shared/events/schemas.ts.”
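One way to make that shared package concrete is a type plus a runtime guard, sketched by hand below. A schema library such as zod, or a proper schema registry, would be more robust; the event shape is illustrative:

```typescript
// Illustrative shape for the order.created event, exported from the
// shared package so publisher and consumer compile against the same type.
interface OrderCreated {
  event_id: string;
  order_id: string;
  customer_id: string;
  total_amount: number;
}

// Runtime guard the consumer runs before processing, so schema drift
// fails loudly at the service boundary instead of deep inside a handler.
function isOrderCreated(value: unknown): value is OrderCreated {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.event_id === "string" &&
    typeof v.order_id === "string" &&
    typeof v.customer_id === "string" &&
    typeof v.total_amount === "number"
  );
}
```

A message that fails the guard goes straight to the dead letter queue with a clear error, which is far easier to debug than a silently missing field.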