Elixir gives you fault tolerance and concurrency that other languages bolt on as afterthoughts. But AI tools sometimes generate code that fights OTP patterns rather than leveraging them. These recipes produce idiomatic Elixir — proper GenServers, Ecto changesets with real validations, and Phoenix LiveView that uses the socket correctly.
Phoenix API and LiveView recipes with proper channel/socket patterns
Ecto schema and query recipes with composable queries and proper changesets
OTP patterns: GenServer, Supervisor trees, and Task supervision
Testing recipes for ExUnit with proper sandbox and async patterns
Scenario: You need a REST API for a task management app with proper validation and error formatting.
Tip
Create a Phoenix JSON API for tasks. (1) Generate the context: Tasks context module with Ecto schemas — Task (title string required min 3, description text optional, status enum :todo/:in_progress/:done/:cancelled, priority enum :low/:medium/:high/:urgent, due_date date optional must be in the future, assignee_id references users). (2) Changeset: validate_required for title and status, validate_length for title (3..200), validate_inclusion for status and priority, custom validate_status_transition that prevents done->todo. (3) Context functions: list_tasks(filters) with composable Ecto queries (filter by status, priority, assignee, due_date range, full-text search on title), create_task(attrs), update_task(task, attrs), delete_task(task). (4) Controller with proper error handling: renders changeset errors as { "errors": { "title": ["can't be blank"] } }. (5) OpenAPI spec using open_api_spex for each endpoint. (6) Tests: context tests with DataCase, controller tests with ConnCase testing success, validation, and auth.
Expected output: Context module, schema with changeset, controller, JSON views, OpenAPI spec, and tests.
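Point (2) is the part AI tools most often get wrong, so it helps to know roughly what the output should look like. A minimal sketch of the schema and changeset, with hypothetical module names (MyApp.Tasks.Task, MyApp.Accounts.User) standing in for your app's:

```elixir
defmodule MyApp.Tasks.Task do
  use Ecto.Schema
  import Ecto.Changeset

  schema "tasks" do
    field :title, :string
    field :description, :string
    field :status, Ecto.Enum, values: [:todo, :in_progress, :done, :cancelled]
    field :priority, Ecto.Enum, values: [:low, :medium, :high, :urgent]
    field :due_date, :date
    belongs_to :assignee, MyApp.Accounts.User
    timestamps()
  end

  def changeset(task, attrs) do
    task
    |> cast(attrs, [:title, :description, :status, :priority, :due_date, :assignee_id])
    |> validate_required([:title, :status])
    |> validate_length(:title, min: 3, max: 200)
    # Ecto.Enum already rejects values outside the declared lists,
    # so separate validate_inclusion calls would be redundant here.
    |> validate_change(:due_date, fn :due_date, date ->
      if Date.compare(date, Date.utc_today()) == :gt,
        do: [],
        else: [due_date: "must be in the future"]
    end)
    |> validate_status_transition(task)
  end

  # Block the one transition the recipe forbids: done -> todo.
  defp validate_status_transition(changeset, %{status: :done}) do
    case get_change(changeset, :status) do
      :todo -> add_error(changeset, :status, "cannot reopen a completed task as todo")
      _ -> changeset
    end
  end

  defp validate_status_transition(changeset, _task), do: changeset
end
```

Matching the changeset against the original struct (second argument) rather than only the incoming attrs is what makes the transition check work for persisted records.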
Scenario: Your dashboard shows live metrics that currently require page refresh. You need real-time updates.
Tip
Build a LiveView dashboard. (1) DashboardLive module that assigns initial data in mount (active users count, recent orders, system health). (2) Subscribe to PubSub topics on mount: "orders:new", "users:activity", "system:health". Handle each message in handle_info to update assigns. (3) Use temporary_assigns for the orders list to prevent memory growth — only keep the 20 most recent in memory. (4) Create LiveComponents: StatsCardComponent (displays a metric with trend arrow), OrderTableComponent (sortable table with phx-click for sorting), HealthGaugeComponent (animated gauge using SVG). (5) Add a live search: phx-change on an input debounced at 300ms, handle_event "search" filters orders in the socket. (6) Push events for client-side chart updates using push_event and JavaScript hooks. (7) Handle disconnection gracefully: show a "reconnecting" banner using the phx-disconnected CSS class. Test with LiveViewTest, verifying PubSub messages update the page.
Expected output: LiveView module, 3 LiveComponents, PubSub subscriptions, JS hooks, and LiveViewTest tests.
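Steps (1) to (3) might come back shaped like this. MyApp.PubSub and the metric/order context functions are assumed names; the components, hooks, and search handlers from the rest of the prompt are omitted:

```elixir
defmodule MyAppWeb.DashboardLive do
  use MyAppWeb, :live_view

  @impl true
  def mount(_params, _session, socket) do
    if connected?(socket) do
      # Subscribe only on the live (WebSocket) mount, not the static render.
      Phoenix.PubSub.subscribe(MyApp.PubSub, "orders:new")
      Phoenix.PubSub.subscribe(MyApp.PubSub, "system:health")
    end

    socket =
      assign(socket,
        active_users: MyApp.Metrics.active_user_count(),
        health: MyApp.Metrics.system_health(),
        orders: MyApp.Orders.recent(20)
      )

    # :orders resets to [] after every render, so a long-lived socket
    # holds only the rows that arrived since the last diff.
    {:ok, socket, temporary_assigns: [orders: []]}
  end

  @impl true
  def handle_info({:new_order, order}, socket) do
    {:noreply, update(socket, :orders, &[order | &1])}
  end

  def handle_info({:health, health}, socket) do
    {:noreply, assign(socket, :health, health)}
  end
end
```

With temporary_assigns the "keep only 20" part is enforced in the template, typically with phx-update on the container; on recent LiveView versions, streams with a :limit option are the more idiomatic way to get the same effect.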
Scenario: Your API needs per-user rate limiting but you do not want an external dependency like Redis for this.
Tip
Implement a GenServer-based rate limiter. (1) RateLimiter GenServer: state is a map of %{user_id => {count, window_start}}. init starts a periodic cleanup timer. (2) check_rate(user_id, limit, window_seconds) — call the GenServer, check if the user is within limits, return :ok or {:error, :rate_limited, retry_after}. Back it with an :ets table so check-only reads are served without blocking on the GenServer. (3) handle_info(:cleanup, state) — removes expired entries every minute. (4) Add to the supervision tree under a DynamicSupervisor so you can have one RateLimiter per endpoint or per tenant for isolation. (5) Create a Plug middleware RateLimitPlug that calls the GenServer, returns 429 with Retry-After header on limit exceeded. (6) Make it distributed: use :pg (process groups) to sync rate limit state across cluster nodes. (7) Configuration via application env: default limits per plan tier. Test rate enforcement, window reset, and cleanup.
Expected output: GenServer, ETS optimization, Plug middleware, distribution support, and tests.
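The heart of the recipe, the ETS fast path plus GenServer-serialized writes from steps (1) to (3), can be sketched in plain OTP. Module and table names are illustrative, and the Plug, DynamicSupervisor, and :pg wiring from steps (4) to (6) are left out:

```elixir
defmodule RateLimiter do
  @moduledoc """
  Fixed-window rate limiter. Counters live in a protected ETS table owned
  by the GenServer: any process can read it directly, so a caller that is
  already over its limit is rejected without touching the mailbox.
  """
  use GenServer

  @table :rate_limiter_counters

  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  def check_rate(user_id, limit, window_seconds) do
    now = System.system_time(:second)

    case :ets.lookup(@table, user_id) do
      # Fast path: already at the limit inside the current window.
      [{^user_id, count, window_start}]
      when now - window_start < window_seconds and count >= limit ->
        {:error, :rate_limited, window_start + window_seconds - now}

      _ ->
        # Serialize increments through the GenServer to avoid lost updates.
        GenServer.call(__MODULE__, {:hit, user_id, limit, window_seconds, now})
    end
  end

  @impl true
  def init(_opts) do
    :ets.new(@table, [:named_table, :set, :protected, read_concurrency: true])
    Process.send_after(self(), :cleanup, :timer.minutes(1))
    {:ok, %{}}
  end

  @impl true
  def handle_call({:hit, user_id, limit, window_seconds, now}, _from, state) do
    reply =
      case :ets.lookup(@table, user_id) do
        [{^user_id, count, window_start}] when now - window_start < window_seconds ->
          if count >= limit do
            {:error, :rate_limited, window_start + window_seconds - now}
          else
            :ets.insert(@table, {user_id, count + 1, window_start})
            :ok
          end

        _ ->
          # First request, or the previous window expired: start fresh.
          :ets.insert(@table, {user_id, 1, now})
          :ok
      end

    {:reply, reply, state}
  end

  @impl true
  def handle_info(:cleanup, state) do
    # Drop entries whose window started more than an hour ago.
    cutoff = System.system_time(:second) - 3600
    :ets.select_delete(@table, [{{:_, :_, :"$1"}, [{:<, :"$1", cutoff}], [true]}])
    Process.send_after(self(), :cleanup, :timer.minutes(1))
    {:noreply, state}
  end
end
```

Note the deliberate split: rejections read ETS directly, but every increment goes through handle_call so concurrent requests cannot race past the limit.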
Scenario: Creating an order involves updating stock, creating payment records, and sending notifications — if any step fails, everything should roll back.
Tip
Implement order creation using Ecto.Multi for transactional safety. (1) Orders.create_order(user, items, payment_info) builds an Ecto.Multi pipeline: :validate_stock — for each item, check that the product has sufficient stock (return error if not). :create_order — insert the order record. :create_items — insert all order items with unit prices captured at purchase time. :update_stock — decrement stock for each product with a guarded update (an update_all constrained by where: fragment("stock >= ?", ^quantity)) so concurrent orders cannot oversell. :process_payment — call the payment gateway (wrap in a try/rescue; if it fails after stock is decremented, the transaction rolls back). :calculate_total — update the order total from item prices. (2) Handle Ecto.Multi results: on success, broadcast "orders:new" via PubSub and queue email confirmation. On failure, return specific error (out_of_stock with product name, payment_failed with gateway error). (3) Create a with chain in the context that maps Multi errors to user-friendly messages. Test the happy path, stock insufficient, payment failure, and concurrent order race condition.
Expected output: Ecto.Multi pipeline, error handling, PubSub broadcast, and transaction tests.
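The Multi pipeline might be shaped like this. The changeset and the helpers it pipes through (validate_stock/1, insert_items/3, decrement_stock/2) plus PaymentGateway are assumed names, shown only to make the shape concrete:

```elixir
defmodule MyApp.Orders do
  alias Ecto.Multi
  alias MyApp.Repo
  alias MyApp.Orders.Order

  # validate_stock/1, insert_items/3, decrement_stock/2, and
  # PaymentGateway.charge/2 are hypothetical helpers, not real APIs.
  def create_order(user, items, payment_info) do
    Multi.new()
    |> Multi.run(:validate_stock, fn _repo, _changes -> validate_stock(items) end)
    |> Multi.insert(:order, Order.changeset(%Order{}, %{user_id: user.id, status: :pending}))
    |> Multi.run(:items, fn repo, %{order: order} -> insert_items(repo, order, items) end)
    |> Multi.run(:update_stock, fn repo, _changes -> decrement_stock(repo, items) end)
    |> Multi.run(:payment, fn _repo, %{order: order} ->
      # An {:error, _} return (or a raise) here aborts the transaction,
      # so the stock decrement above is rolled back automatically.
      PaymentGateway.charge(order, payment_info)
    end)
    |> Repo.transaction()
    |> case do
      {:ok, %{order: order}} ->
        Phoenix.PubSub.broadcast(MyApp.PubSub, "orders:new", {:new_order, order})
        {:ok, order}

      {:error, :validate_stock, {:out_of_stock, product_name}, _changes} ->
        {:error, "#{product_name} is out of stock"}

      {:error, :payment, reason, _changes} ->
        {:error, {:payment_failed, reason}}

      {:error, step, reason, _changes} ->
        {:error, {step, reason}}
    end
  end
end
```

The four-element error tuple from Repo.transaction tells you exactly which step failed, which is what makes the specific error messages in step (2) straightforward.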
Scenario: You need reliable background jobs with retries, scheduling, and monitoring.
Tip
Set up Oban for background job processing. (1) Configure Oban in config/config.exs with PostgreSQL notifier, queues: default (concurrency 10), emails (concurrency 5), reports (concurrency 2), imports (concurrency 1). (2) Create workers: EmailWorker — sends transactional emails via Swoosh, retries 3 times with exponential backoff, discards after 24 hours. ReportWorker — generates CSV/PDF reports, uploads to S3, notifies user. ImportWorker — processes CSV imports in batches of 500, reports progress via PubSub (LiveView subscribes to show progress bar). (3) Add unique constraints: email jobs unique by {recipient, template} within 1 minute (prevent duplicate sends). (4) Scheduled jobs with Oban.Cron: daily at 2 AM run cleanup, weekly on Monday run analytics report. (5) Add Oban.Web for monitoring dashboard at /admin/jobs. (6) Implement job cancellation: store the Oban job ID when starting a long import, allow cancellation from the UI. Test workers using Oban.Testing.perform_job/3 with mocked dependencies.
Expected output: Oban config, 3 workers, cron schedule, unique constraints, and worker tests.
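The EmailWorker from step (2), with the uniqueness constraint from step (3), could look roughly like this; MyApp.Mailer.deliver_template/3 is a placeholder for however your Swoosh delivery is wrapped:

```elixir
defmodule MyApp.Workers.EmailWorker do
  # Retries 3 times; Oban's default backoff is already exponential.
  # The unique option deduplicates by recipient + template for 60 seconds.
  use Oban.Worker,
    queue: :emails,
    max_attempts: 3,
    unique: [period: 60, keys: [:recipient, :template]]

  @impl Oban.Worker
  def perform(%Oban.Job{args: %{"recipient" => recipient, "template" => template} = args}) do
    case MyApp.Mailer.deliver_template(recipient, template, args) do
      {:ok, _metadata} ->
        :ok

      # Returning an error tuple makes Oban schedule a retry.
      {:error, reason} ->
        {:error, reason}
    end
  end
end
```

Enqueue with %{recipient: "user@example.com", template: "welcome"} |> MyApp.Workers.EmailWorker.new() |> Oban.insert(). Note that job args arrive JSON-decoded, so perform pattern-matches on string keys, not atoms.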
Scenario: Your API needs JWT auth with role-based access control and token refresh.
Tip
Implement authentication using Guardian. (1) Guardian module with subject_for_token and resource_from_claims callbacks using user ID. Configure with HS256 and 15-minute TTL for access tokens, 30-day for refresh tokens. (2) AuthPipeline plug pipeline: Guardian.Plug.VerifyHeader (scheme: "Bearer"), Guardian.Plug.EnsureAuthenticated, Guardian.Plug.LoadResource. (3) EnsureRole plug that checks Guardian.Plug.current_resource(conn).role against required roles, returns 403 if insufficient. (4) Auth controller: register (hash with Argon2, return tokens), login (verify password, return tokens), refresh (exchange refresh token for new pair), logout (revoke in database). (5) Token revocation: store revoked tokens in an ETS-backed GenServer (check on every request) with periodic sync to database for persistence across restarts. (6) Apply pipelines in router: pipe_through [:api, :auth] for protected routes, pipe_through [:api, :auth, :admin] for admin routes. Test every auth scenario including token expiry and revocation.
Expected output: Guardian config, auth pipeline, role plug, auth controller, token revocation, and tests.
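The role check from step (3) is a small plug. This sketch assumes the resource Guardian loads has a role field; the module name is illustrative:

```elixir
defmodule MyAppWeb.Plugs.EnsureRole do
  @moduledoc "Returns 403 unless the current user has one of the required roles."
  import Plug.Conn

  # Accept a single role or a list: plug EnsureRole, :admin or plug EnsureRole, [:admin, :staff]
  def init(roles), do: List.wrap(roles)

  def call(conn, roles) do
    user = Guardian.Plug.current_resource(conn)

    if user && user.role in roles do
      conn
    else
      conn
      |> put_status(:forbidden)
      |> Phoenix.Controller.json(%{error: "insufficient permissions"})
      |> halt()
    end
  end
end
```

In the router, an :admin pipeline would then contain plug MyAppWeb.Plugs.EnsureRole, [:admin]; the halt() is what stops downstream plugs and the controller from running.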
Scenario: Your queries are duplicated across contexts with slight variations. You need composable, reusable query building.
Tip
Create a composable query system. (1) QueryBuilder module with functions that accept and return Ecto.Queryable: filter_by(query, :status, value), filter_by(query, :date_range, {from, to}), filter_by(query, :search, term) using ILIKE with proper escaping, sort_by(query, field, direction) with whitelist of allowed fields, paginate(query, page, page_size) returning %{entries: list, total: int, page: int, total_pages: int}. (2) Compose in context: list_products(params) pipes Product |> filter_by(:status, params.status) |> filter_by(:search, params.q) |> sort_by(params.sort_by, params.sort_dir) |> paginate(params.page, params.page_size). (3) Add query preloading: with_preloads(query, [:category, :variants]) that conditionally preloads based on caller needs. (4) Add query scopes: active(query) adds where: [is_active: true], published(query) adds where: [status: :published]. (5) Aggregation queries: with_stats(query) adds subquery for order_count and avg_rating. Test each query function individually and in composition.
Expected output: QueryBuilder module, scopes, preload helpers, aggregation queries, and composition tests.
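A condensed sketch of steps (1) and (2). The sortable-field whitelist and the explicit repo argument to paginate/4 are design choices of this sketch rather than requirements:

```elixir
defmodule MyApp.QueryBuilder do
  import Ecto.Query

  # nil filters are no-ops, so callers can pipe request params straight through.
  def filter_by(query, _field, nil), do: query

  def filter_by(query, :status, status), do: where(query, [q], q.status == ^status)

  def filter_by(query, :date_range, {from, to}) do
    where(query, [q], q.inserted_at >= ^from and q.inserted_at <= ^to)
  end

  def filter_by(query, :search, term) do
    # Escape LIKE metacharacters so user input cannot act as wildcards.
    escaped = Regex.replace(~r/[\\%_]/, term, fn m -> "\\" <> m end)
    where(query, [q], ilike(q.title, ^"%#{escaped}%"))
  end

  @sortable [:title, :price, :inserted_at]
  def sort_by(query, field, dir) when field in @sortable and dir in [:asc, :desc] do
    order_by(query, [q], [{^dir, field(q, ^field)}])
  end

  # Unknown or missing sort params leave the query untouched.
  def sort_by(query, _field, _dir), do: query

  # Passing the repo keeps the builder free of a Repo dependency; the
  # chapter's paginate/3 could call MyApp.Repo directly instead.
  def paginate(query, repo, page, page_size) when page > 0 do
    total = repo.aggregate(query, :count)

    entries =
      query
      |> limit(^page_size)
      |> offset(^((page - 1) * page_size))
      |> repo.all()

    %{entries: entries, total: total, page: page, total_pages: ceil(total / page_size)}
  end
end
```

Because every function takes a queryable first and returns one, the context can pipe: Product |> filter_by(:status, params.status) |> sort_by(params.sort_by, params.sort_dir) |> paginate(Repo, params.page, params.page_size).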
Scenario: You need multi-room chat with presence tracking and message persistence.
Tip
Build a Phoenix Channels chat system. (1) RoomChannel: join requires authentication, loads last 50 messages from database on join, broadcasts new messages to room. (2) Message handling: validate message content (non-empty, max 5000 chars), persist to database, broadcast to room with user info and timestamp. (3) Presence tracking using Phoenix.Presence: track user join/leave per room, broadcast presence_state and presence_diff, provide an "online users" list that LiveView can display. (4) Typing indicators: handle "typing" event, broadcast to room except sender, auto-expire after 3 seconds. (5) File attachments: accept file uploads via channel, upload to S3 in a Task, broadcast the attachment URL when complete. (6) Rate limiting: max 10 messages per second per user using a per-channel GenServer. (7) Create a ChannelCase test helper for testing channel joins, messages, and presence.
Expected output: Room channel, message persistence, presence tracking, typing indicators, and channel tests.
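Steps (1) to (4) might translate to a channel like this, with MyApp.Chat and its authorization check left as assumed context functions:

```elixir
defmodule MyAppWeb.RoomChannel do
  use MyAppWeb, :channel
  alias MyAppWeb.Presence

  @impl true
  def join("room:" <> room_id, _params, socket) do
    # member?/2 and recent_messages/2 are assumed context functions.
    if MyApp.Chat.member?(socket.assigns.user_id, room_id) do
      send(self(), :after_join)
      messages = MyApp.Chat.recent_messages(room_id, 50)
      {:ok, %{messages: messages}, assign(socket, :room_id, room_id)}
    else
      {:error, %{reason: "unauthorized"}}
    end
  end

  @impl true
  def handle_info(:after_join, socket) do
    # Track presence only after a successful join, then push the full state.
    {:ok, _ref} =
      Presence.track(socket, socket.assigns.user_id, %{
        online_at: System.system_time(:second)
      })

    push(socket, "presence_state", Presence.list(socket))
    {:noreply, socket}
  end

  @impl true
  def handle_in("new_message", %{"body" => body}, socket)
      when is_binary(body) and body != "" and byte_size(body) <= 5000 do
    {:ok, msg} =
      MyApp.Chat.create_message(socket.assigns.room_id, socket.assigns.user_id, body)

    broadcast!(socket, "new_message", %{
      body: msg.body,
      user_id: msg.user_id,
      inserted_at: msg.inserted_at
    })

    {:reply, :ok, socket}
  end

  def handle_in("new_message", _payload, socket) do
    {:reply, {:error, %{reason: "message must be 1-5000 characters"}}, socket}
  end

  def handle_in("typing", _payload, socket) do
    # broadcast_from!/3 skips the sender; clients expire the indicator locally.
    broadcast_from!(socket, "typing", %{user_id: socket.assigns.user_id})
    {:noreply, socket}
  end
end
```

Putting the length check in a guard means invalid messages fall through to the catch-all clause and get an error reply instead of crashing the channel process.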
Scenario: Your application has components that crash independently. You need proper supervision for fault isolation.
Tip
Design a supervision tree for a data processing application. (1) Top-level Application supervisor (strategy: :one_for_one): starts Repo, PubSub, Endpoint, and a WorkerSupervisor. (2) WorkerSupervisor (strategy: :rest_for_one): starts ConnectionPool (GenServer managing WebSocket connections to external data sources), DataProcessor (GenServer that receives data from connections and processes it), MetricsCollector (GenServer that aggregates metrics and exposes them). rest_for_one ensures that if ConnectionPool crashes, DataProcessor and MetricsCollector restart too since they depend on it. (3) ConnectionPool uses a DynamicSupervisor to manage individual WebSocket connections — each connection is a supervised Task that reconnects on crash with exponential backoff. (4) Add a Registry for naming connections by data source ID. (5) Add a TaskSupervisor for fire-and-forget tasks (sending notifications, writing to external APIs) that should not crash the caller. (6) Implement circuit breaker in ConnectionPool: if a data source fails 5 times in 1 minute, stop reconnecting for 5 minutes. Test that crashing one component does not affect siblings, and that the restart strategy works correctly.
Expected output: Application supervisor, WorkerSupervisor, ConnectionPool with DynamicSupervisor, and restart tests.
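The tree from steps (1) to (5) can be exercised with stub workers; the stubs below stand in for the real connection and processing logic, and the Pipeline names are illustrative:

```elixir
# Stub workers stand in for the real ConnectionPool and DataProcessor.
defmodule Pipeline.ConnectionPool do
  use GenServer
  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  @impl true
  def init(_opts), do: {:ok, %{}}
end

defmodule Pipeline.DataProcessor do
  use GenServer
  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  @impl true
  def init(_opts), do: {:ok, %{}}
end

defmodule Pipeline.WorkerSupervisor do
  use Supervisor

  def start_link(opts), do: Supervisor.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts) do
    children = [
      # Registry names individual connections by data source ID.
      {Registry, keys: :unique, name: Pipeline.ConnectionRegistry},
      # rest_for_one: if ConnectionPool crashes, everything started
      # after it (DataProcessor, the TaskSupervisor) restarts too.
      Pipeline.ConnectionPool,
      Pipeline.DataProcessor,
      # Fire-and-forget work that must never crash its caller.
      {Task.Supervisor, name: Pipeline.TaskSupervisor}
    ]

    Supervisor.init(children, strategy: :rest_for_one)
  end
end
```

Child order is load-bearing under :rest_for_one: killing ConnectionPool takes DataProcessor down with it, while killing DataProcessor leaves ConnectionPool untouched.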
Scenario: Your deployment copies the project to the server and runs mix phx.server. You need proper releases.
Tip
Set up Mix releases for production deployment. (1) Configure mix.exs releases section with runtime config, cookie generation, and custom VM args (set schedulers to match CPU cores, set process limit). (2) Create config/runtime.exs that reads all config from environment variables at release boot time: DATABASE_URL, SECRET_KEY_BASE, PHX_HOST, PORT, POOL_SIZE. Validate required variables and fail fast with descriptive errors. (3) Create a rel/overlays/bin/migrate script that runs Ecto migrations using the release: bin/app eval "App.Release.migrate". (4) Create lib/app/release.ex with migrate() and rollback(version) functions. (5) Dockerfile: multi-stage build using elixir:1.16-otp-26-alpine for build, alpine:3.19 for runtime. Copy only the release tarball. Add health check. Run as non-root user. (6) GitHub Actions: compile and test, build release in Docker, push to registry, deploy with zero-downtime using rolling update. Test the release boots, runs migrations, and serves traffic.
Expected output: Release config, runtime.exs, migration scripts, Dockerfile, CI workflow, and boot tests.
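The migration module from step (4) generally follows the pattern in the Phoenix deployment guides; MyApp and :my_app are placeholders for your app's names:

```elixir
defmodule MyApp.Release do
  @moduledoc """
  Run from a release with: bin/my_app eval "MyApp.Release.migrate"
  Mix is not available inside a release, so we load only the app and
  boot each repo just long enough to run its migrations.
  """
  @app :my_app

  def migrate do
    Application.load(@app)

    for repo <- repos() do
      {:ok, _fun_result, _apps} =
        Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end

  def rollback(repo, version) do
    Application.load(@app)

    {:ok, _fun_result, _apps} =
      Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :down, to: version))
  end

  defp repos, do: Application.fetch_env!(@app, :ecto_repos)
end
```

The rel/overlays/bin/migrate script then reduces to a one-liner that invokes bin/my_app eval with the function above.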
Caution
Common Elixir pitfalls:
Process bottlenecks: If the AI sends all messages through a single GenServer, it becomes a bottleneck. Ensure stateless operations are handled in the caller process, not the GenServer.
Ecto N+1 queries: The AI may generate code that accesses associations without preloading them, causing lazy-load N+1 queries in production. Always verify preloads are specified.
LiveView memory: If the AI stores large datasets in socket assigns without temporary_assigns, each connected user consumes significant memory. Use temporary_assigns for lists.
Pattern matching exhaustiveness: If the AI forgets a function clause for an enum value, the GenServer crashes at runtime. Ensure all enum/status values are handled.
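One cheap defense is a final catch-all clause that turns an unhandled value into an error tuple instead of a FunctionClauseError. A toy example with the task statuses from earlier in the chapter:

```elixir
defmodule StatusLabel do
  # One clause per enum value, plus a defensive catch-all so an
  # unexpected atom returns an error instead of crashing the caller.
  def label(:todo), do: "To do"
  def label(:in_progress), do: "In progress"
  def label(:done), do: "Done"
  def label(:cancelled), do: "Cancelled"
  def label(other), do: {:error, {:unknown_status, other}}
end
```

Inside a GenServer callback the same idea applies: a catch-all handle_info or handle_call clause that logs and returns {:noreply, state} keeps an unexpected message from killing the process.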