# Database Operations from the CLI
You are staring at a query that takes eight seconds. The EXPLAIN output is a wall of sequential scans and nested loops. Your product manager wants a report that joins five tables with three levels of aggregation. The intern just pushed a migration that dropped a column in production. Database work is where small mistakes are expensive and expertise is hard to come by.
Claude Code brings that expertise to your terminal. It reads your schema, understands your query patterns, and generates migrations that are safe to run in production. It writes complex SQL that you would spend an hour debugging, and it catches the mistakes in migrations before they hit staging. This lesson covers the database workflows that work in real projects.
## What You'll Walk Away With

- A workflow for designing schemas and generating safe migrations from the terminal
- Prompts for writing and optimizing complex queries
- The migration safety pattern that prevents production disasters
- Headless mode patterns for automated database audits
## Schema Design from Requirements

When starting a new feature that needs database changes, have Claude design the schema before writing any code.
Review the schema design carefully. Column types, constraints, and indexes are much harder to change after the migration runs. Specifically check:
- Are foreign keys set up with the right ON DELETE behavior?
- Are unique constraints where they need to be?
- Are the indexes sufficient for the queries you plan to run?
- Did Claude use the same conventions as your existing schema?
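The checklist above can be made concrete with a small sketch. This is an illustrative schema, not your real stack: invented table and column names, run against SQLite via Python's stdlib `sqlite3`, showing an explicit ON DELETE rule, a unique constraint, and an index chosen for a known query:

```python
import sqlite3

# Hypothetical schema illustrating the review checklist.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when this is on
conn.executescript("""
CREATE TABLE users (
    id         INTEGER PRIMARY KEY,
    email      TEXT NOT NULL UNIQUE,            -- unique constraint where it belongs
    created_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE TABLE orders (
    id      INTEGER PRIMARY KEY,
    user_id INTEGER NOT NULL
            REFERENCES users(id) ON DELETE CASCADE,  -- explicit delete behavior
    status  TEXT NOT NULL,
    total   REAL NOT NULL
);
-- Index chosen for a query we plan to run: "orders for a user, by status"
CREATE INDEX idx_orders_user_status ON orders(user_id, status);
""")

conn.execute("INSERT INTO users (id, email) VALUES (1, 'a@example.com')")
conn.execute("INSERT INTO orders (user_id, status, total) VALUES (1, 'completed', 9.99)")
conn.execute("DELETE FROM users WHERE id = 1")
# The CASCADE rule removed the dependent order along with the user.
orphans = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(orphans)  # 0
```

Note the `PRAGMA foreign_keys = ON`: without it, the ON DELETE behavior you reviewed so carefully is silently ignored in SQLite.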
## Generating Safe Migrations

Migrations are the most dangerous database operation in a production system. A bad migration can lock tables, drop data, or cause downtime. Claude can generate migrations that avoid these pitfalls.
1. **Generate the migration**

   ```
   Based on the schema changes we just designed, generate a
   migration file. Follow our existing migration patterns in
   migrations/ (or prisma/migrations/, or drizzle/).

   The migration must:
   - Use IF NOT EXISTS for all CREATE statements
   - Add columns as nullable first, then backfill, then add NOT NULL
   - Include an explicit DOWN migration for rollback
   - Never lock large tables (use concurrent index creation)
   - Include comments explaining each step
   ```

2. **Review the migration before running it**

   ```
   Show me the generated migration SQL. For each statement,
   explain: will it lock the table? How long will it take
   on a table with 5 million rows? Is it reversible?
   ```

3. **Test the migration on a copy**

   ```
   Run the migration against our test database. Then verify:
   - All new tables/columns exist
   - Existing data is preserved
   - The application still connects and queries work
   - The rollback migration works (run down, then up again)
   ```

4. **Deploy with confidence**

   Once the migration passes review and testing, you can deploy knowing it is safe.
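The full up/down/up cycle from step 3 can be scripted. A minimal sketch using Python's stdlib `sqlite3` with a hypothetical `users` table; the DOWN step assumes SQLite 3.35+ for `DROP COLUMN`:

```python
import sqlite3

# Sketch of the "test the full cycle" step: up, verify, down, verify
# the schema is back, then up again. Table and column names are invented.
UP = "ALTER TABLE users ADD COLUMN last_login TEXT"
DOWN = "ALTER TABLE users DROP COLUMN last_login"  # requires SQLite >= 3.35

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")

def columns(conn, table):
    # PRAGMA table_info rows are (cid, name, type, notnull, default, pk)
    return [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]

before = columns(conn, "users")
conn.execute(UP)
assert "last_login" in columns(conn, "users")   # up worked
conn.execute(DOWN)
assert columns(conn, "users") == before         # schema restored exactly
conn.execute(UP)                                # up is repeatable after down
data_ok = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(data_ok)  # 1 -- existing data survived the cycle
```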
### The zero-downtime migration pattern

For changes to existing tables with production traffic, use the expand-contract pattern:
```
I need to rename the column "fname" to "first_name" on the users
table. This table has 2 million rows and serves 500 requests/second.

Generate a migration sequence that avoids downtime:

Phase 1 (expand): Add the new column "first_name" alongside "fname"
Phase 2 (dual-write): Application writes to both columns
Phase 3 (backfill): Copy data from "fname" to "first_name" in batches
Phase 4 (switch reads): Application reads from "first_name"
Phase 5 (contract): Drop the "fname" column

Generate the SQL migration for each phase. The backfill must
process in batches of 10,000 rows with a brief pause between
batches to avoid overwhelming the database.
```
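Phase 3 is the step most worth sketching. Here is a minimal batched backfill using Python's stdlib `sqlite3`, with invented table names and a shrunken batch size — each batch touches only rows not yet backfilled, so the loop makes progress even if it is interrupted and rerun:

```python
import sqlite3
import time

BATCH = 1000          # 10,000 in the real prompt; shrunk for the demo
PAUSE_SECONDS = 0.0   # in production this would be a real pause

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, fname TEXT, first_name TEXT)")
conn.executemany("INSERT INTO users (fname) VALUES (?)",
                 [(f"user{i}",) for i in range(3500)])
conn.commit()

while True:
    # Backfill one batch, keyed on rows where first_name is still NULL.
    cur = conn.execute("""
        UPDATE users SET first_name = fname
        WHERE id IN (SELECT id FROM users WHERE first_name IS NULL LIMIT ?)
    """, (BATCH,))
    conn.commit()
    if cur.rowcount == 0:
        break  # nothing left to backfill
    time.sleep(PAUSE_SECONDS)  # brief pause between batches

remaining = conn.execute(
    "SELECT COUNT(*) FROM users WHERE first_name IS NULL").fetchone()[0]
print(remaining)  # 0
```

Committing after each batch is the point: it keeps each transaction (and any lock it holds) short instead of holding one multi-million-row transaction open.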
## Writing Complex Queries

Claude excels at SQL. It understands joins, window functions, CTEs, and the performance implications of each approach.
### Analytical queries
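The kind of analytical SQL this section covers — a CTE feeding a window function — can be shown in miniature. A sketch against a hypothetical `orders` table, using Python's stdlib `sqlite3` (window functions need SQLite 3.25+):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders (customer, total) VALUES (?, ?)", [
    ("ada", 40.0), ("ada", 10.0), ("bob", 30.0), ("cyd", 5.0),
])

# Revenue per customer, plus each customer's rank by spend:
# the CTE does the aggregation, the window function does the ranking.
rows = conn.execute("""
    WITH revenue AS (
        SELECT customer, SUM(total) AS total_spent
        FROM orders
        GROUP BY customer
    )
    SELECT customer,
           total_spent,
           RANK() OVER (ORDER BY total_spent DESC) AS spend_rank
    FROM revenue
    ORDER BY spend_rank
""").fetchall()
print(rows)  # [('ada', 50.0, 1), ('bob', 30.0, 2), ('cyd', 5.0, 3)]
```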
### Query optimization

When a query is slow, pipe the query and its EXPLAIN output to Claude:
```
This query takes 8 seconds. Here's the EXPLAIN ANALYZE output:
[paste output]

The query:
[paste query]

Read our schema to understand the table structure and existing
indexes. Tell me:
1. Why is it slow? (identify the specific bottleneck)
2. What indexes would help?
3. Can the query be restructured for better performance?
4. Show me the optimized version and its expected EXPLAIN plan.
```
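The before/after effect of a suggested index is visible in the plan itself. A sketch using SQLite's EXPLAIN QUERY PLAN via Python's stdlib `sqlite3`, with invented table and index names (the exact plan wording varies by SQLite version):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, kind TEXT)")

def plan(conn, sql):
    # Each EXPLAIN QUERY PLAN row ends with a human-readable detail string.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

QUERY = "SELECT * FROM events WHERE user_id = 42"

before = plan(conn, QUERY)
conn.execute("CREATE INDEX idx_events_user ON events(user_id)")
after = plan(conn, QUERY)

print(before)  # e.g. "SCAN events" -- a full table scan
print(after)   # e.g. "SEARCH events USING INDEX idx_events_user (user_id=?)"
```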
### ORM query generation

For projects using Prisma, Drizzle, or SQLAlchemy, have Claude write the ORM queries instead of raw SQL:
```
Read our Drizzle schema in src/db/schema.ts. Write a query
using the Drizzle query builder that:

1. Finds all orders from the last 30 days
2. Includes the customer name and email
3. Includes the order items with product names
4. Filters by status "completed" or "shipped"
5. Sorts by order total descending
6. Paginates with cursor-based pagination

Use the Drizzle relational query API where possible.
Show me the generated SQL to verify it's efficient.
```
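Cursor-based pagination (item 6) is worth seeing in raw form: pages are keyed by the last-seen row instead of OFFSET, so new inserts don't shift later pages. A sketch using Python's stdlib `sqlite3` with a hypothetical `orders` table, paging by `id` for simplicity — a total-descending sort like the prompt asks for would need a compound `(total, id)` cursor as a tiebreaker:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO orders (status) VALUES (?)", [("completed",)] * 7)

def page(conn, cursor, limit=3):
    # "id > cursor" replaces OFFSET: the client sends back the last id it saw.
    rows = conn.execute("""
        SELECT id FROM orders
        WHERE status IN ('completed', 'shipped') AND id > ?
        ORDER BY id LIMIT ?
    """, (cursor, limit)).fetchall()
    ids = [r[0] for r in rows]
    next_cursor = ids[-1] if ids else None
    return ids, next_cursor

first, cur1 = page(conn, 0)
second, cur2 = page(conn, cur1)
print(first, second)  # [1, 2, 3] [4, 5, 6]
```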
## Seeding Test Data

Good tests need realistic data. Claude can generate seed scripts that create comprehensive test datasets.
```
Read our database schema and generate a seed script that creates:

- 50 users with realistic names and emails
- 10 teams with 3-15 members each
- 200 products across 5 categories
- 500 orders with realistic distribution (most recent month has
  more orders, weekends have fewer)
- Reviews for ~60% of products with realistic rating distribution
  (most are 4-5 stars, few are 1-2)

Use our existing seed utility in scripts/seed.ts as a reference.
The seed should be idempotent -- running it twice should not
create duplicates.
```
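The idempotency requirement boils down to keying inserts on a unique column. A minimal sketch with invented data, using Python's stdlib `sqlite3` and SQLite's `INSERT OR IGNORE`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL UNIQUE, name TEXT)")

SEED_USERS = [("ada@example.com", "Ada"), ("bob@example.com", "Bob")]

def seed(conn):
    # UNIQUE(email) + OR IGNORE makes re-running the seed a no-op.
    conn.executemany(
        "INSERT OR IGNORE INTO users (email, name) VALUES (?, ?)", SEED_USERS)
    conn.commit()

seed(conn)
seed(conn)  # second run creates no duplicates
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # 2
```

The same idea in PostgreSQL is `INSERT ... ON CONFLICT DO NOTHING`; either way, the seed is keyed on natural unique columns rather than auto-generated IDs.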
For test fixtures specifically:

```
Generate factory functions for our test suite following the
patterns in tests/fixtures/.

Create:
- createTestUser(overrides?) - creates a user with sensible defaults
- createTestTeam(owner, overrides?) - creates a team with the owner
- createTestOrder(user, products, overrides?) - creates an order

Each factory should return the created object and clean up
after the test (or use transactions that roll back).
```
## Database Auditing with Headless Mode

Use headless mode for automated database health checks:
```bash
# Check for common schema issues
claude -p "Read our database schema and check for:
1. Tables missing primary keys
2. Foreign keys without indexes
3. Columns that should be NOT NULL but aren't
4. Missing created_at/updated_at timestamps
5. Inconsistent naming conventions
6. Tables without any indexes beyond the primary key

Report each issue with the table name, column name,
and suggested fix." --output-format json > db-audit.json

# Analyze query performance
claude -p "Read the slow query log at /var/log/postgresql/slow.log
and analyze the top 10 slowest queries. For each one:
1. Explain why it's slow
2. Suggest an index that would help
3. Show the optimized query if it can be restructured
4. Estimate the improvement

Read our schema to understand the table structure." \
  --output-format json > query-audit.json
```
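What such an audit does under the hood can be sketched directly. A Python stand-in (stdlib `sqlite3` pragmas instead of a real schema, invented table names) that implements two of the checks from the prompt — missing primary keys and unindexed foreign keys:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE good (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE logs (message TEXT);                -- no primary key
CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    user_id INTEGER REFERENCES good(id)          -- foreign key, no index
);
""")

issues = []
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
for table in tables:
    # PRAGMA table_info rows: (cid, name, type, notnull, default, pk)
    cols = list(conn.execute(f"PRAGMA table_info({table})"))
    if not any(col[5] for col in cols):
        issues.append({"table": table, "issue": "missing primary key"})
    # Collect the leading column of every index on this table.
    indexed = set()
    for idx in conn.execute(f"PRAGMA index_list({table})"):
        first = next(iter(conn.execute(f"PRAGMA index_info({idx[1]})")))
        indexed.add(first[2])
    # PRAGMA foreign_key_list rows: (id, seq, table, from, to, ...)
    for fk in conn.execute(f"PRAGMA foreign_key_list({table})"):
        if fk[3] not in indexed:
            issues.append({"table": table, "column": fk[3],
                           "issue": "foreign key without index"})

print(json.dumps(issues, indent=2))
```

This is the shape of output the `--output-format json` audit gives you: a machine-readable issue list you can diff between runs or fail CI on.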
## Working with Multiple Database Types

### PostgreSQL-specific features

```
We use PostgreSQL. Generate queries that use PG-specific features:

1. Use JSONB columns for flexible metadata with GIN indexes
2. Use array columns for tags with array operators
3. Use full-text search with tsvector and tsquery
4. Use row-level security for multi-tenant isolation
5. Use LISTEN/NOTIFY for real-time updates

Show me how to set up each feature and integrate it with
our ORM (Drizzle/Prisma).
```
### SQLite for development (D1 for production)

```
Our production database is Cloudflare D1 (SQLite). Generate
migrations that work within SQLite's constraints:

- No ALTER TABLE for column renames or type changes
- No concurrent index creation
- Limited ALTER TABLE support (add column only)

For schema changes that SQLite can't handle directly, generate
the migration as: create new table, copy data, drop old table,
rename new table. Include proper transaction handling.
```
## Sub-agents for Database Research

When you need to understand how the database is being used before making changes:
```
Use sub-agents to investigate our database usage:

1. Find all queries in the codebase that touch the users table.
   Group them by: SELECT, INSERT, UPDATE, DELETE. List the file
   and line number for each.

2. Find all places where we use raw SQL instead of the ORM.
   Check if any of them are vulnerable to SQL injection.

3. Analyze the migration history and list any migrations that
   were destructive (dropped columns, tables, or data). Check if
   they had proper rollback procedures.

Report findings so we can plan the schema changes safely.
```
## When This Breaks

**The generated migration locks the table in production.** Claude did not account for table size. Always ask: “This table has N million rows. Will this migration lock it? How long will it take?” For large tables, use online schema change tools or the expand-contract pattern.

**ORM queries are correct but slow.** The ORM generates suboptimal SQL. Ask Claude to show you the generated SQL: “Show me the raw SQL this Prisma/Drizzle query generates. Then compare its EXPLAIN plan with a hand-written query that does the same thing.”

**Seed data causes constraint violations.** The seed script creates data in the wrong order (orders before users, for example). Tell Claude: “Generate the seed data in dependency order. Users first, then teams, then products, then orders. Use actual IDs from previously created records.”

**The rollback migration does not work.** This happens when the down migration was an afterthought. Always test the full cycle: “Run the migration up, verify data, run the migration down, verify the schema is exactly as it was before. Show me the schema diff.”

**Schema design does not account for future queries.** Claude designed the schema based on current requirements, but you know a feature is coming that will need different access patterns. Tell Claude: “We’ll also need to query this data by [describe the future query]. Does the current schema support that efficiently, or do we need additional indexes or denormalization?”
## What’s Next

Your database layer is designed, migrated, and optimized. Now build the API endpoints that sit on top of it.