Testing Patterns
You just finished implementing a payment processing service. It works in your manual testing, but you have no automated tests. Writing tests for 15 functions across payment creation, validation, refunds, and webhook handling feels like it will take longer than writing the service itself. So you ship it without tests, promising yourself you will add them later. Two weeks later, a refactoring breaks the refund logic and nobody catches it until a customer reports it.
AI is remarkably good at writing tests. It can analyze your code, identify the important paths and edge cases, and generate a comprehensive test suite in minutes. The key is knowing how to prompt for tests that actually catch bugs, not tests that just achieve coverage numbers.
What You’ll Walk Away With
- A workflow for generating comprehensive test suites from existing code
- The test-driven development pattern with AI (write tests first, then implementation)
- Techniques for adding tests to untested legacy code
- Copy-paste prompts that produce high-quality, maintainable tests
The Test-First Pattern (TDD with AI)
The most powerful testing pattern with AI: write the tests first, then let Agent write the implementation and iterate until the tests pass.
Why this works: the AI writes tests based on your specification of expected behavior. Then it writes an implementation that passes those tests. If a test fails, the implementation is wrong (not the test), so the AI fixes the implementation. This prevents the common problem of AI “fixing” tests to match broken code.
Generating Tests for Existing Code
Comprehensive Test Suite Generation
Adding Tests to Legacy Code
For untested code where you are not sure what it does, start with characterization tests that pin down the current behavior.
Characterization tests are essential before refactoring legacy code. They tell you when your refactoring changes behavior, even if the original behavior was not documented.
Building on an Existing Test Suite
One of the most practical patterns: incrementally expand test coverage by analyzing what is not tested.
Using Failing Tests from Production Bugs
When you encounter a production bug, the first step should be writing a test that reproduces it:
We discovered a bug: orders with decimal quantities (e.g., 2.5 units) cause the total calculation to return NaN.
Before fixing the bug:
1. Write a failing test that demonstrates the exact bug
2. The test should pass valid decimal quantities and assert the correct total
3. Run the test to confirm it fails with the current code
Then fix the bug and confirm the test passes.
This ensures the fix actually addresses the issue and prevents regression.
Test Data Factory Pattern
Instead of writing inline test objects in every test, create factory functions:
When This Breaks
AI writes tests that just assert the implementation. If a test calls a function and asserts that the return value is whatever the function returns, it is testing nothing. Specify the expected behavior explicitly in your prompt: “should return 42 when given inputs X and Y.”
Tests pass but do not catch real bugs. The tests are too focused on happy paths. Always explicitly request error path testing, edge case testing, and boundary condition testing.
AI “fixes” the test instead of the code. Explicitly state: “The test represents the correct behavior. The implementation is wrong. Fix the implementation.”
Generated tests are flaky. Tests that depend on timing, random values, or external state are unreliable. Add to your testing rule: “Never use real timers, random values, or network calls in unit tests. Mock everything.”
Test file becomes too large. Split tests by domain (auth tests, profile tests, payment tests) rather than having one monolithic test file per service.
What’s Next
- Debugging Workflows — Tests are the foundation of effective debugging
- Refactoring Strategies — Tests enable safe refactoring
- Review Workflows — Reviewing AI-generated tests alongside code