- Introduction
- Setting Up Your Testing Environment
- Understanding Your Project Structure
- Creating Effective AI Prompts for Testing
- Using the AI Prompt Composer
- Testing Different Types of Components
- Handling Mocks and Dependencies
- Troubleshooting Common Issues
- Best Practices
- Conclusion
Unit testing is a critical part of maintaining a robust codebase, but writing tests can be time-consuming and sometimes repetitive. AI assistants, such as the Cursor AI tool, can help accelerate this process by generating test files based on your existing code. This document outlines a systematic approach to using AI and the Prompt Composer for generating comprehensive unit tests for your JavaScript/TypeScript projects.
Before you start generating tests with AI, ensure your project has the proper testing infrastructure:
For a TypeScript React project, you'll typically need:
npm install --save-dev vitest @testing-library/react @testing-library/jest-dom @testing-library/user-event
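You will also want npm scripts for running the tests. A minimal sketch of the scripts section of package.json (the script names are a common convention, not a requirement; the coverage script additionally assumes a coverage provider such as @vitest/coverage-v8 is installed):
{
  "scripts": {
    "test": "vitest",
    "test:run": "vitest run",
    "coverage": "vitest run --coverage"
  }
}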
Create a `vitest.config.ts` file in your project root:
import { defineConfig } from 'vitest/config';
import react from '@vitejs/plugin-react';
import path from 'path';
export default defineConfig({
plugins: [react()],
test: {
environment: 'jsdom',
globals: true,
setupFiles: ['./vitest.setup.ts'],
coverage: {
reporter: ['text', 'json', 'html'],
},
},
resolve: {
alias: {
'@': path.resolve(__dirname, './src'),
'@features': path.resolve(__dirname, './src/redux/features'),
'@hooks': path.resolve(__dirname, './src/hooks'),
'@store': path.resolve(__dirname, './src/redux/store'),
},
},
});
Create a `vitest.setup.ts` file to configure global test settings:
import '@testing-library/jest-dom';
import { afterEach, afterAll, vi, beforeEach, beforeAll } from 'vitest';
import { cleanup } from '@testing-library/react';
// Mock common browser APIs
beforeAll(() => {
// Mock window properties
Object.defineProperty(window, 'matchMedia', {
writable: true,
value: vi.fn().mockImplementation((query) => ({
matches: false,
media: query,
onchange: null,
addListener: vi.fn(),
removeListener: vi.fn(),
addEventListener: vi.fn(),
removeEventListener: vi.fn(),
dispatchEvent: vi.fn(),
})),
});
});
// Cleanup after each test
afterEach(() => {
cleanup();
vi.clearAllMocks();
});
Set up helper functions for testing. For Redux applications, create a utility to set up a test store:
// __tests__/utils/setupApiStore.ts
import { configureStore } from '@reduxjs/toolkit';
import { setupListeners } from '@reduxjs/toolkit/query';
export function setupApiStore(api, extraReducers = {}) {
const getStore = () =>
configureStore({
reducer: {
[api.reducerPath]: api.reducer,
...extraReducers,
},
middleware: (getDefaultMiddleware) =>
getDefaultMiddleware().concat(api.middleware),
});
const initialStore = getStore();
const refObj = {
api,
store: initialStore,
refetch: () => {
refObj.store = getStore();
return refObj.store;
},
};
setupListeners(initialStore.dispatch);
return {
store: initialStore,
storeRef: refObj,
};
}
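A quick way to sanity-check the helper is to pass it an RTK Query API and confirm the store wires it up. A minimal, self-contained sketch, assuming a hypothetical ordersApi built with createApi (your real API slices, such as the permissionApi used later, would be imported instead):
import { describe, it, expect } from 'vitest';
import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react';
import { setupApiStore } from '@/__tests__/utils/setupApiStore';
// Hypothetical API slice used only to illustrate the helper
const ordersApi = createApi({
  reducerPath: 'ordersApi',
  baseQuery: fetchBaseQuery({ baseUrl: '/api' }),
  endpoints: (builder) => ({
    getOrders: builder.query<unknown, void>({ query: () => '/orders' }),
  }),
});
describe('setupApiStore', () => {
  it('registers the API reducer under its reducerPath', () => {
    const { store } = setupApiStore(ordersApi);
    // The RTK Query slice should be present in state under its reducerPath
    expect(store.getState()[ordersApi.reducerPath]).toBeDefined();
  });
});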
Before generating tests, analyze your project structure to understand what needs testing:
- Redux slices and reducers
- API endpoints
- Utility functions
- React components
- Custom hooks
Follow a consistent pattern for test file locations. For example:
__tests__/
├── components/
├── hooks/
├── redux/
│ ├── features/
│ │ ├── auth/
│ │ ├── user/
│ │ └── ...
│ └── store.test.ts
└── utils/
When asking an AI assistant, such as the Cursor AI tool, to generate tests, structure your prompts effectively:
Please create a unit test for the file [FILE_PATH]. The file contains [BRIEF DESCRIPTION].
The test should verify [SPECIFIC FUNCTIONALITY].
Here's the content of the file:
[PASTE FILE CONTENT]
Our project uses Vitest for testing and follows these patterns:
[DESCRIBE ANY SPECIFIC PATTERNS OR CONVENTIONS]
Create a unit test for the Redux slice at redux/features/orders/orderSlice.ts.
This slice manages order data with actions for setting table data, page size, and total count.
Here's the content of the file:
[PASTE SLICE CONTENT]
The test should verify:
1. The initial state is correct
2. Each reducer correctly updates the state
3. Action creators return the expected action objects
Generate a test file for the API endpoints in redux/features/orders/orderApi.ts.
This file defines endpoints for fetching order data and handling order operations.
Here's the content of the file:
[PASTE API CONTENT]
The test should:
1. Verify each endpoint has the correct configuration (URL, method)
2. Test success and error handling for key endpoints
3. Confirm the correct hooks are exported
The AI Prompt Composer is a tool that helps you create structured prompts for generating unit tests. It ensures that the AI receives all the necessary information to produce accurate and comprehensive tests. Here's how to use it effectively:
Navigate to the `_prompt` directory in your project, where you can find pre-defined prompt templates. These templates are designed to guide you in creating effective prompts for various testing scenarios.
The `_prompt` directory contains several markdown files, each serving a specific purpose:
- `bug-fix.md`: Use this template when you need to generate tests that focus on verifying bug fixes. It helps ensure that the bug is resolved and doesn't reoccur.
- `generate-mock.md`: This template is useful for creating prompts that require mocking dependencies. It guides the AI in setting up mocks for external modules and APIs.
- `test-all.md`: Use this template when you want to generate tests for all functions or components in a file. It ensures comprehensive coverage.
- `test-one-by-one.md`: This template is ideal for generating tests for individual functions or components. It allows for focused testing of specific functionalities.
To create a prompt using the composer, follow these steps:
- Select a Template: Choose the appropriate template based on your testing needs. For example, if you need to mock dependencies, open `generate-mock.md`.
- Fill in the Details: Each template contains placeholders for specific information. Fill in these placeholders with details about the file, functionality, and any specific requirements. Example from `generate-mock.md`:
  Please create a unit test for the file [FILE_PATH]. The file contains [BRIEF DESCRIPTION]. The test should verify [SPECIFIC FUNCTIONALITY] and include mocks for [DEPENDENCIES]. Here's the content of the file: [PASTE FILE CONTENT]
- Use the Prompt: Once you've filled in the details, use the prompt with the AI to generate the test. The AI will use the structured information to create a test file that meets your requirements.
After generating the tests, review them to ensure they accurately reflect the intended functionality. Make any necessary adjustments to improve test coverage and reliability.
If the generated tests don't fully meet your expectations, refine the prompt and try again. The more specific and detailed your prompt, the better the AI can tailor the tests to your needs.
By using the AI Prompt Composer, you can efficiently generate high-quality unit tests that enhance your project's reliability and maintainability. This tool is especially useful for teams looking to streamline their testing processes and ensure comprehensive test coverage.
For Redux slices, focus on testing:
- Initial state
- Reducer functionality
- Action creators
Example test structure:
import { describe, it, expect } from 'vitest';
import reducer, { setOrdersTableData, setPageSize, setTotal } from '@/redux/features/orders/orderSlice';
describe('order slice', () => {
describe('reducer', () => {
it('should return the initial state', () => {
const initialState = reducer(undefined, { type: undefined });
expect(initialState).toEqual({
ordersTableData: [],
pageSize: 100,
total: 0
});
});
it('should handle setOrdersTableData', () => {
// Test implementation
});
});
describe('actions', () => {
it('should create setOrdersTableData action', () => {
// Test implementation
});
});
});
For API endpoints, test:
- Query configurations
- Response handling
- Side effects (like dispatching actions)
Example test structure:
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { setupApiStore } from '@/__tests__/utils/setupApiStore';
import { ADMIN_API } from '@/app.config';
// Mock dependencies
vi.mock('@/redux/features/api/apiSlice', () => ({
default: {
// Mock implementation
}
}));
// Import after mocking
import { permissionApi } from '@/redux/features/permission/permissionApi';
describe('Permission API', () => {
describe('getPermission endpoint', () => {
it('should have correct configuration', () => {
// Test implementation
});
});
});
For utility functions, focus on input/output testing:
import { describe, it, expect } from 'vitest';
import { isValidLatLon, fixSwappedLatLon } from '@/utils/mapUtils';
describe('mapUtils', () => {
describe('isValidLatLon', () => {
it('should return true for valid coordinates', () => {
expect(isValidLatLon(23.8103, 90.4125)).toBe(true);
});
it('should return false for invalid coordinates', () => {
expect(isValidLatLon(200, 300)).toBe(false);
});
});
});
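The earlier checklist also mentions custom hooks, which are not shown above. They can be exercised with renderHook from @testing-library/react. A minimal sketch, assuming a hypothetical useToggle hook that returns a [value, toggle] pair:
import { describe, it, expect } from 'vitest';
import { renderHook, act } from '@testing-library/react';
// Hypothetical hook used only to illustrate the pattern
import { useToggle } from '@/hooks/useToggle';
describe('useToggle', () => {
  it('toggles its boolean value', () => {
    const { result } = renderHook(() => useToggle(false));
    expect(result.current[0]).toBe(false);
    // Wrap state updates in act so React processes them before asserting
    act(() => {
      result.current[1]();
    });
    expect(result.current[0]).toBe(true);
  });
});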
Use Vitest's mocking capabilities to mock external dependencies:
// Mock entire modules
vi.mock('@/redux/features/api/apiSlice', () => ({
default: {
injectEndpoints: vi.fn().mockReturnValue({
endpoints: {
getPermission: {
query: vi.fn().mockReturnValue({
url: '/api/permissions',
method: 'GET'
})
}
}
})
}
}));
// Mock specific functions
vi.mock('@/utils', () => ({
showAlert: vi.fn(),
processApiResponse: vi.fn()
}));
Create realistic mock data for your tests:
const mockOrdersData = [
{
id: 1,
order_no: 'ORD001',
outlet_id: 1,
outlet_name: 'Test Outlet',
total_amount: 1000,
status: 'completed'
}
];
const mockApiResponse = {
data: {
data: {
data: mockOrdersData,
total: 100,
per_page: 50
}
}
};
For components that interact with Redux, mock the store:
// Setup a mock store with required state
const mockStore = configureStore({
reducer: {
orders: orderReducer
},
preloadedState: {
orders: {
ordersTableData: mockOrdersData,
pageSize: 50,
total: 100
}
}
});
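With the mock store in place, render the component under test inside a react-redux Provider. A sketch, assuming a hypothetical OrdersTable component that reads its rows from the orders slice (mockStore and mockOrdersData come from the snippets above):
import { it, expect } from 'vitest';
import { render, screen } from '@testing-library/react';
import { Provider } from 'react-redux';
// Hypothetical component used only to illustrate the pattern
import OrdersTable from '@/components/OrdersTable';
it('renders rows from the preloaded orders state', () => {
  render(
    <Provider store={mockStore}>
      <OrdersTable />
    </Provider>
  );
  // 'ORD001' comes from mockOrdersData defined earlier
  expect(screen.getByText('ORD001')).toBeInTheDocument();
});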
Use `async`/`await` for testing asynchronous code:
it('should handle async operations', async () => {
const dispatch = vi.fn();
const queryFulfilled = Promise.resolve(mockApiResponse);
await orderApi.endpoints.getOrdersTableData.onQueryStarted({}, { queryFulfilled, dispatch });
expect(dispatch).toHaveBeenCalledWith(setOrdersTableData(mockOrdersData));
});
If your tests fail due to mock implementation issues:
// Problem: Mock not returning expected value
vi.mock('@/redux/features/api/apiSlice', () => ({
default: {
injectEndpoints: vi.fn() // Missing return value
}
}));
// Solution: Provide complete mock implementation
vi.mock('@/redux/features/api/apiSlice', () => ({
default: {
injectEndpoints: vi.fn().mockReturnValue({
endpoints: {
getOrdersTableData: {
query: vi.fn().mockImplementation((body) => ({
url: '/api/orders',
method: 'POST',
body
})),
onQueryStarted: async (arg, { queryFulfilled, dispatch }) => {
try {
const result = await queryFulfilled;
// Implementation
} catch (err) {
// Error handling
}
}
}
}
})
}
}));
Each test should focus on a single aspect of functionality:
// Good: Focused test
it('should set orders table data', () => {
const previousState = { ordersTableData: [], pageSize: 100, total: 0 };
const newState = reducer(previousState, setOrdersTableData(mockOrdersData));
expect(newState.ordersTableData).toEqual(mockOrdersData);
});
// Avoid: Testing multiple things
it('should handle all order actions', () => {
// Testing too many things in one test
});
Write clear test descriptions that explain what's being tested:
// Good
it('should return the initial state when no action is provided', () => {});
// Avoid
it('tests initial state', () => {});
Structure your tests using the AAA pattern:
it('should handle success case', async () => {
// Arrange
const mockData = { data: { data: { /* ... */ } } };
const dispatch = vi.fn();
const queryFulfilled = Promise.resolve(mockData);
// Act
await orderApi.endpoints.getOrdersTableData.onQueryStarted({}, { queryFulfilled, dispatch });
// Assert
expect(dispatch).toHaveBeenCalledWith(setOrdersTableData(mockData.data.data.data));
});
Ensure tests don't affect each other:
beforeEach(() => {
vi.clearAllMocks();
});
afterEach(() => {
cleanup();
});
Generating unit tests with AI prompts can significantly accelerate your testing process. By following this guide, you can create comprehensive test coverage for your application with minimal manual effort. Remember that while AI can generate the initial test structure, you should review and refine the tests to ensure they accurately verify your application's functionality.
The key to success is providing clear, detailed prompts that include:
- The file to be tested
- The functionality to verify
- Any specific patterns or conventions to follow
- Relevant context about dependencies and interactions
With practice, you'll develop a workflow that allows you to quickly generate high-quality tests that improve your application's reliability and maintainability.