MOCK SETUP CONFIGURATION WORKFLOW
PHASE 1: PROJECT STRUCTURE ANALYSIS
1. Redux: Located in redux/, with feature-specific logic in redux/features/
2. API Responses: Used in pages/ and hooks/
3. LocalStorage/SecureLocalStorage: Used in utils/ or directly in components
4. Tests: Stored in __tests__/ with mocks designed for testing
PHASE 2: ORGANIZATION REQUIREMENTS
1. File Structure:
- Redux mocks: __tests__/mocks/redux/ with .mock.ts extension
- API response mocks: __tests__/mocks/api/ with .api.mock.ts extension
- Storage mocks: __tests__/mocks/storage/
- Third-party library mocks: __tests__/mocks/libs/
PHASE 3: REDUX MOCKING IMPLEMENTATION
1. Create mock Redux state matching store.ts and redux/features/
2. Include mock actions, reducers, and initial states
3. Ensure type-safety matching actual Redux store structure
4. Implement selectors for common state access patterns
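The mock state and selector pattern above can be sketched as follows. This is a minimal illustration; the slice name (`cart`), its fields, and the selector are hypothetical placeholders to be replaced with the shapes defined in redux/features/:

```typescript
// Hypothetical slice shape -- replace with the real types from redux/features/.
interface CartState {
  items: { id: string; quantity: number }[];
}

interface RootState {
  cart: CartState;
}

// Initial mock state mirroring the real slice's defaults.
const mockCartState: CartState = {
  items: [{ id: 'sku-1', quantity: 2 }],
};

const mockRootState: RootState = { cart: mockCartState };

// Selector covering a common state-access pattern.
const selectCartItemCount = (state: RootState): number =>
  state.cart.items.reduce((sum, item) => sum + item.quantity, 0);

console.log(selectCartItemCount(mockRootState)); // 2
```

Because the mock state is typed against the same `RootState` shape as the real store, the compiler flags drift between mocks and production slices.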
PHASE 4: API MOCKING IMPLEMENTATION
1. Simulate API responses for expected payloads
2. Structure according to hooks/ or API calls in pages/
3. Include both success and error response scenarios
4. Create helper functions for generating dynamic responses
PHASE 5: STORAGE MOCKING IMPLEMENTATION
1. Mock standard browser localStorage functionality
2. Add SecureLocalStorage mocks with test-friendly values
3. Include common storage keys used in the application
4. Implement storage event simulation for listeners
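A minimal in-memory stand-in for localStorage can be sketched as below; a SecureLocalStorage mock can wrap the same class. The storage key used in the example (`authToken`) is a placeholder for whatever keys your application actually uses:

```typescript
// In-memory localStorage mock mirroring the standard Storage API surface.
class LocalStorageMock {
  private store = new Map<string, string>();

  getItem(key: string): string | null {
    return this.store.has(key) ? this.store.get(key)! : null;
  }
  setItem(key: string, value: string): void {
    this.store.set(key, String(value));
  }
  removeItem(key: string): void {
    this.store.delete(key);
  }
  clear(): void {
    this.store.clear();
  }
  key(index: number): string | null {
    return Array.from(this.store.keys())[index] ?? null;
  }
  get length(): number {
    return this.store.size;
  }
}

const storage = new LocalStorageMock();
storage.setItem('authToken', 'test-token'); // hypothetical app key
```

For storage event simulation, tests can construct and dispatch a `StorageEvent` manually against listeners registered in the code under test.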
PHASE 6: VITEST SETUP INTEGRATION
1. Update vitest.setup.ts with necessary mocks:
- Browser APIs (matchMedia, IntersectionObserver)
- Storage APIs (localStorage, secureStorage)
- Redux store and API slices
- Notification API
- Geolocation API
- Feature-specific API mocks
2. Ensure consistent mock behavior across tests
3. Include cleanup functions to reset mocks between tests
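A vitest.setup.ts covering the points above might start like this sketch. The exact mock shapes are assumptions; extend them with whatever properties your components read:

```typescript
// vitest.setup.ts -- minimal sketch; extend mock shapes as your components require.
import { vi, afterEach } from 'vitest';

// Browser APIs that jsdom does not implement.
vi.stubGlobal('matchMedia', vi.fn().mockImplementation((query: string) => ({
  matches: false,
  media: query,
  addEventListener: vi.fn(),
  removeEventListener: vi.fn(),
})));

vi.stubGlobal('IntersectionObserver', vi.fn().mockImplementation(() => ({
  observe: vi.fn(),
  unobserve: vi.fn(),
  disconnect: vi.fn(),
})));

// Cleanup: reset all mocks between tests so no state leaks across files.
afterEach(() => {
  vi.clearAllMocks();
});
```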
PHASE 7: ADDITIONAL MOCK REQUIREMENTS
1. Add feature-specific API mocks (statistics, auth, orders)
2. Include common utility function mocks (alertUtils, apiUtils)
3. Implement properly typed mocks matching actual implementation
4. Add global error handling mocks
5. Create mocks for third-party libraries
EXPECTED OUTPUT
1. Mock files in specified folder structure with .mock.ts extension
2. Updated vitest.setup.ts with all necessary mock configurations
3. Mocks usable seamlessly in unit and integration tests
4. Documentation for each mock file's purpose and usage
5. Type-safe implementations matching actual code structure
Your task is to automate the process of generating and validating unit tests based on my project configuration by identifying untested files, creating structured test cases, and iterating through test execution until the tests for all listed files pass successfully. The focus should be on simplicity: avoid complex mocks, reduce TypeScript-specific configuration, and ignore linting issues to streamline the testing process. Do not stop until every file has its tests generated and passing.
## PHASE 0: INITIALIZATION
1. **Target Identification**
- Scan the project recursively to find files that do not have unit tests.
- List the files found.
- Prioritize testing order (components first, then hooks, then utilities).
- Maintain a progress tracker for completed tests.
## PHASE 1: TEST GENERATION LOOP
For each untested file, follow these steps until all tests pass:
### Step 1: Analyze the File
- Identify key functions and components that require testing.
- Determine necessary mocks but keep them minimal.
- Avoid over-engineering TypeScript types—keep test logic straightforward.
### Step 2: Mock Management (Keep It Simple)
- Use existing mocks if available.
- If needed, create **basic** and reusable mocks, avoiding unnecessary complexity.
- Place new mocks inside `__tests__/mocks/`, ensuring they are easy to maintain.
### Step 3: Write Test Cases
- Follow a clear structure for each test file:
- **For components**: Test rendering, interactions, and state changes.
- **For functions**: Test expected outputs and edge cases.
- **For hooks**: Test behavior under different conditions.
- Use `__tests__/[file].test.ts(x)` as the naming convention.
- Skip excessive TypeScript configurations—focus on testing behavior.
### Step 4: Run & Fix Tests
- Execute tests: `npm run test [test-file]`.
- If tests fail, analyze errors and make necessary fixes.
- Ignore linting warnings to avoid unnecessary distractions.
- Keep debugging iterations minimal—move on if a test is too complex.
### Step 5: Complete & Repeat
- Mark the current file as tested.
- Move to the next file in the queue.
- Repeat until all files have passing tests.
## KEY CONSTRAINTS
1. **DO NOT MODIFY SOURCE CODE**
- Only create or edit files inside `__tests__/`.
2. **IGNORE LINTING ISSUES**
- Skip formatting or lint-related problems.
3. **KEEP MOCKS SIMPLE**
- No over-engineered or deeply nested mocks.
- Prioritize existing mocks over creating new ones.
4. **SIMPLIFY TYPESCRIPT USAGE**
- Avoid unnecessary TypeScript constraints in tests.
- Focus on functional correctness over strict typing.
5. **IGNORE ALREADY TESTED FILES**
- Skip any file that already has a corresponding test file
## TESTING APPROACH
1. **React Components**: Use `render`, `screen`, and `fireEvent` from RTL.
2. **APIs & Hooks**: Use simple mocks for API calls and dependencies.
3. **Redux**: If needed, use a lightweight mock store for state-dependent tests.
4. **Next.js Features**: Mock `useRouter` and built-in Next.js functions if required.
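For point 3, a lightweight mock store can be a small dependency-free factory like the sketch below (the state shape is hypothetical). It records dispatched actions so tests can assert on them without a real Redux store:

```typescript
// Dependency-free mock store for state-dependent tests.
type Action = { type: string; payload?: unknown };

function createMockStore<S>(initialState: S) {
  const dispatched: Action[] = [];
  return {
    getState: (): S => initialState,
    dispatch: (action: Action): Action => {
      dispatched.push(action); // recorded for later assertions
      return action;
    },
    subscribe: (_listener: () => void) => () => {}, // no-op unsubscribe
    getDispatched: (): Action[] => dispatched,
  };
}

const store = createMockStore({ user: { loggedIn: false } });
store.dispatch({ type: 'user/login' });
```

Tests then assert on `store.getDispatched()` instead of inspecting internal reducer behavior.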
## SUCCESS CRITERIA
1. Each targeted file has a working test.
2. All tests pass when running `npm run test`.
3. No complex mocks or unnecessary TypeScript constraints.
4. No linting issues block progress.
5. Minimal debugging required to keep efficiency high.
UNIT TEST GENERATION/UPDATE WORKFLOW
PHASE 1: CHANGE ASSESSMENT
1. Identify affected files due to:
- New features added to components/files
- New logic implemented in existing code
- Folder structure changes affecting imports/paths
2. Recursively scan specified folders for changes
3. Determine which files need new tests vs. test updates
PHASE 2: TEST PLANNING
1. For each affected file:
- Check if existing test needs updating
- Identify new functionality requiring test coverage
- Map dependencies needing mock updates
- Determine test scope and requirements
PHASE 3: TEST IMPLEMENTATION
For each affected file:
1. Generate or update unit tests following vitest.setup.ts patterns
2. Create any required new mocks in __tests__/mocks/
3. Update vitest.setup.ts if new global mocks are needed
4. Follow existing test structure and patterns
5. Ensure component render tests are prioritized
PHASE 4: TEST VERIFICATION
For each implemented test:
1. Run individual test: npm run test [file_path]
2. Analyze any failures
3. Fix test implementation issues (not source code)
4. Verify test covers new functionality/logic
5. Rerun until test passes
6. Document any concerns or limitations
PHASE 5: MOCK MANAGEMENT
1. Create new mocks only when necessary
2. Store all mocks in __tests__/mocks/ with proper organization
3. Update existing mocks carefully to avoid breaking other tests
4. Ensure mocks are type-safe and match actual implementations
5. Register global mocks in vitest.setup.ts when appropriate
6. Do NOT create invalid mocks
PHASE 6: FINAL VERIFICATION
1. Run complete test suite: npm run test
2. Verify no regressions in existing tests
3. Address any integration issues between tests
4. Document test coverage and outcomes
CRITICAL CONSTRAINTS
1. Only modify:
- Test files in __tests__/
- Test setup in vitest.setup.ts
- Mock files in __tests__/mocks/
2. Never modify source files
3. Always use alias imports (@/components/...)
4. Use globally declared mocks consistently
5. Create reusable, scalable test utilities
6. Use proper TypeScript types (avoid any when possible)
TEST REQUIREMENTS
1. Component tests must include render logic test case
2. Render logic test must pass even if other tests fail
3. All tests must be compatible with existing test suite
4. Tests should verify specific functionality, not implementation details
5. New tests should follow existing patterns and conventions
6. Do NOT create invalid mocks
EXPECTED OUTPUT
1. New or updated test files saved in __tests__/ folder
2. All tests passing individually and as a suite
3. New mocks properly organized in __tests__/mocks/
4. No modifications to source files
5. Consistent use of alias imports and global mocks
6. Documentation of test implementation and coverage
7. List of all updated/created test files
TEST BUG FIXING WORKFLOW (Updated Approach)
PHASE 1: INITIAL ASSESSMENT
1. Run all tests to identify failing cases:
npm run test
2. List all failing test files along with error details.
3. Categorize failures by common issues (mocking, assertions, Redux, API middleware).
PHASE 2: SYSTEMATIC FIXING
For each failing test file:
1. Review & Analyze
- Identify failing test cases and their errors.
- Do NOT modify main components, business logic, or feature implementations.
- Check the corresponding component or Redux slice.
2. Fix & Improve
- Modify only test files and mocks.
- Update test cases, assertions, and mock setups.
- Ensure correct provider setup (Redux, API middleware, context).
- Fix assertions using toHaveAttribute, expect.stringContaining, etc.
- Handle errors and edge cases properly.
3. Verify the Fix
- Run the specific test file:
npm run test [test-file-path]
- If it passes, move to the next failing test.
- If it fails, refine the fix and retry.
4. Document Fixes
- Track common patterns and issues encountered.
- Ensure reusable mock implementations where applicable.
PHASE 3: REPEATED VERIFICATION
1. After fixing all failing tests, re-run the full test suite:
npm run test
2. If new failures appear, repeat the process until all tests pass.
COMMON ISSUES TO ADDRESS
- Style Testing: Use toHaveAttribute with expect.stringContaining.
- Redux Mocks: Ensure proper store setup and provider wrapping.
- API Middleware: Verify API mocks without modifying endpoints.
- Mock Function Implementation: Correctly stub dependencies.
- Error Handling & Edge Cases: Validate boundary conditions.
FIX PRIORITIZATION
1. Redux Store & Configuration Tests
2. Feature Slice Tests
3. Feature Component Tests (Only within test files)
TESTING COMMANDS
- Run a single test file:
npm run test [test-file-path]
- Run all tests:
npm run test
IMPORTANT GUIDELINES
- Do NOT modify main components or core features.
- Only modify test files and mocks.
- Do NOT change API endpoints (refer to app.config.ts for validation).
- Fix tests one by one in a structured manner.
- Ensure reusable mocks and correct test setups.
- Continue this process until all tests pass successfully.
VITEST UNIT TESTING FRAMEWORK FOR REACT-REDUX APPLICATIONS
PHASE 1: SETUP AND CONFIGURATION
- Import necessary testing libraries:
  import { describe, it, expect, vi, beforeEach } from 'vitest'
  import { render, screen, waitFor, fireEvent } from '@testing-library/react'
  import { renderHook } from '@testing-library/react-hooks'
- Mock redux store at module level:
  vi.mock('@/redux/store', async () => ({
    default: { getState: vi.fn(), dispatch: vi.fn(), subscribe: vi.fn() },
    useAppDispatch: vi.fn(() => vi.fn()),
    useAppSelector: vi.fn(selector => selector(mockState))
  }))
- Use component-specific mock setup before tests
- Reset mocks between tests with vi.clearAllMocks()
PHASE 2: REDUX STORE TESTING
- Create wrapper utilities for Redux components:
  const createWrapper = (preloadedState = {}) => ({ children }) => (
    <Provider store={configureStore({ reducer: { /* reducers */ }, preloadedState })}>
      {children}
    </Provider>
  )
- Test slice reducers in isolation
- Test selectors with various state configurations
- Mock dispatch and verify correct actions
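Because a slice reducer is a pure function, it can be exercised in isolation without any store at all. A sketch with a hypothetical counter slice (names and shape are placeholders for your real slices):

```typescript
// Hypothetical counter slice -- substitute your real slice's state and actions.
interface CounterState { value: number }
type CounterAction = { type: 'counter/increment' } | { type: 'counter/reset' };

const initialState: CounterState = { value: 0 };

function counterReducer(state: CounterState = initialState, action: CounterAction): CounterState {
  switch (action.type) {
    case 'counter/increment':
      return { value: state.value + 1 };
    case 'counter/reset':
      return initialState;
    default:
      return state;
  }
}

// Feed actions directly and assert on the returned state.
const next = counterReducer(initialState, { type: 'counter/increment' });
```

The same call-the-reducer-directly pattern extends to selectors: construct a plain state object and assert on the selector's return value.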
PHASE 3: RTK QUERY TESTING
- Use setupApiStore utility for RTK Query tests:
const { store, wrapper } = setupApiStore(api)
- Mock API endpoints consistently:
  const endpoint = api.endpoints.getData.initiate()
  const result = await endpoint(store.dispatch, store.getState, undefined)
  expect(result.data).toEqual(expectedData)
- Test both success and error paths
PHASE 4: COMPONENT TESTING
- Render with Provider for connected components:
render(<Component />, { wrapper: createWrapper() })
- Test UI interactions affecting Redux state
- Verify Redux actions trigger UI updates
- Use proper attribute assertions for styles:
expect(element).toHaveAttribute('style', expect.stringContaining('color: red'))
PHASE 5: MOCK IMPLEMENTATIONS
- Mock Redux hooks with specific implementations:
  vi.mocked(useAppSelector).mockImplementation(selector =>
    selector({ feature: { data: mockData } })
  )
- Mock API responses consistently:
  vi.mocked(api.endpoints.getData.useQuery).mockReturnValue({
    data: mockData,
    isLoading: false,
    isError: false
  })
- Test edge cases with boundary values:
  ['', null, undefined, {}, []].forEach(value => {
    it(`handles ${JSON.stringify(value)} correctly`, () => {
      // Test with boundary value
    })
  })
PHASE 6: TEST STRUCTURE
- Use describe blocks for logical grouping:
  describe('Component', () => {
    describe('when data is loading', () => {
      // Setup loading state
      it('shows loading indicator', () => {})
    })
    describe('when data is loaded', () => {
      // Setup loaded state
      it('renders content correctly', () => {})
    })
    describe('when error occurs', () => {
      // Setup error state
      it('shows error message', () => {})
    })
  })
- Structure assertions from general to specific
PHASE 7: COMMON TEST PATTERNS
- Test initial render state
- Test state changes after user interactions
- Test API loading, success, and error states
- Test Redux state synchronization
- Test form submission and validation
- Test conditional rendering logic
EXPECTED OUTPUT
- Comprehensive test coverage
- Consistent testing patterns
- Reliable mock implementations
- Clear test structure following phases above
services:
  watchtower:
    image: containrrr/watchtower
    container_name: watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --interval 30 gts-ecommerce-frontend