Fix Your Tests - Best Practices
Reference This Guide to Maximize Test Debugging Efficiency
Understanding Test Result Statuses
| Result Status | Description | Result Accessibility |
| --- | --- | --- |
| Passed | All steps executed successfully without any issues. | Accessible |
| Passed with Warnings | Test case passed with minor issues (e.g., retries, slow load times). | Accessible |
| Failed | One or more steps failed due to functional/UI errors. | Accessible |
| Incomplete | Tests did not complete (e.g., crash, timeout, or manual stop). | Not Accessible |
| Not Started | Tests did not start. | Not Accessible |
Review Test Run Results
Access Detailed Reports: Utilize Sofy's Lab Run or Scheduled Run views to access comprehensive reports, including step-by-step execution details, screenshots, and logs.
Identify Failure Points: Look for steps marked as failed or with warnings. Sofy's visual indicators (e.g., red crosses for failures) help pinpoint problematic steps quickly.
Analyze Failure Reasons: Review the 'Reason(s)' section to understand why a test failed—be it due to missing elements, incorrect actions, or unexpected UI changes.
Utilize Debugging Tools
Compare to Recording: Use this feature to juxtapose the current test run with the baseline recording, highlighting any deviations in UI or behavior.
Element Explorer: Inspect UI element properties such as class names, resource IDs, and hierarchy to verify that the correct elements were interacted with during the test.
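If you also debug with a script-based driver such as Appium, the same properties can be inspected programmatically. A minimal sketch, assuming an already-connected Appium session and a hypothetical Android resource ID (com.example.app:id/submit):

```python
from appium.webdriver.common.appiumby import AppiumBy

def inspect_element(driver):
    """Print the kind of properties Element Explorer surfaces for one element.

    `driver` is assumed to be an active Appium session; the resource ID below
    is hypothetical and only for illustration.
    """
    element = driver.find_element(AppiumBy.ID, "com.example.app:id/submit")
    for attr in ("class", "resource-id", "text", "bounds"):
        print(attr, "=", element.get_attribute(attr))

    # Dump the full view hierarchy, useful when an element is missing entirely.
    print(driver.page_source)
```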
Reproduce the Failure Manually
Validating the failure manually ensures it's not a false positive:
Replicate Test Steps: Run the test case in the Lab on the same build used in the scheduled run to see if the issue persists.
Check for Intermittent Issues: Some failures might be due to transient issues like network latency or temporary glitches. Reproducing the test helps confirm the consistency of the failure.
Identify Common Failure Causes
- Dynamic Elements: Elements that change frequently can cause locator issues.
- Example: A promotional banner shows "$4 TEEN BURGER® (ONTARIO ONLY)" but changes to "$3 FISH BURGER® (QUEBEC ONLY)" based on location.
- Fix: Use stable attributes like resource-id, or match a text pattern instead of the full string, e.g., XCUIElementTypeStaticText[contains(@value, 'BURGER')] rather than an exact text match (see the sketch after this list).
- Timing Issues: Delays or asynchronous operations may lead to test failures.
- Example 1: A popup appears after a slight delay, and the test fails because it tries to click before the popup is ready.
- Fix: Use a Time Delay or Dynamic Wait for Element step to wait until the popup has fully loaded.
- Example 2: After performing an action like tapping a button, a popup/modal may or may not appear depending on the backend response, app state, or user data. If the next step blindly tries to interact with it, the test fails.
- Fix: Add a condition to the step so the test only interacts with the popup when it actually appears.
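These fixes translate directly to script-based frameworks as well. A minimal Appium (Python client) sketch, where the locators, timeouts, and the popup_close accessibility ID are illustrative assumptions rather than values from any particular app:

```python
from appium.webdriver.common.appiumby import AppiumBy
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

# Partial-text locator: survives copy changes such as
# "$4 TEEN BURGER® (ONTARIO ONLY)" -> "$3 FISH BURGER® (QUEBEC ONLY)".
BANNER = (AppiumBy.XPATH, "//XCUIElementTypeStaticText[contains(@value, 'BURGER')]")
# Hypothetical locator for the optional popup handled below.
POPUP_CLOSE = (AppiumBy.ACCESSIBILITY_ID, "popup_close")

def tap_banner(driver, timeout=10):
    """Dynamic wait: block until the banner exists instead of tapping blindly."""
    banner = WebDriverWait(driver, timeout).until(
        EC.presence_of_element_located(BANNER))
    banner.click()

def dismiss_popup_if_present(driver, timeout=5):
    """Conditional step: the popup may or may not appear, so its absence is not a failure."""
    try:
        WebDriverWait(driver, timeout).until(
            EC.presence_of_element_located(POPUP_CLOSE)).click()
    except TimeoutException:
        pass  # No popup this run; continue with the flow.
```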
Apply Targeted Fixes
- Update Locators: Modify selectors to be more resilient to UI changes.
- Example: A test breaks because its XPath is based on the full element hierarchy and the app layout changed slightly.
- Fix: Use relative XPath (//button[text()='Submit']) instead of absolute XPath (/frame/layout/button[1]).
- Implement Wait Strategies: Use explicit waits to handle timing-related issues.
- Isolate Test Cases: Ensure tests are independent to prevent cascading failures.
- Example: Multiple test cases fail due to the same login issue.
- Fix: Create a Reusable Template for login steps. If the login process changes, just update the template once (see the sketch after this list).
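In script-based suites, the equivalent of a Reusable Template is a single shared helper that every test calls, so a login change is fixed in one place. A minimal sketch using the Appium Python client, where the element locators, the dashboard landmark, and the credentials are placeholder assumptions:

```python
from appium.webdriver.common.appiumby import AppiumBy
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

def login(driver, username, password, timeout=15):
    """Shared login flow reused by every test, analogous to a Sofy Reusable Template."""
    wait = WebDriverWait(driver, timeout)
    # Relative locators keyed on stable attributes: resilient to layout changes,
    # unlike an absolute path such as /frame/layout/button[1].
    wait.until(EC.presence_of_element_located(
        (AppiumBy.XPATH, "//*[@resource-id='com.example.app:id/username']"))).send_keys(username)
    driver.find_element(
        AppiumBy.XPATH, "//*[@resource-id='com.example.app:id/password']").send_keys(password)
    driver.find_element(AppiumBy.XPATH, "//*[@text='Submit']").click()
    # Explicit wait for a post-login landmark before the caller continues.
    wait.until(EC.presence_of_element_located((AppiumBy.ACCESSIBILITY_ID, "dashboard")))
```

If the login screen changes, only this helper needs updating; the tests that call it stay untouched.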
Enhance Test Stability
- Use Assertions Wisely: Validate critical functionalities without over-constraining tests.
- Example: Overusing "Component Not Present" causes unnecessary failures if app behavior changes slightly.
- Fix: Only assert critical elements. Use soft validation where presence isn't guaranteed (see the sketch below).
- Handle Flaky Tests: Identify and refactor tests that fail intermittently.
Flaky tests are those that fail sometimes and pass other times without any code or test change. They reduce trust in automation and increase debug time. In mobile apps, flakiness is often caused by timing issues, network delays, or dynamic elements.
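One way to apply both ideas, soft validation for non-critical elements and fewer intermittent failures, is a non-fatal check that logs a warning instead of failing the run. A minimal sketch, assuming an active Appium session; the promo-banner locator is illustrative:

```python
import logging

from appium.webdriver.common.appiumby import AppiumBy
from selenium.common.exceptions import NoSuchElementException

def soft_check_present(driver, locator, name):
    """Record a warning instead of failing when a non-critical element is missing."""
    try:
        driver.find_element(*locator)
        return True
    except NoSuchElementException:
        logging.warning("Soft check failed: %s not found", name)
        return False

# Critical elements keep hard assertions; optional ones use the soft check, e.g.:
# soft_check_present(driver, (AppiumBy.ACCESSIBILITY_ID, "promo_banner"), "promo banner")
```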
Best Practices to Reduce Flakiness
- Use stable locators to avoid element mismatch.
- Always wait for elements to load completely before interacting with them. Use "Dynamic Wait" instead of fixed delays.
- Add waits after any navigation or state change (e.g., after login, before tapping dashboard).
- Regularly check flaky runs in the Sofy Lab Run and update the flow as needed.
- Build test templates for common actions (like login or checkout). If something changes, update the template once instead of fixing multiple tests.
- Track which tests fail intermittently and fix them immediately. Don't ignore tests that "sometimes work."
- Write clear descriptions of what each test does so team members understand the expected behavior.
- A test passes in a Lab Run but fails in a Scheduled Run: ensure your scheduled runs use the same device type and OS version as your successful lab tests; add extra wait time before key actions, since scheduled runs may execute faster or slower than manual testing; and add an app restart step at the beginning of scheduled tests so they start from a clean state instead of continuing from a previous test session (see the sketch after this list).
- Regularly review tests and update them to align with application changes.
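The clean-state restart mentioned for scheduled runs maps to a relaunch step at the start of every test. A minimal sketch with the Appium Python client, where the bundle/package identifier is a placeholder assumption:

```python
APP_ID = "com.example.app"  # placeholder bundle/package identifier

def restart_app(driver):
    """Terminate and relaunch the app so each run starts from a clean state
    rather than continuing from a previous test session."""
    driver.terminate_app(APP_ID)
    driver.activate_app(APP_ID)
```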