Tech/Engineering

Closing the UI Gap: A Testing Strategy for Enterprise Workloads

Roshni Devadiga, Principal Engineer, Enterprise Workload

Problem statement and objectives

Enterprise Workloads UIs are complex, data-heavy, and critical to customers. Over time, our teams leaned heavily on API automation — it was fast, reliable, and easy to scale. Meanwhile, manual UI testing naturally decreased due to bandwidth constraints.

This shift, however, led to a growing issue: UI-specific defects in state management, component interactions, and workflow logic began slipping into production.

We needed a sustainable testing strategy that would:

  • Restore confidence in the UI without reintroducing heavy manual cycles.
  • Scale with feature velocity and team growth.
  • Integrate with existing tooling to reuse prior automation investments.

Testing strategy overview

We adopted a two-layered approach centered on Cypress — balancing speed, reliability, and realism:

  • Component Tests (CT) for deep, fast, deterministic coverage of pages and component interactions using mocked APIs.
  • End-to-End UI automation (E2E) for primary, business-critical workflows that must run end-to-end in real environments.

Together, these layers bridge the confidence gap: CTs cover breadth and interaction logic, while a targeted E2E layer validates real-world workflows and environment nuances.

[Figure: CT vs E2E test distribution (pie chart)]

Why we shifted from UTs to CTs

We initially relied on Jest and Enzyme for unit tests. While effective for isolated validation, this approach introduced three key friction points:

  • Integration blind spots: Unit tests validate isolated parts, but confidence drops once those parts interact on a page.
  • High boilerplate: Complex pages (5–10 components) required extensive, brittle mocks for each component.
  • Environment fidelity: UTs often ran in jsdom-like environments. Cypress CT runs in a real browser, which better reflects production behavior.

Moving to Cypress Component Testing let us test pages as cohesive units while rendering in an actual browser. We saw roughly 50–60% less test code with higher confidence because we tested interactions and integrated behavior rather than isolated pieces.
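To make the page-as-a-unit idea concrete, here is a minimal Cypress Component Test sketch. It assumes a React app with Cypress component testing already configured (so cy.mount is available); the page name, button text, and test ids are hypothetical, not from our codebase.

```tsx
// Minimal Cypress Component Test sketch; WorkloadSummaryPage and its
// test ids are hypothetical examples.
import { WorkloadSummaryPage } from './WorkloadSummaryPage';

describe('WorkloadSummaryPage', () => {
  it('updates the table when a status filter is applied', () => {
    // cy.mount renders the page in a real browser, so filter state,
    // table rendering, and component interactions are exercised together.
    cy.mount(<WorkloadSummaryPage />);

    cy.contains('button', 'Failed only').click();
    cy.get('[data-testid="workload-row"]').each(($row) => {
      cy.wrap($row).should('contain.text', 'Failed');
    });
  });
});
```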

Component testing workflow and architecture

Our goals for Component Testing were determinism, speed, and realism. To achieve these goals, we standardized on:

  • MSW (Mock Service Worker) to intercept network calls and return controlled responses.
  • Page-level mounting to render whole pages (or complex composites) so inter-component behavior is exercised.
  • Scenario matrices to codify multiple API response combinations (empty, error, partial, large) and assert the resultant UI.

These tests run as part of CI for every merge request, serving as a quality gate. Merges are blocked unless CTs pass, ensuring consistent >85% coverage across features and driving UI quality earlier in the development cycle.
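The sketch below shows how the MSW handlers and a scenario matrix fit together in a single CT spec. It uses MSW v2 syntax; the endpoint path, response shapes, page component, and expected strings are all hypothetical stand-ins for our per-domain handler libraries.

```tsx
// Sketch: per-domain MSW handlers plus a scenario matrix driving one CT spec.
// Endpoint, payloads, and BackupJobsPage are hypothetical examples.
import { http, HttpResponse } from 'msw';
import { setupWorker } from 'msw/browser';
import { BackupJobsPage } from './BackupJobsPage';

// One handler factory per API scenario the UI must handle.
const jobsHandler = (scenario: 'empty' | 'error' | 'partial') =>
  http.get('/api/v1/backup-jobs', () => {
    if (scenario === 'error') return new HttpResponse(null, { status: 500 });
    if (scenario === 'empty') return HttpResponse.json({ jobs: [] });
    return HttpResponse.json({ jobs: [{ id: 'job-1', status: 'Running' }] });
  });

const worker = setupWorker();

// Scenario matrix: each API shape maps to the UI state we assert.
const scenarios = [
  { scenario: 'empty', expected: 'No backup jobs yet' },
  { scenario: 'error', expected: 'Something went wrong' },
  { scenario: 'partial', expected: 'job-1' },
] as const;

describe('BackupJobsPage scenario matrix', () => {
  before(() => worker.start());
  afterEach(() => worker.resetHandlers());

  scenarios.forEach(({ scenario, expected }) => {
    it(`renders the ${scenario} state`, () => {
      worker.use(jobsHandler(scenario));
      cy.mount(<BackupJobsPage />);
      cy.contains(expected).should('be.visible');
    });
  });
});
```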

Why we still needed E2E UI automation

CTs give excellent coverage, but some failures only surface in real environments: configuration quirks, auth/integration edges, and multi-system workflows. We added a thin, reliable E2E layer to validate the primary, business-critical paths end-to-end.

When designing our E2E strategy, we recognized two important factors:

  • Certain essential steps, like provisioning a backup proxy or creating a VM, occur outside the browser.
  • Crucially, our existing rich API automation in PyTest was perfectly suited to manage these out-of-browser operations.

To bridge the gap, we built a hybrid workflow connecting Cypress with our existing Python-based automation.

End-to-end hybrid workflow: Cypress + automation server + PyTest

We implemented a small FastAPI automation server that exposes HTTP endpoints. Cypress calls these endpoints to trigger existing PyTest automation functions, receives context back (IDs, credentials, connection info), and proceeds with UI steps.
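On the Cypress side, the pattern looks roughly like the sketch below: request the out-of-browser setup, consume the returned context, then continue with UI steps. The automation server URL, endpoint path, payload, and response fields are hypothetical; only the overall request-then-drive-the-UI pattern reflects our setup.

```typescript
// Sketch of the hybrid E2E pattern: Cypress asks the FastAPI automation
// server to run existing PyTest setup, then drives the UI with the context
// it returns. URL, payload, and selectors are hypothetical examples.
describe('Backup proxy provisioning (E2E)', () => {
  it('provisions a proxy via PyTest, then verifies it in the UI', () => {
    cy.request('POST', 'https://automation.internal/api/provision-backup-proxy', {
      environment: Cypress.env('TARGET_ENV'),
    }).then(({ body }) => {
      // Context returned by the automation server (IDs, names, connection info).
      const { proxyId, proxyName } = body;

      // Continue with browser steps against the real environment.
      cy.visit('/protection/proxies');
      cy.contains(proxyName).should('be.visible').click();
      cy.url().should('include', proxyId);
    });
  });
});
```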

[Figure: Hybrid workflow: Cypress calls the FastAPI automation server, which triggers PyTest and returns context to the UI test]

Reuse through a shared Page Object Model (POM)

To improve maintainability, we adopted a shared Page Object Model (POM) used by both CT and E2E layers.

Benefits:

  • Single source of truth for selectors, reducing brittleness during UI changes.
  • Shared actions across CT and E2E, minimizing duplication and drift.
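As an illustration of the shared POM, here is a minimal sketch; the class, selectors, and action are hypothetical. Both the CT specs (after cy.mount with mocked APIs) and the E2E specs (after cy.visit against a real environment) import the same object, so a selector change is made exactly once.

```typescript
// Sketch of a page object shared by CT and E2E specs.
// Class name, test ids, and actions are hypothetical examples.
export class BackupJobsPage {
  // Selectors live in one place, reducing brittleness on UI changes.
  jobsTable() {
    return cy.get('[data-testid="jobs-table"]');
  }

  rowFor(jobName: string) {
    return this.jobsTable().contains('tr', jobName);
  }

  // Shared actions reused by both CT and E2E specs.
  retryJob(jobName: string) {
    this.rowFor(jobName).within(() => {
      cy.contains('button', 'Retry').click();
    });
  }
}

export const backupJobsPage = new BackupJobsPage();
```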

CI/CD, reporting, and traceability

  • Our Jenkins pipelines execute CT on every merge request, while scheduled E2E suites run against target environments.
  • Results are automatically published via internal channels and email for immediate visibility.
  • Integration with Jira/Xray links automated runs to test cases and plans, ensuring traceability across features.

Results and impact

  • Coverage: >85% via CT across most features, enforced by CI quality gates.
  • Confidence: E2E surfaced environment and integration issues that mocks could not capture.
  • Efficiency: ~50–60% less UI test code compared to Enzyme-based unit tests.
  • Defect containment: Significant reduction in UI regressions reaching production.

Practical tips and lessons learned

  • Keep E2E lean: focus on business-critical, happy-path flows; shift edge cases to CT.
  • Stabilize selectors: use accessible roles/labels to reduce flakiness and aid accessibility (see the sketch after this list).
  • Own your mocks: maintain MSW handler libraries per domain to keep CTs expressive and fast.
  • Instrument environments: add logs/identifiers to UIs and APIs for triage speed.
  • Parallelize: split CT suites by domain for fast feedback on merge requests.
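For the selector-stability tip above, a minimal sketch of role/label-based queries is shown below. It assumes @testing-library/cypress is installed (which provides cy.findByRole and cy.findByLabelText); the article does not name a specific library, and the labels shown are hypothetical.

```typescript
// Sketch of role/label-based selectors, assuming @testing-library/cypress.
// Button names, labels, and messages are hypothetical examples.
it('creates a protection policy', () => {
  cy.findByRole('button', { name: /new policy/i }).click();
  cy.findByLabelText(/policy name/i).type('Nightly VMs');
  cy.findByRole('button', { name: /save/i }).click();
  cy.findByRole('alert').should('contain.text', 'Policy created');
});
```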

Conclusion and key metrics

Adopting Cypress for both component and end-to-end testing established a scalable, maintainable, and high-confidence UI quality program for Enterprise Workloads.

  • CT coverage: >85% across features, enforced by merge gating.
  • Test authoring efficiency: 50–60% reduction in UI test code compared to legacy unit tests.
  • Production quality: measurable reduction in UI regressions escaping to production.

This approach balances speed and fidelity, fits enterprise delivery constraints, and scales with team growth and feature velocity.