Atlassian Forge Custom UI Testing und Asynchronität

Description

In his devjobs.at TechTalk, Daniel Kleißl of SEQIS talks about a quirk of Forge in UI testing and shows in a live demo how the team solved this problem.

Video Summary

In “Atlassian Forge Custom UI Testing und Asynchronität,” Daniel Kleißl (SEQIS Group GmbH) explains that Forge Custom UI runs in an Atlassian sandbox and importing the Forge Bridge breaks a normal browser session, preventing straightforward local execution. He demonstrates a practical testing setup: provide a mock/stub of the Forge Bridge in a file loaded before other imports, use React Testing Library to render, and mock bridge responses like a regular API. He then shows how Mocha can swallow async errors and how enabling Node’s unhandled-rejections strict mode yields clear stack traces—actionable guidance for making Forge app tests reliable and informative.

Atlassian Forge Custom UI Testing and Asynchronicity: Bridge Mocks, React Tests, and Node Flags — Key Lessons from Daniel Kleißl’s Session

Why this talk stands out: A focused, engineering-first perspective

In “Atlassian Forge Custom UI Testing und Asynchronität,” Daniel Kleißl (SEQIS Group GmbH) chose a refreshingly practical route: a narrow but crucial topic that many Forge teams hit early—how to test Custom UI reliably when the browser environment isn’t standard and async errors go missing in the test runner.

From our DevJobs.at editorial seat, we appreciated the engineering clarity. Instead of a general framework tour, Kleißl explained exactly what broke, why it broke, and how his team at ratzfatz.eu (the development department of SEQIS) fixed it while pushing their Forge app “Intelligent Risk Assessment – Journey to Rome” to marketplace readiness. His anchor metaphor—“Many roads lead to Rome, but maybe Rome isn’t the destination”—applies neatly to framework defaults: they work if you follow the paved path, but if you need to go elsewhere, you’ll have to adapt.

Who spoke—and why the perspective is hands-on

Daniel Kleißl has been with SEQIS for about two years and serves as Lead Developer for the Atlassian Forge app “Intelligent Risk Assessment – Journey to Rome.” He is concluding his bachelor’s studies at FH Campus Wien, underscoring that this is an active, production-minded journey rather than a theoretical exercise. The upshot: the testing recommendations come from daily practice of shipping a real Forge app.

Forge in short: Platform value—and the testing trapdoor

Atlassian describes Forge as a “Serverless App Development Platform” for Atlassian Cloud products—namely Confluence Cloud, Jira Cloud, and Jira Service Management Cloud. The appeal is clear: hosting, runtime, and integration are provided by Atlassian, letting third-party developers build React-based apps and bring them to the marketplace without worrying about infrastructure.

That’s the promise. The testing reality is where Kleißl zoomed in.

Architecture primer: Runtime, UI Kit, Custom UI—and the Bridge

Kleißl breaks the architecture down into:

  • Runtime: Where Atlassian manages execution.
  • UI Kit: Suited for simpler UI needs.
  • Custom UI: A sandboxed environment where you can build a full React app.

For a Custom UI app to talk to the runtime and product instance, Forge provides the Bridge. It links the sandboxed React app with the Atlassian-side context. That separation is powerful—and becomes the main challenge when you try to render and test your app outside the Atlassian environment.

The core issue: a non-standard browser environment

Kleißl’s demo makes the problem tangible. A minimal React app renders fine locally—a box and an image. The moment the Forge Bridge import is enabled, the local browser goes blank and the console lights up with a high-signal error:

Unable to establish a connection with the custom UI bridge. If you are trying to run your app locally, Forge apps only work in the context of Atlassian products.

What’s happening? Importing the Bridge triggers a connection attempt back to the Atlassian context. In a normal local browser, that context does not exist—so the app fails to render. If the app can’t boot, you can’t test its UI.
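
To make the failure mode concrete, here is a minimal sketch of the kind of Custom UI component involved. The component and resolver names (RiskPanel, getRisks) are illustrative assumptions rather than code from the talk; only the '@forge/bridge' import and its invoke call reflect the actual Forge API:

```tsx
// Minimal sketch of an illustrative Custom UI component. The '@forge/bridge'
// import alone is what trips up a plain local browser: the bridge tries to
// reach the surrounding Atlassian context, which does not exist locally, and
// the app fails with the error quoted above.
import React, { useState } from 'react';
import { invoke } from '@forge/bridge';

export function RiskPanel() {
  const [loading, setLoading] = useState(true);

  const handleLoad = async () => {
    // invoke() asks the Forge runtime (via the bridge) to run a resolver.
    await invoke('getRisks');
    setLoading(false);
  };

  return (
    <div>
      <button onClick={handleLoad}>Load risks</button>
      {!loading && <div>Risks loaded</div>}
    </div>
  );
}
```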

The testing consequence: You must mock the Bridge

When an import alone expects a fully present Atlassian context, test code needs to provide an alternative. Kleißl’s approach hinges on three moves:

  • Use React Testing Library to render components and assert UI states.
  • Introduce a mocking framework to replace the Forge Bridge.
  • Create a Bridge stub in a dedicated file that is imported before any other module that references the Bridge.

That import order is the non-negotiable piece: the stub must be in place before any code pulls in the Bridge. Then you treat Bridge calls like an ordinary API—mock responses per test, assert the resulting UI. The authoritative list of endpoints and capabilities lives in the Forge documentation; your mocks should mirror that surface.
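
What such a dedicated stub file can look like is sketched below. The talk does not name the mocking framework; this sketch assumes mock-require for swapping out the '@forge/bridge' module, and the helper names (mockInvokeResponse, resetBridgeMocks) are invented for illustration:

```typescript
// test/bridgeStub.ts (hypothetical): swap out '@forge/bridge' before any other
// module has a chance to load the real bridge. mock-require is one possible
// choice; the talk only speaks of "a mocking framework".
import mock from 'mock-require';

// Responses registered per test, keyed by the invoked resolver name.
const responses = new Map<string, unknown>();

export function mockInvokeResponse(functionKey: string, response: unknown): void {
  responses.set(functionKey, response);
}

export function resetBridgeMocks(): void {
  responses.clear();
}

// Register the replacement module. Because the substitution only affects
// modules loaded afterwards, this file must be imported before anything that
// (transitively) imports the Forge Bridge.
mock('@forge/bridge', {
  invoke: async (functionKey: string): Promise<unknown> => {
    if (!responses.has(functionKey)) {
      throw new Error(`No mocked bridge response registered for "${functionKey}"`);
    }
    return responses.get(functionKey);
  },
});
```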

Once the stub is wired correctly, the rest of the testing flow feels familiar: render, fire events, await state transitions, assert the UI. The special environment fades into the background.
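
Assuming the illustrative RiskPanel component and bridge stub sketched above, plus Chai for assertions and a jsdom-backed DOM for React Testing Library, a test file could then look entirely conventional. The only Forge-specific detail is that the stub import comes first:

```tsx
// test/RiskPanel.test.tsx (hypothetical): the stub is imported before the
// component, so the component never sees the real '@forge/bridge'.
// describe / it / afterEach are provided as globals by the Mocha runner.
import { mockInvokeResponse, resetBridgeMocks } from './bridgeStub';
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import { expect } from 'chai';
import { RiskPanel } from '../src/RiskPanel';

describe('RiskPanel', () => {
  afterEach(resetBridgeMocks);

  it('reveals the hidden div once the bridge call resolves', async () => {
    // Treat the bridge like any other API: register the response per test.
    mockInvokeResponse('getRisks', ['risk A', 'risk B']);

    render(<RiskPanel />);
    fireEvent.click(screen.getByRole('button', { name: 'Load risks' }));

    // findByText waits for the async state transition before asserting.
    expect(await screen.findByText('Risks loaded')).to.exist;
  });
});
```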

From “blank screen” to a testable app: the short path that matters

Kleißl walks the sequence:

  1. A minimal React component renders locally without issue.
  2. Importing the Forge Bridge kills local rendering with the console error quoted above.
  3. Adding an early, central Bridge stub unblocks the entire test harness, allowing the app to initialize and react.

This sequence underscores a simple truth: without a deterministic Bridge mock, every other testing detail is moot. The most value lies in getting this foundation right.

Asynchronicity under test: when Mocha “swallows” errors

With a working Bridge stub, the next challenge is asynchrony. Kleißl notes that Mocha “swallows” async errors. He’s explicit that this is not a bug—it’s “Works As Designed.” Unhandled promise rejections are not rethrown as exceptions that fail the test; they show up as warnings or are otherwise absorbed by the runner’s default behavior.

Why does that matter? Because it makes debugging slow. Kleißl wants to cover edge cases and common programmer mistakes—typos, mis-referenced functions, and rejections from async calls. If the original cause is obscured, you end up chasing ghosts.

A common UI flow that surfaces the problem

Kleißl walks through a very typical UI pattern:

  • A button triggers an event handler.
  • The handler calls an async function.
  • Once resolved, it sets a loading flag to false, revealing a previously hidden div.

The test clicks the button via React Testing Library, then “waits” for the div to appear—this wait serves as the assertion. When the async function rejects (or a trivial typo prevents it from being called), Mocha, by default, fails the test with a generic “waited too long” output. The root cause—the unhandled rejection—never surfaces in the stack trace.
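
Sticking with the same illustrative setup, the failure path can be reproduced by simply not registering a response for getRisks, so the stubbed bridge call rejects inside the click handler:

```tsx
// Hypothetical failure-path test; describe / it are Mocha globals here.
// No response is registered for 'getRisks', so the stubbed invoke() rejects
// inside the button's click handler.
import './bridgeStub';
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import { RiskPanel } from '../src/RiskPanel';

describe('RiskPanel, failure path', () => {
  it('should surface the rejection, not just a timeout', async () => {
    render(<RiskPanel />);
    fireEvent.click(screen.getByRole('button', { name: 'Load risks' }));

    // The div never appears, so this wait eventually times out. With the
    // defaults, the report stops at the generic timeout; the unhandled
    // rejection that caused it stays invisible until Node is switched to
    // strict rejection handling.
    await screen.findByText('Risks loaded');
  });
});
```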

What we want instead: visible, source-level stack traces

Kleißl’s goal is to see the real failure: an “Unhandled Promise Rejection” with a stack trace that points toward the source. That’s the shortest path to fix—whether the bug is in the test setup, the mock, the component code, or the async implementation.

The pragmatic fix: Make Node strict about unhandled rejections

After working through community resources, Kleißl credits a conversation with a large language model for surfacing the exact lever: a Node runtime flag that enables a strict mode for unhandled promise rejections. With this mode enabled, the test output includes stack traces for async failures—the precise feedback loop he needed.

In his demo, starting the exact same test run with the additional flag immediately yields a clear stack trace. Instead of a generic timeout while waiting for a div, you get an “Unhandled Promise Rejection” and a path to the offending code. That clarity is worth a lot in day-to-day debugging.
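
Node's strict mode for unhandled rejections is enabled with the runtime flag --unhandled-rejections=strict. The talk does not prescribe how to wire it into the test run; the following .mocharc sketch is one assumed setup, relying on Mocha's --node-option pass-through (setting NODE_OPTIONS=--unhandled-rejections=strict in the environment is an alternative):

```js
// .mocharc.cjs (hypothetical): forward the strict-rejection flag to the Node
// process Mocha runs the tests in. Assumes a Mocha version that supports
// --node-option; otherwise export NODE_OPTIONS=--unhandled-rejections=strict.
module.exports = {
  // Becomes node --unhandled-rejections=strict, so unhandled promise
  // rejections fail loudly with a stack trace instead of being swallowed.
  'node-option': ['unhandled-rejections=strict'],
};
```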

The broader point is straightforward: sometimes a small runtime switch transforms testing from “opaque” to “actionable.”

“Rome vs. Constantinople”: Question the defaults when your goal differs

Kleißl’s analogy resonates: frameworks often streamline you toward a default destination—“Rome.” If that’s not where you need to go, you’ll have to choose different roads. In Custom UI testing for Forge, two adjustments are essential:

  • The local browser is not a Forge environment—mock the Bridge early and decisively.
  • Mocha’s default handling of async failures is not strict—configure Node so unhandled rejections become visible exceptions.

Neither step is a heavy lift, but both are critical if you view tests as a quality instrument rather than a light smoke test.

A practical checklist that reflects the session

Translating Kleißl’s demos into a repeatable practice, we recommend the following checklist:

  1. Ground yourself in Forge architecture.
  • Custom UI runs in a sandbox, communicating with the runtime via the Bridge.
  • Your local browser will not provide that Bridge context by itself.
  2. Establish the React Testing Library baseline.
  • Use familiar render and query patterns.
  • Assert UI states that should appear after user events.
  3. Mock the Forge Bridge.
  • Create a Bridge stub in a dedicated file.
  • Import that stub before any module that imports or references the Bridge.
  • Treat Bridge interactions as an API and mock responses deliberately per test.
  4. Make async failures visible.
  • Accept that Mocha does not throw unhandled rejections by default.
  • Enable strict handling of unhandled rejections in the Node runtime so stack traces appear in test output.
  5. Cover edge cases deliberately.
  • Simulate rejections from async functions.
  • Validate that simple programmer mistakes (typos, miswired imports) are surfaced in your setup.
  6. Use Forge documentation as the source of truth.
  • Mirror its described Bridge endpoints in your test doubles.

Patterns you can reuse beyond Forge

Kleißl’s examples are intentionally minimal, yet they yield reusable testing patterns:

  • “Button triggers async handler”: Test the full chain from event to UI response—including the failure path.
  • “Conditional rendering after loading”: Use the testing library’s wait capabilities to assert post-async UI states.
  • “Expose hidden errors”: Force visibility of unhandled rejections so debugging time shrinks dramatically.

These patterns are widely applicable in React testing. In Forge, because of the sandboxed environment and Bridge, they are non-optional.

Boundaries and clarifications from the session

Kleißl sticks to the facts and avoids overgeneralizing. We highlight three important clarifications:

  • Mocha’s behavior is by design. If you want strict error propagation for unhandled rejections, flip the Node runtime switch.
  • The Bridge stub is not negotiable. Once you import the Forge Bridge, it expects the Atlassian environment—tests won’t boot without a stub.
  • The Forge documentation dictates which endpoints are available. Your mocks should align with that surface.

Why the effort pays off

If you treat UI tests as an integration tool, the payoff is tangible:

  • Faster root-cause analysis: Instead of vague timeouts, stack traces point you to the origin.
  • More robust pipelines: Tests that cover failure paths reduce regressions as the app grows.
  • Better developer experience: Once set up, the testing workflow feels like conventional React testing—even in Forge’s special environment.

Where to go next—according to Kleißl

Kleißl points to the Forge documentation as the entry point. He also mentions a separate “10-More-Things” talk in which he walks through the framework from “start to marketplace release,” offering a comprehensive overview. Finally, his team’s app (“Intelligent Risk Assessment – Journey to Rome”) is available in the Atlassian Marketplace and can be installed for free—useful if you want to see what a shipped Forge app looks like.

Closing thoughts: Precise levers for reliable tests

“Atlassian Forge Custom UI Testing und Asynchronität” by Daniel Kleißl (SEQIS Group GmbH) is a concise set of levers that make Custom UI testing workable:

  • Mock the Forge Bridge early so the app can initialize in tests.
  • Make Node strict about unhandled rejections so async errors become first-class citizens in your test output.

Taken together, these adjustments recognize—and work with—Forge’s architecture rather than fighting it. If “Rome” isn’t your destination, pick the right road to where you need to go. In this case, the detour is small, and the testing benefits are decisive.
