
JavaScript and (Non) Blocking UI

Description

In his devjobs.at TechTalk "JavaScript and (Non) Blocking UI", Klemens Loschy of SEQIS focuses on a problem from the realm of event loops in JavaScript and demonstrates a way to solve it.


Video Summary

In JavaScript and (Non) Blocking UI, Klemens Loschy shows how the single-threaded event loop (call stack, callback queue) governs UI updates and how enabling a request cache can turn an async fetch into synchronous work, preventing the loading overlay from painting. His demo illustrates that await alone doesn’t guarantee asynchrony and that blocking the main thread blocks the UI. The practical fix is to use double requestAnimationFrame before heavy synchronous processing so the browser repaints the overlay first—a framework-agnostic pattern developers can apply to keep UIs responsive.

JavaScript and (Non) Blocking UI: When Caching Blocks Your Frontend – and How Double requestAnimationFrame Fixes It

A technical recap of the session by Klemens Loschy (SEQIS Group GmbH)

Frontend users do not forgive sluggishness. Once a task exceeds two seconds, visible feedback becomes essential. In the session titled JavaScript and (Non) Blocking UI, Klemens Loschy walked through a very practical lesson: a seemingly harmless optimization via request caching made the UI feel blocked. The solution was surprisingly simple yet counterintuitive – relying on the browser’s requestAnimationFrame, and doing it twice in a row.

From our DevJobs.at editorial perspective, we followed the thread from symptoms to root cause, through the Event Loop mechanics, and to a robust fix. This recap is for frontend and full-stack engineers who want a clear, engineering-grade understanding of how the single-threaded JavaScript runtime, the call stack, and browser repaints directly shape perceived responsiveness.

The talk’s goal: grasp the Event Loop enough to avoid blocking

Klemens set expectations upfront: the mission wasn’t to research the Event Loop in extreme depth but to understand the concrete failure case well enough to explain it and to trust the solution. The focus was practical understanding and improved intuition. The overarching message: non-blocking UI is critical for user experience, and developers must ensure the browser has a chance to update the screen.

A memorable threshold from the talk: everything over two seconds should be visualized to the user. That’s where this story begins.

The setup: one button, one request, a processing step, and a spinner

The demo scenario is straightforward:

  • A button click in the browser kicks off a load.
  • Data is fetched via a web request.
  • A potentially complex processing step follows.
  • If the total operation exceeds two seconds, an overlay or loader should appear and signal progress.

The stack used: Node.js on the backend, Vue.js on the frontend, running on V8. Crucially, none of these frameworks are the cause or the solution. The phenomenon is general and reproducible in plain HTML, vanilla JavaScript, and CSS.
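As the talk stresses, the effect reproduces in plain vanilla JavaScript. The sketch below shows the scenario under stated assumptions: the element ids, endpoint, and helper names are ours for illustration, and the document object is injected so the sketch carries no hidden dependencies.

```javascript
// Wires up the demo: a click loads data, processes it, and toggles an overlay.
// `doc`, `fetchImpl`, and `processData` are injected stand-ins, not names from the talk.
function setupDemo(doc, fetchImpl, processData) {
  const overlay = doc.getElementById("overlay");
  const button = doc.getElementById("load");
  button.addEventListener("click", async () => {
    overlay.hidden = false;                        // show the loading overlay
    const response = await fetchImpl("/api/data"); // asynchronous web request
    processData(await response.json());            // potentially heavy processing step
    overlay.hidden = true;                         // remove the overlay again
  });
}
```

In a browser this would be wired up as `setupDemo(document, (url) => fetch(url), processData)`.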

An iterative path through four versions

Klemens presented the story through four incremental versions, mirroring the way teams naturally evolve features. It’s a textbook example of how non-blocking by accident can flip into blocking by accident.

V1 – The baseline: everything is still fast enough

In the initial version, responses arrive in the 200–500 ms range, and results render at around 700 ms. The UI is responsive, and there’s no spinner yet. Nothing feels wrong at this stage.

V2 – User feedback for longer tasks: show the overlay

As complexity grows, response times climb. Multiple backends, sequential dependencies, larger payloads – overall processing stretches into 2.5 seconds and beyond. This is the point where the overlay comes in. Below two seconds, the UI stays as-is; above that threshold, a loading animation appears and disappears after the work completes.

The pattern: only show the overlay for slower paths to keep the UI communicative without being noisy.
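One common way to implement this threshold pattern (not spelled out in the talk) is to arm a timer that reveals the overlay only if the work is still running when it fires; the `ui` helper and function name here are illustrative stand-ins.

```javascript
// Show the overlay only if `work` is still running after `thresholdMs`.
// `ui` is an assumed stand-in with show()/hide() methods.
function runWithOverlayThreshold(work, ui, thresholdMs = 2000) {
  const timer = setTimeout(() => ui.show(), thresholdMs);
  return work().finally(() => {
    clearTimeout(timer); // fast path: the overlay never appears
    ui.hide();           // slow path: remove the overlay once the work is done
  });
}
```

On fast paths the timer is cancelled before it fires, so the UI stays quiet; only slower operations get the loading animation.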

V3 – Request caching: faster on paper, blocked in perception

Next, an optimization that makes perfect sense: requests often repeat with identical parameters, and the data doesn’t change frequently. Enabling a request cache saves backend round trips and, in the demo, shaves off roughly 900 ms. Great in theory.

But after enabling the cache, something breaks in perception: because the cache returns synchronously and control never yields, the overlay never appears. The UI feels blocked, ironically right where we made the path faster.
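A sketch of the cached load routine (names assumed for illustration) makes the problem visible: on a cache hit the function runs to completion without ever yielding, so the queued overlay update never paints.

```javascript
const requestCache = new Map();

// `ui`, `fetchFromBackend`, and `heavyProcessing` are illustrative stand-ins.
async function loadData(key, ui, fetchFromBackend, heavyProcessing) {
  ui.setLoading(true);                       // queues a UI update; nothing has painted yet
  let data;
  if (requestCache.has(key)) {
    data = requestCache.get(key);            // synchronous hit: no yield to the event loop
  } else {
    data = await fetchFromBackend(key);      // asynchronous miss: the stack can empty here
    requestCache.set(key, data);
  }
  heavyProcessing(data);                     // on a cache hit this runs before any repaint
  ui.setLoading(false);                      // the overlay was never visible at all
}
```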

JavaScript fundamentals: single-threaded execution, the call stack, the callback queue, and the Event Loop

Klemens grounded the explanation in the runtime mechanics that matter:

  • JavaScript in the browser runs single-threaded on the main thread.
  • There is a call stack where functions execute.
  • Asynchronous operations (like web requests, timers, DOM operations) go through Web APIs and post completion callbacks.
  • These callbacks go into a callback queue (with a distinction between tasks and microtasks, which was not important for this case).
  • The Event Loop moves callbacks from the queue to the call stack – but only when the stack is empty.

This last detail is pivotal: as long as the call stack remains busy, the browser has no opportunity to repaint. Merely queuing a UI update isn’t enough; the UI can only update when execution yields and the Event Loop gets a turn.
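The rule is easy to demonstrate: a callback queued via setTimeout cannot run while synchronous code keeps the call stack occupied, no matter how early it was queued.

```javascript
const order = [];

setTimeout(() => order.push("queued callback"), 0); // goes to the callback queue

const start = Date.now();
while (Date.now() - start < 50) {
  // busy-wait: the call stack stays occupied the whole time
}
order.push("synchronous work done");

// Only once the stack is empty can the event loop run the queued callback.
```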

Why the non-cached path worked: non-blocking by accident

When no cache is in place, the happy path unfolds like this:

  1. A button click triggers the load routine.
  2. The loading flag is set to true, which queues a UI update to render the overlay.
  3. An asynchronous web request is dispatched. Because it’s asynchronous, control leaves the call stack, which can then become empty.
  4. The Event Loop finds the stack empty, takes the UI update callback, and the browser repaints; the overlay becomes visible.
  5. When the response arrives, its callback is queued, moved to the stack, and the complex processing starts.
  6. Finally, the loading flag is set to false, another UI update is queued, and the overlay is removed after the stack yields again.

The important bit: the async request created a gap for the repaint.

Why caching broke the experience: a synchronous path that removed the gap

With the request cache enabled, the control flow looks subtly but crucially different:

  • Setting loading to true still queues a UI update.
  • Instead of an asynchronous web request, the code now hits a local cache synchronously.
  • The data arrives immediately, and the heavy processing starts right away.
  • The call stack remains occupied the whole time; the Event Loop never gets a chance to run the UI update callback.
  • At the end, loading is set to false and a repaint is queued – but the overlay was never rendered in the first place.

In effect, the synchronous cache path removed the tiny but critical window in which the browser could update the screen.

Another key insight from the talk: an await is not a guarantee of asynchronous behavior. It depends on what you await. If a function returns synchronously (as the cache did here), the stack stays occupied despite syntactic sugar.
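One nuance is worth making explicit: awaiting an already-available value does briefly suspend the function, but only into the microtask queue, and microtasks are drained before the browser gets a rendering opportunity. The snippet below (illustrative names) shows the await continuation running before the next task, which is the earliest point at which a repaint could happen.

```javascript
const log = [];

async function cachedFetch() {
  return "cache hit";                    // no real async work: resolves immediately
}

setTimeout(() => log.push("task"), 0);   // repaints happen between tasks like this one

cachedFetch().then((value) => {
  log.push("await: " + value);           // runs as a microtask, BEFORE the next task
});
```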

V4 – Non-blocking by design: the deliberate fix

The follow-up question is: how do we ensure the overlay becomes visible before heavy work continues, especially on synchronous paths? The answer is to deliberately give the UI a frame to paint, not by sleeping, but by aligning with the browser’s paint cycle.

The tool for that: requestAnimationFrame.

  • requestAnimationFrame schedules a callback for the next frame.
  • That timing aligns with when the browser is ready to repaint.
  • If we start the heavy work after that callback runs, we ensure the overlay is on screen first.

The practical twist from Klemens’ experience: double requestAnimationFrame. Calling it twice ensures, across different browsers and engines, that the overlay is indeed visible before computation proceeds. Implementations differ; in practice, the second frame makes the effect robust.
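As a helper, the pattern can be sketched like this (the function name is ours, not from the talk):

```javascript
// Run `work` only after two frame callbacks, i.e. after the browser
// has had a real chance to paint the overlay.
function afterRepaint(work) {
  requestAnimationFrame(() => {   // frame 1: a paint is now scheduled
    requestAnimationFrame(() => { // frame 2: the overlay is on screen
      work();
    });
  });
}
```

Typical use: set the loading flag first, then call `afterRepaint(() => heavyProcessing(data))` so the overlay paints before the computation starts.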

The control flow with double requestAnimationFrame

Klemens illustrated the working path step by step:

  1. A click starts the load and sets loading to true, which queues a UI update.
  2. The first requestAnimationFrame schedules a frame callback.
  3. The call stack empties; the Event Loop runs the repaint callback and the overlay becomes visible.
  4. The second requestAnimationFrame schedules another frame callback.
  5. Again, the stack yields; the browser repaints and the visible state is now guaranteed.
  6. Only then does the code fetch data. Whether the response is synchronous (from cache) or asynchronous (from backend) no longer matters – the overlay is already visible.
  7. After the processing completes, loading is set to false, the final repaint is queued, and the UI returns to normal.

The outcome: the UI no longer swallows the spinner on fast, synchronous paths. Perceived responsiveness is restored.
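The steps above can be put together in one sketch; the names are illustrative, not taken from the demo code.

```javascript
// Show the overlay, wait two frames so it actually paints, then load and process.
// `ui`, `fetchData`, and `processData` are assumed stand-ins.
function loadAndProcess(ui, fetchData, processData) {
  ui.setLoading(true);                    // queue the overlay update
  requestAnimationFrame(() => {
    requestAnimationFrame(async () => {   // by now the overlay has painted
      try {
        const data = await fetchData();   // sync cache hit or async backend: both fine
        processData(data);                // heavy work no longer swallows the spinner
      } finally {
        ui.setLoading(false);
      }
    });
  });
}
```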

What we learned

Klemens Loschy’s talk is a reminder that performance is as much about perception as it is about raw speed. In frontends, responsiveness depends on whether the browser gets time to repaint – which is governed by the Event Loop and the state of the call stack.

Key lessons we took away:

  • Non-blocking behavior must be engineered deliberately, especially on synchronous code paths like caches.
  • If the call stack stays busy, queued UI updates won’t run; the overlay will not show.
  • Async networking naturally yields; synchronous caches don’t. Treat them differently in your UI flow.
  • An await does not force asynchrony. If the awaited code is synchronous, the stack remains blocked.
  • requestAnimationFrame is the reliable way to align your logic with the browser’s paint cycle. Calling it twice makes the repaint robust across different browsers.

Practical guidance for engineering teams

Translating these insights into day-to-day engineering practice:

  • Decouple the loading state from the heavy work. After setting the loading flag, allow the browser a frame to render before starting computation. Double requestAnimationFrame is a pragmatic way to do this.
  • Treat caches as synchronous. When enabling a request cache, verify that UI updates still occur on time; if the overlay stops appearing, the call stack is likely staying busy for the entire operation.
  • Watch the processing step, not just the network. UI-blocking can be caused by computation alone.
  • Time state transitions explicitly. Ensure there is at least one frame between showing the overlay and starting the heavy work.
  • Foster a shared mental model. As Klemens did with his diagrams, think in terms of call stack, queues, and frames when debugging responsiveness. It’s a powerful communication tool within the team.

Common misconceptions addressed

The session quietly debunked a few widespread assumptions:

  • Syntactic asynchrony is not the same as actual asynchrony. Awaiting a function that returns synchronously does not free the call stack.
  • Frameworks do not automatically solve Event Loop issues. Neither Node.js nor Vue.js nor the V8 engine are the culprit or the fix; the underlying runtime mechanics are.
  • Spinners are not mere decoration. Past the two-second mark, they are essential for a coherent user experience.

Further resources from the talk

Klemens cited several helpful references:

  • In The Loop – a clear and entertaining explanation of the Event Loop.
  • What the Heck is the Event Loop Anyway? – complete with a small environment that visualizes JavaScript code, the call stack, and the callback queue.
  • Tasks, microtasks, queues and schedules – a blog post that juxtaposes code, queues, and the stack while calling out differences across browsers and engines.
  • The documentation for requestAnimationFrame – for the specifics of the API.

These resources also underpin the double requestAnimationFrame recommendation: differences across browsers and engines mean that two successive frame callbacks are a pragmatic way to ensure the overlay actually renders.

Engineering mindset: iterate and take symptoms seriously

A strong thread throughout the talk is the iterative approach. The journey from V1 to V4 shows how an optimization can unintentionally worsen UX perception. The way forward is to observe carefully, form hypotheses, measure, and reason about the Event Loop. Klemens admitted he initially doubted that double requestAnimationFrame could be the real answer – only to be convinced after digging into the mechanics and seeing it work in practice.

Conclusion: non-blocking UI is a must – and one frame makes the difference

The essence of JavaScript and (Non) Blocking UI by Klemens Loschy: a responsive UI is not an accident; it’s the result of deliberately managing the main thread. The call stack and the Event Loop decide whether the browser gets a chance to repaint. Asynchronous requests often create that chance naturally; synchronous paths like local caches don’t.

The pragmatic fix in the case presented is double requestAnimationFrame: make the overlay visible first, then start the heavy work. That way, the UI remains trustworthy and feels fast, even when the computation behind it is substantial.

Klemens closed by noting his team works with Node.js, JavaScript, and Vue.js – and that they are looking for strong talent in Vienna. If this kind of engineering detail excites you, you’ll feel right at home tackling challenges like the one in this session.
