jobs.at Recruiting GmbH

How we optimize our Front End

Description

In his devjobs.at TechTalk, Jürgen Ratzenböck from jobs.at shows which levers the Linz-based company pulls to optimize the performance of its website.

Video Summary

In How we optimize our Front End, Jürgen Ratzenböck explains how jobs.at tunes frontend performance and why speed drives engagement and SEO, guided by user-centric metrics like Core Web Vitals and evaluated with tools such as PageSpeed Insights and Lighthouse. He shares practical techniques: resource hints (DNS prefetch, preconnect, prefetch/prerender, preload), production minification via Webpack/Laravel Mix, deferring JavaScript and using critical CSS to avoid render-blocking, and removing unused styles with PurgeCSS while minding AJAX/SPA pitfalls. He also covers per-route code splitting and dynamic imports in Vue to load only the needed chunks and prevent duplicated shared bundles, giving teams concrete steps to cut load times and improve interactivity and stability.

Front-End Performance That Matters: Resource Hints, Critical CSS, and Code Splitting at jobs.at

Why performance is business-critical at jobs.at

In “How we optimize our Front End,” Jürgen Ratzenböck (jobs.at Recruiting GmbH) shared a concrete, hands-on approach to making the jobs.at platform faster. As Head of Technology, he works across backend and frontend, with a focus on PHP and the Laravel framework on the server side and heavy use of Vue.js for single-page application products in the frontend.

jobs.at is a small team in Linz operating a fast, easy-to-use job platform, with a large inventory of listings across industries and municipalities in Austria. The talk referenced around 58,000 active jobs and 830k+ sessions per month. At that scale, page speed and interactivity are pivotal—both for user retention and search visibility.

Ratzenböck’s motivation was clear: tolerable waiting time on the web has “decreased dramatically.” People are online nearly everywhere, often on mobile, and they bounce quickly if a site feels slow. To keep users engaged, the website must be fast so they stay, have a good experience, and come back—ultimately yielding higher conversion rates. Google is the second key driver: search engine optimization depends a lot on web performance. While the exact ranking recipe is unknown, Google explicitly emphasizes fast sites.

From TTFB to Core Web Vitals: metrics that reflect user perception

A central thread in the talk was how to decide when a site is “fast.” Traditional metrics across the request-response lifecycle still matter:

  • Time To First Byte (TTFB) as a signal of server/back-end performance
  • DOMContentLoaded and onload to understand when rendering is formally done

But Google has pushed user-centric performance metrics that track how the load feels to humans:

  • First Contentful Paint—when the user first sees relevant content painted
  • Interactivity—when the site becomes usable

The emphasis now converges on “Core Web Vitals,” which, as noted in the talk, are slated to affect search rankings “starting in June.” They cover three pillars:

  • Loading
  • Visual stability
  • Interactivity

For teams, this reframes performance goals: not just “when is it technically finished,” but “when does it feel fast?”
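
The talk itself does not show measurement code, but as a rough sketch these user-centric signals can be observed in the field with the browser's native PerformanceObserver and Navigation Timing APIs (the logging below is purely illustrative):

    // Largest Contentful Paint (loading): the last entry seen before user
    // input is the page's LCP candidate.
    new PerformanceObserver((list) => {
      const entries = list.getEntries();
      const lcp = entries[entries.length - 1];
      console.log('LCP (ms):', lcp.startTime);
    }).observe({ type: 'largest-contentful-paint', buffered: true });

    // Cumulative Layout Shift (visual stability): sum shifts not caused by user input.
    let cls = 0;
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        if (!entry.hadRecentInput) cls += entry.value;
      }
      console.log('CLS so far:', cls);
    }).observe({ type: 'layout-shift', buffered: true });

    // Time To First Byte (a server/back-end signal) from the Navigation Timing API.
    const [nav] = performance.getEntriesByType('navigation');
    if (nav) console.log('TTFB (ms):', nav.responseStart);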

The right tools: measure, analyze, improve

Ratzenböck highlighted three complementary tools for diagnosis and iteration:

  • PageSpeed Insights—a staple for quick checks
  • Google Lighthouse—deeper diagnostics, including SEO, best practices, and PWA checks
  • WebPageTest (webpagetest.org)—fine-grained waterfall analysis to see DNS resolution, TCP handshake, TLS negotiation, and more

The recommendation was straightforward: these tools are “really great” and provide “a lot of hints where you are and what you can do.” Every optimization should start with measurement—and end with verification.

Resource hints: small attributes, big impact

A focal point of the session was resource hints. As Ratzenböck put it:

“Resource hints are underestimated but very powerful.”

Instead of waiting until a resource is encountered in the critical path, the browser can be told to perform preparatory work that reduces latency later, without blocking rendering. The key techniques discussed:

DNS Prefetch

  • Purpose: Perform the DNS lookup for a domain early, ahead of the actual request
  • Effect: When the resource is eventually needed, the request can start faster
  • Use cases: Especially useful for third-party tools or dynamically constructed resource URLs
  • Character: Lightweight, runs in the background

Preconnect

  • Purpose: Go beyond DNS—also initiate the TCP handshake and TLS negotiation in advance
  • Effect: Eliminates roundtrips before the actual resource requests start
  • Use cases: Like DNS prefetch, ideal for third-party hosts and dynamic URLs
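
As an illustrative snippet (the third-party hostnames are placeholders, not necessarily ones jobs.at uses), each of these two hints is a single link tag in the document head:

    <!-- Resolve the DNS name early; cheap and non-blocking. -->
    <link rel="dns-prefetch" href="//www.googletagmanager.com">

    <!-- Also open the TCP connection and negotiate TLS ahead of time. -->
    <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>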

Prefetch

  • Purpose: Anticipate likely navigation and prefetch resources, caching them in the browser
  • Idea: If users on page A are very likely to go to page B, fetch the assets early
  • Result: Near-instant navigation because assets are already in cache
  • Note: Prefetch targets the next page, not the current one

Prerender

  • Purpose: Go all-in—download and execute the entire next page ahead of time
  • Upside: Navigation can feel instant
  • Caution: “Be cautious,” Ratzenböck warned. This consumes significant resources (bandwidth, CPU, memory) and only makes sense if the user’s next step is highly predictable. Otherwise, it can backfire
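
As a sketch of both hints (the URLs are made up for illustration): on a page where most users proceed to a known next step, prefetch caches the assets for that step, while prerender loads the whole next page ahead of time and should be reserved for near-certain navigations:

    <!-- Fetch a script the next page will need and keep it in the cache. -->
    <link rel="prefetch" href="/js/job-detail.chunk.js" as="script">

    <!-- Load and render the likely next page in the background; use sparingly. -->
    <link rel="prerender" href="/jobs/12345">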

Preload

  • Clarification: Not formally a resource hint, but similar in spirit
  • Purpose: Prime critical assets for the current page—for example, fonts you know a CSS file will need later
  • Benefit: Critical resources are ready in time, rather than waiting for CSS discovery
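
A typical example is a web font referenced from a stylesheet (the file name is illustrative); note that font preloads need the crossorigin attribute even for same-origin files:

    <link rel="preload" href="/fonts/brand-font.woff2" as="font" type="font/woff2" crossorigin>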

The core message: With just a rel attribute here and there, you can shave off latency at the right spots. But be judicious with prefetch/prerender to avoid wasting client resources.

Minification and bundling: the baseline that still pays off

Another pillar is reducing the weight of shipped assets. Ratzenböck emphasized minifying JavaScript and CSS in production as a de-facto standard. Bundle size is critical; cutting bytes speeds up download and parsing.

Modern build tooling makes this trivial to adopt:

  • Webpack
  • Rollup
  • Laravel Mix (a thin wrapper around Webpack)

In many setups, enabling production minification and bundling takes only one or two configuration lines—and it works out of the box.
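
As a sketch of such a setup (file paths are placeholders, not jobs.at's actual configuration), a Laravel Mix webpack.mix.js—Mix 6 syntax here—needs no extra plugins; running the production build (typically npm run production) minifies the emitted JavaScript and CSS:

    // webpack.mix.js
    const mix = require('laravel-mix');

    mix.js('resources/js/app.js', 'public/js')
       .vue()
       .sass('resources/sass/app.scss', 'public/css')
       .version(); // content hashes so the minified assets can be cached long-term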

Removing render-blockers: defer, async, and critical CSS

One common Lighthouse/PageSpeed finding is “render-blocking resources.” CSS and JavaScript can halt rendering until they are fetched and, for JS, executed. Ratzenböck’s advice was to distinguish what truly needs to run immediately from what can wait.

  • JavaScript: Where immediate execution is not required, mark it as deferred. The defer attribute tells the browser to download the script in parallel but delay execution until the document has been parsed
  • Effect: The initial above-the-fold content is not unnecessarily blocked

For CSS, the emphasis is on critical CSS—just enough styling to render the initial viewport quickly.

  • Approach: Load critical CSS first; pull in the remainder asynchronously
  • Optional: Inline the critical CSS in the HTML itself; jobs.at is not doing this at the moment, but it can be a viable option
  • Caution: Watch for a flash of unstyled content (FOUC). The first paint must be styled, or users briefly see unstyled markup before the full styles arrive
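
A minimal sketch of the pattern (the media/onload trick for asynchronous CSS is one common approach, not necessarily the one jobs.at uses):

    <head>
      <!-- Critical CSS for the initial viewport; small enough to inline. -->
      <style>
        /* ...above-the-fold rules only... */
      </style>

      <!-- Load the remaining styles without blocking rendering. -->
      <link rel="stylesheet" href="/css/app.css" media="print" onload="this.media='all'">
      <noscript><link rel="stylesheet" href="/css/app.css"></noscript>

      <!-- Download now, execute only after the document has been parsed. -->
      <script src="/js/app.js" defer></script>
    </head>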

The mantra: prioritize initial rendering, delay nonessential work.

Trimming CSS with PurgeCSS: frameworks are big, usage is small

Frameworks such as Tailwind or Bootstrap ship a lot of styles—most projects use only a slice. The team saw “quite some bad coverage” in Lighthouse on certain pages, so they introduced PurgeCSS.

  • How it works: During the build, PurgeCSS matches the classes used in templates against the CSS files and removes anything unused on that page
  • Payoff: Significantly smaller CSS bundles
  • Setup: Easy to wire up with Webpack (and generally with other bundlers)
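
One common way to wire it in is via PostCSS; the globs and safelist entries below are placeholders rather than jobs.at's configuration, and the safelist anticipates the runtime caveats discussed next:

    // postcss.config.js
    module.exports = {
      plugins: [
        require('@fullhuman/postcss-purgecss')({
          // Templates and components whose class usage is visible at build time.
          content: [
            './resources/views/**/*.blade.php',
            './resources/js/**/*.vue',
          ],
          // Keep classes that only appear in AJAX-loaded fragments or behind
          // runtime conditions in the SPA.
          safelist: ['is-open', /^modal-/],
        }),
      ],
    };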

Two gotchas from jobs.at’s experience are worth calling out:

1) Additional HTML loaded via AJAX

  • Issue: PurgeCSS has no visibility into runtime-loaded fragments at build time. Classes used only in AJAX-loaded content might be purged
  • Symptom: Parts of the UI appear unstyled
  • Implication: Be careful and configure the tool so runtime-needed classes stay intact

2) Single-page apps (e.g., Vue.js) with conditional rendering

  • Issue: Classes used only on certain runtime conditions might appear “unused” during the build
  • Symptom: Missing styles when those paths are taken in production
  • Implication: “Be careful not to delete things you still need at runtime”—tune configuration accordingly and account for conditional paths

Bottom line: PurgeCSS is powerful, but you must treat dynamic content thoughtfully.

Code splitting: load only what this page needs

Ratzenböck shared a personal shift in thinking about JavaScript delivery:

“I used to think it was best to have everything in one JS file. In reality—especially as JS grows—it’s much better to split into chunks and load only the pieces required for the current page.”

He outlined several practical options:

  • Server-side orchestration: jobs.at implemented an asset manager that includes the specific JavaScript tag based on the active HTTP controller—effectively “one JS file per route”
  • Webpack: Split bundles using the platform’s built-in capabilities (e.g., SplitChunks)
  • Vue.js: Use dynamic module imports so components load only when the user navigates to that route (the talk mentioned a “customer login” component as an example)

One more caution: keep an eye on shared bundles to avoid duplication. With many module imports, it’s possible to ship the same code twice unless deduped.
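
As a sketch of the Webpack and Vue approaches (component and chunk names are illustrative): the dynamic import makes the bundler emit a separate chunk that is only fetched when the route is visited, while SplitChunks pulls shared modules into a common bundle so the same code does not ship twice:

    // router.js: route-level code splitting with a dynamic import in Vue
    const routes = [
      {
        path: '/login',
        component: () =>
          import(/* webpackChunkName: "customer-login" */ './components/CustomerLogin.vue'),
      },
    ];

    // webpack.config.js excerpt: deduplicate modules shared across chunks
    module.exports = {
      optimization: {
        splitChunks: {
          chunks: 'all',
        },
      },
    };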

The upside: code splitting reduces initial JavaScript burden and speeds up time-to-interactive—especially helpful for SPAs and larger multi-page frontends.

A practical playbook: applying the jobs.at approach

Translating the session into a practical plan yields a straightforward sequence that can be applied across many stacks:

1) Measure and set targets

  • Use PageSpeed Insights and Lighthouse for an initial baseline
  • Turn to WebPageTest (webpagetest.org) to read waterfalls and pinpoint DNS/TCP/TLS and other stalls
  • Prioritize user-centric metrics: “how soon do we show something?” and “how soon is it usable?”

2) Prime the network with resource hints

  • dns-prefetch for third-party hosts you know you’ll hit
  • preconnect where TLS/handshake latency is a known bottleneck
  • prefetch only for highly probable navigation targets
  • prerender only when the next page is near-certain; otherwise you risk burning bandwidth/CPU
  • preload for critical assets on the current page (notably fonts)

3) Keep bundles lean

  • Turn on minification in production (Webpack/Rollup/Laravel Mix)
  • Favor split bundles over “one big file”

4) Reduce render blocking

  • Defer noncritical JavaScript with defer
  • Load critical CSS first—inline if needed—then fetch the rest asynchronously
  • Guard against FOUC; the initial viewport must be styled from the start

5) Trim CSS

  • Add PurgeCSS to the build
  • Account for dynamic content (AJAX) and SPA conditional rendering in the configuration

6) Embrace code splitting

  • Deliver route/page-specific chunks (server- or build-driven)
  • Lazy-load components in SPA routers
  • Deduplicate shared bundles

7) Re-measure and iterate

  • Validate improvements in Lighthouse and waterfall views
  • Watch trade-offs (e.g., prefetch/prerender versus bandwidth/CPU use)

What stood out: small tweaks, compounding gains

A pattern across these tactics is their accessibility. Many are “quick wins,” as Ratzenböck framed resource hints—“it’s just this one rel attribute and then it’s done.” These are especially attractive for teams because they require modest effort and deliver clear improvements in first paint and interactivity.

Equally important is the discipline to separate “must run now” from “can wait”—for JS, CSS, and data. Aligning your delivery pipeline with that distinction avoids blocking the critical rendering path.

SEO impact: performance is not a vanity metric

The talk underscored that performance is not just developer hygiene—it has direct SEO implications. Google clearly says fast websites matter, and Core Web Vitals emphasize that priority. Notably, the talk mentioned that these signals would begin affecting rankings “in June,” underscoring the urgency: performance is part of product and growth strategy.

Stack context: Laravel, Vue.js, and SPA products

The stack context matters for applicability. jobs.at uses PHP/Laravel on the backend and relies heavily on Vue.js for SPA products. The techniques discussed are particularly impactful where JavaScript payloads are large and the UI is assembled dynamically:

  • Resource hints reduce network overhead on multi-host setups
  • Defer/async and code splitting lower the initial JS burden and improve time-to-interactive
  • PurgeCSS cuts CSS bloat—especially relevant for large utility-first frameworks

While not stack-specific, these tactics integrate cleanly with popular bundlers (Webpack, Rollup, Laravel Mix).

Manage trade-offs consciously: when optimizations can backfire

The session balanced enthusiasm with pragmatism:

  • Prefetch/prerender: Use only when the user’s next step is predictable; otherwise, you risk wasting client resources
  • PurgeCSS: Don’t forget runtime-only classes (AJAX, conditional rendering); missing styles in production are a high cost
  • Shared bundles: Duplicate dependencies are subtle but expensive; inspect build output and network traces

Being deliberate about these trade-offs ensures net-positive outcomes and avoids regressions.

Conclusion: a focused toolkit, not over-engineering

“How we optimize our Front End” by Jürgen Ratzenböck (jobs.at Recruiting GmbH) outlined a clear, actionable path to faster delivery:

  • Treat user-centric metrics seriously (fast first paint, early interactivity, visual stability)
  • Combine measurement tools (PageSpeed Insights, Lighthouse, WebPageTest)
  • Reduce network startup costs (DNS prefetch, preconnect)
  • Accelerate likely navigation (prefetch; prerender only when warranted)
  • Prioritize the critical path (defer for JS, critical CSS first)
  • Save bytes (minification, PurgeCSS)
  • Load only what’s needed (code splitting, lazy loading, deduped shared bundles)

From our DevJobs.at editorial vantage point, the biggest takeaway is how many wins come from simple, targeted changes—a rel here, a defer there, a Purge step in the build—that add up to a noticeably faster experience. For a platform with tens of thousands of listings and substantial traffic, this is not mere technical tuning; it’s product strategy. Performance keeps users on the site, improves the likelihood of return visits, and supports search visibility.

In short: a fast first impression, stable visuals, and early interactivity are the currency of modern web experiences—and this talk’s toolkit delivers precisely the levers to earn it.
