NETCONOMY
Load Performance in Single Page Applications
Description
In his devjobs.at TechTalk, Christian Oberhamberger of NETCONOMY walks through the do's and don'ts of load performance in single page applications.
Video Summary
In Load Performance in Single Page Applications, Christian Oberhamberger explains that SPAs shift rendering to the client, easing backend load but making initial bundle size critical for public sites. He outlines techniques like pre-rendering (SSR/SSG), (partial) rehydration, chunk splitting, critical style inlining, and lazy loading, then emphasizes measuring with Lighthouse and Core Web Vitals: run on appropriate hardware, pick a stable reference page, read the GitHub docs, block tag managers, use a clean Chrome profile, and measure at least five times. He concludes by urging teams to set up continuous performance monitoring (e.g., with Grafana) to track trends and guide improvements.
Load Performance in Single Page Applications: practical measurement, do's and don'ts, and always-on monitoring, from the talk by Christian Oberhamberger (NETCONOMY)
Why load performance is the make-or-break for SPAs today
In “Load Performance in Single Page Applications,” Christian Oberhamberger (Frontend Architect and Chapter Lead at NETCONOMY) distills a decade of frontend evolution into one practical theme: measure what matters, then optimize where it counts. From our DevJobs.at vantage point, the through-line is clear. As touch points moved closer to users, SPAs pushed rendering to the client. That brings smoother interactions without full page loads—but it puts a spotlight on the initial load.
The setup goes like this: on-prem moved to the cloud, monoliths shifted to microservices, and rendering slid toward user devices. With SPAs, the user’s device renders the UI so the backend doesn’t spend CPU cycles on dynamic HTML. That’s great for backend load and overall fluidity. The trade-off: shipping a larger JavaScript bundle on first paint. For back-office tools, that can be fine. On the public internet—especially when you want to be searchable and attract traffic—load performance becomes critical.
This is the context for everything that follows in the session: don’t guess. Measure. Then apply targeted techniques, and keep measuring.
Architecture and build strategies that move the needle
Christian Oberhamberger lists a non-exhaustive set of tactics that can improve SPA performance. It’s an intentionally short roll call that signals how broad the toolbox actually is:
- Pre-rendering: Server-Side Rendering (SSR) and Static-Site Generation (SSG)
- Rehydration and partial rehydration
- Chunk splitting
- Critical style inlining
- Lazy loading
Two framing notes accompany this list. First, these topics often sound more complex than they are; frameworks can help. Second, the list could easily continue for “a couple more slides.” The real message: think in systems. You’re balancing what you render when, and how much work the client does upfront versus later.
“Only the initial load for the whole touch point could be a problem … because now we have to ship a big JavaScript bundle to the user.”
For us, that line captures the core design constraint. The biggest win often lives at the very beginning of the user journey. That’s where the bundle, the CSS strategy, and any render-on-the-server choices come together.
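To make tactics like chunk splitting and lazy loading concrete, here is a minimal, framework-agnostic sketch (our own illustration, not code from the session): a dynamic import() that bundlers such as webpack, Vite, or Rollup typically split into a separate chunk loaded on demand. The ./heavyChart module and the element IDs are hypothetical.

```typescript
// Hypothetical example: a heavy charting module is split into its own chunk
// and only fetched when the user actually needs it (lazy loading).
// Bundlers like webpack, Vite, and Rollup emit a separate chunk for dynamic import().

const button = document.querySelector<HTMLButtonElement>('#show-chart');

button?.addEventListener('click', async () => {
  // The module is downloaded and evaluated on first click, not at initial load,
  // so it never contributes to the initial JavaScript bundle.
  const { renderChart } = await import('./heavyChart');
  renderChart(document.querySelector('#chart-container')!);
});
```

The same pattern scales up to route-level splitting, where each route's component lives in its own chunk and stays out of the initial bundle.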
Measurement first: Why Lighthouse sits at the center
If you want to improve performance, you need to know where you’re starting—and whether any change works. Oberhamberger’s recommendation is straightforward: use Lighthouse.
Why Lighthouse?
- It’s developed and maintained by Google.
- It sets meaningful standards across websites, with Core Web Vitals called out explicitly.
- It’s open source.
- It offers a CLI.
- It’s accessible via Chrome DevTools, PageSpeed Insights, and more.
But that same accessibility hides pitfalls. Many teams use Lighthouse in ways that yield misleading results. The solution in the session is a pragmatic set of do’s and don’ts that turns “run the test” into “get a trustworthy signal.”
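As a concrete starting point, here is a minimal programmatic run (a sketch, not from the session) using the lighthouse and chrome-launcher npm packages in an ESM/Node setup; the URL is a placeholder and error handling is omitted.

```typescript
// Minimal programmatic Lighthouse run (sketch, assuming the `lighthouse`
// and `chrome-launcher` npm packages are installed and top-level await is available).
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });

const result = await lighthouse('https://www.example.com/impressum', {
  port: chrome.port,               // talk to the Chrome instance we just launched
  onlyCategories: ['performance'], // focus on the performance category
});

if (result) {
  // result.lhr is the Lighthouse Result object; scores are on a 0..1 scale.
  console.log('Performance score:', result.lhr.categories.performance.score);
}

await chrome.kill();
```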
Do’s: How to get consistent, actionable results
1) Run on adequate hardware
A beefy developer laptop often produces better scores—even with Lighthouse throttling in place. Conversely, if your machine is busy (for example, in a video call with screen share enabled), your results will swing the other way.
Practical cue from the talk: scroll down the Lighthouse report and find the CPU benchmark. Lighthouse measures your machine’s power, and that benchmark relates to your result. Treat it as context for every run.
“Scroll down your Lighthouse sheet … you will find a CPU benchmark there … and that benchmark relates to your result as well.”
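To keep that context with every measurement, a small sketch like the following (our assumption, not shown in the talk) can read the benchmark value back out of a saved JSON report, e.g. one produced with `lighthouse <url> --output=json --output-path=report.json`; to the best of our knowledge Lighthouse records it as environment.benchmarkIndex.

```typescript
// Sketch: read the machine/benchmark context out of a saved Lighthouse JSON report.
import { readFileSync } from 'node:fs';

const lhr = JSON.parse(readFileSync('report.json', 'utf8'));

// environment.benchmarkIndex is Lighthouse's rough measure of the host machine's
// power; logging it alongside the score keeps the test conditions visible.
console.log('Benchmark index:', lhr.environment.benchmarkIndex);
console.log('Performance score:', lhr.categories.performance.score);
```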
2) Check your code as much as your content
Content changes can easily mask or override the impact of code improvements. Picture this: you measure, make changes, deploy days later—and three new images showed up via the CMS. Your numbers reflect content, not code.
The fix is simple and effective: pick something that doesn’t change as your reference. Oberhamberger suggests the Imprint page (Impressum) as a good candidate.
3) Read the docs in the Lighthouse GitHub repository
While there’s a lot of Lighthouse documentation scattered across the web, the GitHub repository is where developers find details on how the score is calculated, how Lighthouse works, and what you can do on your machine for more accurate runs. The session singles this out as a worthwhile read.
Don’ts: Avoid the traps that skew your data
1) Don’t measure what is not part of your code
Tag managers, analytics, and even heatmap tools sit on top of your application. They’re useful for the overall picture. But if the goal is to understand and improve your own code, block them when you measure.
The session points to DevTools’ built-in network request blocking. You can do the same with the CLI. One limitation called out explicitly: this won’t work for PageSpeed Insights.
“If you want to find out how you can improve your own code, don’t measure tag managers.”
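Programmatically, Lighthouse exposes this via the blockedUrlPatterns setting (the CLI equivalent is --blocked-url-patterns). The flags sketch below can be passed to lighthouse() alongside the Chrome port, as in the earlier example; the patterns are illustrative assumptions, not a list from the talk.

```typescript
// Sketch: flags that block common third-party layers during a Lighthouse run.
// Pass this object (plus the Chrome port) to lighthouse() as in the earlier example.
// The patterns are examples only; the CLI equivalent is --blocked-url-patterns.
const measurementFlags = {
  onlyCategories: ['performance'],
  blockedUrlPatterns: [
    '*googletagmanager.com*',
    '*google-analytics.com*',
    '*hotjar.com*',
  ],
};

export default measurementFlags;
```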
2) Don’t use your main Chrome profile
Set up a dedicated Chrome profile for performance measurements. Give it a punchy color so you never mix it up. Disable all plugins. This helps both your top-level measurements and your deep-dive analysis.
3) Don’t measure only once
Every page load differs—network conditions fluctuate, especially on the public internet. One run is never representative. The recommendation in the session is “at least five runs.” Over time, the signal becomes even more reliable.
“The recommended amount is five runs at least.”
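A minimal sketch of that advice (our construction, assuming the same lighthouse and chrome-launcher packages as above): run five times against the same URL, collect the performance scores, and look at the median rather than any single number.

```typescript
// Sketch: five Lighthouse runs against a stable reference page, reporting the median score.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const URL_UNDER_TEST = 'https://www.example.com/impressum'; // stable reference page (placeholder)
const RUNS = 5;

const scores: number[] = [];

for (let i = 0; i < RUNS; i++) {
  // A fresh headless Chrome per run keeps the runs independent of each other.
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const result = await lighthouse(URL_UNDER_TEST, {
    port: chrome.port,
    onlyCategories: ['performance'],
  });
  const score = result?.lhr.categories.performance.score;
  if (score != null) {
    scores.push(score);
  }
  await chrome.kill();
}

scores.sort((a, b) => a - b);
console.log('Scores:', scores);
console.log('Median performance score:', scores[Math.floor(scores.length / 2)]);
```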
From snapshot to time series: Set up performance monitoring
If performance is relevant, make it visible over time. That’s the closing message in the session. Oberhamberger describes how they measure changes and display them in a dedicated environment. They use Grafana for that.
What does this buy you?
- A dedicated environment for measurement—no worries about your personal browser profile or CPU load.
- Background runs that don’t interrupt your day.
- More data to compare across runs and time windows, including response times.
- Access to a lot of Lighthouse data you can work with to improve page speed performance.
For us at DevJobs.at, this turns performance from an event into a routine. It’s the difference between reacting to a number and managing a trend.
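What such a setup might look like at its simplest (a sketch under our own assumptions; the talk names Grafana but no specific pipeline): convert each saved Lighthouse JSON report into one timestamped sample and append it to a file that a dashboard or ingest job can read.

```typescript
// Sketch: turn a single Lighthouse JSON report into one time-series sample and
// append it to a JSON-lines file. File names and the metric shape are assumptions.
import { appendFileSync, readFileSync } from 'node:fs';

const lhr = JSON.parse(readFileSync('report.json', 'utf8'));

const sample = {
  timestamp: new Date().toISOString(),
  url: lhr.requestedUrl,
  performanceScore: lhr.categories.performance.score,
  benchmarkIndex: lhr.environment.benchmarkIndex,
};

// One JSON object per line keeps the history append-only and easy to ingest
// into whatever backs the dashboard (the session mentions Grafana).
appendFileSync('lighthouse-metrics.jsonl', JSON.stringify(sample) + '\n');
```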
A practical workflow teams can adopt (grounded in the session)
Based on Christian Oberhamberger’s guidance, here’s a compact, repeatable workflow:
- Choose a stable reference page: use a page that rarely changes (the Imprint/Impressum is a strong pick) to isolate code effects.
- Separate your measurement environment: create a dedicated Chrome profile, disable extensions, and make it visually distinct.
- Block third-party layers (when focusing on your code): exclude tag managers, analytics, and heatmaps using DevTools request blocking or the CLI; note that this cannot be done in PageSpeed Insights.
- Capture machine context: note the Lighthouse CPU benchmark and treat it as part of the test conditions.
- Run multiple times: execute at least five runs per measurement point and expect variance.
- Apply improvements incrementally: pull from the named tactics (SSR/SSG, rehydration including partial, chunk splitting, critical style inlining, lazy loading).
- Establish monitoring: automate runs over time and visualize results (the session references Grafana).
- Compare and iterate: use Lighthouse data to guide ongoing page speed improvements.
The logic is straightforward: isolate variables, control the environment, repeat the test, and observe the trend.
SPA context: Why the bar moved—and where to act
The session starts with the architectural shifts of the last years. With SPAs, user devices do the rendering work once done by the server. It’s a net win for backend efficiency and UX—until the initial bundle cost gets in the way. That’s where the list of techniques comes in. It’s not a single silver bullet but a collection of levers, each chipping away at the upfront cost or reshaping when and how work is done.
Notably, the session draws a line between internal back-office applications and public websites. The former can tolerate heavier initial loads; the latter cannot, especially when visibility and traffic matter. It’s a useful framing for prioritization.
Use Lighthouse deliberately: Accessible doesn’t mean “drop-in”
Because Lighthouse is everywhere—DevTools, PageSpeed Insights, and a CLI—teams often treat it as a quick check. The session cautions against that. Accessibility is a feature, but measurement still depends on:
- Recognizing your machine’s impact (CPU benchmark and current load).
- Stabilizing what you test (content versus code).
- Cleaning the testbed (separate profile, disable plugins, block non-code layers).
Nail these basics, and Lighthouse becomes a reliable foundation for decision-making.
Quotes and messages that stick
- “Dynamically rendering a touch point is expensive.”
- “With SPAs, the user’s own device does that … which is kind of neat.”
- “Only the initial load … is the problem now.”
- “Frameworks can help; these approaches sound more complex than they are.”
- “Lighthouse is open source, has a CLI, and gives meaningful standards—check the Core Web Vitals.”
- “At least five runs.”
- “Set up performance monitoring … we are using Grafana for that.”
Common pitfalls—addressed head-on in the session
- Single-run conclusions: The talk recommends at least five runs to account for network variance.
- Measuring everything at once: When the goal is your code, block tag managers and analytics; otherwise the signal is muddied.
- Using your main browser profile: Extensions and settings bleed into results; a dedicated, clean profile avoids that.
- Mixing content changes with code changes: Use a stable page (e.g., the Imprint) to isolate code effects.
- Skipping the primary docs: The Lighthouse GitHub repository is where scoring and mechanics are documented in depth.
None of these is theoretical; they're the real-world gotchas that derail meaningful optimization.
What we at DevJobs.at learned
- Performance is a process: measure, interpret, improve, and monitor—continuously.
- SPAs shift work to the client: great for UX after load, risky at initial load.
- Tooling is necessary but not sufficient: Lighthouse’s power shows up when the test conditions are controlled.
- Monitoring wins over snapshots: trends beat one-off numbers.
A short, actionable checklist
- Put SSR/SSG, rehydration (including partial), chunk splitting, critical style inlining, and lazy loading on the table.
- Use Lighthouse deliberately: adequate hardware, a dedicated profile, blocked third-party layers, and multiple runs.
- Define a stable reference page for code-focused baselines.
- Set up monitoring; the session calls out Grafana for this purpose.
Conclusion
“Load Performance in Single Page Applications” by Christian Oberhamberger (NETCONOMY) translates modern SPA reality into a disciplined approach: measure well, optimize what matters, and keep measuring. SPAs hand the render baton to the client; the initial load becomes the crucible. The way forward isn’t guesswork—it’s repeatable measurement, separation of concerns (code versus third-party layers), and long-term monitoring in a dedicated environment.
For teams running public-facing touch points, this framing is essential. The tools exist, and the strategies are named. The difference is operational: making measurement a routine, not a one-off; choosing stable baselines; and watching the trendline rather than chasing a single score. The session delivers the guardrails to do exactly that—clear, actionable, and ready for engineering teams to apply today.