
Speed Up Your Pipeline

Description

In his devjobs.at TechTalk, Thomas Svoboda of TGW Logistics Group shares several approaches for optimizing Azure DevOps pipelines.


Video Summary

In “Speed Up Your Pipeline,” Thomas Svoboda explains how to accelerate Azure DevOps pipelines by making them scenario-aware, using variables and conditions to skip code analysis, installer creation, and NuGet push/promotion except for official releases. He shows .NET build tactics—combining solutions, merging/removing projects, and generating NuGet packages only when needed—that cut build time from ~10 minutes to ~6.5 and then ~4 minutes; he also advises fixing slow unit tests, running only critical test buckets when fast feedback is required, and parallelizing independent jobs while minimizing data sharing. Viewers can apply these patterns to shorten feedback cycles and tailor the pipeline to the most frequent use cases.

Speed Up Your Pipeline: Practical Azure DevOps Optimizations from Thomas Svoboda (TGW Logistics Group)

Why this session matters

At DevJobs.at we repeatedly hear the same story: CI/CD pipelines begin lean and purposeful, then grow as teams add code quality gates, artifact packaging, and release mechanics. The result is slow feedback, frustrated developers, and reduced iteration speed.

In “Speed Up Your Pipeline” by Thomas Svoboda (TGW Logistics Group), the advice is refreshingly pragmatic. Rather than chasing silver bullets, he demonstrates how to model the pipeline around actual use cases, switch work on or off with variables and conditions, and reserve heavyweight tasks for the scenarios that truly need them. It’s Azure DevOps done with intent.

The baseline: four phases—and how they bloat

Thomas frames the typical pipeline as four phases:

  • Setup: check out the repository, initialize variables, install required tools.
  • Build: produce the artifacts needed downstream.
  • Tests: run automated tests against those artifacts.
  • Publish: publish artifacts and test results.

The problem is familiar: pipelines “grow really fast.” New checks, steps, and features creep in until the feedback cycle is long and the whole team slows down. The way out is not a single tweak; it’s aligning the pipeline with real scenarios and their needs.

Start with scenarios, not tools

Thomas walks through a real pipeline that had expanded far beyond the basic four phases. It included:

  • Code quality analysis wrapped around the core phases.
  • Building .NET projects into NuGet packages during the build and then pushing/promoting those packages to an internal feed in a dedicated phase.
  • Creating and bundling an installer at the end from all build artifacts.

This configuration is exactly what you want for an official release—but those happen “a few times a month.” Meanwhile, two other scenarios are much more frequent:

1) Manual builds for developers

  • Goal: get an installer from a feature branch quickly for integration or performance testing.
  • Priority: speed and availability of the installer.
  • De-prioritized: code quality analysis (because it will be evaluated in the pull request anyway).

2) Pull requests

  • The most frequent scenario.
  • The four standard phases are present, and they are wrapped in “extensive code quality analysis.”
  • Goal: strong quality signal with reasonable runtime.

This leads to a strategic choice: separate pipelines per scenario, or a single pipeline that is scenario-aware and adapts at runtime. Thomas opts for the latter.

Make the pipeline scenario-aware with variables and conditions

The adaptive behavior hinges on variables and conditions. Thomas shows how to:

  • Detect the source branch and whether the run is a PR or a feature branch build.
  • Derive a custom flag that decides whether to perform code analysis.

Armed with that, you can conditionally execute or skip heavy tasks. A concrete example from the talk is the code quality step (the SonarQube analysis):

  • The task is guarded by a condition.
  • When the condition resolves to false, the analysis is skipped entirely.
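In Azure Pipelines YAML, such a derived flag and guard can be sketched as follows. The variable name runCodeAnalysis is illustrative rather than taken from the talk, and the SonarQube task versions may differ in your organization:

```yaml
# Derive a custom flag from the build context at runtime, then use it
# to guard the expensive code analysis tasks.
variables:
  # true only for pull request runs (runtime expression)
  runCodeAnalysis: $[eq(variables['Build.Reason'], 'PullRequest')]

steps:
  - task: SonarQubePrepare@5        # task version may differ per org
    displayName: Prepare code analysis
    condition: and(succeeded(), eq(variables.runCodeAnalysis, 'true'))

  - script: dotnet build MySolution.sln -c Release
    displayName: Build

  - task: SonarQubeAnalyze@5
    displayName: Run code analysis
    condition: and(succeeded(), eq(variables.runCodeAnalysis, 'true'))
```

When the condition evaluates to false, Azure DevOps marks the task as skipped and spends no time on it.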

The same principle applies to other expensive steps:

  • Installer phase: eight steps totaling about two minutes. With a false condition, the entire phase is skipped.
  • NuGet push/promotion: only necessary in the official release scenario; skipping it saves around four minutes in roughly 95% of runs.
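The same gating works one level up, at stage level, so an entire phase disappears when its condition is false. A minimal sketch, with illustrative stage and variable names:

```yaml
stages:
  - stage: Build
    jobs:
      - job: Build
        steps:
          - script: echo "build artifacts"

  # Skipped entirely unless an installer was requested (manual builds,
  # official releases) -- saving its ~2 minutes on all other runs.
  - stage: Installer
    dependsOn: Build
    condition: and(succeeded(), eq(variables.createInstaller, 'true'))
    jobs:
      - job: CreateInstaller
        steps:
          - script: echo "create and bundle installer"

  # Only official releases push/promote NuGet packages -- roughly 95%
  # of runs skip this stage and its ~4 minutes.
  - stage: PushNuGet
    dependsOn: Build
    condition: and(succeeded(), eq(variables.isOfficialRelease, 'true'))
    jobs:
      - job: Push
        steps:
          - script: echo "push and promote packages"
```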

The guiding idea: pay for heavyweight steps only when they deliver value in that specific scenario.

Attack the unskippable: speeding up the build

There’s one phase you cannot skip in a build pipeline: the build itself. Thomas shares two impactful measures from a .NET stack:

1) Consolidate solutions and projects

  • Starting point: four .NET solutions; the largest had over 100 projects. Total build time ~10 minutes.
  • Actions: merge the four solutions into one, consolidate smaller projects, remove deprecated ones.
  • Outcome: down from ~10 to ~6.5 minutes.

2) Generate NuGet packages only when needed

  • Starting point: official releases require NuGet packages for all projects. Enabling package generation in the build is straightforward, but Microsoft notes it increases build time.
  • Actions: gate package generation on a build parameter (an output path). When the parameter is not set, don’t generate packages. Exclude unit test projects altogether.
  • Outcome: from the already reduced ~6.5 minutes down to about 4 minutes.

The logic is simple and effective: turn on packaging only for the runs that need it (e.g., official releases), and never package test projects. The cumulative impact is significant.
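One way to gate package generation on a build parameter is an MSBuild condition in the project file. The property name PackageOutputPath is our assumption for the "output path" parameter mentioned in the talk; the exact name used there is not stated:

```xml
<!-- In Directory.Build.props or a .csproj: generate packages only when
     an output path is passed on the command line, e.g.
     dotnet build -p:PackageOutputPath=$(Build.ArtifactStagingDirectory) -->
<PropertyGroup Condition="'$(PackageOutputPath)' != ''">
  <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
</PropertyGroup>

<!-- In unit test projects: never produce a package. -->
<PropertyGroup>
  <IsPackable>false</IsPackable>
</PropertyGroup>
```

When the parameter is absent (PRs, manual builds), packaging is skipped entirely; official release runs pass the path and get their packages.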

Tests: make them fast and scenario-appropriate

Build acceleration is only half the story. Thomas then tackles automated tests with two practical recommendations:

1) Identify and fix slow tests

  • In Visual Studio or Rider you can group test results by duration.
  • Guideline: “A single unit test should never exceed 100 milliseconds.”
  • Action: rework tests that don’t meet that bar. In large suites with thousands of tests, shaving milliseconds off individual cases adds up.

2) Categorize tests by severity

  • Bucket tests into severity levels (e.g., critical vs. extended).
  • For scenarios that require very fast feedback (like PRs), run only the critical tests.
  • When time is less constrained (e.g., dedicated pipelines), run extensive test suites such as performance tests.
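Assuming tests are tagged with a category trait (e.g., [Trait("Category", "Critical")] in xUnit, or [Category("Critical")] in NUnit — the bucket names here are illustrative), the scenario split can be sketched with a test filter:

```yaml
steps:
  # Pull requests: fast feedback, run the critical bucket only.
  - script: dotnet test --filter "Category=Critical"
    displayName: Critical tests
    condition: eq(variables['Build.Reason'], 'PullRequest')

  # Dedicated / longer-running pipelines: the full suite,
  # including extended and performance tests.
  - script: dotnet test
    displayName: Full test suite
    condition: ne(variables['Build.Reason'], 'PullRequest')
```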

This isn’t “test less.” It’s test the right things at the right time for the right scenario.

Parallelization: powerful—if your dependencies allow it

Parallelizing jobs across multiple agents sounds like a panacea. Thomas cautions that it often isn’t. Highly interdependent steps or heavy data sharing introduce overhead that can nullify gains.

Two examples illustrate when parallelization shines:

Example 1: Two independent stages on different agents

  • Setup: two stages with one job each. One runs on an agent on TGW’s build server; the other runs in Azure.
  • Property: they are completely independent.
  • Trick: give the second stage an empty dependency list (dependsOn: [] in Azure DevOps YAML) so both stages start immediately when the pipeline starts.
  • Outcome: previously ~7 minutes (two ~3.5-minute stages in sequence), now ~3.5 minutes when run in parallel.
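In Azure Pipelines YAML, the empty-dependency trick looks like this (pool names are illustrative; by default, stages run sequentially in declaration order):

```yaml
stages:
  - stage: OnPremBuild
    pool: SelfHostedBuildServer   # illustrative on-prem agent pool
    jobs:
      - job: Build
        steps:
          - script: echo "runs on the build server"

  - stage: CloudBuild
    dependsOn: []                 # empty list removes the implicit dependency
    pool:                         # on the previous stage: starts immediately
      vmImage: ubuntu-latest
    jobs:
      - job: Build
        steps:
          - script: echo "runs in Azure, in parallel"
```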

Example 2: Three stages with shared artifacts

  • Setup: stage 1 downloads artifacts from previous pipelines and places them in a shared folder.
  • Dependency: stages 2 and 3 both depend on stage 1 but not on each other.
  • Trick: set the dependency in stage 2 and stage 3 to the name of stage 1.
  • Outcome: stages 2 and 3 run in parallel. In this case, the absolute gain was small because the pipeline was already fast—but the pattern remains valuable.
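The fan-out pattern from this example, sketched in YAML with illustrative stage names:

```yaml
stages:
  - stage: Download
    jobs:
      - job: Fetch
        steps:
          - script: echo "download artifacts to the shared folder"

  # Both stages name only Download as their dependency,
  # so they run in parallel once it finishes.
  - stage: TestA
    dependsOn: Download
    jobs:
      - job: Run
        steps:
          - script: echo "independent work A"

  - stage: TestB
    dependsOn: Download
    jobs:
      - job: Run
        steps:
          - script: echo "independent work B"
```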

Principle: parallelize when steps are independent and data sharing is light. Otherwise, decouple first—or keep it sequential by design.

A practical blueprint: from analysis to implementation

Distilling Thomas’s talk into an actionable flow, we recommend the following steps:

1) Inventory your scenarios

  • Identify the real use cases: official release (rare), developer-triggered manual builds (a few times a day), and pull requests (most frequent)—this was Thomas’s trio.
  • For each scenario, capture goals and must-haves (e.g., is an installer required? should code quality run now or later?).

2) Make behavior conditional

  • Surface the variables that matter: source branch, PR context, derived flags like “run code analysis.”
  • Guard expensive phases with conditions: code quality, installer creation, NuGet push/promotion.
  • Goal: skip what’s not needed in the majority of runs (Thomas cited around 95% for the NuGet push/promotion).

3) Slim down the build

  • Consolidate solutions and projects; remove deprecated code.
  • Gate packaging behind a build parameter and exclude test projects from packaging.
  • Expected results (from Thomas’s experience): ~10 → ~6.5 → ~4 minutes.

4) Focus your tests

  • Group by duration, bring individual unit tests under 100 ms.
  • Define severity buckets: “critical” for fast feedback, “extended/performance” for longer-running pipelines.

5) Parallelize with intent

  • Identify truly independent units and distribute them across agents.
  • Model dependencies explicitly (e.g., an empty dependency list so a stage starts immediately, or stages 2 and 3 both naming stage 1 as their dependency).
  • Be realistic about data volumes and copy overhead.

Common pitfalls this approach avoids

  • Running everything in every run: letting rare release needs dominate daily PR feedback loops. Fix: scenario-based activation/conditions.
  • Enabling NuGet packaging by default: convenient but expensive. Fix: only when a parameter is set (such as an output path), and never for test projects.
  • Overloading hot paths with non-critical tests: Fix: honor severity buckets; keep fast lanes fast.
  • Parallelizing tightly coupled steps: leads to IO/synchronization overhead. Fix: decouple first or keep sequential execution.

What stood out most

  • “There is no one-size-fits-all.” Context matters. A time-saver for one team may not matter for another.
  • “Focus on the most frequent use case and make it fast as hell.” That single prioritization lens aligns engineering effort with maximum impact—typically the PR path.
  • Small, smart changes compound: skipping the installer (about two minutes), skipping NuGet push/promotion (around four minutes in ~95% of runs), consolidating the build (~10 → ~6.5 → ~4 minutes). These add up to notably shorter cycle times.

Conclusion: treat your pipeline like a product

“Speed Up Your Pipeline” by Thomas Svoboda (TGW Logistics Group) is a case study in product thinking applied to CI/CD: define your audiences (scenarios), deliver just enough functionality (conditions), remove bloat (consolidation), and optimize for the most frequent path (fast feedback where it matters most).

Azure DevOps makes all of this possible with the tools you already have. The real change is in how you design and reason about your pipeline. Ask the questions Thomas posed and answer them honestly:

  • Which scenario runs most often—and how do we make it as fast as possible?
  • Which steps are expensive—and in which runs are they unnecessary?
  • Where are we inflating the build—and how do we slim it down?
  • Which tests are truly critical—and which belong in longer-running pipelines?
  • Where does parallelization genuinely help—and where does it just add overhead?

Translate those answers into variables, conditions, and a clean stage/job graph, and the speed gains will follow. Or, to echo Thomas’s closing thought: there is no universal recipe—but there is one reliable rule. Find your scenarios. Optimize the most frequent one. Make it “fast as hell.”
