evon GmbH
Scheduling Optimization in Practice
Description
In his devjobs.at TechTalk, Michael Missethan of evon shares some theoretical groundwork on the logic of scheduling and walks through an interesting optimization example.
Video Summary
In Scheduling Optimization in Practice, Michael Missethan (evon GmbH) explains what scheduling is and why it’s hard: combinatorial explosion, cascading dependencies, NP-hardness, hidden constraints, and uncertainty. He outlines a practical workflow from precise problem/constraint definition and mathematical modeling to heuristic algorithms and evaluation, and demonstrates it with a manufacturing example in which a greedy schedule improves the makespan from 22 to 21 hours, while a small deliberate delay achieves an optimal 16-hour plan. Viewers come away ready to formalize constraints rigorously and apply iterative, heuristic methods to produce high-quality, feasible schedules.
Scheduling Optimization in Practice: A Practical Playbook from Michael Missethan (evon GmbH)
Why this session matters for engineers
At DevJobs.at, we watch many talks on optimization. Few are as crisp and grounded as “Scheduling Optimization in Practice” by Michael Missethan of evon GmbH. He structures the topic around three questions: What is scheduling? Why is it hard? And how do we still get good solutions in real-world settings?
The talk’s core message is refreshingly pragmatic: scheduling is everywhere—school timetables, manufacturing lines, personal routines—and it’s hard both in theory and in practice. Yet there is a repeatable way to build better schedules. That combination of realism and mathematical rigor makes this session especially useful for engineering teams.
Speaker, company, and context
Michael Missethan is a mathematician and software developer at evon GmbH, with a strong focus on algorithmic optimization problems. evon works in software development and services, building among other things the automation platform XamControl. According to Missethan, evon was founded in 2009; the headquarters is in St. Ruprecht an der Raab, east of Graz, and the company has around 95 employees.
This blend—mathematical precision and practical software delivery—shapes the entire talk: get the problem definition right, translate it into a clean model, and only then design algorithms that make sense for the real constraints.
What is scheduling—and what does optimization mean here?
“Roughly speaking, it’s just planning or deciding what to do, when and where under certain constraints.” With that definition, Missethan sets the scope: scheduling is the assignment of tasks/jobs to resources (machines, rooms, people) across time and space while respecting constraints.
Typical objectives include:
- Minimizing cost or completion time (makespan)
- Balancing workload, for instance in timetabling so that teachers carry comparable loads
The everyday analogy is deliberate: we all make schedules—shopping, gym, lunch. The engineering difference is that we spell out goals and constraints explicitly and evaluate how well a solution meets them.
The manufacturing example: four tasks, multiple machines, strict sequences
To make things concrete, Missethan uses a manufacturing scenario. There are four tasks, each consisting of jobs that must run in a fixed order on specific machines, each for a given duration.
An example task:
- 3 hours on machine 1
- 2 hours on machine 2
- 4 hours on machine 3
Crucially, jobs within a task have to follow one another immediately. The aim is to schedule all tasks without machine conflicts and to minimize the total completion time (makespan).
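To make the setup tangible in code, here is a minimal sketch of one way to represent such tasks and to compute the makespan of a candidate plan. This is our own illustration, not code from the talk; only the example task above is encoded, since the other tasks’ durations are not listed here, and the helper relies on the fact that one start time per task pins down its whole chain of jobs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Job:
    machine: int   # machine the job must run on
    duration: int  # processing time in hours

# The example task from the talk: 3 h on machine 1, then 2 h on machine 2,
# then 4 h on machine 3, with no gaps between the jobs.
example_task = [Job(machine=1, duration=3),
                Job(machine=2, duration=2),
                Job(machine=3, duration=4)]

def makespan(tasks: dict[str, list[Job]], starts: dict[str, int]) -> int:
    """Latest finishing time over all tasks, given one start time per task.

    Because the jobs of a task follow one another immediately, a single
    start time determines every job's time window within that task.
    """
    return max(starts[name] + sum(job.duration for job in jobs)
               for name, jobs in tasks.items())

print(makespan({"task_1": example_task}, {"task_1": 0}))  # -> 9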
Missethan first presents a non-optimal schedule. It starts with task 3 on machine 2, switches after three hours to machine 1, then proceeds with task 2, and so on. In that arrangement, task 1 starts quite late, task 4 even later, and the overall duration is 22 hours.
“If you would like to have a small puzzle, just pause the video and try to come up with a better schedule.”
Even this small instance shows how nontrivial improvements can be—and how much harder it is to find the best schedule.
Why scheduling is hard: theory and practice
Theoretical reasons
1) Combinatorial explosion: With 30 tasks, there are already 30! possible orderings, an enormous number with 33 digits. Even modern computers would take “billions of years” to enumerate them all; a quick back-of-the-envelope check follows after this list.
2) Interdependencies: A single change triggers a chain reaction. Missethan uses the train analogy: one delay causes another, and so on. In scheduling terms, a local tweak can force shifts across machines and tasks.
3) Complexity theory: “Most of the scheduling problems are so-called NP-hard.” In other words, no efficient algorithm is known that always finds the optimal solution. For engineers, this is more than academic: it sets realistic expectations about what to optimize for and how to measure success.
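To put a number on the combinatorial explosion mentioned in point 1, here is a quick sanity check (ours, not from the talk) of the 30-task figure:

```python
import math

n = 30
orderings = math.factorial(n)            # number of ways to order 30 tasks
print(len(str(orderings)))               # -> 33 digits

# Even if we could check a billion orderings per second:
seconds = orderings / 1e9
years = seconds / (60 * 60 * 24 * 365)
print(f"roughly {years:.1e} years")      # on the order of 10**15 years
```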
Practical reasons
1) Hidden or complex constraints: Not all constraints are obvious before optimization. In manufacturing, for example, machines may overheat if used too long without a break—an effect you might never see with weak baseline schedules, but which appears once you push utilization.
2) Uncertainty and change: Machines break, availability shifts. Scheduling must be ready for disruptions—and models should be designed to adapt as conditions change.
3) Knowing how to optimize: “We don’t know how to optimize.” Without a precise definition of objectives and constraints, teams may optimize for the wrong thing—or produce plans that look good on paper but can’t be executed in practice.
The bottom line: scheduling is hard—in idealized theory and in messy reality. And yet, as Missethan emphasizes, “In practice we can still do something and we can still find some reasonable solutions.”
A practical workflow: from scoping to evaluation
Missethan lays out a process we recognize from successful optimization projects. Notably, he highlights the steps before algorithm design—and cautions against over-indexing on the “fun” part.
1) Gather data and define the problem rigorously
- Enumerate all jobs/tasks
- Specify objectives clearly
- Collect and make explicit all constraints
A key observation: many constraints are tacit in operations but nowhere written down. If you forget them during modeling, you’ll end up with a schedule that cannot be executed.
2) Translate into a mathematical model
- Convert the verbal description into formulas
- Define decision variables, parameters, and data structures
This is the foundation of the algorithmic work. A precise model brings clarity and surfaces trade-offs that are easy to miss in informal discussions.
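As an illustration of what such a model can look like for the manufacturing example, here is a standard disjunctive formulation (our sketch; the talk itself does not show formulas), with decision variables $s_{i,j}$ for the start time of job $j$ of task $i$ and data $d_{i,j}$ for its duration:

```latex
\begin{align*}
\min\quad & C_{\max} \\
\text{s.t.}\quad
  & s_{i,j+1} = s_{i,j} + d_{i,j}
    && \text{jobs of a task follow one another immediately} \\
  & s_{i,j} + d_{i,j} \le s_{k,l} \;\lor\; s_{k,l} + d_{k,l} \le s_{i,j}
    && \text{for distinct jobs sharing a machine (no overlap)} \\
  & s_{i,j} + d_{i,j} \le C_{\max}, \qquad s_{i,j} \ge 0
    && \text{for all jobs } (i,j)
\end{align*}
```

Solvers typically handle the disjunction with big-M constraints or interval variables; for the purposes of the talk, the formulation mainly serves as the shared, unambiguous description of the problem.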
3) Design algorithms for approximations and heuristics
- Since optimal solutions are often out of reach, rely on heuristics or approximation techniques
- Aim to get “as close to the optimum as possible”
Depending on the context, simple heuristics can produce strong baselines—or act as stepping stones for iterative improvements.
4) Evaluate: feasibility and performance
- Check feasibility (are all constraints satisfied?)
- Measure performance (how well are objectives achieved?)
In practice, this process is a cycle, not a line. Evaluation informs model and data; new insights lead to refined heuristics.
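The feasibility half of the evaluation can be automated in a few lines. The following is a minimal sketch under our own encoding (one tuple per scheduled job), not tooling shown in the talk:

```python
# schedule maps task name -> list of (machine, start, duration) tuples,
# listed in the order the jobs must be processed.
def is_feasible(schedule: dict[str, list[tuple[int, int, int]]]) -> bool:
    # 1) Jobs within a task must follow one another immediately.
    for jobs in schedule.values():
        for (_, s1, d1), (_, s2, _) in zip(jobs, jobs[1:]):
            if s2 != s1 + d1:
                return False
    # 2) No two jobs may overlap on the same machine.
    by_machine: dict[int, list[tuple[int, int]]] = {}
    for jobs in schedule.values():
        for machine, start, duration in jobs:
            by_machine.setdefault(machine, []).append((start, start + duration))
    for intervals in by_machine.values():
        intervals.sort()
        for (_, end_a), (start_b, _) in zip(intervals, intervals[1:]):
            if start_b < end_a:
                return False
    return True
```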
Don’t over-focus on algorithms
Missethan notes that teams often fixate on step three. Mathematically, it’s the most challenging—and alluring. But it’s dangerous if the first steps are weak. Define objectives imprecisely, and you may end up with a schedule optimized for unintended objectives. Forget constraints, and you’ll produce an infeasible plan.
The greedy heuristic: early-as-possible—and its limits
As a lightweight starting point, Missethan presents the greedy heuristic: “We try to schedule each task as early as possible.” The steps:
1) Schedule task 1 as early as possible
2) Schedule task 2 as early as possible
3) Task 3 can’t start before hour 10 because machines are occupied
4) Schedule task 4 as early as possible
This yields a 21-hour makespan—an improvement over the initial 22-hour schedule.
“Our greedy algorithm improved our schedule at least by one hour.”
But greedy’s weakness is fundamental: it is locally optimal at each step but may be globally suboptimal. A small, counterintuitive change can drastically improve the overall outcome.
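The rule itself is easy to express in code. The sketch below is our reading of the “as early as possible” rule, not code from the session; it treats each task as a rigid chain of (machine, duration) jobs, assumes tasks are placed in a fixed order, and only shows the mechanism, since the full four-task data is not reproduced in this article.

```python
def earliest_start(task, busy, horizon=10_000):
    """Smallest start time at which every job of the task fits on its machine."""
    # Precompute each job's offset from the task's start time.
    offsets = []
    t = 0
    for machine, duration in task:
        offsets.append((machine, t, duration))
        t += duration
    for start in range(horizon):
        fits = all(
            start + off + duration <= b or start + off >= e
            for machine, off, duration in offsets
            for b, e in busy.get(machine, [])
        )
        if fits:
            return start
    raise ValueError("no feasible start found within the horizon")

def greedy_schedule(tasks):
    """Place the tasks in the given order, each as early as possible."""
    busy: dict[int, list[tuple[int, int]]] = {}  # machine -> occupied [start, end) intervals
    starts: dict[str, int] = {}
    for name, task in tasks.items():
        start = earliest_start(task, busy)
        starts[name] = start
        t = start
        for machine, duration in task:
            busy.setdefault(machine, []).append((t, t + duration))
            t += duration
    return starts
```

Precisely because each task is placed greedily and never revisited, the procedure can miss plans that only become possible when an earlier task is deliberately pushed back, which is what the next example shows.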
The aha moment: a one-hour delay saves five hours overall
Repeating the process with one modification—delaying task 2 by one hour—creates a larger gap on machine 2 between hours 5 and 8. Task 3 can start much earlier in that gap; task 4 then moves earlier as well. The result: a 16-hour schedule.
“This counterintuitive delay of one hour improved the schedule at the end by five hours.”
This illustrates two essential truths that apply broadly:
- Local optimality does not guarantee global optimality.
- Small changes can trigger large cascades—positive or negative.
For us, this was the most memorable moment of the session: a crisp, quantitative demonstration of why naive early-start strategies can fail in tightly coupled systems.
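This kind of probing is also cheap to automate. Below is a tiny sketch, assuming a build_schedule(tasks, forced_delays=...) variant of the greedy routine above that pushes the chosen tasks’ earliest allowed start back by the given number of hours, and an evaluate function that returns a schedule’s makespan (both hypothetical helpers, not from the talk):

```python
def probe_delays(tasks, build_schedule, evaluate, max_delay=3):
    """Try small forced delays per task and keep the variant with the best makespan."""
    best_value = evaluate(build_schedule(tasks, forced_delays={}))
    best_variant: dict[str, int] = {}
    for name in tasks:
        for delay in range(1, max_delay + 1):
            candidate = {name: delay}
            value = evaluate(build_schedule(tasks, forced_delays=candidate))
            if value < best_value:
                best_value, best_variant = value, candidate
    return best_value, best_variant
```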
Practical takeaways for engineering teams
Several actionable principles emerge from the talk. None of them are silver bullets; all are operational:
1) Put problem definition first: Make objectives and constraints explicit. Avoid “unintended objectives” by documenting metrics and priorities.
2) Model before you optimize: Once variables, dependencies, and restrictions are clear, algorithm work pays off. The model is the shared language for comparing solutions.
3) Expect unknowns: Some constraints appear only after optimization (e.g., thermal limits when running machines longer). Iteration is a feature, not a flaw.
4) Use local heuristics as baselines: Greedy often gives a fast, decent baseline. But don’t defend it dogmatically—probe the schedule for small delays that unlock larger benefits.
5) Evaluate twice: First for feasibility, then for performance. No feasibility, no delivery; weak performance, no progress.
6) Embrace the cycle: Evaluation refines the model and data; new information reshapes heuristics. Treat this loop as part of the plan.
7) Set realistic ambitions: NP-hardness isn’t academic trivia; it’s product reality. Be explicit about the quality targets you’re aiming for and the compute/complexity you’re willing to spend.
A minimal mental model for scheduling projects
We found it useful to compress the workflow into a project-ready checklist—aligned with the session:
- Context: Which resources, tasks, and dependencies? Which sequences are strict, which are flexible?
- Objectives: Primary goal (e.g., makespan), secondary goals (e.g., balanced workload). Set priorities.
- Constraints: Hard (physical/contractual) vs. soft (preferences). Document, version, and validate.
- Data quality: Completeness, freshness, sources. Mark uncertainties.
- Model: Variables, constraints, objective function. Write it down clearly so everyone can reference it.
- Algorithm: Start with a simple heuristic (e.g., greedy), then iterate deliberately.
- Evaluation: Automate feasibility checks; establish performance metrics.
- Iteration: Feed back evaluation into model and data; make changes transparent.
This mirrors the talk’s workflow and meshes well with how software teams actually operate.
What the talk doesn’t promise—and why that’s valuable
Notably, Missethan refrains from selling a miracle algorithm. There’s no shortcut to guaranteed optimality. Instead, he sets expectations: “Finding the best solution is typically out of range,” so we focus on good approximations via sound process.
That restraint is what makes the talk valuable. It equips teams with the right questions and helps them invest where returns are highest: rigorous scoping, clean modeling, and iterative improvement—not premature hunts for perfect optimality.
Getting started: concrete first steps
If you want to act on the session immediately, these steps are lightweight and fully consistent with the talk:
- Build a constraint checklist: What rules already exist? What tacit practices should be formalized?
- Prioritize objectives: When goals conflict, which one wins? Make that explicit.
- Create a baseline: Use a greedy heuristic to generate a first schedule and record its makespan.
- Run “what if” tweaks: Nudge individual tasks (e.g., +1 hour delay) and measure cascading effects.
- Automate feasibility checks: Every schedule revision gets tested against known constraints.
- Set a review cadence: Regularly revisit results and refine model/constraints—the cycle is part of the plan.
Conclusion: Realistic rigor beats wishful thinking
“Scheduling Optimization in Practice” by Michael Missethan (evon GmbH) delivers exactly what the title promises: practice. A crisp vocabulary, a solid workflow, and a vivid example that exposes greedy’s limits while showing how small changes can yield big wins. With a systematic approach, you can make substantial improvements—even if the global optimum remains out of reach.
Perhaps the most actionable line for teams facing scheduling problems: a small, deliberate intervention—like delaying one task by an hour—can save five hours overall. Accept that dynamic, model carefully, evaluate relentlessly, and iterate. That was the strongest message we took from this session.