Bosch-Gruppe Österreich

Jasmin Grabenschweiger, Data Scientist bei Bosch

Description

In this interview, Jasmin Grabenschweiger, Data Scientist at Bosch, talks about her career path, shares tips for newcomers, and gives insights into everyday data science work with examples.

Video Summary

In "Jasmin Grabenschweiger, Data Scientist bei Bosch," Speaker Jasmin Grabenschweiger traces her path from a business-oriented school through a mathematics degree and a quantitative master’s in production/logistics to a PhD and her current role as a Data Scientist at Bosch. She works with large engineering and manufacturing datasets in mobility, uses Spark, builds ML classifiers for screw-tightening curves, and deploys models via MLflow into automated pipelines while collaborating closely with domain experts in a Scrum team. Her advice: enjoy data and mathematical problem solving, stay open to new technologies, learn on the job, and expect real-world data to defy textbooks; a formal degree helps but isn’t strictly required.

From Combinatorial Optimization to Industrial Data Science: Jasmin Grabenschweiger (Bosch‑Gruppe Österreich) on Spark, Screwdriving Curves, and Agile ML

What we heard in “Jasmin Grabenschweiger, Data Scientist bei Bosch” (Bosch‑Gruppe Österreich)

From our DevJobs.at editorial seat, listening to “Jasmin Grabenschweiger, Data Scientist bei Bosch” at Bosch‑Gruppe Österreich, one thing became immediately clear: her path is deeply quantitative and unapologetically practical. She moves fluently between mathematics, programming, real‑world manufacturing, and close collaboration with process experts—inside an agile setup that favors short feedback loops.

Jasmin works with industrial‑scale data from product development and manufacturing in the Mobility context. Big data frameworks like Spark are part of her daily toolkit for processing and streaming. Her tasks range from data preprocessing and sensor signal analysis to machine‑learning‑driven use cases deployed via MLflow as automated prediction pipelines. A vivid example: classifying screwdriving curves from the assembly line so process experts can act quickly and confidently.

From a business‑oriented school to a mathematics degree

Jasmin starts her story with a higher‑level secondary education focused on economic professions. It wasn’t very scientific or technical by design. It was enjoyable, she says, but ultimately “too little mathematics” for her taste. That desire for more math—more formal, quantitative problem‑solving—became the compass for what followed.

She chose to continue her education and completed a Bachelor’s in Mathematics at the University of Vienna. That set the foundation: analytical thinking, modeling, proofs, and comfort with abstraction. Crucially, it hints at a mindset: approaching problems mathematically and being eager to translate insights into working solutions.

A quantitative Master’s: production, logistics, and combinatorial optimization

For her Master’s, Jasmin moved toward business administration yet focused on production and logistics—“very quantitative,” as she emphasizes. The heart of that work: mathematical models, in particular combinatorial optimization problems. This is where theory, algorithms, and software begin to interlock.

Computer science played an important role too. She implemented solution algorithms in Python or C++. That point matters: it explains why the later move into practical data science felt organic. Implementing optimization routines hones efficiency, data‑structure awareness, and the path from model idea to robust implementation.

From Master’s thesis to doctoral work: delivery logistics as a proving ground

Jasmin’s Master’s thesis lives in delivery logistics. It’s a domain rich in combinatorial structure—routes, time windows, capacities, disruptions. She notes that “this continues in a doctoral thesis,” meaning deeper work in the same or adjacent problem spaces.

For us, this is not a departure from practice; it’s an intensification of a stance. The stance is to cast problems as models, apply algorithms, and assess results against real‑world requirements.

Today: Data Scientist at Bosch with Engineering and Manufacturing (Mobility) data

“Now I’m at Bosch as a Data Scientist,” Jasmin says. Her data comes from product development and production—Engineering and Manufacturing—in the Mobility area. This is where theory meets the day‑to‑day of industrial reality. Data streams are large, processes complex, quality demands high.

The natural consequence: “Big‑data‑capable frameworks” are a must. In her context, that’s “mostly Spark,” which she uses for “processing and streaming the data.” From our vantage point, this is a healthy reality check: industrial data science often means distributed processing, resilient pipelines, and seamless integration with the existing engineering and manufacturing ecosystem.
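She doesn't walk through code in the session, but a minimal PySpark sketch conveys the kind of distributed processing she describes. Everything specific in it — the inline stand‑in data, the app name, and column names like station_id and torque — is our illustrative assumption, not something named in the talk.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("screwdriving-preprocessing").getOrCreate()

# Tiny stand-in for measurements that would normally come from the
# manufacturing data lake, e.g. via spark.read.parquet(...). Column names
# are illustrative assumptions.
curves = spark.createDataFrame(
    [("st-01", 11.8), ("st-01", 12.4), ("st-02", 10.9)],
    ["station_id", "torque"],
)

# Basic distributed aggregation: peak torque and sample count per station
summary = (
    curves.groupBy("station_id")
    .agg(F.max("torque").alias("peak_torque"), F.count("*").alias("n_samples"))
)
summary.show()
```

For the streaming side she mentions, Spark's Structured Streaming exposes the analogous readStream/writeStream API on the same DataFrame abstractions, so batch logic like the aggregation above carries over.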

A wide task spectrum: preprocessing, sensor signals, and ML

Jasmin outlines a broad task portfolio: algorithms for data preprocessing, analysis of sensor signals, and—more recently—machine learning approaches “which could also be labeled AI.” The spread is telling: industrial data science rarely starts with the model. It starts with the data—with the measurement setup, with the nature of signals, and with the question of what a curve is trying to tell us about the underlying process.
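She doesn't name specific preprocessing algorithms, so the following is only a hedged illustration of what working on sensor signals can look like: smoothing a single torque curve and reducing it to a handful of summary features. The smoothing window and the feature set are our assumptions.

```python
import numpy as np

def extract_curve_features(torque: np.ndarray, window: int = 5) -> dict:
    """Return a few summary features for one torque-over-time curve."""
    # Moving-average smoothing to suppress sensor noise
    kernel = np.ones(window) / window
    smoothed = np.convolve(torque, kernel, mode="valid")

    return {
        "peak_torque": float(smoothed.max()),
        "final_torque": float(smoothed[-1]),
        "mean_slope": float(np.mean(np.diff(smoothed))),
        "n_samples": int(len(torque)),
    }

# Example with a synthetic, noisy curve
rng = np.random.default_rng(0)
curve = np.linspace(0, 12, 200) + rng.normal(0, 0.3, 200)
print(extract_curve_features(curve))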

That also explains why domain knowledge—of procedures, machinery, and production steps—is indispensable. Models are only as good as the understanding of the data feeding them.

A concrete case: classifying screwdriving curves in manufacturing

Things become especially tangible with “screwdriving curves”—the torque/path profiles captured during assembly. “Things can go wrong in the screwdriving process,” Jasmin explains. When they do, the curve exhibits certain characteristics. Process experts want those cases categorized or classified. Her team is building a classification model for that purpose.

The logic is quintessentially industrial: once curves are automatically mapped into meaningful classes, experts can decide more quickly where action is needed—whether a component should be replaced or a process step fine‑tuned. Automated classification creates speed and consistency, both of which translate directly into quality and efficiency on the shop floor.
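The session doesn't reveal which model family the team uses. As a sketch under that caveat, a simple feature-based classifier in scikit-learn shows the general shape of the task; the features, the class labels ("ok" vs. "rework"), and the random placeholder data are our assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Placeholder feature matrix (e.g., peak torque, final torque, mean slope)
# and labels that would in practice come from process experts
X = np.random.rand(500, 3)
y = np.random.choice(["ok", "rework"], size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42
)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```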

Start with the data: quality, structure, relationships

Before training comes groundwork: “At the beginning there’s a lot of work looking at data quality or finding structures and related data.” That may sound unglamorous, but it’s essential. Without a handle on data quality and structure, any model choice is blind.

Only after understanding which signals are related and how reliable they are does it make sense to move on. Then “you train a model”—with the caveat that you need “good training data.” That’s often the bottleneck: training data isn’t automatically good. It needs curation, plausibility checks, and sometimes fresh collection.
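What such a quality pass can look like in code is easy to sketch, even though the talk stays at the conceptual level: profile missing values, duplicates, and label balance before any training. The table and its column names (curve_id, label) below are hypothetical.

```python
import numpy as np
import pandas as pd

# Small stand-in for a labeled curve-feature table; in practice this would be
# built from the preprocessed manufacturing data. Column names are assumptions.
df = pd.DataFrame({
    "curve_id": [1, 2, 2, 3, 4],
    "peak_torque": [11.8, 12.1, 12.1, np.nan, 11.5],
    "label": ["ok", "ok", "ok", "rework", "ok"],
})

report = {
    "rows": len(df),
    "missing_per_column": df.isna().sum().to_dict(),
    "duplicate_curve_ids": int(df["curve_id"].duplicated().sum()),
    "label_balance": df["label"].value_counts(normalize=True).to_dict(),
}
print(report)
```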

From training to operations: MLflow and automated prediction pipelines

The trained model “is deployed via MLflow.” Next, it “runs in an automated prediction pipeline,” so that new screwdriving curves are classified automatically. For practitioners, there’s a dashboard where results are visible. Process experts “can derive measures or draw conclusions about the process”—including concrete steps like replacing components when appropriate.
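Beyond naming MLflow, the session doesn't detail the deployment setup. A minimal sketch of logging a trained model and loading it back the way an automated prediction step could illustrates the mechanism; the experiment name and the toy training data are our assumptions.

```python
import mlflow
import mlflow.sklearn
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Tiny stand-in training set (in practice: curated screwdriving-curve features)
X = np.random.rand(200, 3)
y = np.random.choice(["ok", "rework"], size=200)
clf = RandomForestClassifier(random_state=42).fit(X, y)

# Log the model and a metric to an MLflow run
mlflow.set_experiment("screwdriving-curve-classification")
with mlflow.start_run() as run:
    mlflow.sklearn.log_model(clf, artifact_path="model")
    mlflow.log_metric("train_accuracy", float(clf.score(X, y)))

# Later, inside the automated prediction pipeline:
loaded = mlflow.pyfunc.load_model(f"runs:/{run.info.run_id}/model")
print(loaded.predict(np.random.rand(5, 3)))  # features of new curves
```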

This end‑to‑end view underscores how production‑readiness is taken seriously: from data ingestion and model training to deployment and integration into workflows that decision‑makers actually use.

Domain expertise is non‑negotiable: requirements and process understanding

Jasmin stresses close collaboration with process experts repeatedly: “They are very important to us and we work very closely with them.” At the beginning of a use case comes “requirements engineering,” so the team “understands the problem at all.” Just as crucial is understanding “the processes behind the data.”

This exchange does not end after initial scoping. It continues throughout the lifecycle—because models are only useful insofar as they fit living practice. For us, this is one of the session’s key lessons: industrial data science is a team sport with strong contributions from the domain side.

Agile in a Scrum team: short iterations, fast response to change

Jasmin works in a Scrum team and develops in an agile fashion—“in short iterations or so‑called sprint cycles.” Depending on where you come from, that may feel “a bit unfamiliar at first,” but it quickly became her “natural flow.” The payoff is obvious: “react quickly” when requirements change—and they do, especially in collaboration with process experts.

Agility here isn’t a buzzword; it’s a response to real‑world volatility. Processes evolve, insights from analyses shift priorities, new sensors or changes in production reshape the data landscape. Sprints supply structure and feedback frequency.

A common surprise: real data doesn’t behave like the textbook

One line sticks: “What often surprises people is that data often doesn’t behave like the textbook.” In textbooks, datasets are tidy, problem statements neatly scoped, labels pristine. In practice, measurements are incomplete, signals noisy, histories uneven, and context knowledge distributed.

Jasmin doesn’t frame that as a downside—quite the opposite: “That makes it super exciting.” For data scientists, this is the thrill: reality is rich and messy, forcing us to build solutions that are robust rather than fragile.

Stay open, keep learning, refresh the theory

Technologies “change quite often.” It’s essential to “always be open to new things.” Jasmin moves across “different domains”—today screwdriving curves, tomorrow something else entirely. Openness isn’t a soft extra; it’s a core requirement for the learning curve.

At the same time, she recommends “sometimes taking a step back” and “diving into theoretical foundations again.” Online courses on e‑learning platforms are a practical way to do that. The rhythm—learn on the job, then refresh theory—keeps your toolset sharp.

Getting in: degrees help, curiosity matters more

A striking, encouraging point: “A degree is not strictly necessary.” The right school‑level training “can also be sufficient.” What matters most is “interest in technical‑analytical questions,” enjoyment in working with data, and curiosity about “what you can do with the data,” “what you can find out with it,” and what you can “automate.” That motivation is what carries you through the inevitable challenges of real projects.

For us, this is good news for talent from diverse educational paths: the emphasis is less on diplomas and more on the ability to think in data, build solutions, and keep them running in production.

Takeaways from “Jasmin Grabenschweiger, Data Scientist bei Bosch”

From the Bosch‑Gruppe Österreich session, we distilled a set of practical principles:

  • Data first: Understand data quality, structure, and relationships before choosing models.
  • Iterate: Requirements change; sprints keep learning loops short.
  • Embed domain knowledge: Process experts are co‑authors of effective solutions.
  • Treat production as a first‑class concern: Deployment (e.g., via MLflow) and automated pipelines are part of the job.
  • Embrace messy reality: Not everything is tidy; robustness beats elegance.
  • Keep learning: Stay open to new tech and refresh theoretical foundations deliberately.
  • Follow your curiosity: Motivation to explore and automate matters as much as formal credentials.

Why the screwdriving‑curve use case is so instructive

This classification case bundles many hallmark elements of industrial data science:

  • A recurring process step (screwdriving) produces rich time series (curves).
  • Failure modes display characteristic patterns that algorithms can spot.
  • Data volume justifies big‑data tools like Spark, including streaming.
  • Labels and training data become the choke point—quality decides outcomes.
  • Deployment matters: Only with automated prediction pipelines and dashboards do insights translate into action.
  • Domain feedback closes the loop: Measures are derived, processes improved.

This is exactly why manufacturing‑oriented data science is compelling: the loop from data to decisions is short and tangible. Each improvement shows up in quality and efficiency.

Culture: collaboration by default

Jasmin speaks naturally about teamwork: Scrum, close alignment, complementary competencies. In such settings, excellence rarely happens in isolation. The interface between data and domain is a shared responsibility—and the best solutions live where domain expertise, data understanding, and software engineering reinforce one another.

For data scientists, that’s an invitation to treat communication and translation as part of the craft: clarifying requirements, making assumptions explicit, presenting results in a way that invites feedback—and keeping that loop alive.

Practical impulses for aspiring data scientists

Translating her insights into action suggests a compact playbook:

  1. Begin at the source: Know the sensors, how data is generated, and the operational context.
  2. Diagnose the dataset: Quality, gaps, outliers, dependencies—understand before you model.
  3. Think about operations early: Plan for deployment (e.g., via MLflow) and monitoring from day one.
  4. Work with domain pros: Requirements, labeling, validation—none of it happens in a vacuum.
  5. Embrace iteration: Prove value quickly, adjust, and deepen where it matters most.
  6. Maintain your foundations: Targeted theory refresh will improve your decisions.

These aren’t steps toward textbook perfection; they’re steps toward robustness. And robustness is what you need when the data isn’t textbook‑tidy.

Conclusion: A path that embraces reality

“Jasmin Grabenschweiger, Data Scientist bei Bosch” sketches a straightforward yet open‑ended journey into industrial data science: from mathematics through combinatorial optimization and hands‑on algorithmics to production‑grade applications with Spark, MLflow, and agile teamwork. The message is crisp: enjoy math and data, respect domain expertise, and keep learning. That combination is where good solutions come from.

The fascination lies in the tension between theory and workshop floor, between curves and consequences, between models and measures. For anyone who seeks that tension, industrial data science is an excellent place to be—and Jasmin’s story is a motivating reference.

Learn more about the Bosch group: www.bosch.de
