HID Global GmbH
Stephan Puri-Jobi, Software Test Lead von HID Global
Description
In this interview, Software Test Lead Stephan Puri-Jobi of HID Global shares insights into how the company and its agile testing team are structured, which technologies the team works with, and how recruiting for the team is handled.
Video Summary
In “Stephan Puri-Jobi, Software Test Lead von HID Global,” Stephan Puri-Jobi describes how his 15-person test and QA team operates in SAFe: short dailies for visibility and fast unblocking, three-week sprints with retros that deliberately drive change, and careful sprint and test planning with buffers. The team runs fully automated tests (C#/.NET, NUnit, TestRail; CI/CD via Jenkins with Groovy and Python; Jira, Git) against smart-card credentials treated as a black box, emphasizing negative and security scenarios and relying on a reset strategy to keep long pipelines stable. Hiring emphasizes clear job descriptions and brief, friendly interviews focused on capability, team fit, and knowledge sharing, while onboarding supports new hires with training materials, mentoring, beginner tasks, reviews, and ample ramp-up time.
Inside HID Global’s 100% Automated QA: Stephan Puri-Jobi on SAFe, CI/CD with Jenkins, and C# Testing for Secure Smart Cards
What we learned from the session with Stephan Puri-Jobi (HID Global GmbH)
In the session “Stephan Puri-Jobi, Software Test Lead von HID Global” (Speaker: Stephan Puri-Jobi, Company: HID Global GmbH), the Software Test Lead opens the door to a quality organization built entirely on automation—from test design to a resilient CI/CD pipeline. His team verifies security-critical smart card credentials without a single manual test, under hard constraints: black-box devices, long-running test suites, complex standards, and the need to systematically exercise error conditions to ensure no security information ever leaks.
“So far, we don't have one single manual test.”
From our DevJobs.at editorial vantage point, this is a blueprint for modern QA at scale: how a 15-person team works in SAFe, drives continuous improvement, onboards people into a highly technical domain, and hardens a pipeline to withstand hours-long suites and adversarial test cases. It also explains why this environment is an attractive destination for engineers who love building quality through code.
The team’s mission: Secure, reliable smart card credentials
Stephan Puri-Jobi’s team tests credentials—smart cards you might encounter in banking or at hotels. The mandate is reliability and security. The twist: the team must treat every card as a black box. There’s no peeking inside, no debug connection, no introspection beyond the card’s observable behavior.
“We are testing a piece of plastic. We don't have any debug connections… We really have to test it from the outside.”
The sharp end of the problem lies in negative scenarios: incorrect use, wrong commands, the right command with wrong data, wrong flows, and other misuse patterns. The team’s test strategy has to reflect this reality. The goal is unwavering: even under adversarial conditions the card must not malfunction or leak anything of security value.
“We have to ensure that in the negative case… the card just does not malfunction and worst case would reveal any security… to the attacker.”
That ethos informs everything—from how test cases are specified, to how they are executed, to how the pipeline recovers when something goes wrong mid-run.
Team size and roles: 15 specialists, clear responsibilities
At 15 people, this is a well-staffed QA group with crisp role separation:
- Test specification: deriving and documenting test cases from standards (e.g., ISO) and product specifications.
- Test implementation: translating specifications into executable C#/.NET code with NUnit.
- Test automation and CI/CD: maintaining Jenkins pipelines, Groovy and Python scripts, integrations, and test environment setup.
It’s a deeply technical space: protocols, standards, sequences, states—turning formal requirements into deterministic test code. This is engineering-heavy QA, not manual exercise of GUIs.
SAFe in practice: short dailies, brave retros, realistic planning
The team is organized under SAFe with three notable practices:
1) Daily stand-ups: concise and unblocking
The focus is alignment, early blocker removal, and avoiding duplicated work.
“We really try to keep this short… to identify early that if somebody is blocked by something so that we can help him out immediately.”
2) Retrospectives: change one thing every sprint
Sprints are three weeks. Every retro yields a concrete change—small, measurable, and tracked for impact in the next sprint.
“We always find something to change… We try to monitor this and track and see how this influences our way of work.”
It’s continuous improvement applied to operations: form a hypothesis, ship the change, observe, keep or revert.
3) Capacity planning with buffers
The team plans work to near capacity but never at 100%. Small buffers for bugs and unforeseen tasks are non-negotiable.
“Everybody tries to fill himself up to almost the capacity… Of course, we need to have small buffers always for bugs…”
This keeps the pipeline stable and the team responsive.
The tooling backbone: C#, NUnit, TestRail, Jenkins, Groovy, Python
The stack is clear and cohesive:
- Test environment: C#/.NET
- Test runner: NUnit
- Test specification: TestRail as the source of truth
- Integration: helper scripts to pull TestRail information into the C# test implementation
- CI/CD: Jenkins driven by Groovy and Python scripts
- Agile tooling: Jira to plan and track work in the SAFe context
- Version control: Git
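As an illustration of what such a glue script might look like, here is a small Python sketch against TestRail's public REST API (`get_case`). The session does not show the team's actual integration code: the instance URL, the naming convention, and the NUnit stub format below are assumptions for illustration only.

```python
import base64
from urllib.request import Request

TESTRAIL_URL = "https://example.testrail.io"  # hypothetical TestRail instance

def build_get_case_request(case_id: int, user: str, api_key: str) -> Request:
    """Build an authenticated request for TestRail's get_case endpoint."""
    url = f"{TESTRAIL_URL}/index.php?/api/v2/get_case/{case_id}"
    token = base64.b64encode(f"{user}:{api_key}".encode()).decode()
    return Request(url, headers={
        "Authorization": f"Basic {token}",
        "Content-Type": "application/json",
    })

def case_to_test_stub(case: dict) -> str:
    """Turn a TestRail case payload into a C# NUnit test stub (illustrative)."""
    name = "".join(word.capitalize() for word in case["title"].split())
    return (
        f'[Test, Description("TestRail C{case["id"]}")]\n'
        f"public void {name}() {{ /* TODO: implement */ }}"
    )

# Example payload shaped like a TestRail get_case response
sample = {"id": 42, "title": "reject wrong command class"}
print(case_to_test_stub(sample))
```

The point is not the specific stub format but the direction of flow: TestRail stays the source of truth, and scripts keep the C# implementation in sync with it.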
“Our team is quite lazy. They don't want to do much. They prefer programming things, and this is perfectly fine.”
That tongue-in-cheek line captures the team’s mindset: automate everything repeatable. There is no manual fallback.
“So far, we don't have one single manual test.”
For engineers who identify as SDETs or automation-first QA professionals, this is a strong signal: code is the medium of quality here.
Designing tests for a black-box device: positive, negative, and stateful
Smart cards, from the outside, behave like state machines. The team has to drive the device into specific states, and not only the happy ones. Negative paths are paramount:
- Wrong commands or sequences
- Correct commands with incorrect data
- Misuse scenarios with no protective boundary (“You can do whatever you want with this card.”)
Security-related test suites must deliberately exercise what an attacker would do—and then assert that nothing sensitive is exposed and the device remains sound.
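To make the negative-path idea concrete, here is a hedged Python sketch. The team works in C#/NUnit against physical cards; the two-command protocol and the `FakeCard` stub below are invented for illustration, though the status words are standard ISO 7816-4 values. The assertions check exactly what the session stresses: wrong data and wrong flows must yield an error status and must not reveal any data.

```python
# ISO 7816-4 status words (these constants are standard; the card below is not)
SW_OK = 0x9000
SW_WRONG_DATA = 0x6A80
SW_CONDITIONS_NOT_SATISFIED = 0x6985

class FakeCard:
    """Hypothetical in-memory stand-in for a card under black-box test."""

    def __init__(self):
        self.verified = False

    def transmit(self, ins: int, data: bytes) -> tuple[bytes, int]:
        """Return (response data, status word) for a tiny two-command protocol."""
        if ins == 0x20:  # VERIFY: correct command, data must be the PIN
            if data == b"1234":
                self.verified = True
                return b"", SW_OK
            return b"", SW_WRONG_DATA               # reject, reveal nothing
        if ins == 0xB0:  # READ: only valid after a successful VERIFY
            if self.verified:
                return b"secret-record", SW_OK
            return b"", SW_CONDITIONS_NOT_SATISFIED  # wrong flow, no data
        return b"", SW_CONDITIONS_NOT_SATISFIED      # unknown command

# Negative scenarios: right command with wrong data, then wrong flow
card = FakeCard()
data, sw = card.transmit(0x20, b"0000")        # VERIFY with a wrong PIN
assert sw == SW_WRONG_DATA and data == b""     # error reported, nothing leaked
data, sw = card.transmit(0xB0, b"")            # READ before VERIFY
assert sw == SW_CONDITIONS_NOT_SATISFIED and data == b""
```

Real suites drive far richer state machines, but the assertion shape is the same: observe only the external response, and insist that every misuse path ends in a safe, information-free error.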
CI stability through resets: isolate failures, keep running
A full suite currently takes about three and a half hours, and coverage will only increase. An early failure must not invalidate the entire run. The team’s answer is a thorough reset strategy:
“Having a reset strategy in place to be able to reset the card and in fact, the whole test environment… and this failing test is then isolated and all subsequent tests are still executed.”
Key aims of that strategy:
- Return the test environment to a known-good, reproducible state
- Safely reset the card’s state
- Keep already executed, passing tests valid
- Isolate failing tests and prevent cascading failures
In a black-box world without debug hooks, this resilience is essential to get meaningful, long-running CI signals.
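A minimal sketch of that isolation idea, in Python rather than the team's C#/Jenkins stack (the runner, the `reset` hook, and the toy tests are assumptions for illustration): after a failure, the environment is returned to a known-good state so every subsequent test still runs and still means something.

```python
# A tiny runner that resets the (hypothetical) environment after each failure,
# so one bad test cannot poison the rest of a long suite.
def run_suite(tests, reset_environment):
    results = {}
    for name, test in tests:
        try:
            test()
            results[name] = "pass"
        except Exception:
            results[name] = "fail"
            reset_environment()   # back to a known-good state; isolate the failure
    return results

state = {"dirty": False}

def reset():
    state["dirty"] = False

def t_corrupts_then_fails():
    state["dirty"] = True
    raise RuntimeError("card left in unexpected state")

def t_needs_clean_state():
    assert not state["dirty"]     # passes only because the runner reset

results = run_suite(
    [("corrupts_then_fails", t_corrupts_then_fails),
     ("needs_clean_state", t_needs_clean_state)],
    reset_environment=reset,
)
print(results)  # → {'corrupts_then_fails': 'fail', 'needs_clean_state': 'pass'}
```

In the team's setting, "reset" is a much heavier operation, spanning the card and the whole test environment, but the contract is identical: one failing test is isolated, and the 3.5-hour signal stays trustworthy.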
Hiring philosophy: clarity, short interviews, team contribution
Two hiring tenets stand out in how Stephan Puri-Jobi runs the process:
1) Clear job descriptions: precise expectations help attract candidates who actually match the role.
2) Short, pleasant interviews: technical questioning is included but not designed to be trick-laden. Team contribution and knowledge-sharing are decisive.
“Somebody with a big knowledge on technology is, of course, very valuable… but he has to be part of the team at the end… if somebody cannot contribute to the team and can't give his know-how to the others, then this… slows us down.”
In other words: the team doesn’t optimize for lone experts. It optimizes for engineers who lift others.
Onboarding and mentoring: time, materials, and beginner tasks
Onboarding is structured and realistic about domain complexity: ISO standards, dense specifications, multiple interacting systems. New hires are given time, dedicated materials, and a mentor for the first weeks. Early “beginner tasks” help them see cause-and-effect in the code and the system.
“A new employee gets this time. This is quite important to us.”
Core elements of the approach:
- Training materials prepared for newcomers
- A named mentor to answer questions directly
- Starter tasks to explore the system safely
- Reviews and shared ownership of ramp-up across the entire team
The intention is pragmatic: the faster newcomers truly become part of the team, the sooner they contribute and make everyone’s workload lighter.
“The sooner the new one is part of the team, the sooner he can contribute and help us then make our lives easier.”
Collaboration ethos: transparency, early help, shared responsibility
Daily check-ins prevent drift. Retros instill change. Mentoring and reviews make learning a team sport. The message is consistent: identify blockers early, help immediately, and ensure knowledge flows. The measure of success is how resilient the pipeline becomes as the suite grows and how quickly newcomers can ship useful tests.
Why engineers will want to join this team
For QA engineers, SDETs, and test automation specialists, this environment offers meaningful challenges and modern practices:
- 100% automation: no manual tests—own the end-to-end in code with C#/.NET, NUnit, Jenkins, Groovy, and Python.
- Security-critical domain: design for adversarial behavior in a black-box context and validate that nothing leaks.
- Mature agile execution: SAFe flows with short dailies, measurable retros, and buffer-aware planning.
- Learning built-in: mentoring, training materials, and explicit time to reach proficiency in a complex domain.
- Operational agency: change one thing per sprint and measure its impact—continuous improvement with guardrails.
- CI/CD resilience: reset strategies, test isolation, and long, reproducible runs.
- Clarity of roles and expectations: separation of specification, implementation, and automation responsibilities.
If you enjoy building quality through software, this is a place to have outsized impact.
Practices worth emulating
Several patterns in this session are broadly applicable across engineering teams:
- Take black-box constraints seriously: drive states via interfaces; separate triggers from observations.
- Prioritize negative testing: security-critical products must exercise malicious paths, not just happy ones.
- Reset before retest: a robust reset strategy enables long, reliable CI runs and actionable signals.
- One change per sprint: small, measurable process tweaks beat sweeping overhauls.
- Always plan with buffers: sprint plans at full utilization undermine QA stability.
- Automate by default: scripts over manual steps—“lazy” in the best engineering sense.
- Onboarding is a team sport: mentors, materials, starter tasks, and reviews accelerate ramp-up.
The hard problems this team leans into
The team doesn’t shy away from core difficulties and has built explicit responses:
- Long run times (about 3.5 hours per full run, and growing): pipeline resilience, test isolation, and recoverability.
- No internal visibility into the card: orchestration of states via interfaces and precise observation of external behavior.
- Adversarial scenarios: systematic, automated negative testing to guarantee non-disclosure of security information.
- Heavy standards and specifications (e.g., ISO): structured onboarding, mentoring, and allocated time to learn.
Conclusion: A modern, security-first test culture at scale
The session “Stephan Puri-Jobi, Software Test Lead von HID Global” showcases a QA organization that fuses security, automation, and collaborative culture. With clear roles, SAFe routines, a focused toolchain (C#/.NET, NUnit, TestRail, Jenkins, Groovy, Python), and an uncompromising black-box test strategy, this is a compelling environment for engineers who want to ship quality through code.
“Our team is quite lazy. They don't want to do much. They prefer programming things, and this is perfectly fine.”
That line captures a deeper truth: when teams automate relentlessly, they create time to think, improve, and secure. In a field where reliability and confidentiality are non-negotiable, that’s exactly the kind of “laziness” the industry needs.