Job
- Level: Experienced
- Job Field: Data, Test/QA
- Employment Type: Full Time
- Contract Type: Permanent employment
- Salary: from EUR 64,554 gross/year
- Location: Graz
- Working Model: Onsite
Job Summary
In this role, you will develop innovative approaches for AI system validation, analyze technologies and risks, and prototype scalable solutions for digital trust services.
Your role in the team
- The AI Trust Innovation Technologist strengthens SGS's Digital Research & Ventures capabilities by actively building, testing, and analysing AI systems to develop credible independent validation and monitoring services.
- The role combines deep AI engineering expertise with venture-oriented innovation, evaluating emerging technologies and startups while translating hands-on experimentation into scalable Digital Trust validation solutions.
- Analyze emerging AI technologies and real-world AI system architectures (e.g., LLM-based systems, ML pipelines, multimodal systems) to identify where independent validation, testing, or monitoring by SGS is technically feasible and valuable.
- Technically assess AI risks, including robustness failures, bias/fairness issues, explainability limits, data integrity risks, cybersecurity vulnerabilities, and misuse scenarios (e.g., deepfakes, hallucinations), and translate them into potential validation or monitoring service opportunities.
- Develop, prototype, and evaluate AI validation approaches (e.g., adversarial testing, dataset validation, interpretability methods, provenance/watermarking) to assess technical feasibility and scalability for Digital Trust services.
- Interpret AI regulations and standards (e.g., EU AI Act, ISO/IEC AI standards, NIST AI RMF) and translate their technical implications into viable validation, monitoring, or independent evaluation approaches.
- Engage with universities, AI research labs, startups, and technology leaders to track cutting-edge AI system developments and explore collaboration, experimentation, and validation opportunities.
- Assess AI startups, tools, and platforms for technical maturity, architectural soundness, evaluation robustness, and strategic fit with SGS's AI Trust ambitions.
- Provide technical insight and hands-on validation input for AI-related build-buy-partner-invest evaluations, including assessment of model architectures, evaluation methodologies, and system scalability.
- Contribute expert insight to Digital Trust marketing, thought leadership, and internal education on AI trust issues.
- Work cross-functionally with business lines, M&A, R&D, innovation teams, and IT to technically assess AI systems, prototype validation approaches, and support early-stage AI trust initiatives.
- Build and experiment with AI systems directly to deeply understand system behaviour, validation challenges, and potential service design implications.
Our expectations of you
Education
- Advanced degree (Master's or PhD) in AI, Machine Learning, Data Science, Computer Science, or a related field.
Qualifications
- Ability to translate deep technical understanding of AI system behaviour into scalable independent validation, monitoring, or assurance service opportunities.
- Ability to technically analyse and diagnose AI system behaviour and identify validation, robustness, or monitoring gaps.
- Understanding of major AI regulations and standards and their technical implications for AI system validation and monitoring approaches.
- Innovative, systems-oriented thinker able to translate technical AI validation challenges into scalable Digital Trust service concepts.
- Collaborative, hands-on, and comfortable working across research, engineering, business, and venture-building environments.
- Strong experimental mindset with the ability to rapidly prototype and test AI validation concepts.
- Familiarity with AI lifecycle management and MLOps concepts (e.g., monitoring, drift detection, retraining pipelines) is an advantage.
Experience
- 2-5 years of hands-on experience building, deploying, and validating AI/ML systems in production environments, startup environments, or advanced R&D settings.
- Strong understanding of AI validation and trust challenges (robustness, bias, explainability, AI security, misuse), grounded in practical AI system development experience.
- Practical experience applying AI evaluation and validation methods (e.g., interpretability techniques, dataset validation, adversarial testing, red teaming, provenance or watermarking mechanisms).
- Experience working in research-driven, startup, or advanced AI R&D environments, with the ability to translate research concepts into working prototypes.
- Experience assessing AI startups, tools, or research from a technical architecture, evaluation, and system maturity perspective.
What we offer
- Opportunity to work with a global leader in inspection, verification, testing, and certification.
- Collaborative and inclusive work environment.
- Competitive salary and benefits package.
- Opportunities for professional growth and development.
- The minimum gross annual salary for this position is EUR 64,554 (based on 14 monthly payments) according to the applicable collective agreement.
- A higher salary may be possible depending on your qualifications and experience.
This is your employer
SGS
Graz
The SGS Group is the world's leading company in testing, verification, and certification, regarded as the global benchmark for quality and integrity. With more than 95,000 employees, we operate a network of over 2,400 offices and laboratories worldwide.
Description
- Company Type: Established Company
- Working Model: Onsite
- Industry: Science, Research