Job
- Level: Experienced
- Job Field: IT, Data, DevOps
- Employment Type: Full Time
- Contract Type: Permanent employment
- Salary: from €60,000 gross per year
- Location: Vienna
- Working Model: Hybrid, Onsite
Job Summary
In this role, you will develop ML systems on Google Cloud Platform, implement MLOps practices, and work on Generative AI projects, using Python for model management and real-time system integration.
Your role in the team
- Join our Data & AI Tribe at Magenta and help shape a truly AI-driven culture on our Google Cloud Platform.
- In this role, you will design, implement, and maintain ML systems, apply MLOps best practices, and support hybrid cloud deployments.
- You will contribute to impactful AI projects, including Generative AI and Agentic AI, leveraging your Python skills and experience with ML platforms, experiment tracking, and model lifecycle management.
- Design, implement, and maintain tools and reusable components that power our machine learning initiatives in the cloud.
- Scale up training and batch inference with ML frameworks (such as xgboost, scikit-learn) using Vertex AI, and implement real-time inference and GenAI systems.
- Contribute to internal Python libraries used across various data teams, packaged with pixi, and implement changes to our cookiecutter-based project templates.
- Implement and champion MLOps principles to ensure seamless integration, deployment, and monitoring of ML models, using GitLab CI for CI/CD and Dagster for workflow orchestration (see the sketch after this list).
- Define and implement architecture for sending model outputs to various target systems, such as CRM tools (Marketing Cloud) or back-end services.
- Integrate and fine-tune cutting-edge Generative AI models for real-world applications.
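To picture the orchestration work described above, here is a minimal sketch using Dagster's software-defined assets around a scikit-learn training step. The dataset path, column names, and model location are hypothetical placeholders and not specified in this posting.

```python
# Minimal Dagster sketch: two assets, one loading features and one training
# a scikit-learn model. Paths and column names are illustrative only.
import joblib
import pandas as pd
from dagster import Definitions, asset
from sklearn.ensemble import GradientBoostingClassifier


@asset
def training_data() -> pd.DataFrame:
    # Assumes gcsfs is installed so pandas can read directly from GCS.
    return pd.read_parquet("gs://example-bucket/churn/features.parquet")


@asset
def churn_model(training_data: pd.DataFrame) -> str:
    X = training_data.drop(columns=["churned"])
    y = training_data["churned"]
    model = GradientBoostingClassifier().fit(X, y)
    path = "/tmp/churn_model.joblib"
    joblib.dump(model, path)
    return path


defs = Definitions(assets=[training_data, churn_model])
```

Materialized locally with `dagster dev`, the same asset graph can later be scheduled and monitored like any other Dagster job; building and deploying it through a GitLab CI pipeline would sit on top of this and is outside the scope of the sketch.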
Our expectations of you
Qualifications
- Do you have a proven ability to train, evaluate, and deploy ML models across diverse data types and production environments (batch or real-time) using tools like FastAPI, Docker, Kubernetes, and Vertex AI? (A minimal serving sketch follows this list.)
- Are you familiar with Generative AI technologies, including LLM integration (e.g., LangChain), RAG pipelines, and prompt engineering?
- Do you have strong collaboration and system design skills, with the ability to simplify complex architectures and work effectively in agile, cross-functional teams?
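To make the real-time deployment requirement concrete, below is a minimal FastAPI sketch serving a scikit-learn model loaded via joblib. The route, request schema, and artifact path ("model.joblib") are hypothetical; the posting does not prescribe a specific service layout.

```python
# Minimal FastAPI sketch for real-time inference with a pickled scikit-learn
# model. The /predict route, feature schema, and model path are placeholders.
import joblib
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel


class PredictRequest(BaseModel):
    features: dict[str, float]


class PredictResponse(BaseModel):
    probability: float


app = FastAPI(title="churn-scoring-service")
model = joblib.load("model.joblib")  # assumed to be a fitted classifier


@app.post("/predict", response_model=PredictResponse)
def predict(request: PredictRequest) -> PredictResponse:
    frame = pd.DataFrame([request.features])
    probability = float(model.predict_proba(frame)[0, 1])
    return PredictResponse(probability=probability)
```

Run locally with `uvicorn main:app`; packaging the same app into a Docker image and deploying it on Kubernetes or behind a Vertex AI endpoint would follow from there.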
Experience
- Do you have several years of experience in machine learning with end-to-end project ownership and strong Python and software engineering skills?
- Do you have hands-on experience designing, building, and operating ML platforms (preferably on GCP), including model lifecycle management, experiment tracking, and orchestration tools? (See the experiment-tracking sketch after this list.)
- Do you have a strong understanding of MLOps practices, including CI/CD automation, observability, and monitoring for ML systems, ideally with hands-on development experience on Google Cloud Platform?
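As one concrete reading of the experiment-tracking point above, the sketch below logs parameters and metrics to Vertex AI Experiments with the google-cloud-aiplatform SDK. The project, region, experiment, and run names are made-up placeholders, and the metric value stands in for a real evaluation result.

```python
# Minimal sketch of experiment tracking with Vertex AI Experiments.
# Project, location, experiment, and run names are illustrative only.
from google.cloud import aiplatform

aiplatform.init(
    project="example-gcp-project",
    location="europe-west3",
    experiment="churn-model",
)

aiplatform.start_run("baseline-run-001")
aiplatform.log_params({"n_estimators": 200, "max_depth": 4})
# ... training and evaluation would happen here ...
aiplatform.log_metrics({"roc_auc": 0.87})  # placeholder value from a hold-out set
aiplatform.end_run()
```

Runs logged this way appear in the Vertex AI Experiments UI, which is one common way the model lifecycle is tracked on GCP.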