Job
- Level: Senior
- Job Field: IT, Data, DevOps
- Employment Type: Full Time
- Contract Type: Permanent employment
- Salary: €70,000 to €75,000 gross/year
- Location: Vienna
- Working Model: Hybrid, Onsite
Job Summary
In this role, you design and implement scalable real-time and batch data pipelines using Databricks on Azure, collaborating with various teams and driving continuous improvements.
Your role in the team
- Our Real-Time Data Engineering team builds low-latency, high-throughput pipelines that power operational and analytical use cases as well as user-facing features.
- As we expand our ecosystem by integrating Databricks alongside our existing Cloudera infrastructure, this role plays a key part in shaping a modern, scalable data foundation.
- Joining a small, collaborative, and international team, you'll contribute to solutions that enable innovation across the business, all in an environment that values humor, curiosity, continuous learning, and accountability without blame.
- Primarily design and implement robust, scalable, low-latency streaming and batch data pipelines using Databricks on Azure.
- Occasionally develop real-time data pipelines using Cloudera tools such as NiFi, Kafka, Spark, Kudu, and Impala.
- Collaborate with data scientists, analysts, engineers, product managers, and third-party partners to deliver reliable, high-quality data products.
- Contribute to continuous improvements in infrastructure, automation, observability, and deployment practices.
- Document as needed and share knowledge with other team members.
- Be part of a monthly on-call rotation (1 week per month) to support the stability of our real-time data infrastructure.
Our expectations of you
Qualifications
- Strong knowledge of the Azure cloud ecosystem, including Azure Data Lake and Azure DevOps.
- Proven expertise in real-time data technologies such as Kafka and Spark Structured Streaming, plus solid SQL skills (including query tuning and optimization).
- Proficiency in Python or Scala for large-scale data engineering tasks.
- Familiarity with the medallion architecture and Delta Lake best practices.
- Familiarity with CI/CD workflows for data engineering (Azure DevOps, GitHub Actions, or similar) is an advantage.
- Knowledge of data governance, schema evolution, and GDPR/data privacy in pipelines is beneficial.
- Understanding of observability tools for data pipelines (logging, alerting, metrics) adds value.
- Fluent English (spoken and written) with excellent communication skills (clear, concise, and audience-aware).
Experience
- Several years of data engineering experience building batch and real-time data pipelines.
- 3+ years of hands-on experience with Databricks is essential.
- Experience developing data pipelines with the Cloudera stack is a plus.
- Experience in online gaming, entertainment, or e-commerce industries is a nice bonus.
Benefits
- Health, Fitness & Fun
- Food & Drink
- Work-Life Integration
This is your employer
Greentube Internet Entertainment Solutions GmbH
Raaba-Grambach, Vienna
Greentube is a global leader in online and mobile gaming, offering innovative solutions that set the standard for the industry. With more than two decades of experience, Greentube has become the go-to provider for gaming excellence.
Description
- Language: English
- Company Type: Established Company
- Working Model: Hybrid, Onsite
- Industry: Internet, IT, Telecommunication
Dev Reviews
by devworkplaces.com
- Total: 3.5 (1 review)
- Working Conditions: 4.3
- Engineering: 3.0
- Culture: 4.0
- Career Growth: 3.0