Job
- Level
- Experienced
- Job Field
- Data, Database
- Employment Type
- Full Time
- Contract Type
- Permanent Employment
- Salary
- from €3,843 gross/month
- Location
- Linz
- Working Model
- Hybrid, Onsite
Job Summary
In this role, you will design data models, build robust ETL pipelines, optimize data structures in cloud environments, and ensure data integrity according to governance standards.
Your role in the team
- Design and implement effective data models and table structures across various storage systems, including relational databases, NoSQL stores, data warehouses, and data lakes.
- Build, maintain, and optimize robust data pipelines (ETL/ELT) to ingest, transform, and load data from production systems and external sources.
- Use workflow orchestration tools to schedule, automate, and monitor data pipelines, ensuring their reliability and performance.
- Define and implement data quality standards and processes (e.g., bronze, silver, gold tiering), including handling missing values and ensuring data integrity, accuracy, and completeness (see the sketch after this list).
- Establish and enforce data governance policies and procedures, manage data lineage and metadata, implement access controls and encryption, and support compliance with data privacy regulations (e.g., GDPR, CCPA).
- Implement and manage scalable data platforms (data warehouses, data lakes) to support efficient analytics, feature engineering, and model training for AI applications.
- Conduct statistical analyses and evaluations of datasets, and develop dashboards or monitoring systems to track pipeline health and data quality metrics.
- Collaborate closely with AI Engineers, AI Software Engineers, QA Engineers, and Data Analysts to understand data requirements and deliver reliable, high-quality data solutions.
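
To give a concrete sense of the bronze/silver/gold tiering mentioned above, here is a minimal, self-contained sketch in pandas. The table, column names, and cleaning rules are invented for illustration and are not part of this posting:

```python
# Illustrative sketch of bronze/silver/gold tiering with pandas.
# Columns, values, and cleaning rules are invented for this example.
import pandas as pd

def load_bronze() -> pd.DataFrame:
    """Bronze: raw records kept as ingested, for lineage and replay."""
    return pd.DataFrame({
        "order_id": [1, 2, 2, 3],
        "amount": ["10.5", "20.0", "20.0", None],
        "country": ["AT", "DE", "DE", "AT"],
    })

def to_silver(bronze: pd.DataFrame) -> pd.DataFrame:
    """Silver: deduplicated, typed, with missing values handled."""
    silver = bronze.drop_duplicates(subset="order_id").copy()
    silver["amount"] = pd.to_numeric(silver["amount"], errors="coerce")
    return silver.dropna(subset=["amount"])  # enforce completeness

def to_gold(silver: pd.DataFrame) -> pd.DataFrame:
    """Gold: aggregated, analytics-ready table."""
    return silver.groupby("country", as_index=False)["amount"].sum()

if __name__ == "__main__":
    print(to_gold(to_silver(load_bronze())))
```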
Our expectations of you
Qualifications
- Strong proficiency in SQL and Python for data manipulation, automation, and pipeline development.
- Familiarity with big data tools and frameworks such as Apache Spark, Kafka, or Hadoop.
- Solid understanding of data modeling, ETL/ELT development, and data warehousing concepts.
- Proficiency with version control systems (e.g., Git).
- Excellent problem-solving skills and high attention to detail.
Experience
- Proven experience as a Data Engineer, including designing and building data pipelines and infrastructure.
- Hands-on experience with cloud platforms (e.g., GCP, Azure) and their respective data services (e.g., BigQuery, Azure Data Factory, Databricks); a brief sketch follows this list.
- Experience with data quality management and data governance principles.
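
As one illustration of the hands-on cloud work described above, the following hypothetical Python sketch loads a DataFrame into BigQuery using the google-cloud-bigquery client library. The project, dataset, and table names are invented, and credentials plus the client library (and pyarrow, which it needs for DataFrame loads) must already be set up:

```python
# Hypothetical sketch: loading a pandas DataFrame into BigQuery.
# "example-project" and "analytics.orders" are invented names.
import pandas as pd
from google.cloud import bigquery

def load_orders(df: pd.DataFrame) -> None:
    # The client picks up credentials from the environment
    # (e.g., GOOGLE_APPLICATION_CREDENTIALS).
    client = bigquery.Client(project="example-project")
    job = client.load_table_from_dataframe(df, "analytics.orders")
    job.result()  # block until the load job finishes

if __name__ == "__main__":
    load_orders(pd.DataFrame({"order_id": [1, 2], "amount": [10.5, 20.0]}))
```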
Benefits
- Health, Fitness & Fun
- Work-Life Integration
This is your employer
TeamViewer GmbH
As the leading provider of remote connectivity solutions, TeamViewer enables its users to connect everything, everywhere, and anytime.
- Company Size
- 50-249 Employees
- Language
- English
- Company Type
- Established Company
- Working Model
- Hybrid, Onsite
- Industry
- Internet, IT, Telecommunication
