Job
- Level: Experienced
- Job Field: BI, Data
- Employment Type: Full Time
- Contract Type: Permanent employment
- Salary: €3,724 to €4,346 gross/month
- Location: Linz
- Working Model: Hybrid, Onsite
Job Summary
In this role, you will develop data pipelines and build robust storage solutions to prepare data for analysis and AI initiatives, ensuring that data is reliable and of high quality.
Your role in the team
- We are seeking a skilled and detail-oriented Data Engineer to join our growing team.
- In this role, you will be responsible for designing, building, and maintaining the systems that collect, store, and prepare data for analysis across the organization.
- Your work will serve as the foundation for our data-driven decision-making and AI initiatives, making you a key enabler for our analytics and AI teams.
- Design and implement effective data models and table structures across various storage systems, including relational databases, NoSQL stores, data warehouses, and data lakes.
- Build, maintain, and optimize robust data pipelines (ETL/ELT) to ingest, transform, and load data from production systems and external sources.
- Use workflow orchestration tools to schedule, automate, and monitor data pipelines, ensuring their reliability and performance.
- Define and implement data quality standards and processes (e.g., bronze, silver, gold tiering), including handling missing values and ensuring data integrity, accuracy, and completeness.
- Establish and enforce data governance policies and procedures, manage data lineage and metadata, implement access controls and encryption, and support compliance with data privacy regulations (e.g., GDPR, CCPA).
- Implement and manage scalable data platforms (data warehouses, data lakes) to support efficient analytics, feature engineering, and model training for AI applications.
- Conduct statistical analyses and evaluations of datasets, and develop dashboards or monitoring systems to track pipeline health and data quality metrics.
- Collaborate closely with AI Engineers, AI Software Engineers, QA Engineers, and Data Analysts to understand data requirements and deliver reliable, high-quality data solutions.
Our expectations of you
Qualifications
- Strong proficiency in SQL and Python for data manipulation, automation, and pipeline development.
- Familiarity with big data tools and frameworks such as Apache Spark, Kafka, or Hadoop.
- Solid understanding of data modeling, ETL/ELT development, and data warehousing concepts.
- Proficiency with version control systems (e.g., Git).
- Excellent problem-solving skills and high attention to detail.
Experience
- Proven experience as a Data Engineer, including designing and building data pipelines and infrastructure.
- Hands-on experience with cloud platforms (e.g., GCP, Azure) and their respective data services (e.g., BigQuery, Azure Data Factory, Databricks).
- Experience with data quality management and data governance principles.
Benefits
- Health, Fitness & Fun
- Work-Life-Integration
This is your employer
TeamViewer GmbH
As the leading provider of remote connectivity solutions, TeamViewer enables its users to connect to anything, anywhere, at any time.
Description
- Company Size: 50-249 Employees
- Language: English
- Company Type: Established Company
- Working Model: Hybrid, Onsite
- Industry: Internet, IT, Telecommunication
