Hadoop
3 job openings found.
voestalpine AG
In this role, you will design efficient ETL processes, develop Power BI solutions, and shape the data architecture for cloud and on-premises solutions with a focus on scalability and performance.
paiqo GmbH
In this role, you will develop tailored machine learning models using Python for customer business needs, implement them in processes, analyze data, present findings, and advise clients on their technical ecosystems.
TeamViewer GmbH
In this role, you will design data models, build robust ETL pipelines, optimize data structures in cloud environments, and ensure data integrity according to governance standards.
Jobworld KG
In this role, you will administer our Big Data environment based on Apache Hadoop in the AWS cloud, optimize data pipelines, and support Data Engineers in selecting appropriate services.
In this role, you will administer and optimize the Big Data environment on Apache Hadoop and manage services such as Kafka and Flink, while advising Data Engineers and implementing IaC via Terraform.
In this role, you will administer our Big Data environment based on Apache Hadoop, optimize services like Kafka and Flink, conduct troubleshooting, and advise the team on the use of Hadoop services.
Raiffeisen Informatik GmbH
In this role, you will develop scalable data pipelines for processing large data volumes, build Data Warehouses in MS SQL Server and AWS, and integrate data from various sources into high-quality data products.
In this role, you will analyze data from pre-systems, model and implement scalable ELT/ETL processes, and work in a big data environment using technologies such as Apache Hadoop and SQL.
In this role, you will administer our Big Data environment based on Apache Hadoop in the AWS cloud, manage services like Kafka and Flink, and advise Data Engineers on the best Hadoop services for business use cases.
Simon-Kucher & Partners
In this role, you will process and structure complex datasets to enable data-driven decisions, automate ETL processes, and create meaningful data visualizations for stakeholders.
In this role, you will develop complex data processing pipelines, define modern data architectures, and work with cloud technologies, particularly Microsoft Azure, to deliver data-driven solutions across various industries.
In this role, you will administer our Big Data environment based on Apache Hadoop in the AWS cloud, optimize systems, manage Kafka and Flink, and advise Data Engineers on suitable Hadoop services.
In this role, you will administer our Big Data environment on Apache Hadoop, optimize data streams using Kafka and Flink, and advise our Data Engineers on suitable service selections.
In this role, you will administer our Big Data environment based on Apache Hadoop in the AWS cloud, manage Kafka, Flink, and other services, and optimize workloads through advisory support and troubleshooting.
Computer Futures
In this role, you will design robust data pipelines and develop scalable ETL solutions to enhance data quality and support data-driven decision-making in a fast-paced environment.