Hadoop
4 job openings found.
TeamViewer GmbH
In this role, you will develop data pipelines and build robust storage solutions to prepare data for analysis and AI initiatives, ensuring that data is reliable and of high quality.
Infineon Technologies Austria AG
In this role, you will provide 3rd level support for production data management and continuously improve our reporting platforms using technologies like Oracle and Hadoop, collaborating closely with various stakeholders.
voestalpine AG
In this role, you will design efficient ETL processes, develop Power BI solutions, and shape the data architecture for cloud and on-prem solutions with a focus on scalability and performance.
paiqo GmbH
In this role, you will develop tailored machine learning models using Python for customer business needs, implement them in processes, analyze data, present findings, and advise clients on their technical ecosystems.
Jobworld KG
In this role, you will administer our Big Data environment based on Apache Hadoop in the AWS cloud, manage and optimize services such as Kafka and Flink, implement infrastructure as code with Terraform, troubleshoot issues, and advise Data Engineers on selecting suitable services for business needs.
Raiffeisen Informatik GmbH
In this role, you will develop scalable data pipelines for processing large data volumes, build Data Warehouses in MS SQL Server and AWS, and integrate data from various sources into high-quality data products.
In this role, you will analyze data from pre-systems, model and implement scalable ELT/ETL processes, and work in a big data environment using technologies such as Apache Hadoop and SQL.
Simon-Kucher & Partners
In this role, you will process and structure complex datasets to enable data-driven decisions, automate ETL processes, and create meaningful data visualizations for stakeholders.
In this role, you will develop complex data processing pipelines, define modern data architectures, and work with cloud technologies, particularly Microsoft Azure, to deliver data-driven solutions across various industries.
In this role, you will design data models, build robust ETL pipelines, optimize data structures in cloud environments, and ensure data integrity according to governance standards.