Cloudera CDH
Accenture GmbH
In this role, you will develop scalable AI and data solutions, integrate innovative applications into existing systems, and design modern data architectures for data-driven transformations.
Jobworld KG
In this role, you will administer our Big Data environment based on Apache Hadoop in the AWS Cloud, optimize operations, and advise colleagues on selecting suitable services for business needs.
In this role, you develop innovative data models and productive data pipelines for companies, lead interdisciplinary teams, and consult at the highest level to realize data-driven business strategies.
Greentube Internet Entertainment Solutions GmbH
In this role, you will develop real-time data pipelines using Cloudera tools like Kafka and Spark. You will optimize and automate processes, document your work, and contribute to the migration of pipelines to Databricks.
In this role, you will develop scalable AI and data solutions, integrate innovative applications, design modern data architectures, and contribute to efficiency improvements in day-to-day operations.
In this role, you will administer our Big Data environment based on Apache Hadoop in the AWS cloud, optimize data pipelines, and support Data Engineers in selecting appropriate services.
In this role, you administer and optimize the Big Data environment on Apache Hadoop, manage services like Kafka and Flink, advise Data Engineers, and implement infrastructure as code (IaC) with Terraform.
In this role, you will develop innovative data models and implement productive data pipelines in cloud and on-premise environments to support data-driven business strategies for clients.
In this role, you will administer our Big Data environment based on Apache Hadoop, optimize services like Kafka and Flink, conduct troubleshooting, and advise the team on the use of Hadoop services.
VERBUND AG
In this role, you manage Linux systems and administer Oracle and PostgreSQL databases. You automate processes with Ansible, oversee cloud integration, and ensure the readiness of security-critical services.
In this role, you will administer our Big Data environment based on Apache Hadoop in the AWS cloud, manage services like Kafka and Flink, and advise Data Engineers on the best Hadoop services for business use cases.
In this role, you will develop data-driven business models and lead interdisciplinary teams to implement complex data pipelines both in the cloud and on-premise, delivering innovative solutions.
In this role, you will administer our Big Data environment based on Apache Hadoop in the AWS cloud, optimize systems, manage Kafka and Flink, and advise Data Engineers on suitable Hadoop services.
In this role, you will develop real-time data pipelines using NiFi, Kafka, and Spark, optimize workflows, document processes, and collaborate on migrating pipelines to Databricks.
In this role, you design and implement scalable real-time and batch data pipelines using Databricks on Azure, collaborating with various teams and driving continuous improvements.