NiFi
Jobworld KG
In this role, you will administer our Big Data environment based on Apache Hadoop in the AWS Cloud, optimize operations, and advise colleagues on selecting suitable services for business needs.
Greentube Internet Entertainment Solutions GmbH
In this role, you will develop real-time data pipelines using Cloudera tools like Kafka and Spark. You will optimize and automate processes, document your work, and contribute to the migration of pipelines to Databricks.
In this role, you will administer our Big Data environment based on Apache Hadoop in the AWS cloud, optimize data pipelines, and support Data Engineers in selecting appropriate services.
In this role, you will administer and optimize the Big Data environment on Apache Hadoop, manage services such as Kafka and Flink, advise Data Engineers, and implement IaC via Terraform.
In this role, you will administer our Big Data environment based on Apache Hadoop, optimize services such as Kafka and Flink, conduct troubleshooting, and advise the team on the use of Hadoop services.
In this role, you will analyze data from upstream source systems, model and implement scalable ELT/ETL processes, and work in a big data environment using technologies such as Apache Hadoop and SQL.
In this role, you will administer our Big Data environment based on Apache Hadoop in the AWS cloud, manage services like Kafka and Flink, and advise Data Engineers on the best Hadoop services for business use cases.
In this role, you will administer our Big Data environment based on Apache Hadoop in the AWS cloud, optimize systems, manage Kafka and Flink, and advise Data Engineers on suitable Hadoop services.
In this role, you will develop real-time data pipelines using NiFi, Kafka, and Spark, optimize workflows, document processes, and collaborate on migrating pipelines to Databricks.
In this role, you will design and implement scalable real-time and batch data pipelines using Databricks on Azure, collaborate with various teams, and drive continuous improvements.
In this role, you will administer our Big Data environment on Apache Hadoop, optimize data streams using Kafka and Flink, and advise our Data Engineers on suitable service selections.
In this role, you will administer our Big Data environment based on Apache Hadoop in the AWS Cloud, manage Kafka, Flink, and other services, and optimize workloads through advising and troubleshooting.
In this role, you will develop robust and scalable streaming and batch data pipelines using Databricks on Azure and Cloudera, collaborate closely with various teams, and contribute to the enhancement of the data infrastructure.