Apache Flink
2 job openings found.
Dynatrace GmbH
In this role, you will develop scalable algorithms for telemetry data analysis, work on prototypes for user insights, and optimize AI models for pattern recognition in user behavior.
Jobworld KG
In this role, you will administer our Big Data environment based on Apache Hadoop in the AWS Cloud, optimize operations, and advise colleagues on selecting suitable services for business needs.
In this role, you will analyze data from upstream source systems, model and implement scalable ELT/ETL processes, and work in a big data environment using technologies such as Apache Hadoop and SQL.
Finmatics GmbH
In this role, you will develop our data infrastructure and build ETL/ELT pipelines that integrate product data, transaction data, and CRM insights into an analytics-ready environment, optimizing it continuously.