PySpark
Raiffeisen Gruppe
In this role, you develop ML/AI systems, orchestrate LLMs with agent frameworks, implement RAG, establish LLMOps practices, and run evaluations to optimize performance and cost.
Erste Bank
In this role, you design data pipelines for HR sources while ensuring data quality. You collaborate with Data Scientists to enable analytics and implement best practices for scalability and automation.
adesso Austria GmbH
In this role, you advise clients on data technologies, design tailored solutions in Cloud Data Engineering, and take on responsibilities as a technical lead for projects using Azure, Databricks, and Python.
Evolit Consulting GmbH
In this role, you design cloud-based data platforms, implement ETL/ELT pipelines, build dashboards, and optimize data quality while collaborating with Data Scientists and Analysts.
NETCONOMY
In this role, you will develop modern data solutions on Google Cloud Platform, implement efficient ETL processes, orchestrate workflows, and optimize BigQuery queries for your data architectures.
Marktguru
In this role, you will build robust PySpark/Python/SQL pipelines on Databricks, implement LLM and computer vision applications, and optimize workflows while collaborating closely with data scientists.
APG - Austrian Power Grid AG
In this role, you will develop data-driven products on Databricks, design ETL/ELT pipelines, create interactive dashboards and optimize data processes in collaboration with data scientists and analysts.
Project Adventures GmbH
In this role, you will develop tailored data engineering solutions in the cloud, take technical responsibility for projects, and support your team through mentoring and knowledge sharing.
Novasign
In this role, you will develop scalable gRPC and REST APIs and modern React interfaces, work on data pipelines, and implement best design practices within a cross-disciplinary team.
Sclable Business Solutions GmbH
In this role, you design reliable data solutions, implement transformation pipelines, and optimize data processing in collaboration with Data Scientists. You also support data strategy development and oversee data security.
PwC Österreich
In this role, you will implement innovative AI solutions, develop predictive models and generative AI applications such as chatbots, and design automated processes and data solutions within interdisciplinary teams.