Capgemini, a diverse global collective of thinkers and entrepreneurs, seeks a Senior Data Engineer to reimagine technological possibilities. Discover the future you want with Capgemini.
In this role you can expect to have the following responsibilities:
- Develop solutions using Azure Data Factory and Databricks.
- Implement CI/CD pipelines with Azure DevOps.
- Work extensively with Databricks notebooks using SQL, PySpark, and Python.
- Utilize Azure Data Lake and cloud storage accounts.
- Work with the Delta lakehouse architecture and Delta Live Tables.
- Execute various data ingestion methods, including API, file-based, and database ingestion.
- Integrate Git for ADF pipeline versioning and collaboration.
- Manage source control in DevOps using Git, including branching, merging, and pull requests.
This role comes with the following benefits:
- Access to premier learning platforms and certifications.
- Minimum of 40 hours of training per year.
- The opportunity to work for an organisation recognised as one of the World’s Most Ethical Companies.
- Commitment to diversity and inclusion.
- Support for disabilities and neurodivergent candidates.
- Commitment to carbon neutrality by 2025.
This role requires you to have:
- Experience with Azure Data Factory and Databricks.
- Proficiency in CI/CD pipelines using Azure DevOps.
- Knowledge of Databricks notebooks and Python.
- Proficiency with Azure Data Lake and cloud storage accounts.
- Understanding of the Delta lakehouse architecture and Delta Live Tables.
- Experience with API-based, file-based, and database ingestion methods.
- Experience integrating Git for ADF pipeline versioning.
- Strong command of SQL queries and concepts.
- Experience developing Oracle procedures and packages.
- Experience with data modelling and building materialised views.
- Familiarity with the vi editor and Linux commands.
- Shell scripting experience.
Capgemini Australia adheres to the ISO 9001, ISO 27001, and ISO 14001 standards, ensuring the delivery of secure and compliant solutions.