Candidate Requirement
Looking for an experienced Spark and Scala developer able to build an end-to-end data pipeline.
The associate should be self-driven, able to work with minimal guidance, and able to guide the team technically.
Job Details:
- Expertise in the Spark framework with the Scala language and PySpark
- Expertise in ETL, data warehousing, and cloud concepts
- Good working experience with SQL in Hive and Impala
- Basic understanding of Spark, Hive, and Impala architecture, along with cloud integration concepts
- Knowledge of shell scripting and Git commands
- Able to design and create data flow diagrams and perform data modelling
- Expertise in data profiling and decision making
- Able to understand the architecture and design end-to-end data flows
- Excellent logical, analytical, troubleshooting, and problem-solving skills
- Excellent working knowledge of version control systems such as GitHub, and of branching strategies
- Able to understand Spark optimization techniques and implement them in code
- Able to communicate effectively and concisely with multiple stakeholders
- Able to monitor and support year-end activities, if needed
- Ability to coordinate and collaborate with cross-functional teams
- Well versed in security and compliance aspects of the cloud
- Good to have: Big Data certifications and healthcare domain knowledge
Work Location
Chennai, Noida, Gurugram, Mumbai