Data Engineer (PySpark, AWS, Databricks) - W2 (US Citizen or Green Card holders only)

Must have: PySpark, Databricks, data lake, AWS, Python

Responsible for:
- Creating pipelines and loading data into the enterprise platform.
- Building data lake solutions on Databricks.
- Migrating from the legacy platform to the new platform.

Candidates' accomplishments will help in moving to the next step.

Job Description:
Expert in Python, Core Java, and design techniques, with experience working across large environments with multiple operating systems and infrastructure for large-scale programs. Expert engineers are becoming firm-wide resources working on projects across the client. Multi-skilled, with expertise across the software development lifecycle and toolset. May be recognized as a leader in Agile and in cultivating teams working in Agile frameworks. Sought out as a coach for at least one technical skill. Strong understanding of techniques such as Continuous Integration, Continuous Delivery, Test-Driven Development, cloud development, resiliency, and security. Stays abreast of cutting-edge technologies and trends and uses experience to influence how those technologies and trends are applied to support the business; may give speeches outside the firm and write articles.

Additional Skills: AWS, Databricks