Job Title: AI Specialist
Location: Charlotte, NC (hybrid 1-2 days on-site) or Remote
Duration: 12 months
Required Qualifications:
Bachelor’s degree in Computer Science, Information Systems, or a related discipline; or 8 years of equivalent work-related experience in lieu of a degree.
Work experience in addition to the degree: 3-4 years.
Role: DevOps Support
• Complete understanding of Infrastructure as Code (IaC), infrastructure deployments, and CI/CD pipeline implementation.
Required Skills:
• Ability to write Terraform code
• Deep understanding of AWS services
• Ability to write Concourse pipelines
• Understanding of big data pipelines
• Understanding of SageMaker pipelines (see the sketch after this list)
• Proficiency in Python
• Deep understanding of CI/CD
• Git
• Bash scripting
• Knowledge of AWS security best practices
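
For context on the SageMaker item above, the following is a minimal sketch, assuming boto3 and valid AWS credentials, of listing a pipeline's recent executions to spot failed runs. The pipeline name and region are placeholders, not values from this posting.

    import boto3

    # Placeholder region; a real deployment would read this from configuration.
    sm = boto3.client("sagemaker", region_name="us-east-1")

    # List recent executions of a hypothetical pipeline and print their status,
    # a typical first step when debugging a failed SageMaker pipeline run.
    resp = sm.list_pipeline_executions(PipelineName="example-training-pipeline")
    for ex in resp["PipelineExecutionSummaries"]:
        print(ex["PipelineExecutionArn"], ex["PipelineExecutionStatus"])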
Day-to-Day Tasks:
• Debug Terraform errors
• Debug Concourse errors
• Proactively update pipelines based on changes made by other Duke organizations
• Manage versions of the Terraform platform and the AWS provider
• Migrate repositories to GitHub and update pipelines to point to the new repository (a small sketch of this follows)
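
As a rough illustration of the migration task above, the snippet below re-points a local clone's origin and mirrors all refs to the new remote; it is a sketch only, and the repository path and GitHub URL are placeholders.

    import subprocess

    # Hypothetical destination; substitute the real GitHub repository URL.
    NEW_ORIGIN = "https://github.com/example-org/example-repo.git"
    REPO_DIR = "/path/to/local/clone"  # placeholder path

    # Point the existing clone at the new remote, then mirror every branch
    # and tag so history arrives intact.
    subprocess.run(["git", "remote", "set-url", "origin", NEW_ORIGIN],
                   cwd=REPO_DIR, check=True)
    subprocess.run(["git", "push", "--mirror", "origin"],
                   cwd=REPO_DIR, check=True)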
Additional Job-Specific Knowledge, Skills, or Abilities:
• Working knowledge of building data pipelines for ingestion and transformation.
• Strong SQL programming skills and working knowledge of multiple programming languages.
• Experience using and building CI/CD pipelines.
• Good understanding of architectural patterns for developing secure AI.
• Strong team player with a good understanding of the Agile process.
• Previous machine learning experience in model training and prediction (a minimal example follows this list).
• Experience working with cloud technologies.
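
To make the model training and prediction item concrete, here is a minimal, illustrative scikit-learn sketch of the basic train/score loop; the synthetic dataset and model choice are assumptions for illustration, not requirements of the role.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic data stands in for a real feature table.
    X, y = make_classification(n_samples=500, n_features=10, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    # Train a basic classifier and report held-out accuracy.
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("test accuracy:", model.score(X_test, y_test))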
Preferred Qualifications:
• Support, collaborate with, and work alongside data scientists to ensure optimal data and model delivery.
• Good working experience in ETL (SSIS or Sqoop/Spark).
• Good working experience in Python and/or PySpark.
• Expert SQL knowledge (all join types, CTEs, indexes, stored procedures, SQL performance tuning).
• Ability to work with MLOps teams to build data pipelines for model execution.
• Experience ingesting data from sources such as SQL Server, Oracle, and Hive (see the PySpark sketch after this list).
• Good experience with architectural and security patterns.
• Knowledge of building basic machine learning models (classification and regression).
• Knowledge of Docker, MLOps, and container orchestration.
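
The ingestion item above pairs naturally with the PySpark preference; the following is a minimal sketch, assuming a SQL Server source whose JDBC driver is already on the Spark classpath, of reading one table and landing it as Parquet. Host, table, credentials, and output path are all placeholders.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("ingest-example").getOrCreate()

    # Read a single table over JDBC; connection details are hypothetical.
    df = (spark.read.format("jdbc")
          .option("url", "jdbc:sqlserver://example-host:1433;databaseName=ExampleDb")
          .option("dbtable", "dbo.Orders")
          .option("user", "svc_account")
          .option("password", "***")  # use a secrets manager in practice
          .load())

    # Land the raw extract as Parquet for downstream transformation.
    df.write.mode("overwrite").parquet("/data/lake/orders/")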