Title: Lead Data Scientist
Role Description: This is a full-time role for a Lead Data Scientist at Birdeye.
Primary Skills: AI/Data Science, Machine Learning, Deep Learning, Statistics, GenAI/LLM, Natural Language Processing (NLP), Cloud platforms (Azure/AWS/GCP), MLOps, CI/CD, Kubernetes, Docker/Containers, Python, R
Roles & Responsibilities:
- Develop and drive Data Science strategies and roadmaps to address business challenges, leveraging advanced statistical analysis, ML/DL, or GenAI to provide actionable insights and solve complex problems.
- Apply frameworks such as LangChain and LlamaIndex for efficient indexing, retrieval, and chaining of language models to enhance the contextual understanding and response generation of applications.
- Communicate findings, recommendations, and data-driven insights to product managers and executives.
- Lead and mentor a team of data scientists, providing guidance on best practices and sharing technical expertise.
- Collaborate with cross-functional teams including data scientists, engineers, and product managers to define data-driven strategies and integrate data science solutions into products.
- Continuously evaluate and improve the performance of existing models and algorithms.
- Keep abreast of the latest trends in data science, integrating innovative technologies into existing processes to drive continuous improvement and operational efficiency.
Qualifications:
- Master's or PhD degree in Computer Science, Software Engineering, or a related technical field.
- 7-9 years of experience in the Data Science domain, including a minimum of 2-3 years leading a team of Data Scientists.
- Experience designing prompt-engineering strategies to solve complex problems using LLM/GenAI models.
- Expertise in fine-tuning open-source and licensed GenAI/LLM models using PEFT (parameter-efficient fine-tuning) approaches.
- Practical, hands-on experience with fine-tuning, transfer learning, and optimization of Transformer-based deep learning models.
- Experience working with multiple LLMs, including Azure OpenAI, Llama, Claude, Gemini, etc.
- Experience with NLP tools, libraries, and models such as NLTK, spaCy, Gensim, TextBlob, CoreNLP, Word2Vec, GloVe, and BERT.
- Strong knowledge of Statistical methods/models and experimental design.
- Experience with AWS/Azure/GCP cloud platforms and with deploying APIs using frameworks such as FastAPI or gRPC.
- Experience containerizing applications with Docker and deploying ML models on Kubernetes clusters.
- Hands-on involvement in the end-to-end delivery of scalable, optimized enterprise AI solutions.
- Proficiency in programming languages such as Python, R, and SQL, and in AI frameworks such as TensorFlow, PyTorch, and Keras.
- Proficiency in building data visualizations (Tableau, Power BI, Looker, etc.).
- Experience with MLOps practices for managing the model lifecycle.
- Experience with SQL and NoSQL databases.
- Familiarity with Big Data technologies such as Hadoop, Spark, and Kafka.
- Domain knowledge in online reputation management, or experience at a product-based company (added advantage).
- Expertise in delivering end-to-end analytical solutions spanning multiple technologies and tools across a range of business problems.
- Strong project management skills, with the ability to manage multiple projects simultaneously.
- Excellent communication and presentation skills, with the ability to explain complex technical concepts to non-technical stakeholders.
- Familiarity with agile development methodologies and experience working in a fast-paced, dynamic environment.
Interested candidates, please send your resume to
Regards, Iqbal Kaur