Job Title: Lead Generative AI Engineer / Player-Coach

Purpose: Own our Generative AI technical vision. You will rapidly prototype solutions and lead a dedicated team of two engineers to launch our company's first intelligent search and content automation systems.

Role Summary

We're looking for a hands-on Gen AI pioneer who can architect, code, and mentor. This is a "player-coach" role where you'll build foundational systems while guiding your team. You will partner daily with product and engineering leadership to transform business goals into cutting-edge, shippable LLM-powered solutions.

Key Responsibilities

  • Architect & Build RAG Systems: Design, develop, and deploy sophisticated Retrieval-Augmented Generation (RAG) systems to power our next-generation search and discovery experience.
  • Develop & Fine-Tune LLMs: Lead the development of advanced generative models for nuanced tasks like automated content creation, summarization, and metadata enrichment.
  • Own the Gen AI Stack: Select, provision, and optimize our stack, leveraging managed services like Azure OpenAI or AWS Bedrock, or self-hosting models on GPU infrastructure. You will establish best practices for repo structure, CI/CD, and model/prompt versioning.
  • Implement LLMOps: Embed robust observability using tools like OpenTelemetry and Prometheus. This includes tracking standard metrics (latency, cost, accuracy) and specialized monitoring for hallucination, toxicity, and data drift.
  • Lead & Mentor: Hire, coach, and develop ML talent. Set the standard for high-quality code, rigorous experimentation, and rapid iteration within the Gen AI domain.

Must-Have Skills

  • Production LLM Experience: 5+ years in Python with demonstrable success in productionizing LLM applications using modern frameworks like DSPy, LangChain, LlamaIndex, or Hugging Face Transformers.
  • RAG Expertise: Deep, practical knowledge of RAG architecture, including advanced prompt engineering, chunking strategies, and proficiency with vector databases (e.g., Pinecone, Weaviate, Milvus).
  • Cloud Proficiency: Expertise with managed LLM services (Azure OpenAI Service or AWS Bedrock). Strong foundational cloud skills in either Azure or AWS for compute orchestration (AKS/EKS), serverless functions, and storage.
  • MLOps Acumen: Solid experience with Docker, CI/CD pipelines (e.g., GitHub Actions, Argo), and model registries.
  • Leadership & Communication: Proven ability to lead small, highly technical teams and clearly communicate complex concepts to stakeholders.

Nice-to-Have Skills

  • Experience with agentic workflows (e.g., AutoGen, CrewAI).
  • Familiarity with multi-modal models (text, image, etc.).
  • Knowledge of advanced LLM fine-tuning techniques (e.g., LoRA, QLoRA).
  • Strong SQL skills (especially with ClickHouse) and a keen eye for inference cost optimization.