AI Gateway Engineer - Platform Integrations
About the Role
We are looking for an experienced AI Engineer to architect, implement, and optimize the AI-powered solutions that drive our next-generation products and platforms. This hands-on role focuses on integrating, deploying, and scaling machine learning models, collaborating closely with product and engineering teams, and ensuring robust, reliable, and secure AI services across the organization.
Key Responsibilities
- Model Development: Design, train, evaluate, and deploy machine learning and deep learning models for diverse business challenges.
- Prompt Management: Build systems to manage, enrich, and filter prompts, maintaining context and handling sensitive content for optimal model performance.
- Content Moderation: Implement processes to detect and filter harmful or inappropriate model outputs.
- Integration: Develop APIs and supporting systems to serve AI models to client applications, integrating with internal and third-party platforms.
- Data Protection/Masking: Apply rigorous data protection, masking, and encryption strategies to safeguard sensitive information and ensure compliance.
- Model Context Protocol (MCP): Integrate and support workflows leveraging MCP for contextual management of model requests and outputs.
- Performance Optimization: Optimize AI service delivery for scalability, reliability, and low latency through modern deployment and caching strategies.
- Security and Compliance: Collaborate with security teams to enforce access controls, privacy, and compliance across AI systems.
- Monitoring and Observability: Implement logging, monitoring, and tracing for model performance, operational analytics, and cost tracking.
- Collaboration: Work with ML engineers, software developers, product managers, and DevOps teams to integrate AI capabilities into the broader system landscape.
- Documentation: Create comprehensive documentation, guides, and internal tooling to support adoption and maintenance of AI systems.
About You
Experience
- 3-6 years designing, implementing, and maintaining AI/ML models in production (Python, TensorFlow, PyTorch, etc.).
- Hands-on experience with Model Context Protocol (MCP) or similar context management workflows.
- Experience implementing prompt management, data protection, and content moderation for AI-powered apps.
- Proven track record of building scalable data and ML pipelines.
- Familiarity with cloud-native architectures (AWS, GCP, or Azure).
Technical Skills
- Proficiency in model development, training, and deployment using modern ML/DL frameworks.
- Strong API integration and RESTful service design expertise for AI model delivery.
- Experience with data engineering, caching (Redis), and monitoring tools.
- Ability to integrate external APIs (e.g., third-party LLM/AI services) and design abstraction layers.
- Experience with AI Gateway platforms (Kong, Mosaic, TrueFoundry, Portkey) and LLM API integrations.
Mindset
- You value clean code, clear documentation, and thoughtful testing.
- You're comfortable working in fast-paced, early-stage environments.
- You communicate clearly and collaborate effectively.
Nice to Have
- Familiarity with observability stacks (OpenTelemetry, Datadog, Prometheus).
- Experience building multi-tenant platforms or scaling AI apps for large user bases.
- Experience with the Go programming language.
Other Details
Location: Remote (LATAM region preferred)
Duration: 1+ year
Company: Gaming Giant