About the job MLOps Engineer / AI Infrastructure Specialist OC-16B
Technologies: Kubernetes, AWS SageMaker, MLflow
Locations available: All LatAm
Oceans Code Experts is looking for talented individuals who are ready for the next step in their career. We offer a collaborative professional environment as full of rewarding experiences as it is of challenges.
An MLOps Engineer / AI Infrastructure Specialist at Oceans can expect to work on multiple projects, collaborate with cross-functional teams, and be transparent about time and tasks to help clients understand the progress of their projects.
Candidates must LOVE helping people, solving business problems, and pushing themselves to slay the next beast of a project.
Job Summary
We’re looking for a seasoned MLOps Engineer / AI Infrastructure Specialist to drive the deployment, scalability, and automation of AI/ML pipelines. If you're passionate about building robust machine learning infrastructure and working at the intersection of AI and DevOps, this is your opportunity to make an impact.
Job Responsibilities
- Design, implement, and maintain scalable MLOps pipelines for model training, evaluation, and deployment.
- Automate workflows using CI/CD tools such as GitLab, Jenkins, or GitHub Actions.
- Manage and optimize containerized environments using Docker and orchestrate deployments with Kubernetes.
- Collaborate with data scientists and engineers to streamline experimentation and operationalize ML models.
- Deploy, monitor, and manage models using cloud platforms like AWS SageMaker, Azure ML, or Vertex AI.
- Ensure infrastructure reliability and performance, including logging, versioning, and automated rollback.
- Maintain punctuality and consistency in remote work environments, particularly for meetings and team coordination.
Job Requirements
- Strong English proficiency (B2+, written and spoken)
- 8+ years of experience as an MLOps Engineer / AI Infrastructure Specialist
- Impeccable punctuality (schedules are flexible, but being on time for meetings is crucial)
- Proficient in Python and experienced in deploying ML models using TensorFlow and/or PyTorch.
- Deep hands-on experience with containerization (Docker) and orchestration (Kubernetes).
- Strong background in implementing and maintaining CI/CD pipelines.
- Proven experience working with cloud-based ML platforms like AWS SageMaker, Azure ML, or Vertex AI.
Nice to have
- Experience with workflow orchestration tools like Kubeflow or Airflow, and data platforms like Databricks.
- Monitoring and infrastructure as code tools such as Prometheus, Grafana, and Terraform.
- Familiarity with data versioning tools like DVC or LakeFS.
Position Type and Expected Hours of Work
This is a full-time consultancy of up to 40 weekly hours during regular business hours. We operate under a flexible core-hours policy to accommodate various schedules, allowing consultants to work during their peak productivity times. Additionally, we offer the flexibility to work remotely.