About the Role: Sr Software Engineer – Machine Learning
Senior ML Engineer
About the Company
Our client is a technology company building advanced infrastructure powered by data, machine learning, and AI systems to solve complex problems at scale.
They are developing large-scale data pipelines, machine learning models, and agentic systems that enable automated workflows and intelligent decision-making. The team is focused on rapidly taking solutions from concept to production.
They are looking for a Senior Software Engineer who wants to work close to the core of the technology stack, building and operating robust AI/ML systems.
This position is open to candidates in: Colombia, Argentina, Brazil.
What will you do in this role?
You will be responsible for designing, building, and operating machine learning systems, large-scale data pipelines, and AI agents that power core product capabilities.
This role involves working across the full lifecycle of ML systems — from data ingestion and distributed processing to model deployment and monitoring.
Your main responsibilities will include:
- Designing and building distributed data pipelines to process large datasets.
- Developing and deploying machine learning models in production environments.
- Implementing agent-based AI systems that interact with external services and internal infrastructure.
- Designing scalable architectures for data processing and ML training pipelines.
- Establishing observability for pipelines, models, and agents (metrics, tracing, alerting).
- Evaluating modeling approaches and optimizing cost vs. performance trade-offs.
- Collaborating with product and customer teams to build solutions that drive business impact.
- Iterating quickly from prototype to production-ready systems.
This role requires strong technical ownership and the ability to build reliable systems end-to-end.
What should you bring?
We are looking for engineers with strong experience in production machine learning systems, distributed data processing, and scalable infrastructure.
Ideally you have:
- Strong experience with Spark and SQL in distributed data environments.
- Experience building and deploying machine learning systems in production.
- Experience working with large and complex datasets.
- Experience designing training, deployment, and monitoring pipelines for ML models.
- Experience working with cloud services across data, compute, and ML.
- Ability to design clear software architectures and well-documented systems.
- Strong technical communication and collaboration skills.
Languages: Python, Scala
Tools / Frameworks: Spark, AWS (SageMaker / Bedrock), Kubernetes
Nice to Have
- Experience building products from 0 to production, especially in startup environments.
- Experience working with large geospatial datasets and indexing strategies.
- Experience building AI agents that operate at scale.
- Experience with fine-tuning, distilling, or self-hosting LLMs.
- Background in traditional ML with messy datasets and strong evaluation methodologies.
- Experience with CI/CD, containerization, and infrastructure as code.
What you will receive
- Contractor agreement
- Compensation in USD
- Fully remote work
- Opportunity to work on cutting-edge AI and ML infrastructure
- Highly technical environment with strong engineering ownership
- The chance to build and scale systems that move from idea to production quickly
Submit your resume and join a process that can change your life.
Best regards,
T-mapp Team