AI Engineer (LLM Integration) OC-14B

About the job

Technologies: Python, LLM APIs, Vector Databases

Locations available: All LatAm

Oceans Code Experts is looking for talented individuals who are ready for the next step in their careers. We offer a collaborative, professional environment as full of rewarding experiences as it is of challenges.

An AI Engineer (LLM Integration) at Oceans can expect to work on multiple projects, collaborate with cross-functional teams, and be transparent about time and tasks so that clients understand the progress of their projects.

Candidates must LOVE helping people, solving business problems, and pushing themselves to slay the next beast of a project.

Job Summary
Join an innovative AI-driven team as an AI Engineer (LLM Integration), where you'll work on cutting-edge language model applications and shape the future of intelligent systems. This is your chance to make a direct impact by developing scalable AI solutions using the latest tools and frameworks.

Job Responsibilities

  • Design and implement end-to-end LLM-powered applications using Python, FastAPI, and LangChain.
  • Integrate and optimize AI models from providers such as OpenAI, Anthropic, and Mistral.
  • Deploy scalable AI services using cloud infrastructure (AWS Lambda, GCP Vertex AI, Azure OpenAI).
  • Collaborate with cross-functional teams to build robust APIs and intelligent workflows.
  • Leverage vector databases (e.g., Pinecone, FAISS, Weaviate) to power semantic search and retrieval systems (see the sketch after this list).
  • Maintain clean codebases using Git, Docker, and basic CI/CD practices.
  • Participate in agile processes, ensuring timely and active engagement in team meetings.

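To give a concrete feel for this kind of work, here is a minimal sketch of a retrieval-augmented LLM endpoint: a FastAPI route that embeds a question, pulls the closest snippet from a FAISS index, and asks an LLM to answer with that context. The model names, the /ask route, and the toy documents are illustrative assumptions, not a prescribed production stack.

```python
# Minimal sketch: FastAPI + OpenAI embeddings + FAISS retrieval.
# Assumes OPENAI_API_KEY is set; models and documents are placeholders.
import os

import faiss
import numpy as np
from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
app = FastAPI()

# Toy document store: embed a few snippets and index them with FAISS.
DOCS = [
    "Consultants log time per project in the tracking portal.",
    "Core meeting hours are 10:00-14:00 in the client's time zone.",
]

def embed(texts: list[str]) -> np.ndarray:
    """Return float32 embedding vectors for a list of texts."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data], dtype="float32")

doc_vectors = embed(DOCS)
index = faiss.IndexFlatL2(doc_vectors.shape[1])
index.add(doc_vectors)

class Question(BaseModel):
    text: str

@app.post("/ask")
def ask(question: Question) -> dict:
    # Retrieve the closest snippet, then let the LLM answer with that context.
    _, ids = index.search(embed([question.text]), 1)
    context = DOCS[int(ids[0][0])]
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using this context: {context}"},
            {"role": "user", "content": question.text},
        ],
    )
    return {"answer": completion.choices[0].message.content}
```
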
Job Requirements

  • Strong English proficiency (B2+, written and spoken).
  • 5+ years of experience as an AI Engineer (LLM Integration).
  • Impeccable punctuality (schedules are flexible, but being on time for meetings is crucial).
  • Deep proficiency in Python, with hands-on experience using FastAPI and LangChain.
  • Familiarity with LLM APIs such as OpenAI, Anthropic, or Mistral.
  • Experience deploying applications in cloud environments like AWS, GCP, or Azure.
  • Solid understanding of vector databases (e.g., Pinecone, FAISS, Weaviate).
  • Familiar with version control and containerization tools (Git, Docker) and basic CI/CD workflows.

Nice to have

  • Experience with LlamaIndex, Hugging Face, Redis, or Streamlit.
  • Front-end integration skills using TypeScript / Next.js.
  • Familiarity with knowledge graphs and semantic search concepts.

Position Type and Expected Hours of Work
This is a full-time consultancy with up to 40 hours per week during regular business hours. We operate under a flexible core-hours policy to accommodate various schedules, allowing consultants to work during their peak productivity times. Additionally, we offer the flexibility to work remotely.