AI Product Engineer
About hireworks
hireworks is building a community of top talent in Bulgaria and Colombia, unlocking unparalleled access to positions at leading U.S.-based companies. As your employer, hireworks will ensure you have a seamless interview, onboarding, and employee experience, providing ongoing support and resources along the way. Established in 2023, hireworks is forging corp-to-corp relationships with leading U.S.-based organizations looking to grow their teams with best-in-class talent out of Bulgaria and Colombia. Working with hireworks means unlocking access to a network of local peers and mentors, as well as career opportunities through our client network.
The Opportunity
As an AI Product Engineer at Turnstile, you'll be at the forefront of integrating cutting-edge AI capabilities into our platform. You'll help shape how we leverage modern LLMs and agent-based systems to transform the future of revenue management, working alongside our early team. Using state-of-the-art AI frameworks and full-stack technologies, you'll architect and ship intelligent features that solve real customer problems while delivering experiences that feel like magic. You'll help establish engineering practices and AI development patterns that support a culture of experimentation, collaboration, and excellence. More than that, you'll join a tight-knit team of repeat founders and seasoned operators in defining what AI-native SaaS looks like.
In this role, we'll ask you to:
Partner with product managers and designers to integrate AI capabilities into our core product
Design LLM-powered features using modern frameworks and context engineering techniques
Build and iterate on AI agents that automate complex workflows and decision-making processes
Navigate the challenges of non-deterministic systems: handling hallucinations, optimizing for latency, and ensuring reliability
Translate nuanced business logic into effective prompts, TypeScript APIs, and agent architectures
Move fluidly between frontend integration, backend orchestration, and AI model interactions
Ship AI features with velocity while maintaining quality and measuring real-world performance
Build instrumentation and evaluation systems to continuously improve AI outputs
We'd love to hear from you if:
You have a history of building production applications, with demonstrated experience integrating LLMs or AI capabilities
You're proficient with modern LLM frameworks (e.g. Vercel’s AI SDK, LangChain, OpenAI Agents SDK) or have built custom orchestration layers
You've worked with LLM providers (OpenAI, Anthropic, Google Gemini) and understand their APIs, limitations, and cost structures
You have hands-on experience with prompt engineering, few-shot learning, and retrieval-augmented generation (RAG)
You understand the practical challenges of building with AI: managing context windows, handling rate limits, and debugging non-deterministic behavior
You've built frontends using popular UI frameworks (e.g. React, Angular) and know how to design AI-powered user experiences
You've built modern APIs using REST or GraphQL that interface with AI services
You're experienced with modern data stores such as PostgreSQL, DynamoDB, or vector databases like Pinecone or Weaviate
You'd excel working in a dynamic and high-trust environment alongside an experienced team
Bonus points for experience with:
Building multi-agent systems or autonomous AI workflows
Function calling / tool use patterns with LLMs
Event-driven architectures: AWS Lambda, queuing systems, and asynchronous processing for AI workloads
Code repository management, build systems and CI/CD pipelines
Common publicly available SaaS APIs (e.g. Twilio, Segment.io, Docusign, Salesforce)
AI observability and evaluation tools (LangSmith, Weights & Biases, Braintrust)
Responsible AI practices: managing bias, privacy considerations, and content filtering