Role Description: Data Engineer - Microsoft Fabric
Who We Are
Provoke is a global consulting firm building AI-native solutions that transform how work gets done. Founded on a culture of innovation, growth, and curiosity, we partner with global clients to design and deploy agentic AI embedded directly into workflows so teams move faster, think smarter, and scale with purpose.
Overview
We are seeking a Data Engineer to join Microsoft Fabric data engineering engagements. This is a hands-on technical role: you will build modern data platforms for clients and deliver scalable solutions across the full data engineering lifecycle.
This role is ideal for someone who is excited to be in a fast-paced consulting environment, is passionate about Gen AI and Agentic AI, stays current as the space evolves, and brings excellent communication skills to client-facing work.
We're looking for
- Experience in consulting, professional services, or similar client-facing roles
- Versatility and an appetite for variety and challenge
- Excitement about working in a fast-paced environment
- Innate problem-solving drive and a desire to grow in a flexible, collaborative culture
- Initiative: pushing boundaries and a motivation to innovate
- A growth mindset and a desire to apply learning and relationship-building skills
- Proactive use of Gen AI and agents within the development process to improve quality and speed
Key Responsibilities
- Deliver Microsoft Fabric solutions including Lakehouses, Data Pipelines, Notebooks, and Semantic Models
- Build and maintain data pipelines and transformations using SQL, Python, and/or PySpark
- Translate defined business and technical requirements into working data solutions
- Contribute to development sprints by completing assigned stories end-to-end including development, testing, and validation
- Debug and troubleshoot data issues across pipelines and datasets
- Follow established data engineering best practices, coding standards, and CI/CD processes
- Collaborate with senior engineers, analysts, and client stakeholders to deliver high-quality outcomes
- Collaborate with Data Analysts and BI teams to ensure Fabric Semantic Models and Power BI datasets meet reporting needs
Required Skills & Experience
- 1–2 years of experience in data engineering, software engineering, or a related field, including internships
- Proactive use of Gen AI and agents at every step of the SDLC to improve quality and speed
- Hands-on proficiency with Microsoft Fabric: Lakehouse, Warehouse, Data Factory pipelines, Notebooks, and OneLake
- Strong PySpark and SQL skills
- Experience designing medallion (bronze/silver/gold) architectures and applying Delta Lake patterns
- Familiarity with Power BI and Fabric Semantic Model integration, including Direct Lake connectivity
- Experience with Azure DevOps or GitHub for CI/CD, version control, and deployment automation of Fabric workloads
- Ability to take a task from requirements to completed implementation with guidance as needed
- Clear communication skills and ability to collaborate effectively in a team and with the client
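As a rough illustration of the medallion (bronze/silver/gold) pattern named above, the sketch below walks raw records through the three layers in plain Python. In a real Fabric engagement this would be PySpark notebooks writing Delta tables in a Lakehouse; the record fields and values here are hypothetical, chosen only to show the layering idea.

```python
# Hypothetical medallion-style flow: bronze (raw) -> silver (clean) -> gold (aggregated).
# A real implementation would use PySpark over Delta tables; this is stdlib-only.

# Bronze: raw ingested records, possibly dirty or incomplete.
bronze = [
    {"order_id": "1", "amount": "100.0", "region": "west"},
    {"order_id": "2", "amount": None, "region": "west"},   # bad record: missing amount
    {"order_id": "3", "amount": "50.5", "region": "east"},
]

# Silver: validated and typed records; rows with missing amounts are dropped.
silver = [
    {"order_id": r["order_id"], "amount": float(r["amount"]), "region": r["region"]}
    for r in bronze
    if r["amount"] is not None
]

# Gold: business-level aggregate, e.g. revenue per region, ready for reporting.
gold = {}
for r in silver:
    gold[r["region"]] = gold.get(r["region"], 0.0) + r["amount"]

print(gold)  # {'west': 100.0, 'east': 50.5}
```

The layering keeps raw data immutable in bronze, isolates cleansing rules in the bronze-to-silver step, and leaves gold as the only layer that downstream Semantic Models and Power BI reports read.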
Desirable Qualifications
- Microsoft certifications: DP-600 (Fabric Analytics Engineer), DP-203 (Azure Data Engineer), or equivalent
- Experience with dbt or other transformation frameworks within Fabric or Azure-based stacks
- Exposure to data governance tooling (Microsoft Purview, Unity Catalog, or similar)
- Background in data mesh or domain-oriented data ownership models
Working Hours
US time zone alignment, anchored to Pacific Time (PT). Flexibility to accommodate client schedules is part of the role.
Location
United States. Travel of 25–50% is required.