Head of Data Engineering

About the job

We're partnering with a fast-scaling, innovation-led company at the forefront of EV manufacturing, IoT telematics, financial systems, and geospatial platforms to hire a Head of Data Engineering. This is a leadership role that combines strategy, architecture, and team building to drive enterprise-wide data transformation.

Location: Kenya
Reporting To: Chief Technology Officer (CTO)
Industry: Electric Vehicles | Big Data | AI/ML | Digital Platforms

Role Overview

As Head of Data Engineering, you will own the vision, design, and development of next-generation data platforms. You'll lead a talented team across geographies to build scalable, reliable data pipelines that power analytics, AI models, and business intelligence, all while embedding best-in-class engineering practices and data governance.

Key Responsibilities

Strategic Leadership

  • Define and drive the data engineering roadmap in alignment with digital strategy
  • Build and scale a high-performing team in India and other hubs
  • Foster a data-driven, quality-first engineering culture

Architecture & Delivery

  • Design and build robust, high-throughput data pipelines (PySpark, Spark, Hadoop)
  • Architect scalable platforms for structured and unstructured data
  • Implement efficient ETL, transformation, and integration workflows
  • Develop and maintain data lakes/warehouses with a focus on cost-performance balance

Governance & Optimization

  • Enforce data quality and validation frameworks
  • Optimize systems for performance, cost, and reliability
  • Partner with cybersecurity and compliance for regulatory alignment (e.g., GDPR, DPDP)

Collaboration & Mentoring

  • Translate business needs into data engineering solutions
  • Work closely with analytics, product, and software teams
  • Mentor engineers; establish engineering best practices and DevOps for data pipelines

What We're Looking For

  • 8+ years in Data Engineering with 2+ years in leadership
  • Strong expertise in PySpark, Spark, and the Hadoop ecosystem
  • Hands-on coding skills in Python (preferred), Scala, or Java
  • Deep experience with AWS/GCP/Azure and native data services
  • Proven success in building data lakes, warehouses, and real-time pipelines
  • Familiarity with orchestration tools (Airflow, Prefect, etc.)
  • Strong leadership, stakeholder management, and communication skills

Bonus if you have:

  • Containerization (Docker, Kubernetes)
  • Kafka/event-driven pipelines
  • AI/ML pipeline integration experience
  • SQL optimization and RDBMS tuning

Why Apply?

This is your chance to shape the data future of a tech-led, mission-driven organization that's transforming mobility, sustainability, and digital ecosystems. If you're passionate about leading innovation, scaling systems, and mentoring talent, we'd love to connect.