Senior Data Engineer - Apache Airflow, PySpark, Python, SQL
Senior Data Engineer: Design and Build Scalable Cloud Data Solutions
Why this is a great move for you:
Technical Leadership: This is more than a development role. You will lead technical teams, mentor team members, and guide technical design, making a direct impact on project success and team growth.
Work with a Modern Data Stack: You will work hands-on with modern tools and platforms, including Apache Airflow, PySpark, the major cloud providers, and data formats such as Avro, Parquet, and Delta, keeping your skills at the forefront of the industry.
This position is integral to designing, building, and optimising the data solutions that drive business decisions. You will be responsible for creating efficient data pipelines, developing database schemas, and ensuring the reliability and performance of the entire data infrastructure. This role offers the opportunity to lead technical teams and implement cutting-edge projects in a dynamic, Agile environment.
What you will be doing:
ETL/ELT Pipeline Development: You will build and maintain efficient data pipelines using tools like Apache Airflow and PySpark.
Data Modelling and Design: Your responsibilities will include developing database schemas and dimensional models (Kimball/Inmon) for both relational and NoSQL databases.
Data Warehousing: You will actively participate in the development and maintenance of data warehouses, data lakes, and data lakehouses.
Database Management: You will work with a variety of database systems, including Azure SQL, PostgreSQL, Google BigQuery, and NoSQL systems like MongoDB.
API Integration: You will develop and maintain APIs for seamless data integration, working with REST and microservices architectures.
Team Collaboration: You will provide technical design and coding assistance to team members to ensure the successful delivery of project milestones.
What you will need to bring:
- A bachelor's degree in Computer Science, Data Science, or a related field.
- 5+ years of progressive experience in data engineering and cloud computing.
- Strong proficiency in Python and SQL.
- Hands-on experience with Apache Airflow and PySpark.
- Working knowledge of cloud platforms such as Azure, GCP, or AWS.
- Proven experience leading technical teams and mentoring team members.
- A problem-solving mindset with a focus on continuous improvement and innovation.
Come and help businesses unlock real insights with a modern tech stack and awesome colleagues. Apply now!