Senior Data Engineer - Hybrid (100k-130k)
Position: Senior Data Engineer
Work Arrangement:
- First month: Full onsite
- Succeeding months: 1 week onsite (typically the 3rd week of the month; may vary at the IT Manager's discretion), 3 weeks work from home
Work Schedule: Day shift, flexible hours
Location: Ermita, Manila
Perks & Benefits:
- Free lunch during onsite work
- Allowances and other benefits
- HMO coverage upon joining
- HMO extended to four (4) free dependents upon regularization
- Salary Range: PHP 100,000 - 130,000 (negotiable)
Role Overview:
We are looking for a Senior Data Engineer to design, build, and maintain scalable data pipelines and architectures. You will be responsible for transforming raw data into clean, structured datasets that support business intelligence, analytics, and reporting. This is a new role, offering an opportunity to shape our data infrastructure from the ground up.
Key Responsibilities:
- Build, maintain, and optimize ETL/ELT pipelines that extract, transform, and load data from various sources into data warehouses, lakehouses, and databases.
- Implement physical data models to structure data for optimal storage and analysis, ensuring data integrity and performance.
- Develop processes to transform raw data into analytics-ready formats using modern data tools and techniques.
- Collaborate with data analysts, scientists, and business stakeholders to understand data requirements and deliver reliable and scalable data solutions.
- Ensure data quality and governance, including security, compliance, and lineage tracking.
- Automate data workflows and deploy monitoring tools to detect anomalies and maintain uptime.
- Work with Azure Data Factory, Microsoft Fabric, and other Azure-based services for seamless data integration and orchestration.
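To illustrate the kind of pipeline work described above, here is a minimal extract-transform-load sketch in plain Python. It is purely illustrative; the source records, field names, and in-memory "warehouse" target are hypothetical, standing in for the Azure Data Factory / PySpark tooling the role actually uses.

```python
# Illustrative ETL sketch: extract raw records, transform them into an
# analytics-ready shape, and load them into a target.
# All names below (extract, transform, load, warehouse) are hypothetical.

from datetime import datetime

def extract():
    # Extract: raw source records, e.g. pulled from an API or staging table.
    return [
        {"order_id": "1001", "amount": "250.00", "ordered_at": "2024-03-05"},
        {"order_id": "1002", "amount": "99.50", "ordered_at": "2024-03-06"},
    ]

def transform(rows):
    # Transform: cast string fields to proper types so downstream
    # analytics and reporting get clean, structured data.
    return [
        {
            "order_id": int(r["order_id"]),
            "amount": float(r["amount"]),
            "order_date": datetime.strptime(r["ordered_at"], "%Y-%m-%d").date(),
        }
        for r in rows
    ]

def load(rows, target):
    # Load: append cleaned rows into the destination
    # (a list stands in for a warehouse table here).
    target.extend(rows)
    return target

warehouse = load(transform(extract()), [])
```

In production, the same extract/transform/load stages would typically be expressed as PySpark jobs orchestrated by Azure Data Factory or Microsoft Fabric pipelines rather than plain Python functions.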
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Mathematics, or a related field.
- Minimum of 6 years of experience in Data Engineering, with a strong track record of delivering enterprise-level solutions.
- Extensive hands-on experience in ETL, ELT, data warehousing, and data modeling.
- At least 3 years of experience on the Microsoft Azure platform, with expertise in:
  - Azure Data Factory
  - Microsoft Fabric
  - Other Azure Data Services
- Strong programming skills in PySpark or Python and T-SQL scripting.
- Experience with performance optimization, data partitioning, and managing large-scale datasets.
Tech Stack Keywords:
PySpark, Azure, Data Factory, Microsoft Fabric, ETL, ELT, Data Modeling, T-SQL, Data Warehouse