Data Engineer
Job Description:
Position: Data Engineer (B2B / Freelancer Contract)
Location: Remote, open to candidates based in the European Union
Role Overview
We are looking for an experienced Data Engineer to join our team and contribute to the development, maintenance, and optimization of complex data ingestion and ETL pipelines within a modern cloud-based data ecosystem. The role focuses on ensuring the scalability, reliability, and performance of data workflows, leveraging technologies such as Azure Synapse and Python. You will collaborate closely with architects, developers, and operations teams to deliver efficient, secure, and compliant data solutions.
Key Responsibilities
- Design and implement new functionalities in Python-based ETL pipelines with a focus on scalability and performance optimization.
- Proactively maintain and optimize existing data pipelines for efficiency, reliability, and cost-effectiveness.
- Plan and manage the sizing and capacity of cloud service components, ensuring proper resource utilization.
- Participate in the setup and configuration of new service components in collaboration with architects and modelers.
- Oversee operations and maintenance of related software, server infrastructure, and databases, including Azure Synapse and Power BI.
- Maintain service-related documentation according to internal standards, ensuring accuracy and clarity.
- Implement monitoring and alerting systems to ensure SLA compliance and system stability.
- Continuously document, maintain, and optimize processes, configurations, and workflows across environments.
- Collaborate with cross-functional teams to coordinate releases, upgrades, and deployments.
- Support and participate in service-related risk assessments and audits, ensuring compliance with governance frameworks.
Required Skills & Experience
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum 5 years of experience in data engineering.
- At least 2 years of hands-on experience with Python for data processing (Polars, Pandas, PySpark).
- Minimum 2 years of experience with Microsoft Azure services.
- Strong expertise in building and managing pipelines using Azure Synapse or similar technologies.
- Proficiency in SQL and familiarity with relational and non-relational databases.
- Knowledge of Big Data technologies such as Apache Spark.
- Experience working with software-as-a-service (SaaS) models.
- Excellent communication and collaboration skills with a strong customer focus.
- Experience with Delta Lake and/or Power BI.
- Familiarity with DevOps processes and Data Vault 2.0 methodologies.
- Knowledge of data governance tools (e.g., Informatica) is a plus.
- Excellent command of English, both written and spoken.
Preferred Qualifications
- Experience in cloud-based enterprise data environments.
- Strong problem-solving and performance-tuning abilities in large-scale systems.
- Ability to design automation strategies for the deployment and maintenance of data pipelines.
Contract Type
- Engagement Model: B2B / Freelancer
- Location: Fully remote (EU-based candidates only)