Senior Data Engineer (Python/Spark/Kafka) - Hybrid Porto (3 days/week office)
ABOUT THE OPPORTUNITY
An international technology-driven organization is looking for a Senior Data Engineer to join a highly skilled data team focused on building scalable and modern data platforms. This opportunity is ideal for professionals who enjoy working on complex enterprise data ecosystems, designing high-performance pipelines, and collaborating with cross-functional teams in an innovative and data-centric environment.
You will play a key role in the development and optimization of cloud-native data solutions, supporting strategic business initiatives through reliable, scalable, and secure data architectures. The position offers exposure to large-scale projects, modern big data technologies, and an international working environment where English is the primary communication language.
The role starts with an initial on-site onboarding period during the first month, followed by a hybrid model with 3 office days per week in Porto.
PROJECT & CONTEXT
The project focuses on the evolution of enterprise-grade data platforms, including data lakes, data warehouses, real-time streaming pipelines, and cloud-based analytics solutions. The team operates within Agile and DevOps methodologies, ensuring continuous integration and continuous delivery practices across the data engineering lifecycle.
As a Senior Data Engineer, you will design, develop, and maintain scalable ETL/ELT pipelines while ensuring data quality, governance, security, and performance optimization. You will collaborate closely with Data Scientists, Analysts, Product teams, and Engineering stakeholders to deliver robust and business-oriented data solutions.
The technical ecosystem includes Python, Apache Spark, Apache Kafka, Hadoop, SQL/NoSQL databases, AWS, Microsoft Azure, Google Cloud Platform (GCP), Tableau, and Power BI.
English is required for daily communication in this international environment.
WHAT WE'RE LOOKING FOR (Required)
- Minimum 7 years of experience in Data Engineering, Data Architecture, or related areas
- At least 3 years of experience in a Senior role
- Strong experience with Python for data processing and automation
- Hands-on experience with Apache Spark, Hadoop ecosystem, and Apache Kafka
- Solid experience designing and maintaining ETL/ELT pipelines and distributed data processing architectures
- Experience with cloud platforms such as AWS, Microsoft Azure, and/or Google Cloud Platform (GCP)
- Strong knowledge of SQL, NoSQL, and columnar database technologies
- Experience with data warehousing, data lakes, and scalable cloud-native architectures
- Knowledge of data governance, data quality standards, and data security best practices
- Experience implementing CI/CD pipelines within Agile and DevOps environments
- Familiarity with data visualization tools such as Tableau and Power BI
- Strong communication and stakeholder management skills
- Ability to mentor junior engineers and contribute to technical leadership initiatives
- Bachelor's degree in Computer Science, Information Technology, Data Science, or related field
- Portuguese and English language skills required (English: minimum B2 / Upper-Intermediate)
NICE TO HAVE (Preferred)
- Experience with Java or Scala for data engineering solutions
- Experience with Looker or other BI/analytics platforms
- Exposure to real-time data streaming architectures and event-driven systems
- Knowledge of advanced database optimization strategies, partitioning, and indexing techniques
- Experience working in highly regulated enterprise environments
- Google Cloud Professional Data Engineer certification
- AWS Certified Big Data – Specialty certification
- Microsoft Certified: Azure Data Engineer Associate certification
- Master's degree in Computer Science, Data Science, or related field
- Previous experience in international and multicultural environments