Cloud & Infrastructure Engineer (Data Platforms) - Porto (2 days/month on-site)

ABOUT THE OPPORTUNITY

Join a global custom software solutions company with 40 years of experience building innovative technology for clients worldwide. This specialized role supports a major Dutch client's data platform initiative, offering the opportunity to architect and maintain cloud infrastructure designed specifically for modern data workloads: not general application hosting, but the robust foundation that enables Data Engineers to build scalable pipelines and analytics solutions. You'll work on challenging projects that bridge DevOps and Data Engineering in a collaborative, flat management structure where you're a valued team member rather than a number. A flexible hybrid model requires only 2 days per month in the Porto office, giving you exceptional work-life balance alongside knowledge sharing, social events, catered lunches, and a culture, spread across 7 international hubs, that emphasizes continuous learning, career growth, and recognition through awards and performance bonuses.

PROJECT & CONTEXT

You'll design and deploy the "plumbing" of the data ecosystem: scalable cloud infrastructure that lets Data Engineers focus on pipelines and models without managing the underlying servers. This is not a Data Engineering role; rather, you are the infrastructure specialist who makes Data Engineering possible at scale.

Your primary focus is Databricks platform deployment and configuration (ideally in Azure environments), using Infrastructure as Code (Terraform) to automate workspace provisioning, compute clusters, and data lake architectures that handle massive-scale storage and high-throughput networking. You'll implement robust security and governance through IAM/Entra ID, Private Link/VPC networking, and Unity Catalog controls, and build CI/CD pipelines for infrastructure and data products. Responsibilities also include establishing observability for the cost, performance, and reliability of data workloads, creating self-service infrastructure patterns, and understanding modern data architecture patterns such as the Medallion architecture, Delta Lake, and Data Vault, so you can better support data teams using PySpark, SQL, dbt, and Spark Structured Streaming.
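To give a flavor of the self-service infrastructure patterns described above, here is a minimal, illustrative Python sketch that renders a Databricks cluster-policy definition per team. The attribute paths and value types follow the Databricks cluster-policy format, but the specific team names, limits, and tags are hypothetical assumptions for illustration, not the client's actual configuration.

```python
import json

def render_cluster_policy(team: str, max_workers: int) -> dict:
    """Build a cluster-policy definition that caps autoscaling and
    enforces per-team cost tags and auto-termination.

    Note: attribute paths ("autoscale.max_workers", "custom_tags.team")
    and value types ("range", "fixed") follow the Databricks cluster-policy
    schema; the concrete values here are illustrative only.
    """
    return {
        # Cap cluster size so one team cannot monopolize compute spend.
        "autoscale.max_workers": {"type": "range", "maxValue": max_workers},
        # Pin a cost-attribution tag to the owning team.
        "custom_tags.team": {"type": "fixed", "value": team},
        # Force idle clusters to shut down after 30 minutes.
        "autotermination_minutes": {"type": "fixed", "value": 30},
    }

if __name__ == "__main__":
    policy = render_cluster_policy("data-platform", 8)
    print(json.dumps(policy, indent=2))
```

In practice a pattern like this would feed a Terraform resource (or the Databricks API) from a single source of truth, so teams can request compliant compute without hand-editing infrastructure code.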

WHAT WE'RE LOOKING FOR (Required)

  • Cloud Engineering Experience: 5+ years professional experience building and maintaining cloud infrastructure at scale
  • Data Infrastructure Background: Proven experience working on data engineering projects and understanding data workload requirements
  • Databricks Expertise: Hands-on experience with Databricks configuration, deployment, and workspace administration—this is essential
  • Terraform Mastery: Deep expertise creating Terraform modules, managing state, workspaces, and Infrastructure as Code best practices
  • Azure Proficiency: Strong experience with Azure Data Lake Storage Gen2 (ADLS Gen2), Microsoft Entra ID (Service Principals, Managed Identities), Virtual Networks, Private Link, and Network Security Groups
  • Containerization: Proficiency with Docker and Kubernetes (EKS, AKS, or self-managed) for managing data workloads
  • CI/CD Pipelines: Advanced proficiency with Git, Azure DevOps, and GitHub Actions for automating infrastructure deployment
  • Scripting Skills: Strong coding abilities in Python and Bash for automation and tooling
  • Cloud Security: Implementation experience with Least Privilege access, RBAC, encryption at rest/in transit, and Secrets Management (Azure Key Vault / AWS Secrets Manager)
  • Cloud Networking: Designing complex networks with VNETs/VPCs, Private Link/Endpoints, DNS resolution, and Firewalls for data traffic isolation
  • Data Architecture Understanding: Familiarity with modern data patterns including Medallion architecture, Delta Lake, and Data Vault
  • Observability: Experience configuring cost management, logging (CloudWatch/Azure Monitor), and alerting systems
  • Language: B2 English (Upper Intermediate) minimum; the entire interview process is conducted in English and solid proficiency is required
  • Location: Based in Porto/Northern Portugal region with availability for 2 on-site days per month

NICE TO HAVE (Preferred)

  • AWS Skills: S3 configuration (lifecycle policies, intelligent tiering), IAM role management, VPC design, Lambda/Step Functions, Glue, Kinesis, EMR
  • Advanced Databricks: Unity Catalog configuration (metastores, catalogs, external locations), Cluster Policies, Instance Profiles, Databricks Workflows, Delta Live Tables (DLT), MLflow, Mosaic AI
  • Alternative IaC Tools: Experience with CloudFormation, Bicep, or Crossplane
  • Data Tools Familiarity: Understanding how Data Engineers use PySpark, SQL, dbt, and Spark Structured Streaming
  • Azure Data Factory: Support for ADF and Synapse integrations
  • Brazilian or Portuguese nationality (company preference)

Certifications (Advantageous):

  • Databricks Certified Data Engineer Professional (highly desirable)
  • AWS Certified Solutions Architect – Associate/Professional
  • Microsoft Certified: Azure Solutions Architect Expert or Azure Data Engineer Associate (DP-203)
  • HashiCorp Certified: Terraform Associate
  • Certified Kubernetes Administrator (CKA)
  • AWS Certified Data Engineer – Associate

Location: Porto, Portugal (Hybrid - 2 days/month on-site)