Senior Technical Lead - Hybrid Lisbon (2 days office)

ABOUT THE OPPORTUNITY

Join the energy and utilities sector as a Senior Technical Lead and drive strategic technical initiatives, architectural design, and implementation of advanced data and AI solutions for critical energy infrastructure systems.

You'll be working for an organization operating in the Energy & Utilities market, where data-driven solutions power smart grids, energy optimization, predictive maintenance, and operational intelligence across critical infrastructure. As a highly experienced technical leader with 15+ years of expertise, you'll leverage multi-cloud platforms (AWS, Azure, GCP with Azure focus), modern data architectures, machine learning capabilities, and cutting-edge AI technologies including GenAI and Agentic AI to solve complex technical challenges in the regulated energy sector.

This role combines deep technical expertise with strategic thinking, requiring hands-on experience designing complex solutions while managing technical debt, documenting architectures, and coordinating infrastructure changes across global collaborative teams. You'll work at the intersection of data engineering, machine learning operations, DevOps practices, and advanced AI implementations, delivering analytical solutions that directly impact energy distribution, consumption optimization, and regulatory compliance.

Critical Requirements: This is a senior-level position requiring 15+ years of experience across platforms, languages, technologies, and frameworks, with a degree in Computer Science, Software Engineering, or a related field. MANDATORY expertise in multi-cloud (AWS, Azure, GCP with Azure focus), lambda and medallion architectures, data technologies (ADF/Glue, Databricks, Azure ML/SageMaker), serverless computing, storage solutions, DevOps, MLOps, and implementing ML/GenAI/Agentic AI solutions. Knowledge of the Energy & Utilities market and GDPR is essential. English B2+ required.

PROJECT & CONTEXT

You'll be driving strategic technical projects in the energy and utilities sector, where technology enables critical infrastructure operations, regulatory compliance, energy distribution optimization, and smart grid capabilities. The sector demands robust, secure, reliable solutions that operate under strict regulatory requirements including GDPR compliance for customer data protection and energy market regulations governing data handling and privacy.

Your multi-cloud expertise is fundamental - you'll work across AWS, Azure, and Google Cloud Platform with primary focus on Microsoft Azure, designing cloud-native solutions that leverage platform-specific capabilities while maintaining portability where appropriate. Understanding each cloud's strengths, services, pricing models, and integration patterns enables you to architect optimal solutions for energy sector requirements including high availability, disaster recovery, and regulatory compliance.

Lambda architectures and medallion data architectures guide your design approach - you'll implement lambda architecture patterns combining batch and stream processing for comprehensive data handling, design medallion architecture layers (bronze/silver/gold) for progressive data refinement and quality improvement, ensure data flows efficiently from raw ingestion through curated business-ready datasets, and balance real-time processing needs with batch analytics requirements specific to energy operations.
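The bronze/silver/gold refinement described above can be sketched in miniature. This is a conceptual illustration only, with hypothetical meter data and plain Python standing in for the Spark/Delta transformations you would actually use:

```python
# Medallion-layer sketch: raw (bronze) -> cleaned (silver) -> business-ready (gold).
# Data shapes and field names here are hypothetical.

def to_silver(bronze_rows):
    """Bronze -> silver: drop malformed readings and normalize identifiers."""
    silver = []
    for row in bronze_rows:
        if row.get("kwh") is None or row["kwh"] < 0:
            continue  # quality check: discard invalid raw readings
        silver.append({"meter_id": row["meter_id"].strip().upper(),
                       "kwh": float(row["kwh"])})
    return silver

def to_gold(silver_rows):
    """Silver -> gold: aggregate to business-ready totals per meter."""
    totals = {}
    for row in silver_rows:
        totals[row["meter_id"]] = totals.get(row["meter_id"], 0.0) + row["kwh"]
    return totals

bronze = [{"meter_id": " m1 ", "kwh": 2.5},
          {"meter_id": "m2", "kwh": None},   # malformed, filtered at silver
          {"meter_id": "M1", "kwh": 1.5}]
gold = to_gold(to_silver(bronze))
print(gold)  # {'M1': 4.0}
```

The same progression (validate, normalize, aggregate) is what the bronze, silver, and gold Delta tables encode at scale.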

Working with data orchestration and ETL tools including Azure Data Factory and AWS Glue, you'll design and implement data pipelines that extract data from diverse energy systems (SCADA, IoT sensors, customer systems), transform data applying business rules and quality checks, and load into analytical platforms. Your hands-on experience ensures pipelines are reliable, maintainable, and performant at scale.
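The extract-transform-load flow above reduces to three composable stages. A minimal in-memory sketch, with hypothetical sources standing in for SCADA/IoT systems and a list standing in for the analytical store:

```python
# Minimal ETL sketch; source systems, field names, and the business rule
# are illustrative assumptions, not a real pipeline definition.

def extract(sources):
    """Pull raw records from each source system."""
    return [rec for src in sources for rec in src]

def transform(records, max_mw=500):
    """Apply a business rule: cap implausible load readings."""
    return [{"site": r["site"], "mw": min(r["mw"], max_mw)} for r in records]

def load(records, store):
    """Append curated records to the analytical store."""
    store.extend(records)
    return len(records)

scada = [{"site": "sub-1", "mw": 120}]
iot = [{"site": "sub-2", "mw": 9000}]  # sensor spike, capped in transform
warehouse = []
loaded = load(transform(extract([scada, iot])), warehouse)
print(loaded, warehouse)
```

In ADF or Glue, each stage maps to an activity or job, but the reliability concerns (quality rules in transform, idempotent loads) are the same.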

Databricks serves as the unified analytics platform - you'll leverage Databricks for distributed data processing using Spark, implement Delta Lake for reliable data lakes with ACID transactions, use MLflow for machine learning lifecycle management, and build collaborative analytics workflows. Understanding Databricks architecture, optimization techniques, and integration with cloud services enables sophisticated analytical capabilities.

Machine learning platforms including Azure ML and AWS SageMaker enable AI-driven solutions - you'll design ML pipelines for model training and deployment, implement MLOps practices for model versioning and monitoring, integrate ML models into production systems, and ensure models meet performance and reliability requirements for energy sector applications including demand forecasting, predictive maintenance, and anomaly detection.
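For the demand-forecasting use case mentioned above, the baseline every trained model must beat can be shown in a few lines. This moving-average predictor is a stand-in for a real model trained in Azure ML or SageMaker; the load figures are hypothetical:

```python
# Naive demand-forecasting baseline: predict the next hour's load as the
# mean of the last `window` observations. Illustrative data only.

def moving_average_forecast(history, window=3):
    """Return the mean of the most recent `window` values."""
    recent = history[-window:]
    return sum(recent) / len(recent)

hourly_load_mw = [100.0, 110.0, 120.0, 130.0]
print(moving_average_forecast(hourly_load_mw))  # 120.0
```

MLOps practice then wraps such a model with versioning, monitoring against this kind of baseline, and automated redeployment when accuracy drifts.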

Serverless computing with Azure Functions and AWS Lambda enables event-driven architectures - you'll design serverless solutions for real-time processing, implement function-based microservices, handle event triggers from IoT devices and systems, and optimize for cost and performance. Understanding serverless patterns and best practices enables scalable, cost-efficient solutions.
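An event-driven handler of the kind described above can be sketched as a plain function; the real Azure Functions or AWS Lambda binding signatures differ, and the event shape here is an assumption:

```python
# Sketch of a serverless-style event handler for IoT meter events.
# The event schema and anomaly thresholds are hypothetical.

import json

def handle_meter_event(event_body: str) -> dict:
    """Parse an IoT meter event and flag anomalous readings."""
    event = json.loads(event_body)
    reading = event["kwh"]
    return {"meter_id": event["meter_id"],
            "anomaly": reading < 0 or reading > 1000}

result = handle_meter_event('{"meter_id": "m-42", "kwh": 1500}')
print(result)  # {'meter_id': 'm-42', 'anomaly': True}
```

In production the trigger (IoT Hub, Event Grid, SQS) invokes the handler per event, which is what makes the pattern scale to zero and pay per execution.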

Data storage with Azure Data Lake Storage (ADLS) and AWS S3 provides foundation for data platforms - you'll design storage hierarchies and partitioning strategies, implement security and access controls, optimize for query performance and cost, and ensure compliance with data retention policies. Understanding storage patterns for different data types and access patterns enables effective data architecture.
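The partitioning strategies mentioned above commonly follow a hive-style key=value layout, which both ADLS and S3 query engines can prune on. A small sketch with hypothetical zone and dataset names:

```python
# Date-based partition path builder for a data lake (hive-style layout).
# Container/prefix names are illustrative assumptions.

from datetime import date

def partition_path(zone: str, dataset: str, d: date) -> str:
    """Build a hive-style partitioned path, e.g. for daily meter loads."""
    return (f"{zone}/{dataset}/"
            f"year={d.year}/month={d.month:02d}/day={d.day:02d}/")

print(partition_path("silver", "meter_readings", date(2024, 3, 7)))
# silver/meter_readings/year=2024/month=03/day=07/
```

Choosing the partition key to match dominant access patterns (daily loads here) is what keeps both query cost and scan time down.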

Business intelligence and analytics using Power BI and Microsoft Fabric deliver insights to stakeholders - you'll design data models for reporting and analytics, create visualizations and dashboards for operational and executive audiences, implement row-level security and data governance, and leverage Fabric's unified platform for end-to-end analytics. Your ability to translate technical capabilities into business value ensures solutions drive decision-making.

Version control and collaboration with GitHub including Git workflows enables team coordination - you'll implement branching strategies, establish code review processes, configure CI/CD pipelines using GitHub Actions, and ensure proper version control across infrastructure code, data pipelines, and ML models. Understanding modern development workflows brings software engineering discipline to data and analytics projects.

DevOps and MLOps practices ensure operational excellence - you'll implement continuous integration and deployment for data pipelines and ML models, establish monitoring and alerting for production systems, automate infrastructure provisioning and configuration, and ensure reliability through proper testing, deployment strategies, and incident response procedures.

Advanced AI implementation with Machine Learning, Generative AI, and Agentic AI components represents cutting-edge capabilities - you'll design solutions incorporating classical ML for predictive analytics, leverage Generative AI for content generation or data augmentation, implement Agentic AI systems that can reason, plan, and execute tasks autonomously, and ensure AI solutions are integrated effectively into business processes while maintaining safety, reliability, and compliance.

Strategic project experience means you'll work on initiatives with significant business impact, long-term implications, and cross-organizational dependencies. Your ability to analyze critically, solve complex problems, work collaboratively with global teams, and deliver solutions aligned with strategic objectives ensures success on high-stakes projects.

Responsibilities include managing technical debt backlog - identifying, prioritizing, and addressing technical debt that impacts system maintainability and performance. You'll create and maintain technical documentation for applications, architectures, and processes, ensuring knowledge transfer and operational continuity. Requesting and coordinating infrastructure changes requires working with infrastructure teams, cloud providers, and stakeholders to provision resources, implement changes, and ensure proper configuration.

Core Tech Stack: Azure (primary), AWS, GCP, ADF/Glue, Databricks, Azure ML/SageMaker, Azure Functions/Lambda, ADLS/S3, Power BI, Microsoft Fabric, GitHub

Architecture Focus: Lambda architecture, medallion data architecture, multi-cloud solutions, serverless patterns, data platform design

AI/ML: Machine learning pipelines, MLOps, Generative AI, Agentic AI, predictive analytics

Domain: Energy & Utilities sector, GDPR compliance, regulatory requirements, critical infrastructure

Leadership: Strategic projects, technical debt management, solution design, global collaboration

WHAT WE'RE LOOKING FOR (Required)

Extensive Experience: MANDATORY - More than 15 years of experience across platforms, languages, technologies, and frameworks with proven track record delivering complex technical solutions - this demonstrates the deep expertise required

Educational Background: Degree in Computer Science, Software Engineering, or related technical field providing strong theoretical and practical foundation

Multi-Cloud Expertise: MANDATORY - Knowledge and hands-on experience with AWS, Azure, and Google Cloud Platform with primary focus on Azure cloud services, architecture, and best practices

Lambda Architecture: MANDATORY - Knowledge and experience implementing lambda architecture patterns combining batch and stream processing for comprehensive data handling

Medallion Architecture: MANDATORY - Expertise in designing medallion data architecture (bronze/silver/gold layers) for progressive data quality and refinement

Azure Data Factory: MANDATORY - Hands-on experience with Azure Data Factory for ETL/ELT orchestration, pipeline development, and data integration (or AWS Glue equivalent)

Databricks: MANDATORY - Production experience with Databricks for distributed data processing, Delta Lake, MLflow, and collaborative analytics

ML Platforms: MANDATORY - Experience with Azure ML and/or AWS SageMaker for machine learning model development, training, and deployment

Serverless Computing: MANDATORY - Hands-on experience with Azure Functions and/or AWS Lambda for serverless application development and event-driven architectures

Cloud Storage: MANDATORY - Experience with Azure Data Lake Storage (ADLS) and/or AWS S3 for data lake implementation and management

Power BI: MANDATORY - Proficiency in Power BI for business intelligence, data visualization, and reporting

Microsoft Fabric: MANDATORY - Knowledge of Microsoft Fabric unified analytics platform

GitHub & Git: MANDATORY - Experience with GitHub for version control and Git workflows including branching, pull requests, and collaboration

DevOps Practices: MANDATORY - Experience with DevOps practices including CI/CD, infrastructure automation, monitoring, and deployment strategies

MLOps Experience: MANDATORY - Hands-on experience with MLOps practices for machine learning lifecycle management, model versioning, monitoring, and deployment

ML/GenAI/AgenticAI: MANDATORY - Experience implementing analytical solutions with Machine Learning components, Generative AI capabilities, and Agentic AI systems

Complex Solution Design: Proven experience designing complex technical solutions considering architecture, scalability, security, performance, and business requirements

Energy & Utilities Knowledge: Knowledge of Energy & Utilities market including industry challenges, regulatory environment, and sector-specific requirements

GDPR Compliance: Understanding of GDPR (General Data Protection Regulation) requirements, data privacy principles, and compliance implications for technical solutions

Strategic Project Experience: Demonstrated experience working on strategic projects with significant business impact and organizational importance

Communication Excellence: Excellent written and oral communication skills for technical documentation, stakeholder engagement, and cross-team collaboration

Collaborative Working: Ability to work in collaborative environments with global teams across time zones, cultures, and organizational structures

Critical Analysis: Strong aptitude for critical analysis, evaluating trade-offs, and making informed technical decisions

Problem-Solving Orientation: Strong problem-solving orientation with systematic approach to complex technical challenges

English Proficiency: MANDATORY - B2 level (Upper Intermediate) or higher in English for communication, documentation, and collaboration

Work Authorization: Eligibility to work from Lisbon, Portugal with availability for hybrid work model (2 days per week in office)

NICE TO HAVE (Preferred)

Agile Methodology: Knowledge of Agile methodology including Scrum or Kanban frameworks for iterative project delivery

Jira & Confluence: Experience with Atlassian tools including Jira for project tracking and Confluence for documentation and collaboration

Portuguese Language: Portuguese language proficiency (oral and written) for local team communication - valued but not required

Spanish Language: Spanish language proficiency (oral and written) for regional collaboration - valued but not required

Additional Cloud Certifications: Certifications in AWS, Azure, or GCP demonstrating validated cloud expertise

Data Engineering Advanced: Deep data engineering expertise including stream processing, data quality frameworks, and data governance

Apache Spark: Advanced knowledge of Apache Spark for distributed computing beyond Databricks usage

Terraform or IaC: Experience with Infrastructure as Code tools like Terraform, ARM templates, or CloudFormation for infrastructure automation

Kubernetes: Knowledge of Kubernetes for container orchestration and microservices deployment

Python Programming: Strong Python programming skills for data engineering, ML, and automation

SQL Advanced: Expert-level SQL skills for complex analytics, query optimization, and database design

NoSQL Databases: Experience with NoSQL databases like Cosmos DB, MongoDB, or DynamoDB

Event Streaming: Knowledge of event streaming platforms like Kafka, Event Hubs, or Kinesis

IoT Solutions: Experience with IoT platforms and edge computing relevant to energy sector smart devices

Time Series Databases: Knowledge of time series databases like InfluxDB or TimescaleDB for sensor data

Security Best Practices: Deep understanding of cloud security, data encryption, identity management, and compliance

Cost Optimization: Skills in cloud cost optimization, resource management, and FinOps practices

Monitoring & Observability: Experience with monitoring tools like Azure Monitor, CloudWatch, Prometheus, or Grafana

Data Governance: Advanced data governance knowledge including metadata management, data lineage, and data quality frameworks

AI Ethics: Understanding of AI ethics, responsible AI principles, and bias mitigation

Large Language Models: Experience with LLMs including prompt engineering, fine-tuning, and RAG systems

Vector Databases: Knowledge of vector databases for semantic search and RAG implementations

API Development: Experience designing and implementing RESTful APIs or GraphQL services

Microservices Architecture: Deep understanding of microservices patterns and distributed systems design

Disaster Recovery: Experience designing disaster recovery and business continuity solutions

Performance Tuning: Advanced skills in performance optimization across data pipelines, queries, and applications

Leadership Experience: Previous technical leadership roles including team guidance or architectural leadership

Location: Lisbon, Portugal (Hybrid - 2 days per week in office)