Sr. Data Architect - GOAPRDJP00000616

Eleventh Floor, 9942 - 108 Street, Edmonton

Primarily remote, but must be available for on-site meetings as required (meetings may occur up to 3-4 times per fiscal month; the actual frequency will depend on the specific initiative and will be determined on demand).

Contract: 4+ months

1 opening / 3 submissions

Standard Background Check Required

Project Name: Data Management Platform Projects

Scope:

The Government of Alberta's modernization initiatives are shifting legacy systems to a cloud-native Azure Data Management Platform that operates alongside on-premises geospatial systems. This transformation requires a Data Architect to design, implement, and manage scalable, secure, and integrated data solutions.

Ministries such as Environment and Protected Areas, Transportation and Economic Corridors, and Service Alberta rely on complex data from systems like ServiceNow, ERP platforms, and geospatial tools. The Data Architect will enable seamless ingestion, transformation, and integration of this data using Azure services including Data Factory, Synapse Analytics, Data Lake Storage, and Purview.
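
As a rough sketch of this ingestion pattern, the Python snippet below pulls records from the ServiceNow Table API and lands the raw extract in Azure Data Lake Storage Gen2; the instance URL, credentials, container, and path are placeholders, and in practice such a flow would typically be orchestrated through Azure Data Factory rather than run as a standalone script.

```python
import json

import requests
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Placeholder names for illustration only.
SN_INSTANCE = "https://example.service-now.com"
ADLS_ACCOUNT = "https://examplelake.dfs.core.windows.net"

# Pull a page of incident records via the ServiceNow Table API.
resp = requests.get(
    f"{SN_INSTANCE}/api/now/table/incident",
    params={"sysparm_limit": 1000},
    auth=("integration_user", "***"),  # placeholder credentials
    timeout=30,
)
resp.raise_for_status()
records = resp.json()["result"]

# Land the raw extract in ADLS Gen2 for downstream processing.
service = DataLakeServiceClient(
    account_url=ADLS_ACCOUNT, credential=DefaultAzureCredential()
)
file_client = service.get_file_system_client("raw").get_file_client(
    "servicenow/incident/extract.json"
)
file_client.upload_data(json.dumps(records), overwrite=True)
```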

Azure Databricks will be used to support advanced data engineering, analytics, and machine learning workflows. The Data Architect will ensure that data pipelines are optimized for both batch and real-time processing, supporting operational reporting, predictive modeling, and automation.
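
As an illustration of the batch and streaming patterns this implies, the sketch below curates a raw extract into a Delta table and then picks up newly arriving files incrementally with Databricks Auto Loader; the table names and abfss:// paths are placeholders, and `spark` is the session provided by the Databricks runtime.

```python
from pyspark.sql import functions as F

# Batch: curate a raw extract into a Delta table (paths/names are placeholders).
raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/servicenow/incident/")
curated = (raw
    .withColumn("ingested_at", F.current_timestamp())
    .dropDuplicates(["sys_id"]))  # ServiceNow records carry a unique sys_id
curated.write.format("delta").mode("overwrite").saveAsTable("curated.servicenow_incidents")

# Streaming: incrementally process new files with Databricks Auto Loader.
stream = (spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation",
            "abfss://meta@examplelake.dfs.core.windows.net/schemas/incidents/")
    .load("abfss://raw@examplelake.dfs.core.windows.net/servicenow/incident/"))
(stream.writeStream
    .option("checkpointLocation",
            "abfss://meta@examplelake.dfs.core.windows.net/checkpoints/incidents/")
    .toTable("curated.servicenow_incidents_stream"))
```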

Downstream systems will consume data via APIs and data services. The Data Architect will design and manage these interfaces using Azure API Management, ensuring secure, governed, and scalable access to data.
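
As a sketch of the consumer side, the snippet below shows how a downstream system might call a data service fronted by API Management; the endpoint and query parameters are hypothetical, while Ocp-Apim-Subscription-Key is the standard APIM subscription header.

```python
import requests

# Hypothetical APIM-fronted data service endpoint.
API_BASE = "https://example-apim.azure-api.net/data/v1"

resp = requests.get(
    f"{API_BASE}/incidents",
    headers={
        "Ocp-Apim-Subscription-Key": "***",  # standard APIM subscription header
        "Accept": "application/json",
    },
    params={"updated_since": "2024-01-01"},  # illustrative filter
    timeout=30,
)
resp.raise_for_status()
incidents = resp.json()
```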
Security, governance, and compliance are critical. The Data Architect will implement role-based access controls, encryption, data masking, and metadata management to meet FOIP and other regulatory requirements.
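
Role-based access controls and encryption are largely platform configuration, but column-level masking can also be applied in pipeline code. As a minimal sketch (placeholder table and column names, with `spark` provided by the Databricks runtime), direct identifiers are hashed and contact details redacted before a dataset is published:

```python
from pyspark.sql import functions as F

# Illustrative column-level masking before publishing (placeholder names).
src = spark.read.table("curated.servicenow_incidents")
masked = (src
    # One-way hash keeps the column joinable without exposing the identifier.
    .withColumn("caller_id", F.sha2(F.col("caller_id").cast("string"), 256))
    # Full redaction where the value is not needed downstream.
    .withColumn("contact_email", F.lit("REDACTED")))
masked.write.format("delta").mode("overwrite").saveAsTable("published.servicenow_incidents")
```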

As data volumes and complexity grow, the Data Architect will ensure the platform remains extensible, reliable, and future-ready, supporting new data sources, ministries, and analytical capabilities.

Duties:

  • Design and implement scalable, secure, and high-performance data architecture on Microsoft Azure, supporting both cloud-native and hybrid environments.
  • Lead the development of data ingestion, transformation, and integration pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse Analytics.
  • Architect and manage data lakes and structured storage solutions using Azure Data Lake Storage Gen2, ensuring efficient access and governance.
  • Integrate data from diverse source systems, including ServiceNow and geospatial systems, using APIs, connectors, and custom scripts.
  • Develop and maintain robust data models and semantic layers to support operational reporting, analytics, and machine learning use cases.
  • Build and optimize data workflows using Python and SQL for data cleansing, enrichment, and advanced analytics within Azure Databricks (a brief sketch follows this list).
  • Design and expose secure data services and APIs using Azure API Management for downstream systems.
  • Implement data governance practices, including metadata management, data classification, and lineage tracking.
  • Ensure compliance with privacy and regulatory standards (e.g., FOIP, GDPR) through role-based access controls, encryption, and data masking.
  • Collaborate with cross-functional teams to align data architecture with business requirements, program timelines, and modernization goals.
  • Monitor and troubleshoot data pipelines and integrations, ensuring reliability, scalability, and performance across the platform.
  • Provide technical leadership and mentorship to data engineers and analysts, promoting best practices in cloud data architecture and development.
  • Other duties as needed.
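
Referring back to the cleansing duty above, the sketch below shows a minimal cleansing and enrichment pass in PySpark; the table and column names are placeholders, and `spark` is the session provided by the Databricks runtime.

```python
from pyspark.sql import functions as F

# Illustrative cleansing/enrichment pass (placeholder table and column names).
raw = spark.read.table("raw.work_orders")
clean = (raw
    # Trim whitespace and normalize casing on key fields.
    .withColumn("status", F.upper(F.trim(F.col("status"))))
    # Convert empty descriptions to proper nulls.
    .withColumn("description",
                F.when(F.trim(F.col("description")) == "", None)
                 .otherwise(F.trim(F.col("description"))))
    # Standardize timestamps and derive a simple reporting flag.
    .withColumn("opened_at", F.to_timestamp("opened_at"))
    .withColumn("is_open", F.col("status").isin("NEW", "IN PROGRESS")))
clean.write.format("delta").mode("overwrite").saveAsTable("curated.work_orders")
```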

Must-Haves:

  • A college or Bachelor's degree in Computer Science or a related field of study.
  • Hands-on experience managing Databricks workspaces, including cluster configuration, user roles, permissions, cluster policies, monitoring, and cost optimization for efficient, governed Spark workloads - 3 years
  • Experience as a Data Architect in a large enterprise, designing and implementing data architecture strategies and models that align data, technology, and business goals with strategic objectives - 8 years
  • Experience designing data solutions for analytics-ready, trusted datasets using tools like Power BI and Synapse, including semantic layers, data marts, and data products for self-service, data science, and reporting - 4 years
  • Experience with GitHub/Git for version control, collaborative development, code management, and integration with data engineering workflows - 4 years
  • Experience with Azure services (Storage, SQL, Synapse, networking) for scalable, secure solutions, and with authentication (Service Principals, Managed Identities) for secure access in pipelines and integrations - 5 years
  • Experience in Python (including PySpark) and SQL, applied to developing, orchestrating, and optimizing enterprise-grade ETL/ELT workflows in a large-scale cloud environment - 6 years
  • Experience building scalable data pipelines with Azure Databricks, Delta Lake, Workflows, Jobs, and Notebooks, plus cluster management. Extending solutions to Synapse Analytics and Microsoft Fabric is a plus - 3 years

Nice-to-Haves:

  • Certification in The Open Group Architecture Framework (TOGAF).
  • Experience using AI for code generation, data analysis, automation, and enhancing productivity in data engineering workflows - 1 year
  • Direct, hands-on experience performing business requirement analysis related to data manipulation/transformation, cleansing and wrangling - 8 years
  • Experience and strong technical knowledge of Microsoft SQL Server, including database design, optimization, and administration in enterprise environments - 8 years
  • Experience building scalable ETL pipelines, data quality enforcement, and cloud integration using Talend technologies - 2 years
  • Experience in data governance, security, and metadata management within a Databricks-based platform - 2 years
  • Skilled in building secure, scalable RESTful APIs for data exchange, with robust authentication, error handling, and support for real-time automation - 3 years
  • Experience implementing message queuing with tools like ActiveMQ and Service Bus for scalable, asynchronous communication across distributed systems - 3 years
  • Experience working with cross-functional teams to create software applications and data products - 5 years
  • Experience with ServiceNow-to-Azure Data Management Platform integrations - 1 year