Sr. Data Architect

Key Responsibilities

Data Warehouse Architecture

Design and implement a next-generation data warehouse solution leveraging modern cloud data platforms (Snowflake, Redshift, or similar)
Architect dimensional models, data marts, and fact tables optimized for analytics and reporting
Establish data governance frameworks, data quality standards, and lineage tracking
Design multi-tenant data isolation strategies while enabling cross-tenant analytics where appropriate

Integration Framework

Design a unified integration framework for connecting with external systems (APIs, webhooks, file-based imports)
Build event-driven integration patterns leveraging Kafka/MSK for real-time data synchronization
Create standardized data exchange formats and schemas for interoperability
Design integration monitoring, error handling, and retry mechanisms
Establish patterns for bi-directional data synchronization

Analytics Framework

Architect a comprehensive analytics framework that enables self-service analytics and reporting
Design data models that support both real-time and batch analytical workloads
Build frameworks for metric definitions, KPI tracking, and business intelligence
Integrate with existing BI tools (Sisense) and enable new analytical capabilities
Design data access patterns that balance performance, cost, and security

Required Skills & Expertise

Core Competencies

Data Architecture: 7+ years designing and implementing large-scale data architectures for SaaS platforms
Data Warehousing: Deep expertise with modern cloud data warehouses (Snowflake, Redshift, BigQuery) including schema design, optimization, and cost management
ETL/ELT Pipelines: Extensive experience building scalable data pipelines using tools like Airflow, dbt, Estuary, Fivetran, or custom solutions
Multi-Tenant Systems: Proven experience designing data architectures for multi-tenant SaaS applications
Data Modeling: Strong background in dimensional modeling, star/snowflake schemas, and data normalization/denormalization strategies
Integration Architecture: Experience designing integration frameworks, API-based data exchange, and event-driven architectures


Technical Stack Knowledge

Cloud Platforms: AWS (required), with experience in RDS, MSK, S3, Lambda, and data services
Databases: PostgreSQL (operational), Snowflake or similar (analytical), with expertise in query optimization and indexing strategies
Streaming: Kafka/MSK for event streaming and real-time data pipelines
Data Tools: Experience with dbt, Airflow, Estuary, or similar data orchestration tools
Programming: Strong Python and SQL skills; Ruby experience a plus
Infrastructure as Code: Terraform, CloudFormation, or similar for data infrastructure provisioning


Additional Qualifications

Experience with data migration projects, especially customer data migrations in SaaS contexts
Knowledge of data governance, compliance (GDPR, CCPA), and security best practices
Experience with observability and monitoring tools for data pipelines (DataDog, CloudWatch, etc.)
Strong communication skills for collaborating with engineering, product, and business stakeholders
Ability to balance technical excellence with business requirements and timelines


Extra Points:

We highly value personal projects that highlight your technological prowess and ability to self-direct. We see these as shining examples of potential and initiative!
Excellent communication skills.
Desire to work in a multidisciplinary and cross-functional team.

Remuneration:

Compensation is paid in US dollars on a contractor basis.

This is a remote position, allowing you to work from anywhere.


If you are seeking a stimulating work environment, growth opportunities, and a team passionate about technology, look no further! Join 1950Labs and become part of our success.

To apply, please submit your CV. We look forward to getting to know you and discussing how you can contribute to our team.