Job Title: Data Engineer with Snowflake
Location: Remote – Latin America
Type of Contract: Full-Time | Remote | Contractor
Salary Range: Market Rates
Language Requirements: Advanced English proficiency required; ability to work in U.S. time zones
We are seeking a skilled Data Engineer with strong Snowflake and cloud data engineering experience to join our growing team. You will play a key role in designing and maintaining scalable data infrastructure, privacy-focused data services, and high-performance ELT pipelines for one of the largest consumer insights providers in the U.S. Your work will directly impact secure data collaboration, governance compliance, and the efficient processing of massive data volumes.
Key Responsibilities
- Design and implement scalable data privacy features and secure multi-party collaboration solutions, including data clean rooms and query constraints
- Build, optimize, and maintain Snowflake-based data warehouse architectures, ELT pipelines, and high-performance data structures
- Develop and manage robust data pipelines using SQL, Python, AWS Lambda, Airflow, and Redshift
- Implement data governance, quality controls, and security measures to protect sensitive and financial data, including PII
- Create scalable solutions for processing and transforming large datasets of 200B+ records
- Optimize Snowflake performance through advanced tuning, clustering and micro-partitioning, Streams, Snowpipe, views, and stored procedures
- Collaborate with cross-functional engineering and analytics teams to support data accessibility, reliability, and operational efficiency
- Support CI/CD initiatives and deployment automation using Jenkins and modern SDLC practices
Must-Have Qualifications
- 4+ years of experience as a Data Engineer or Software Engineer
- Strong proficiency in SQL and Python for large-scale data processing and transformation
- Hands-on expertise with the Snowflake ecosystem, including Snowpipe, Streams, views, performance tuning, data modeling, and ELT pipelines
- Experience with AWS services such as Lambda and Redshift, as well as workflow orchestration with Airflow
- Solid understanding of data warehouse concepts, scalable data architecture, and storage optimization
- Experience implementing complex SQL stored procedures and managing enterprise-scale datasets
- Familiarity with CI/CD pipelines, Jenkins, and software development lifecycle best practices
- Ability to work effectively within U.S. business hours and collaborate with distributed teams
Preferred Qualifications
- Experience building and maintaining APIs using Node.js and Next.js
- Familiarity with Swagger/OpenAPI specifications
- Exposure to AI concepts and modern data collaboration technologies
- Experience with data visualization and analytics platforms
- Background working in privacy-focused or highly regulated data environments