Job Openings Data Engineer (Only Argentina & Brazil)


Job Opportunity: Senior Data Engineer

What We Offer

Join our thriving high-tech development business that is rapidly expanding and supporting a diverse clientele across Europe and North America.

We believe in working in sync, but we also value flexibility. We're always eager to listen to your needs and accommodate them as much as possible.

Role: Senior Data Engineer, Credit Platform Data Team

Job Summary

We are seeking a highly skilled and motivated Senior Data Engineer to join our Credit Platform Data team. In this pivotal role, you will design, build, and maintain scalable and reliable data pipelines and ETL processes that enable efficient data processing across the organization. You will work closely with product managers, analysts, and stakeholders to translate complex data needs into robust engineering solutions capable of handling large-scale workloads.

Responsibilities

  • Architect, develop, and maintain scalable data pipelines and ETL workflows for ingestion, transformation, and storage of large datasets.

  • Implement automated data quality checks and validation processes.

  • Collaborate with product managers, data analysts, and business stakeholders to gather requirements and translate them into technical specifications.

  • Monitor and optimize data systems for performance, scalability, and cost efficiency.

  • Diagnose and resolve data-related issues, providing root cause analysis and preventive solutions.

  • Maintain documentation of pipelines, ETL processes, and data architecture. Participate in design and code reviews.

  • Stay updated on emerging data engineering tools, technologies, and best practices.

  • Mentor junior data engineers and promote knowledge-sharing across the team.

Requirements

  • Bachelor's degree in Computer Science, Engineering, or a related field.

  • SQL: Expert-level proficiency, including complex queries and performance optimization.

  • Python: Strong experience with data processing libraries (e.g., Pandas) and automation.

  • PySpark: Proficiency with distributed data processing, DataFrames, RDDs, and performance tuning.

  • ETL: Deep understanding and hands-on experience building efficient ETL pipelines.

  • Data Modeling: Expertise in logical/physical modeling, schema design, normalization/denormalization.

  • RDBMS: Experience with Oracle, MySQL, or similar databases.

  • Data Warehousing: Knowledge of architectures and best practices.

  • Unix/Linux: Proficiency in scripting, workflow management, and system operations.

  • Shell scripting: Ability to automate tasks and support data pipelines.

  • Automation Testing: Experience in test frameworks for data quality and pipeline reliability.

  • Professional Experience: 3+ years as a Data Engineer or in a similar role.

Extra Points

  • Personal or open-source projects showcasing your technical depth and initiative.

  • Excellent communication skills.

  • Desire to work in a multidisciplinary, cross-functional team.

Remuneration

  • Compensation in USD as a contractor.

  • 100% remote position.

If you're looking for a stimulating work environment, growth opportunities, and a team passionate about technology, look no further!
Join 1950Labs and be part of our success.

To apply, please submit your CV. We look forward to meeting you!