About the job Data Engineer
About Mindtech
Mindtech is your gateway to exciting and impactful tech projects. We specialize in end-to-end software outsourcing, linking Latin American talent with global opportunities. Our fast, cost-effective approach ensures that our clients receive exceptional service and innovative solutions. With a diverse team of over 70 skilled professionals across Latin America and the US, we are committed to delivering software that drives success.
About the Role
We're looking for a Data Engineer to design, build, and maintain scalable data infrastructure that powers decision-making across the business. You'll work closely with data scientists, analysts, and software engineers to ensure clean, reliable data pipelines and access to high-quality datasets for analytics, machine learning, and product features.
This role is ideal for someone who loves turning messy, complex data into clean, structured systems, and who thrives in environments where speed, quality, and scale go hand in hand.
Responsibilities
- Design, implement, and maintain robust ETL/ELT pipelines using tools like Airflow, dbt, or similar.
- Build and optimize data models in our data warehouse (e.g., BigQuery, Redshift, Snowflake).
- Ensure data quality, lineage, and observability with appropriate monitoring and alerting.
- Collaborate with product, engineering, and business teams to understand data needs and deliver solutions.
- Implement best practices around data governance, security, and compliance (e.g., GDPR, HIPAA if applicable).
- Automate data validation, anomaly detection, and data backfill processes.
- Contribute to internal data platform development and tooling.
Requirements
Must-Have:
- 3+ years of experience as a data engineer or in a similar backend/data infrastructure role.
- Proficiency in Python or Scala, plus strong SQL.
- Experience with orchestration tools (e.g., Airflow, Prefect).
- Deep understanding of data warehousing concepts and distributed data systems.
- Experience with cloud platforms (AWS, GCP, or Azure) and tools like S3, EMR, Lambda, etc.
- Strong communication skills; ability to explain technical details to non-technical stakeholders.
Nice-to-Have:
- Experience with real-time data processing (e.g., Kafka, Spark Streaming, Flink).
- Familiarity with modern data stack tools like dbt, Fivetran, Snowplow, etc.
- Exposure to DevOps practices (CI/CD, infrastructure as code).
- Interest in or experience with machine learning pipelines.
Growth & Impact
In this role, you'll have the opportunity to shape our data infrastructure from the ground up, scale systems for future growth, and directly impact the performance of business-critical decisions. As we grow, your career can evolve toward architecture, leadership, or specialized tracks (e.g., MLOps, platform engineering).