Data Engineer (Onsite, Lahore, Remittance Salary)

Requirements:

  • Bachelor's degree in Computer Science, Information Systems, Engineering, or related field.
  • 3 to 4 years of experience working as a Data Engineer in a production environment.
  • Hands-on experience with AWS Redshift, AWS Data Lake, and Snowflake.
  • Strong SQL skills and understanding of database concepts.
  • Experience in building and maintaining data pipelines using AWS Glue, PySpark, or Airflow.
  • Familiarity with data warehousing concepts, dimensional modeling, and performance tuning.
  • Proficient in Python or Scala for data transformation tasks.
  • Strong problem-solving and analytical skills.
  • Knowledge of data privacy, security, and compliance standards is a plus.
  • Experience with data cataloging tools like AWS Glue Data Catalog or Apache Hive Metastore.
  • Exposure to streaming data frameworks like Kinesis, Kafka, or Spark Streaming.
  • Experience integrating BI tools (e.g., Power BI, Looker, Tableau) with Redshift or Snowflake.
  • Familiarity with Terraform or other infrastructure-as-code (IaC) tools for managing data infrastructure is a bonus.

Responsibilities:

  • Design, develop, and maintain scalable ETL/ELT pipelines for ingesting data from various sources into AWS Redshift, AWS Data Lake, and Snowflake.
  • Build data models and schemas optimized for analytics, reporting, and data science use cases.
  • Collaborate with data analysts, product teams, and software engineers to understand data requirements and deliver clean, well-organized datasets.
  • Manage and monitor scheduled jobs to ensure data reliability, quality, and consistency.
  • Implement data governance, cataloging, and security practices in accordance with organizational and compliance standards.
  • Optimize SQL queries and ETL jobs for performance and cost-efficiency.
  • Utilize AWS services such as Glue, Lambda, S3, Athena, and Redshift Spectrum to support data pipeline operations.
  • Perform data validation and quality checks at the ingestion and transformation layers.
  • Troubleshoot and debug data issues across complex data pipelines and cloud environments.