Data Engineering Lead
About the job
Responsibilities
- Data Architecture and Design:
- Design and implement scalable and efficient data architectures to support the organization's data processing needs
- Work closely with cross-functional teams to understand data requirements and ensure that data solutions align with business objectives
- ETL Development:
- Oversee the development of robust ETL processes to extract, transform, and load data from various sources into the data warehouse
- Ensure data quality and integrity throughout the ETL process, implementing best practices for data cleansing and validation
- Big Data Technologies:
- Stay abreast of emerging trends and technologies in big data and analytics, and assess their applicability to the organization's data strategy
- Implement and optimize big data technologies to process and analyze large datasets efficiently
- Cloud Integration:
- Collaborate with the IT infrastructure team to integrate data engineering solutions with cloud platforms, ensuring scalability, security, and performance
- Performance Monitoring and Optimization:
- Implement monitoring tools and processes to track the performance of data pipelines and proactively address any issues
- Optimize data processing workflows for improved efficiency and resource utilization
- Documentation:
- Maintain comprehensive documentation for data engineering processes, data models, and system architecture
- Ensure that team members follow documentation standards and best practices
- Collaboration and Communication:
- Collaborate with data scientists, analysts, and other stakeholders to understand their data needs and deliver solutions that meet those requirements
- Communicate effectively with technical and non-technical stakeholders, providing updates on project status, challenges, and opportunities
Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- 6-8 years of professional experience in data engineering
- In-depth knowledge of data modeling, ETL processes, and data warehousing
- In-depth knowledge of building data warehouses using Snowflake
- Experience with data ingestion, data lakes, data mesh, and data governance
- Proficiency in Python programming
- Strong understanding of big data technologies and frameworks, such as Hadoop, Spark, and Kafka
- Experience with cloud platforms, such as AWS, Azure, or Google Cloud
- Familiarity with SQL and NoSQL database systems, as well as data pipeline orchestration tools
- Excellent problem-solving and analytical skills
- Strong communication and interpersonal skills
- Proven ability to work collaboratively in a fast-paced, dynamic environment
Please apply on the Lifelancer platform via the link below for screening steps and a quicker response.
https://lifelancer.com/jobs/view/b381a7ab133a710f462ade7559501b9b