Data Engineer with Python

Job Description:

Partner Information

The PSA Group is active in port projects across Asia, Europe, and the Americas, with its most important flagships being PSA Singapore Terminals and PSA Antwerp. As a preferred port operator at ports around the world, the PSA Group handles approximately 250,000 containers per day.

Join us, and you'll have the opportunity to contribute to the development of advanced robots by engineering and organizing complex data pipelines!

Job Responsibilities

We are looking for creative thinkers and problem-solvers to design innovative solutions using the latest available technologies.

Your primary responsibilities will include understanding business requirements and addressing them by providing structured data through distributed computing in the cloud.

In collaboration with the rest of the team, you will be responsible for building and maintaining high-quality data pipelines that serve the needs of people around the world!

  • Enable the timely availability of good-quality data to facilitate data-driven management decisions
  • Work with new data sources from around the world to provide data to our Global Data Analytics Platform built on Azure Cloud
  • Implement automated data-quality measures to filter out poor-quality data for timely rectification by data sources (a minimal sketch follows this list)
  • Design, construct, and maintain Data Warehouse models to support PSA regional data analytics needs
  • Create and maintain data pipelines to transform and load data from the data lake into Data Warehouse models
  • Collaborate with the PSA data analytics team at all stages of the CRISP-DM methodology
  • Align with PSA data analytics policies, standards, and best practices, and document work as required
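
As an illustration of the data-quality work above, here is a minimal sketch of an automated check that separates records passing basic validation from records that need rectification. It assumes pandas, and the column names and rules (container_id, port_code, moved_at, teu) are hypothetical placeholders, not PSA's actual standards.

    import pandas as pd

    # Hypothetical validation rules; real rules would come from PSA's
    # data-quality standards, not from this sketch.
    REQUIRED_COLUMNS = ["container_id", "port_code", "moved_at", "teu"]

    def split_by_quality(df: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
        """Separate rows that pass basic checks from rows needing rectification."""
        valid = (
            df[REQUIRED_COLUMNS].notna().all(axis=1)         # no missing mandatory fields
            & (df["teu"] > 0)                                # container size must be positive
            & df["moved_at"].le(pd.Timestamp.now(tz="utc"))  # no timestamps from the future
        )
        return df[valid], df[~valid]

    good, bad = split_by_quality(
        pd.DataFrame({
            "container_id": ["C1", "C2", None],
            "port_code": ["SGSIN", "BEANR", "SGSIN"],
            "moved_at": pd.to_datetime(["2023-01-05", "2030-01-01", "2023-01-06"], utc=True),
            "teu": [2, 1, 2],
        })
    )
    print(f"{len(good)} rows loaded, {len(bad)} rows sent back for rectification")

Rows that fail any check are returned separately so the owning data source can rectify them, rather than being silently dropped.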

Requirements

Required Technical Skills:

  • Degree from a recognized university, preferably in the Data Analytics or Computer Science domain;
  • At least three years of hands-on experience as a Data Engineer, including at least two years using Microsoft data tools;
  • Must have knowledge of data warehouse concepts (star and snowflake schemas) and experience in designing dimensional models for data warehouses (a star-schema sketch follows this list);
  • Must have hands-on experience with Azure Data Factory, Azure Synapse Analytics / SQL Data Warehouse, Azure Analysis Services, and Azure Databricks;
  • Experience in data processing with Python is a must;
  • Good knowledge of SQL is a must;
  • Good understanding of and/or experience with big data.
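
To make the star-schema requirement concrete, the sketch below builds one central fact table keyed to surrounding dimension tables and runs a typical analytical join. SQLite stands in for Azure Synapse here, and every table and column name is invented for illustration.

    import sqlite3

    # Invented star schema: a fact table of container moves referencing
    # two dimension tables (port and date).
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE dim_port (
        port_key  INTEGER PRIMARY KEY,
        port_code TEXT NOT NULL,       -- e.g. 'SGSIN'
        country   TEXT NOT NULL
    );
    CREATE TABLE dim_date (
        date_key  INTEGER PRIMARY KEY, -- e.g. 20230105
        full_date TEXT NOT NULL,
        year      INTEGER NOT NULL
    );
    CREATE TABLE fact_container_move (
        port_key   INTEGER REFERENCES dim_port(port_key),
        date_key   INTEGER REFERENCES dim_date(date_key),
        containers INTEGER NOT NULL    -- additive measure
    );
    """)
    conn.execute("INSERT INTO dim_port VALUES (1, 'SGSIN', 'Singapore')")
    conn.execute("INSERT INTO dim_date VALUES (20230105, '2023-01-05', 2023)")
    conn.execute("INSERT INTO fact_container_move VALUES (1, 20230105, 250000)")

    # A typical analytical query joins the fact table to its dimensions.
    row = conn.execute("""
        SELECT p.port_code, d.year, SUM(f.containers)
        FROM fact_container_move f
        JOIN dim_port p ON p.port_key = f.port_key
        JOIN dim_date d ON d.date_key = f.date_key
        GROUP BY p.port_code, d.year
    """).fetchone()
    print(row)  # ('SGSIN', 2023, 250000)

A snowflake schema differs only in that the dimensions are further normalised, for example splitting country out of dim_port into its own table.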

Advantages:

  • Experience writing data applications on Kubernetes;
  • Experience with REST APIs;
  • Knowledge of cloud computing/distributed computing;
  • Experience with cloud data storage services, in particular Azure Queue Storage and Azure Blob Storage (a minimal sketch follows this list).
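
As a flavour of the cloud storage services listed above, here is a minimal sketch using the azure-storage-blob SDK. The container name, blob path, and the AZURE_STORAGE_CONNECTION_STRING environment variable are assumptions for illustration only.

    import os
    from azure.storage.blob import BlobServiceClient

    # The connection string is read from an environment variable
    # (an assumption for this sketch; use whatever secret store applies).
    service = BlobServiceClient.from_connection_string(
        os.environ["AZURE_STORAGE_CONNECTION_STRING"]
    )
    container = service.get_container_client("raw-landing-zone")  # hypothetical container

    # Upload a small CSV extract into the landing zone of the data lake.
    container.upload_blob(
        name="moves/2023-01-05.csv",
        data=b"container_id,port_code,teu\nC1,SGSIN,2\n",
        overwrite=True,
    )

    # Read it back, e.g. as the first step of a pipeline run.
    payload = container.download_blob("moves/2023-01-05.csv").readall()
    print(payload.decode())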

Personal Skills:

  • Analytical, meticulous, team player;
  • Strong written and oral communication skills in English;
  • Strong interpersonal skills to liaise effectively with users and stakeholders, overcoming challenges of language, culture, and time zones;
  • Ability to work under pressure towards fixed deadlines, on multiple projects;
  • Ability to work independently in a dynamic environment;
  • Open communication attitude;
  • Work experience in a multinational environment is a plus.

Required Skills:

Python

Salary Package:

Not specified.