Machine Learning Engineer

About Wizpresso (learn more at https://wizpresso.com)

Wizpresso is a technology company that transforms enterprise workflows and empowers financial market stakeholders. We develop software underpinned by natural language processing (NLP) and deep learning to augment market intelligence, regulatory reporting, and knowledge management. We deliver value to users by removing communication barriers between participants, driving business growth, managing risk, and enhancing operational efficiency.

Our clients range from global financial institutions and professional services firms to enterprises. Wizpresso has won numerous accolades over the years, including the APICTA 2023 AI of The Year Awards, Maker in China 2022 Champion, CUHK Corporate Innovation Index 2022, HKICT Grand Fintech Award 2021, EPIC 2021 Fintech Champion, IFTA Fintech Awards 2021 and 2020, and etnet Fintech Awards 2020 and 2019. If you would like to be a pioneer in financial and regulatory technology and enjoy developing a life-changing product for business professionals, Wizpresso is for you. At Wizpresso, you will be part of a fast-growing, stimulating, and enthusiastic culture. Join us and become a data wizard and an automation maven!

  • We are obsessed with our customers.
  • Building a powerful yet simple and elegant product is ingrained in our culture.
  • We value execution and outcomes over appearance and office politics.
  • We believe everyone has great ideas and unique strengths.

If you agree with the above values, we would love to meet you.

Job Brief

We are seeking a highly motivated and talented individual to join our team as a Junior Machine Learning Operations Engineer. In this role, you will play a crucial part in optimizing and maintaining our machine learning systems and infrastructure, ensuring seamless integration and deployment of models into production. You will work closely with cross-functional teams, including data scientists, software engineers, and DevOps professionals, to deliver scalable and reliable machine learning solutions.

Responsibilities:

  • Collaborate with data scientists and software engineers to deploy machine learning models into production environments.
  • Develop and maintain scalable infrastructure for training, testing, and deploying machine learning models.
  • Implement and manage continuous integration and continuous deployment (CI/CD) pipelines for machine learning projects.
  • Monitor and optimize machine learning systems to ensure performance, scalability, and reliability.
  • Troubleshoot and resolve issues related to model performance, data quality, and system stability.
  • Collaborate with DevOps teams to ensure smooth integration of machine learning systems with existing infrastructure.
  • Develop and maintain documentation related to machine learning infrastructure, processes, and best practices.
  • Stay up to date with the latest advancements in machine learning operations and identify opportunities for improvement within the organization.

Qualifications:

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • Solid understanding of machine learning concepts and techniques.
  • Proficiency in programming languages such as Python and/or R.
  • Experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, scikit-learn).
  • Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and containerization technologies (e.g., Docker, Kubernetes).
  • Knowledge of version control systems, such as Git.
  • Strong problem-solving and troubleshooting skills.
  • Excellent communication and collaboration abilities.
  • Ability to work effectively in a fast-paced and dynamic environment.
  • Attention to detail and a commitment to delivering high-quality results.

Preferred Qualifications:

  • Experience with deploying machine learning models in production environments.
  • Knowledge of big data processing frameworks (e.g., Apache Spark, Hadoop).
  • Familiarity with data pipeline orchestration tools (e.g., Airflow, Luigi).
  • Understanding of software development methodologies (e.g., Agile, Scrum).
  • Experience with infrastructure-as-code tools (e.g., Terraform, Ansible).