Open Positions: Solution Data Architect

About the Solution Data Architect Role

Company Summary

The client is the leading life sciences language intelligence platform. Its advanced analytics enable 40% of the top 10 global life sciences organizations to gain clarity and complete visibility on the efficacy and impact of their scientific engagement on patient outcomes.

Medical affairs teams rely on Sorcero to multiply their productivity tenfold and to transform their medical strategy with real-time advanced analytics across the therapeutic landscape.


The client transforms decision-making in life sciences by empowering stakeholders with insights to improve patient outcomes. By joining our team, you will play a critical role in our growth and success, collaborating with our network of passionate entrepreneurs to build a scalable, impactful organization.

Job Description

The client builds AI-powered solutions, combining the power of deep learning with the accuracy of ontologies to drive natural-language understanding.

The Senior Data Architect will be responsible for leading the Data Engineering team to build out the client's Data Platform, Data Architecture, and vertical SaaS solutions used by scientists and researchers at top pharma companies.

You will enjoy working with a highly talented and diverse team of data scientists and engineers specializing in deep learning, active learning, and classical machine learning on one of the richest data sets in Life Sciences and Healthcare.

The ideal candidate will have a strong background in Data Engineering (with Python) and experience working with large data sets and deploying data-driven solutions. You are results-focused, a self-starter, able to put the team first, and have demonstrated success in using data science to develop and deploy solutions with a focus on impact.

Role and Responsibilities:

  • Lead the Data and Analytics Engineering Team
  • Develop and evangelize the Data Architecture and the Data and Analytics Strategy with the Engineering and AI teams, digesting feedback for further efficiency iterations with the goal of maintaining service-owner feature velocity
  • Demonstrate a sense of autonomy and urgency for ownership of your areas and proactively take accountability for progress forward
  • Facilitate processes and workflows with a large-scale production infrastructure using state-of-the-art AI pipelines within cloud service providers
  • Drive agile delivery through experimentation, prototyping, and solid execution
  • Define and track key success metrics for the Data and Analytics Engineering work; partner with management to prioritize business and Data and Analytics needs
  • Share with management and peers potential new improvement opportunities
  • Define the Data and Analytics architecture, develop an implementation plan, and provide thought leadership for all Data Engineering
  • Build out Data Pipelines and Services at scale in collaboration with engineering peers and leads
  • Work with the Tech Lead, Product, and peer engineers to build out the client's Data and Analytics architectures and services to ingest, store, and enrich Life Sciences and Healthcare data sets
  • Own all aspects of building the Data Services, including infrastructure (compute, storage, networking), data ingestion (batch and streaming), data stores, data catalogs, ETL, and analytics
  • Help build out vertical SaaS applications to drive a major impact on our business
  • Ensure engineering designs are guided by high performance, scalability, security, and low cost, in compliance with strict healthcare policies

Required Qualifications:

  • 3+ years of experience leading an Engineering team.
  • Solid understanding of data structures, data modeling, and software architecture
  • Deep knowledge of math, probability, statistics, and algorithms
  • Ability to write robust code in Python
  • An excellent track record (5+ years) of delivering Data Services and products, including core Data Pipelines, Data Lakes, Data Catalogs, Data Lineage, Data Governance, and Data Platforms for enterprises
  • Solid experience with big data technologies such as Spark, Flink, Iceberg, Avro, Hadoop, Presto, Elasticsearch, Kafka, and Kubernetes
  • Solid experience building out analytics technologies such as BigQuery, Redshift, Snowflake, Power BI, etc.
  • Extensive hands-on experience building software and Data platforms/products; able to translate business requirements into clean, logical designs; strong skills in coding, design and code review, sprint management, and DevOps; able to deliver under pressure
  • Foster an open, respectful, collaborative culture; bias to action; customer-obsessed and always willing to go the extra mile to meet customers' needs
  • A strategic thinker and planner who can craft plans for multiple stakeholders
  • Can lead change and operate effectively within a dynamic organization
  • Strength in meeting objectives through influence, facilitation, and team building
  • Experience with GCP/AWS/Storage/Compute/DB, Docker, and Machine Learning technologies such as TF Serving, TensorFlow, PyTorch, MLflow, scikit-learn, Apache MXNet, Jupyter, Splunk, Grafana, CloudWatch, etc.
  • Excellent communication skills: ability to translate robust analytics and metrics into simple-to-understand takeaways and present findings to management and business partners
  • A great collaborator who can work across operating styles and can bring together multiple perspectives, able to handle conflicts with the best interests of the company and customers in mind