Senior Databricks Data Engineer
Syffer is an all-inclusive consulting company focused on talent, tech and innovation. We exist to elevate companies and humans all around the world, driving change from the inside out.
We believe that technology combined with human kindness has a positive impact on every community around the world. Our approach is simple: we see a world without borders and believe in equal opportunities. We are guided by our core principles of spreading positivity and good energy, promoting equality, and caring for others.
Our hiring process is unique! People are selected for their value, education, talent and personality. We don't consider ethnicity, religion, national origin, age, gender, sexual orientation or identity.
It's time to burst the bubble, and we will do it together!
What You'll do:
- Design, build, and optimize ETL/ELT data pipelines using Azure Databricks (Python, PySpark, SQL, Delta Lake);
- Create, configure, and maintain Databricks workspaces, clusters, repos, and job schedules for multi-team data product delivery;
- Develop robust data ingestion, transformation, and processing frameworks for structured, semi-structured, and unstructured data;
- Implement data governance and data security best practices using Databricks and Azure technologies, ensuring controlled, auditable access to datasets;
- Manage publication of high-quality, well-documented datasets for downstream analytics, reporting, and AI workloads;
- Integrate Databricks pipelines with the Azure Data Platform, ensuring reliable orchestration, observability, and CI/CD automation;
- Build and maintain Delta Lake architectures, including medallion (bronze/silver/gold) layer structures;
- Collaborate closely with data architects, data owners, and security teams to align with enterprise standards;
- Diagnose performance bottlenecks, optimize compute cost, and apply cluster-level and code-level tuning strategies;
- Fully remote;
Who You Are:
- Minimum of 5 years' professional experience delivering Azure Databricks data engineering solutions in enterprise environments;
- Strong expertise in Databricks (Workspaces, Notebooks, Jobs, Workflows, Repos, Unity Catalog, Delta Lake, Delta Live Tables, MLflow);
- Strong knowledge of Azure Data Platform services (ADLS Gen2, Azure Key Vault, Azure Monitor, Azure Log Analytics, Azure Entra ID / RBAC, Terraform provider as a plus);
- Experience implementing data security and governance frameworks (Unity Catalog, access controls, masking, row-level security, ABAC, governed tags, credential management, lineage, auditability, secure CI/CD);
- Strong data engineering and software development skills (Python, SQL, PySpark, Git, Databricks CI/CD, Spark performance tuning, distributed computing and data modelling concepts);
- Knowledge of the AI/ML lifecycle and MLflow capabilities (experiments, access control and model serving);
- Experience working in Agile or DevOps-focused teams with strong analytical, problem-solving, and communication skills;
- Fluent in Portuguese and English;
What you'll get:
- Salary according to the candidate's professional experience;
- Remote work whenever possible;
- Work equipment suited to the role;
- Benefits plan;
- And others.
Work with expert teams on large-scale, long-term projects alongside our clients, all leaders in their industries.
Are you ready to step into a diverse and inclusive world with us?
Together we will promote uniqueness!