About the job: Data Engineer
Why are you looking for a job?
If you tick all the boxes below, then maybe we can work together.
- You have a curious mind - You won't understand what we're talking about if you don't.
- You want to learn more about technology - You won't survive if you don't.
- You have many ideas that can become products - You'll be the boss of your ideas.
- You want to make the world a bit better - We don't like you if you don't.
We happen to be just like that as well. We like hacking things here and there (you included, don't give us a reason to do so) and creating scalable solutions that bring value to the world.
Squaredev?
We use state-of-the-art technology to build solutions for our own customers and for the customers of our partners. We make sure we stay best-in-class by participating in research projects across Europe, collaborating with top universities and enterprises on AI, Data, and Cloud.
What you'll do:
The ideal candidate will be responsible for:
- Designing and implementing data pipelines (batch and streaming) for analytics and AI workloads, using Python and SQL in Microsoft Fabric, as well as low-code tools in the Fabric suite.
- Building and maintaining data lakes/warehouses (OneLake, BigQuery, Delta Lake).
- Developing and optimizing ETL/ELT workflows using tools like Fabric, Spark Jobs, dbt, Airflow, or Prefect.
- Using and managing cloud data infrastructure on Azure.
- Ensuring data quality, observability, and governance across all pipelines.
- Working closely with data scientists and software engineers to deploy and maintain AI-ready datasets.
To excel in this role, you'll need:
- Strong experience in SQL and Python (PySpark or similar).
- Hands-on experience with data modeling, ETL frameworks, and data orchestration tools.
- Familiarity with distributed systems and modern data platforms (Spark, Databricks, Fabric, Snowflake, or BigQuery); Fabric is preferred.
- Understanding of data lifecycle management, versioning, and data testing.
- Solid grasp of Git and CI/CD workflows.
Nice to have:
- Experience with Microsoft Fabric, Data Factory, or Dataflow Gen2.
- Knowledge of vector databases (pgvector, Pinecone, Milvus) or semantic search pipelines.
- Interest in or knowledge of LLMs and AI pipelines.
- Familiarity with data catalogs, lineage tools, or dbt tests.
- DevOps familiarity (Docker, Kubernetes, Terraform).
- Certifications in Azure, Fabric or similar platforms.
What we offer:
- Flexibility: hybrid working model.
- 5 extra holidays to spend with your family and friends.
- Private health insurance.
- An Apple MacBook Pro to do your magic.
Well, that's it! Feedback and questions are always welcome. We want to get better and learn from you, whether you want to join us or are just in the mood to help. Thanks for taking the time to read this. Looking forward to hearing from you.