Data Engineer, Snowflake and Databricks - New York, NY - Hybrid
Share Your Resume and Build Your Future!
Join our Talent Community for New York. Local postings emphasize production-grade ELT on Snowflake and Databricks with strong Python, SQL, Airflow, and dbt, plus secure data sharing and cost governance. Demand is deep across banks and fintechs, with senior ranges at the top of the market.
As a Data Engineer, you will build reliable pipelines and modular models on Snowflake and Databricks, optimize cost and performance, and deliver trusted data for risk, finance, and product analytics.
Requirements:
- 5 to 8 years in data engineering within financial services or high-scale platforms
- Hands-on Snowflake: warehouses, roles and RBAC, tasks, and streams
- Hands-on Databricks: PySpark, Delta Lake, Unity Catalog, and jobs (see the sketch after this list)
- Strong SQL and Python, including performance tuning, partitioning, and caching
- Orchestration with Airflow, dbt, Azure Data Factory, or AWS Glue
- CI/CD with Git, pull requests, and automated tests
- Nice to have: Kafka or Kinesis and event-driven patterns
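To give candidates a flavor of the hands-on Databricks work, here is a minimal PySpark sketch of an incremental Delta Lake upsert, the kind of pattern this role owns day to day. The bronze path, table name, and key column (raw_events, silver.events, event_id) are hypothetical placeholders, not our production schema.

from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the latest raw batch from a hypothetical bronze path.
updates = spark.read.format("delta").load("/mnt/bronze/raw_events")

# Upsert into the curated silver table, keyed on event_id.
target = DeltaTable.forName(spark, "silver.events")
(
    target.alias("t")
    .merge(updates.alias("s"), "t.event_id = s.event_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)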
Responsibilities:
- Design and deliver ELT pipelines with reproducible code and tests
- Model data sets for analytics, BI, and ML using medallion-style layers
- Implement data quality checks and SLAs with alerts and lineage (see the sketch after this list)
- Tune warehouses, clusters, and storage formats for cost and speed
- Harden security roles, policies, secrets, and PII handling
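As one illustration of the data quality responsibility, here is a minimal PySpark sketch of a freshness and null-key check. The table, column names, and thresholds (silver.events, event_ts, event_id, 60 minutes) are hypothetical, not a prescribed framework.

from datetime import datetime, timedelta
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical curated table; column names are placeholders.
df = spark.table("silver.events")

# Freshness check: the newest event must be less than 60 minutes old.
max_ts = df.agg(F.max("event_ts").alias("max_ts")).first()["max_ts"]
fresh = max_ts is not None and datetime.now() - max_ts < timedelta(minutes=60)

# Completeness check: the primary key must never be null.
null_keys = df.filter(F.col("event_id").isNull()).count()

if not fresh or null_keys > 0:
    # In production this would raise an alert and block downstream tasks.
    raise ValueError(f"Data quality failure: fresh={fresh}, null_keys={null_keys}")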
Outcomes we track:
- Pipeline success rate of 99.5%, with data freshness SLAs met 95%+ of the time (see the Airflow sketch after this list)
- Cost per query reduced 20% within 90 days through tuning
- Data quality defects per release under 2, with automatic rollback
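To make the SLA targets concrete, here is a minimal Airflow sketch (assuming Airflow 2.x) wiring retries and a task-level SLA onto a daily dbt run. The DAG id, schedule, selector, and SLA window are hypothetical examples, not our actual configuration.

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical daily ELT job; retries and the SLA back the success-rate
# and freshness targets above.
with DAG(
    dag_id="daily_elt",
    start_date=datetime(2024, 1, 1),
    schedule="0 6 * * *",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    run_models = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --select silver+",
        sla=timedelta(hours=2),  # alerts if the task runs past the SLA window
    )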
Compensation and terms:
- Consultant pay: $100 to $170 per hour
- Contract; Hybrid (New York, NY) or Remote (US); W2 or 1099
How to apply:
- Apply on our site: FinTrust Careers
- Or email talent@FinTrustConnect.com with subject [Apply] Data Engineer New York
Follow FinTrust Connect on LinkedIn
Keywords
Snowflake, Databricks, Delta Lake, Unity Catalog, PySpark, SQL, Python, dbt, Airflow, Azure Data Factory, AWS Glue, Kafka, Cost Optimization, Role Based Access, Lineage, Data Quality, New York