Job Category: Engineering
Work Model: Onsite
Duration: 12–24 months

Job Overview

We are looking for an experienced Data Engineer – Data Warehouse to support the modernization of enterprise data platforms within a large financial services environment.

The ideal candidate will have strong hands-on experience with Snowflake, dbt, DataStage, and Control-M, along with a solid understanding of data modeling and enterprise data engineering best practices.

Key Responsibilities

  • Design, develop, and maintain end-to-end data pipelines, from data ingestion through consumption.
  • Support the migration of legacy ETL workflows into a modern, cloud-native data platform.
  • Build and optimize data warehouse structures using dimensional modeling techniques.
  • Implement and maintain data models such as Star Schema, Snowflake Schema, and Slowly Changing Dimensions (Type 1 and Type 2).
  • Work closely with business and technical teams to translate data requirements into scalable engineering solutions.
  • Identify and manage risks related to data processing in compliance with financial services policies and standards.
  • Develop and maintain CI/CD pipelines for data deployments using Git and Azure DevOps.
  • Ensure reliability, performance, and data quality across enterprise data products.
  • Collaborate with cross-functional teams to support highly visible and business-critical data initiatives.

Required Skills & Experience

  • Strong experience as a Data Engineer supporting enterprise data warehouse platforms.
  • Hands-on experience with:
    • Snowflake
    • dbt
    • IBM DataStage
    • Control-M
  • Solid understanding of data warehouse design and dimensional modeling.
  • Experience building and maintaining automated data pipelines.
  • Familiarity with CI/CD practices for data engineering.
  • Experience working in regulated or large enterprise environments is a plus.
  • Strong analytical, problem-solving, and communication skills.

Apply for this position