- Migrate, design, develop, and deploy Ab Initio graphs as dbt jobs to process and analyze large volumes of data.
- Collaborate with data engineers and data scientists to understand data requirements and implement appropriate data processing pipelines.
- Optimize dbt jobs for performance and scalability to handle big data workloads.
- Implement best practices for data management, security, and governance within the Databricks environment.
- Experience designing and developing Enterprise Data Warehouse solutions.
- Demonstrated proficiency with data analytics and data insights.
- Proficient in writing SQL queries and programming, including stored procedures and reverse-engineering existing processes.
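As a minimal illustration of the SQL work this bullet describes, the sketch below uses Python's built-in `sqlite3` module; the `orders` table, its columns, and the sample values are hypothetical, chosen only to show a parameterized query wrapped in a reusable function (a lightweight stand-in for a stored procedure):

```python
import sqlite3

# In-memory database with a hypothetical orders table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders (region, amount) VALUES (?, ?)",
    [("east", 120.0), ("west", 80.5), ("east", 42.0)],
)

def total_for_region(region: str) -> float:
    """Parameterized aggregate query: total order amount for one region."""
    row = conn.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM orders WHERE region = ?",
        (region,),
    ).fetchone()
    return row[0]

print(total_for_region("east"))  # → 162.0
```

Using placeholders (`?`) rather than string formatting keeps the query plan cacheable and avoids SQL injection.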
- Leverage SQL, a programming language (Python or similar), and/or ETL tools (Azure Data Factory, Databricks, Talend, and SnowSQL) to develop data pipeline solutions that ingest and exploit new and existing data sources.
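A minimal sketch of the extract-transform-load pattern this bullet refers to, using only the Python standard library; the inline CSV feed and its field names are hypothetical assumptions (in practice the data would be landed by a tool such as Azure Data Factory or Databricks, and `load` would write to a warehouse table rather than print):

```python
import csv
import io

# Hypothetical raw feed standing in for a landed source file.
RAW_CSV = """id,region,amount
1,east,120.0
2,west,80.5
3,east,42.0
"""

def extract(raw: str) -> list[dict]:
    """Parse the raw CSV feed into records."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> dict[str, float]:
    """Aggregate order amounts per region."""
    totals: dict[str, float] = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + float(row["amount"])
    return totals

def load(totals: dict[str, float]) -> None:
    """Stand-in for writing results to a warehouse table."""
    for region, amount in sorted(totals.items()):
        print(f"{region}: {amount}")

load(transform(extract(RAW_CSV)))
```

Keeping extract, transform, and load as separate pure functions makes each stage independently testable, which is the same separation dbt and ETL tools encourage.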
- Perform code reviews to ensure fit to requirements, optimal execution patterns, and adherence to established standards.
- Optimize Databricks jobs for performance and scalability to handle big data workloads.