Required Skills: AWS or Azure, Python, PySpark, SQL including data modeling and data warehousing, CI/CD, Git
Job Description
Job Title: Principal Data Engineer
Location: 1375 Enclave Pkwy, Houston, TX 77077
Term: 6 months Contract to hire
Travel: 20%
Hours: Regular 40 hours/week - 4 days a week onsite
Interview:
1st round - Technical interview, Teams video, 45 minutes
2nd round - Teams video, 30 minutes
3rd round - In person at the Friedkin Corporate Campus, Panel 1 (1 hour)
Final round - Face-to-face interview
In summary, the most important technical skills we need for this position are:
• Prior experience leading a team
• Databricks, including Delta tables, Lakehouse, and Unity Catalog
• AWS or Azure
• Python
• PySpark
• SQL, including data modeling and data warehousing
• CI/CD
• Git
The Principal Data Engineer within the Data Science and Analytics team plays a crucial role in architecting, implementing, and managing robust, scalable data platforms. This position demands a blend of cloud data engineering, systems engineering, data integration, and machine learning systems knowledge to enhance GST's data capabilities, supporting advanced analytics, machine learning projects, and real-time data processing needs. You will guide other team members and collaborate closely with cross-functional teams to design and implement modern data solutions that enable data-driven decision-making across the organization.
As a Principal Data Engineer, you will:
• Collaborate with business and IT functional experts to gather requirements and issues, perform gap analysis, and recommend and implement process and/or technology improvements to optimize data solutions.
• Design data solutions on Databricks, including Delta Lake, data warehouses, data marts, and others, to support the data science and analytical needs of the organization.
• Design and implement scalable and reliable data pipelines to ingest, process, and store diverse data at scale, using technologies such as Databricks, Apache Spark, Kafka, Flink, AWS Glue, or other AWS services (a minimal PySpark sketch follows this list).
• Work within cloud environments like AWS to leverage services including but not limited to EC2, RDS, S3, Athena, Glue, Lambda, EMR, Kinesis, and SQS for efficient data handling and processing.
• Develop and optimize data models and storage solutions (SQL, NoSQL, key-value stores, data lakes) to support operational and analytical applications, ensuring data quality and accessibility.
• Utilize ETL tools and frameworks (e.g., Apache Airflow, Talend) to automate data workflows, ensuring efficient data integration and timely availability of data for analytics.
• Automate data workflows and deployment pipelines with tools like Apache Airflow, Terraform, and CI/CD frameworks (an Airflow sketch also follows this list).
• Collaborate closely with business analysts, data scientists, machine learning engineers, and optimization engineers, providing the data infrastructure and tools needed for complex analytical models and leveraging Python, Scala, or R for data processing scripts.
• Ensure adherence to data governance, compliance, and security policies, implementing best practices in data encryption, masking, and access controls within a cloud environment.
• Establish best practices for code documentation, testing, and version control, ensuring consistent and reproducible data engineering practices across the team.
• Monitor and troubleshoot data pipelines and databases for performance issues, applying tuning techniques to optimize data access and throughput.
• Ensure efficient usage of AWS and Databricks resources to minimize costs while maintaining high performance and scalability.
• Work cross-functionally to understand the data landscape, develop proofs of concept, and demonstrate solutions to stakeholders.
• Lead one or more data projects with support from internal and external resources. Coach and mentor junior data engineers.
• Stay abreast of emerging technologies and methodologies in data engineering, advocating for and implementing improvements to the data ecosystem.
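To make the pipeline work above concrete, here is a minimal illustrative PySpark sketch of the kind of Delta Lake ingestion step this role involves on Databricks. The S3 path, column names, and target table are hypothetical examples, not details from this posting.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession named `spark` is pre-provisioned; the
# builder call is included so the sketch also runs outside Databricks.
spark = SparkSession.builder.appName("events-ingest").getOrCreate()

# Read raw JSON files landed in S3 (hypothetical path).
raw = spark.read.json("s3://example-bucket/raw_events/")

# Light cleansing: drop records missing an id, stamp the load time.
# (event_id and event_date are assumed, illustrative columns.)
clean = (
    raw.filter(F.col("event_id").isNotNull())
       .withColumn("loaded_at", F.current_timestamp())
)

# Append to a Delta table, partitioned by date for query pruning.
(clean.write.format("delta")
      .mode("append")
      .partitionBy("event_date")
      .saveAsTable("analytics.events"))
```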
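And a minimal orchestration sketch for the Airflow-style automation described above, assuming the apache-airflow-providers-databricks package, a configured `databricks_default` connection, and a hypothetical Databricks job id:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import (
    DatabricksRunNowOperator,
)

# Nightly DAG that triggers an existing Databricks job.
# (`schedule` is the Airflow 2.4+ spelling; earlier 2.x versions
# use `schedule_interval`.)
with DAG(
    dag_id="nightly_events_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",  # 2:00 AM daily
    catchup=False,
) as dag:
    DatabricksRunNowOperator(
        task_id="run_events_job",
        databricks_conn_id="databricks_default",
        job_id=12345,  # hypothetical job id
    )
```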
What We Need from You
• Bachelor's degree in Computer Science, Data Science, MIS, Engineering, Mathematics, Statistics, or another quantitative discipline, with 5-8 years of hands-on experience in data engineering and a proven track record of designing and operating large-scale data pipelines and architectures Required
• Proven experience designing scalable, fault-tolerant data architectures and pipelines on Databricks (Delta Lake, Lakehouse, Unity Catalog, streaming) and AWS, including ETL/ELT development and data modeling, with a focus on performance optimization and maintainability Required
• Deep experience with platforms and services such as Databricks and AWS native data offerings Required
• Solid experience with big data technologies (Databricks, Apache Spark, Kafka) and AWS cloud services related to data processing and storage Required
• Strong hands-on experience with ETL/ELT pipeline development using AWS tools and Databricks Workflows Required
• Strong experience in AWS cloud services, with hands-on experience in integrating cloud storage and compute services with Databricks Required
• Proficiency in SQL and programming languages relevant to data engineering (Python, Java, Scala) Required
• Hands-on RDBMS and data warehousing experience (data modeling, analysis, programming, stored procedures) Required
• Good understanding of system architecture and design patterns, with the ability to apply these principles when designing and developing applications Required
• Proficiency with version control systems like Git and experience with CI/CD pipelines for automating data engineering deployments Required
• Familiarity with machine learning model deployment and management practices Preferred
• Experience with SAP, BW, HANA, Tableau, or Power BI Preferred
• Experience in the automotive, manufacturing, or supply chain industries Preferred
• Project life-cycle leadership and support across requirements workshops, design, development, test cycles, production cutover, post-go-live support, and environment strategy. Strong knowledge of agile methodologies Required
• Strong communication skills, capable of collaborating effectively across technical and non-technical teams in a fast-paced environment. Required
• AWS Certified Solutions Architect, Databricks Certified Associate Developer for Apache Spark, or other relevant certifications Preferred