Required Skills: Python, PySpark, AWS RDS MySQL, Glue, S3, Redshift, Lambda, Step Functions, RDS Aurora/MySQL, Apache Iceberg, CloudWatch, SNS, SQS, EventBridge
Job Description
Job Title: ETL Developer
Location: Chicago, IL (Hybrid)
Terms: Contract (W2 only)
  
 Job Details:
Top 5 Skill Sets:
1. Python or PySpark
2. Complex SQL development, debugging, and optimization
3. AWS - Glue, Step Functions
4. Knowledge of the inner workings of databases, such as AWS RDS MySQL
5. Big Data processing
  
Nice-to-have skills or certifications:
1. Experience leading a decent-sized ETL team
2. Experience with Apache Iceberg
3. Observability tools such as Dynatrace or Datadog
  
 Job Summary:
The ETL developer designs, builds, tests, and maintains systems that extract, transform, and load data from multiple source systems.
  
 Primary Responsibilities:
• Lead, design, implement, deploy, and optimize backend ETL services.
• Support a massive-scale enterprise data solution using AWS data and analytics services.
• Analyze and interpret complex data and related systems, and provide efficient technical solutions.
• Support the ETL schedule and maintain compliance with it.
• Develop and maintain standards for ETL code, and maintain an effective project life cycle for all ETL processes.
• Coordinate with cross-functional teams, such as architects, platform engineers, other developers, and product owners, to build data processing procedures.
• Perform root cause analysis on production issues, routinely monitor databases, and support ETL environments.
• Help create functional specifications and technical designs, working with business process area owners.
• Implement industry best practices for code and configuration in production and non-production environments within a highly automated environment.
• Provide technical advice, effort estimates, and impact analysis.
• Provide timely project status and issue reporting to management.
  
 Qualifications:
• 6+ years' experience using ETL tools to perform data cleansing, data profiling, transformation, and workflow scheduling.
• Expert-level proficiency in writing, debugging, and optimizing SQL.
• 3-4 years of programming experience with Python or PySpark/Glue required.
• Knowledge of common design patterns, models, and architectures used in Big Data processing.
 • 3-4 years' experience with AWS services such as Glue, S3, Redshift, Lambda, Step Functions, RDS Aurora/MySQL, Apache Iceberg, CloudWatch, SNS, SQS, EventBridge.
• Capable of troubleshooting common database issues; familiar with observability tools.
 • Self-starter, responsible, professional, and accountable.
• A finisher who sees a project or task through to completion despite challenges.