Required Skills: Data Engineering, Scala, Java, Spark, Hive, ETL, GCP, Azure
Job Description
Role: Data Engineer
Location: Hybrid in Sunnyvale, CA
Employment Type: W2 (No sponsorship available)
Breakdown: 60% Data Pipeline Development | 40% Model Development
Must-Have Skills:
✔ Scala
✔ Spark
✔ Longevity in roles (3+ years per position)
✔ Background in leading tech companies (Amazon, Meta, Yahoo, PayPal, etc.)
Nice-to-Have Skills:
➤ LLMs
➤ Generative AI
Job Description:
The selected candidate will work on Data Engineering for a Marketing Technology Platform, focusing on large-scale data pipelines. Key responsibilities include:
- Extracting and transforming data from structured and unstructured sources.
- ETL development using Scala, Java, and Spark (an illustrative sketch follows this list).
- Working in a GCP/Azure environment with Hive tables.
- Data analysis, SQL querying, and reporting.
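For context on the kind of pipeline work described above, here is a minimal Scala/Spark sketch of a Hive-backed ETL job. It is only illustrative: the database/table names (marketing_raw.campaign_events, marketing_curated.campaign_daily_metrics), column names, and aggregation are hypothetical placeholders, not details from the posting.

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

object CampaignEventsEtl {
  def main(args: Array[String]): Unit = {
    // Hive support lets Spark read and write the warehouse tables mentioned in the role.
    val spark = SparkSession.builder()
      .appName("campaign-events-etl")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical source table of raw marketing events.
    val raw = spark.table("marketing_raw.campaign_events")

    // Basic transform: drop malformed rows, derive an event date, aggregate per campaign.
    val daily = raw
      .filter(F.col("campaign_id").isNotNull)
      .withColumn("event_date", F.to_date(F.col("event_ts")))
      .groupBy("campaign_id", "event_date")
      .agg(
        F.count("*").as("events"),
        F.countDistinct("user_id").as("unique_users")
      )

    // Write to a (hypothetical) curated Hive table, partitioned by date.
    daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .saveAsTable("marketing_curated.campaign_daily_metrics")

    spark.stop()
  }
}
```

In practice such a job would be submitted via spark-submit against the GCP/Azure cluster; the exact deployment and table layout would depend on the team's environment.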