Big Data Platform Admin (Hadoop)
  • Synchrony Systems Inc
4 Days Ago
$100,000 - $140,000 per Annum + Benefits
Irving, TX
6-35 Years
Job Description
Job Title: BigData Platform Admin (Hadoop)
Locations: Irving, TX
Employment Type: Full-Time (W2 only)
Work Model: Onsite (5 Days a week)
 
Note: This role is not open for C2C/C2H/1099 or any contract arrangements
 
Must Have Technical/Functional Skills: BigData Platform Admin & Strategist
 
Job Description:
  • We are looking for a highly skilled and passionate Big Data Platform Admin who acts as a crucial liaison between the Hadoop admin team and various application development teams.
  • The role is responsible for ensuring the optimal performance, stability, and future readiness of the Hadoop platform, focusing on strategic oversight rather than day-to-day administrative tasks.
  • As a strategist, you will facilitate communication, drive best practices, assess the technical impact of platform changes, and contribute to the overall health and efficiency of the Hadoop ecosystem.
 
Responsibilities:
Stakeholder Unification: Serve as a single point of contact and unified stakeholder for all Hadoop-related concerns, bridging the gap between platform administrators and application teams.
Platform Upgrade Management:
  • Review and assess upcoming Hadoop platform upgrades, including new features, libraries and patches.
  • Conduct impact analysis on existing applications and services, identifying potential risks and opportunities.
  • Coordinate and communicate upgrade schedules and requirements with all relevant teams.
Technical Feature and Library Evaluation:
  • Identify and evaluate new technical features and libraries within the Hadoop ecosystem that can benefit application teams or improve platform efficiency.
  • Propose and advocate for the adoption of new technologies and methodologies to enhance the platform’s capabilities.
Cluster Health and Optimization:
  • Monitor overall cluster health, performance metrics, and resource utilization.
  • Propose and implement optimization strategies to improve cluster efficiency, scalability and cost-effectiveness.
  • Collaborate with the admin team to troubleshoot and resolve complex platform-level issues.
Resource Management and Housekeeping:
  • Oversee and manage the allocation of cluster resources (CPU, memory, storage) across various applications and tenants.
  • Establish and enforce policies for resource quota management, data lifecycle and storage optimization.
  • Implement housekeeping strategies to maintain a clean and efficient cluster environment.
Best Practices and Overall Excellence:
  • Define, document and promote best practices for Hadoop application development, deployment and operations.
  • Ensure operational stability and resiliency of the Hadoop platform, implementing measures to prevent outages and minimize downtime.
  • Contribute to disaster recovery and business continuity planning for the Hadoop ecosystem.
Solution Proposal and Innovation:
  • Research and propose suitable technical solutions to address emerging business needs, performance bottlenecks, or architectural challenges within the Hadoop ecosystem.
  • Stay abreast of industry trends and advancements in big data technologies, continuously seeking opportunities for innovation.
 
Qualifications:
Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Experience:
  • 5+ years of experience in a big data environment, with a focus on Hadoop.
  • Proven experience in a technical leadership or architect role, working closely with both operations and development teams.
  • Experience with distributed systems, data processing frameworks (e.g., Spark, Hive), and data warehousing concepts.
  • Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and containerization technologies (e.g., Docker, Kubernetes) is a plus.
Technical Skills:
  • Deep understanding of Hadoop ecosystem components (HDFS, YARN, MapReduce, Hive, Spark, Kafka, etc.).
  • Strong understanding of Spark architecture and core concepts.
  • Proficiency in Linux scripting for automation and system management.
  • Basic to intermediate proficiency in Python/Scala for scripting and data manipulation.
  • Experience with monitoring tools (e.g., Grafana, Prometheus) and logging frameworks.
  • Awareness of various data engineering solutions and consumption tools within the big data landscape.
  • Strong understanding of security best practices in a big data environment.
Benefits Overview
  • Discretionary Annual Incentive.
  • Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans.
  • Family Support: Maternal & Parental Leaves.
  • Insurance Options: Auto & Home Insurance, Identity Theft Protection.
  • Convenience & Professional Growth: Commuter Benefits & Certification & Training Reimbursement.
  • Time Off: Vacation, Sick Leave & Holidays.
  • Legal & Financial Assistance: Legal Assistance, 401K Plan, Performance Bonus, College Fund, Student Loan Refinancing
