Hadoop Developer | 6 to 9 years | Bengaluru & Pune

Job Description

  • Develop distributed Big Data applications using Spark and Elasticsearch on the MapR Hadoop platform
  • The Hadoop Developer is responsible for designing, developing, testing, tuning and building large-scale data processing systems for data ingestion and data products that allow the client to improve the quality, velocity and monetization of its data assets for both operational applications and analytical needs
  • Strong work experience with the Hadoop distributed computing framework (including Apache Spark)
  • Very strong command of one or more object-oriented programming languages (e.g. Scala, Python)
  • Experience in a hosted PaaS cloud environment (AWS, Azure or GCP)
  • Scripting skills – Shell or Python
  • Knowledge of data warehousing (DWH) concepts
  • In-depth understanding of at least one relational database system
  • Strong UNIX shell scripting experience to support data warehousing solutions

Primary skills

  • Spark/Scala
  • Python
  • Good experience with the Hadoop/Hive ecosystem

Secondary skills

  • Willingness to adapt to new technologies
  • Understanding of machine learning (ML) is an added advantage
  • Strong communication skills
  • Ability to deliver tasks independently

Ref: 758147

Posted on: July 22, 2021

Experience level: Experienced

Contract type: Permanent

Location: Bangalore

Business units: FS

Department: Big Data & Analytics