Hadoop Developer | 4 to 6 years | Bengaluru & Pune

Job Description

  • Developing distributed Big Data applications using Spark and Elasticsearch on the MapR Hadoop distribution
  • The Hadoop Developer is responsible for designing, developing, testing, tuning, and building a large-scale data processing system for data ingestion and data products, enabling the client to improve the quality, velocity, and monetization of data assets for both operational applications and analytical needs
  • Strong work experience with the Hadoop distributed computing framework, including Apache Spark
  • Very strong command of one or more object-oriented programming languages (e.g., Scala, Python)
  • Experience in a hosted PaaS cloud environment (AWS, Azure, or GCP)
  • Scripting skills in shell or Python
  • Knowledge of data warehousing (DWH) concepts
  • In-depth understanding of at least one relational database system
  • Strong UNIX shell scripting experience to support data warehousing solutions

Primary Skills

  • Spark/Scala
  • Python
  • Good experience with the Hadoop/Hive ecosystem

Secondary Skills

  • Willingness to adapt to new technologies
  • Understanding of machine learning (ML) is an added advantage
  • Strong communication skills
  • Ability to deliver tasks independently



Posted on:

July 22, 2021

Big Data & Analytics