Hadoop Developer – 4 to 6 Years – Pune
Must Have –
– Big Data, Hive, Pig, Sqoop, MapReduce framework, Hue
– Strong SQL skills
– Unix Shell Scripting
– Data warehousing background
– Experience working in an Agile environment
Nice to Have –
– Experience with ETL and BI tools – Talend
– Knowledge of Spark
– Knowledge of Java or another OOP language
– Cloudera certification is preferred
– Knowledge of any data visualization tool
The responsibilities of the role include (but are not limited to) the following:
• Contribute to the full development life cycle, including requirements analysis, functional design, technical design, programming, testing, documentation, implementation, and ongoing technical support
• Analyze specifications and perform program/database design activities
• Analyze and translate business needs into effective technical solutions and documents
• Contribute to and lead projects across the Big Data and Analytics landscape
• Learn new tools and technologies in the Big Data arena and successfully deliver projects using them
Experience and Skills:
• At least 6 years of application development experience across the full lifecycle
• Strong software development lifecycle (SDLC) management experience
• Experience with Red Hat Linux and Bash shell scripting
• Strong SQL skills are mandatory; NoSQL experience is good to have
• Core Java preferred; knowledge of at least one OOP language is required
• Thorough knowledge of and hands-on experience with Hadoop, the MapReduce framework, Hive, Sqoop, Pig, Hue, Unix, Java, Impala, and Talend
• Cloudera certification (CCDH) is preferred.
• Strong experience with ETL and BI tools
• Experience working with teams spread across many countries and time zones.
• Self-starter who works with minimal supervision
• Strong communications skills.
• Ability to adapt to complex project situations and streamline processes across them
• Ability to work in a global model, influence stakeholders, and increase ownership at the local level