Role: Hadoop with Spark Development
Experience: 4 to 6 Years
– Strong development experience is a must, with a consistent track record throughout education and professional career.
– Experience with Apache Spark (required)
– Good to have: experience with Storm, Kafka, NiFi, Spark Streaming, Spark MLlib, Spark GraphX, Flink, Samza, MapReduce
– Familiarity with data loading tools such as Flume and Sqoop.
– Knowledge of workflow schedulers such as Oozie.
– Proven understanding of Hadoop, HBase, Hive, and Pig.
– Good understanding of object-oriented design and design patterns.
– Experience developing or debugging on Linux/Unix platforms.
– Motivation to learn new techniques in programming, debugging, and deployment.
– Self-starter with excellent self-study skills and growth aspirations.
– Excellent written and verbal communication skills; flexible attitude and ability to perform under pressure.
– Test-driven development, a commitment to quality, and a thorough approach to the work.
– A good team player with the ability to meet tight deadlines in a fast-paced environment.