Short Description

Hadoop (Spark) – 9 to 13 years – Bangalore

Qualifications

Bachelor's or Master's degree

Job Responsibilities

• 6+ years of experience developing software in Java
• 3+ years of experience with Spark or PySpark and with crawling frameworks
• 2+ years of experience working with Spark Streaming/DStreams (see the first sketch after this list)
• Working knowledge of at least two of: Scala, Java, Python, or Go
• Working knowledge of Apache Spark and Hadoop ecosystem technologies such as Spark RDDs, DataFrames, Spark SQL, Hive, Sqoop, Oozie, Flume, Kafka, Presto, Pig, Hue, and Zeppelin
• Demonstrated working knowledge of unit, integration, and load testing in Spark and Hadoop environments
• Write Spark RDD/DataFrame/SQL jobs for data extraction, transformation, and aggregation across multiple file formats, including JSON, CSV, and compressed formats (see the second sketch after this list)
• Experience handling large datasets using partitioning, Spark's in-memory capabilities, and broadcast variables (see the third sketch after this list)
• Working knowledge of Apache (Hortonworks) Atlas: tagging Hive data and integrating with Falcon, masking Hive columns with Atlas and Ranger, and importing Hive metadata into Atlas
• Preferred: Financial Services domain experience; Hortonworks Data Platform Certified Administrator (HDPCA); Hortonworks DataFlow Certified Administrator (HCA); Databricks Certified Spark Developer
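
First sketch: a minimal Spark Streaming (DStreams) word count, illustrating the kind of streaming work the role involves. The local master, the 10-second batch interval, and the socket source on localhost:9999 are illustrative assumptions, not part of the role description.

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object WordCountStream {
  def main(args: Array[String]): Unit = {
    // Local master and 10-second micro-batches are assumptions for this sketch.
    val conf = new SparkConf().setAppName("WordCountStream").setMaster("local[2]")
    val ssc  = new StreamingContext(conf, Seconds(10))

    // Hypothetical source: a text socket on localhost:9999.
    val lines = ssc.socketTextStream("localhost", 9999)

    // Classic DStream transformation chain: tokenize, pair, reduce by key.
    val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
    counts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```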
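
Second sketch: a DataFrame ETL job that reads JSON and CSV inputs (Spark decompresses gzip files transparently), joins them, aggregates, and writes partitioned Parquet. The paths and the column names (user_id, country, event_ts) are hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object MultiFormatEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("MultiFormatEtl").getOrCreate()

    // JSON events; *.json.gz files are read and decompressed transparently.
    val events = spark.read.json("/data/events/*.json.gz")

    // CSV dimension data with a header row and an inferred schema.
    val users = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("/data/users.csv")

    // Join, then aggregate events per country per day.
    val daily = events
      .join(users, "user_id")
      .groupBy(col("country"), to_date(col("event_ts")).as("day"))
      .agg(count("*").as("events"))

    daily.write.mode("overwrite").partitionBy("day").parquet("/warehouse/daily_events")
  }
}
```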
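
Third sketch: combining the three techniques named above, i.e. repartitioning a large dataset by its join key, caching it in memory for reuse, and broadcasting a small dimension table so the join avoids shuffling the large side. The table paths, the merchant_id key, and the partition count of 200 are assumptions.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object BroadcastJoinDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("BroadcastJoinDemo").getOrCreate()
    import spark.implicits._

    val facts = spark.read.parquet("/warehouse/transactions") // large fact table (assumed path)
    val dims  = spark.read.parquet("/warehouse/merchants")    // small dimension table (assumed path)

    val joined = facts
      .repartition(200, $"merchant_id")     // spread the large side evenly by join key
      .cache()                              // keep it in memory if reused downstream
      .join(broadcast(dims), "merchant_id") // ship the small table to every executor

    joined.write.mode("overwrite").parquet("/warehouse/enriched_transactions")
  }
}
```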

Apply now