Hadoop (Spark) – 9 to 13 years – Bangalore



Job Responsibilities

• 6+ years of experience developing software in Java
• 3+ years of experience with Spark or PySpark and crawling frameworks
• 2+ years of experience working with Spark Streaming/DStreams
• Working knowledge of at least two of: Scala, Java, Python, or Go
• Working knowledge of Apache Spark ecosystem technologies such as Spark RDDs, DataFrames, Spark SQL, Hive, Sqoop, Python, Scala, Oozie, Flume, Kafka, Presto, Pig, Hue, and Zeppelin
• Demonstrated working knowledge of unit, integration, and load testing in Spark and Hadoop environments
• Write Spark RDD/DataFrame/SQL jobs for extraction, transformation, and aggregation of data from multiple file formats, including JSON, CSV, and other compressed formats
• Experienced in handling large datasets using partitioning, Spark's in-memory capabilities, and broadcast variables in Spark
• Working knowledge of Apache (Hortonworks) Atlas for tagging Hive data and integrating with Falcon, masking Hive columns with Atlas and Ranger, and importing Hive metadata into Atlas
• Preferred: Financial Services experience; Hortonworks Data Platform Certified Administrator (HDPCA), Hortonworks DataFlow Certified Administrator, and Databricks Certified Spark Developer certifications



Posted on:

November 3, 2018

Experience level:

Experienced (non-manager)

Education level:

Bachelor's degree or equivalent

Contract type:





Financial Services

