Job Role: Big Data + ETL Developer
Experience: 4 to 6 years
• Hands-on experience with Big Data technologies.
• Design and implement workflows using Unix/Linux scripting to perform data ingestion and ETL (Ab Initio) on a Big Data platform.
• Excellent understanding of Hadoop architecture and its components, including HDFS, JobTracker, TaskTracker, high availability, MapReduce, Spark (RDDs/programming), Hive, Pig, Kafka, and Flume.
• Must have excellent programming knowledge of Spark.
• Provide hands-on leadership for the design and development of ETL (Ab Initio) data flows using Big Data Ecosystem tools and technologies.
• Lead analysis, architecture, design, and development of data warehouse and business intelligence solutions.
• Define Cloud Data strategies, including designing multi-phased implementation roadmaps.
• Work independently, or as part of a team, to design and develop Big Data solutions.
• AWS certification (S3, Redshift, Elastic MapReduce/EMR) and Cloudera certification are a plus.