With more than 180,000 people in over 40 countries, Capgemini is a global leader in
consulting, technology and outsourcing services. The Group reported 2015 global
revenues of EUR 11.9 billion. Together with its clients, Capgemini creates and
delivers business, technology and digital solutions that fit their needs,
enabling them to achieve innovation and competitiveness. A deeply multicultural
organization, Capgemini has developed its own way of working, the Collaborative
Business Experience™, and draws on Rightshore®, its worldwide delivery model.

Learn more about us at www.capgemini.com.

Rightshore® is a trademark belonging to Capgemini.

Capgemini is an Equal Opportunity Employer encouraging diversity in the
workplace. All qualified applicants will receive consideration for employment
without regard to race, national origin, gender identity/expression, age,
religion, disability, sexual orientation, genetics, veteran status, marital
status or any other characteristic protected by law.

This is a general description of the Duties, Responsibilities and
Qualifications required for this position. Physical, mental, sensory or
environmental demands may be referenced in an attempt to communicate the manner
in which this position traditionally is performed. Whenever necessary to
provide individuals with disabilities an equal employment opportunity,
Capgemini will consider reasonable accommodations that might involve varying
job requirements and/or changing the way this job is performed, provided that
such accommodations do not pose an undue hardship.

Click the following link for more information on your rights as an applicant.

Location: Cary, NC


1. Responsible for implementation and ongoing administration of Hadoop
infrastructure.
2. Aligning with the engineering team to propose and deploy new hardware and
software environments required for Hadoop and to expand existing environments.
3. Working with AD teams to set up and monitor Hadoop users, including adding
approved Active Directory groups and testing HDFS, Hive, Pig and MapReduce
access.
4. Cluster maintenance, as well as creation and removal of nodes using Ambari.
5. Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
6. Screening Hadoop cluster job performance and capacity planning.
7. Monitoring Hadoop cluster connectivity and security.
8. Managing and reviewing Hadoop log files.
9. File system management and monitoring.
10. HDFS support and maintenance.
11. Diligently teaming with the infrastructure, network, database, application
and business intelligence teams to guarantee high data quality and
availability.
12. Collaborating with application teams to install operating system and
Hadoop updates, patches and version upgrades when required.
13. Database backup and recovery.
14. Database connectivity and security.
15. Performance monitoring and tuning.

Administrator Skills:
1. General operational expertise such as good troubleshooting skills and an
understanding of system capacity, bottlenecks, and the basics of memory, CPU,
OS, storage and networks.
2. Hadoop skills such as HDFS, YARN + MapReduce2, Hive, HBase, Pig, Sqoop,
Oozie, ZooKeeper, Flume, Ambari, Kafka, Knox, Slider, Solr, Spark, etc.
3. Most essential: the ability to deploy a Hadoop cluster, add and remove
nodes, keep track of jobs, and monitor critical parts of the cluster.
4. Configure NameNode high availability, schedule and configure jobs, and take
backups.
5. Good knowledge of Linux, as Hadoop runs on Linux.
6. Familiarity with open source configuration management and deployment tools
such as Ambari, and with Linux scripting.
7. Knowledge of troubleshooting core Java applications is a plus.
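Adding and removing nodes, as mentioned above, is typically done through Ambari's REST API. The sketch below only constructs the request tuples for those calls rather than issuing them; the base URL, cluster name and host name are hypothetical, and the endpoint paths follow Ambari's documented `/api/v1` routes, so verify them against your Ambari version.

```python
# Sketch of building Ambari REST API requests for host management.
# AMBARI_BASE is a hypothetical server address, not a real endpoint.
AMBARI_BASE = "http://ambari.example.com:8080/api/v1"

def add_host_request(cluster: str, host: str):
    """Request tuple (method, url) to register a host with a cluster."""
    return ("POST", f"{AMBARI_BASE}/clusters/{cluster}/hosts/{host}")

def delete_host_request(cluster: str, host: str):
    """Request tuple to remove a host (its components must be stopped first)."""
    return ("DELETE", f"{AMBARI_BASE}/clusters/{cluster}/hosts/{host}")

print(add_host_request("prod", "dn42.example.com")[0])  # → POST
```

A real administration script would send these with an HTTP client authenticated against Ambari, and would stop and decommission a host's components before the DELETE call.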

Apply now