043925 – Infrastructure Consultant 2 – Big Data / Hadoop Admin with MongoDB and Splunk

About Capgemini


A global leader in consulting, technology services and digital transformation, Capgemini is at the forefront of innovation to address the entire breadth of clients' opportunities in the evolving world of cloud, digital and platforms. Building on its strong 50-year heritage and deep industry-specific expertise, Capgemini enables organizations to realize their business ambitions through an array of services from strategy to operations. Capgemini is driven by the conviction that the business value of technology comes from and through people. It is a multicultural company of over 200,000 team members in more than 40 countries. The Group reported 2018 global revenues of EUR 13.2 billion.

 


About Infrastructure Services:

The Cloud Infrastructure Services Global Business Line is Capgemini's consulting and infrastructure build-and-run provisioning offering, and supports the group's cloud-based services. As part of the integrated cloud offering from Capgemini, Cloud Infrastructure Services delivers a broad range of cloud services to build and support the hybrid cloud estate, encompassing the leading public cloud players and leading private cloud technologies. With EUR 1.5 billion in annual revenue, Cloud Infrastructure Services helps clients virtualize and optimize their IT estates through infrastructure outsourcing services such as data center, helpdesk, network support, and service integration and service maintenance support. Our other services also include infrastructure transformation services, helping clients consolidate and migrate entire workloads and data centers.

 

 

Visit us at www.capgemini.com. People matter, results count.

Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.

 

 

This is a general description of the Duties, Responsibilities and Qualifications required for this position. Physical, mental, sensory or environmental demands may be referenced in an attempt to communicate the manner in which this position traditionally is performed. Whenever necessary to provide individuals with disabilities an equal employment opportunity, Capgemini will consider reasonable accommodations that might involve varying job requirements and/or changing the way this job is performed, provided that such accommodations do not pose an undue hardship.

 

 

 

Click the following link for more information on your rights as an Applicant – http://www.capgemini.com/resources/equal-employment-opportunity-is-the-law

 

  

  

 

 

 

 

Required experience:

- 7+ years of IT experience, with a minimum of 3+ years in Big Data administration (Hadoop, MongoDB and Splunk)
- Good hands-on experience as a Hadoop admin
- Expertise in cluster maintenance using tools such as IBM BigInsights, Hortonworks Ambari, Ganglia, etc.
- Skilled at performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screen Hadoop cluster job performance and perform capacity planning
- Monitor Hadoop cluster health, connectivity and security
- Experience in troubleshooting, backup and recovery
- Manage and review Hadoop log files
- File system management and monitoring; HDFS support and maintenance
- Familiar with HDFS, MapReduce and YARN commands (CLI) and utilities
- Experience with Hadoop schedulers, job scheduling and monitoring
- Experience with Hive, HBase, Sqoop, RDBMS and the Hadoop ecosystem
- Experience troubleshooting issues on Apache HBase and Solr
- Experience handling platforms secured with Ranger and Kerberos
- Expertise in creating new MongoDB databases, instances, database objects/views, etc.
- Work with engineering teams and advise on performance tuning, indexing strategies and volume/stress testing
- Evaluate, test and implement backup/recovery across all MongoDB environments
- Install, configure and test the Disaster Recovery (DR) strategy; monitor system performance after go-live
- Support, maintain and expand Splunk infrastructure in a highly resilient configuration
- Standardize Splunk agent deployment, configuration and maintenance across a variety of UNIX and Windows platforms
- Troubleshoot Splunk server and agent problems and issues
- Assist internal Splunk users in designing and maintaining production-quality dashboards
- Basic knowledge of Docker and containers
- Team diligently with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality and availability
- Collaborate with application teams to install operating system and Hadoop updates, patches and version upgrades when required
The candidate should have played a lead role handling a team of 10 to 12 resources, and should have excellent communication, presentation and customer-handling skills.

Required: Hadoop, HDFS, HBase, Solr, Hive, Kafka, high availability, Ranger, Kerberos

Desired: Linux shell scripting, performance tuning, Splunk, MongoDB Cloud Manager, Grafana

Involves researching, developing, innovating and delivering effective and consistent solutions to support the infrastructure systems, ensuring the application of current and emerging technologies.

Day-to-day responsibilities:
• Automate, administer, run and ensure the reliability and trustworthiness of production processing;
• Integrate new application programs and data processing sequences into production;
• Configure and parameterize production equipment;
• Automate technical management procedures and alarm pattern matching;
• Define and implement recovery procedures for incidents and data restoration; define, implement and follow protection and contingency plans;
• Implement the standards, rules and procedures of the administered domain, and ensure their application;
• Prepare and update dashboards, handouts, specifications, instructions and production reference documents; transfer knowledge;
• Analyze, handle and capitalize on level-2 production incidents; provide technical assistance to junior application and system administrators, operations management pilots and user support staff;
• Formalize incident reports and action plans and ensure their implementation; set up, maintain and keep up to date usage and access rights;
• Optimize performance measurement instruments and produce their reference baselines;
• Upgrade software components and products and apply patches; take part in change projects.

Qualification: Engineering or equivalent degree; 6-8 years in Infrastructure Management (minimum 2 years of relevant experience in the role). Must have experience in Technology Solution Design.

Candidates should be flexible and willing to work across this delivery landscape, which includes but is not limited to Agile application development, support and deployment.

 

 

 

 

Ref:

043925

Posted on:

May 20, 2019

Experience level:

Manager

Education level:

Bachelor's Degree (±16 years)

Contract type:

Regular
