Data Lake Developer

About Capgemini

With more than 170,000 people in over 40 countries, Capgemini is one of the world's foremost providers of consulting, technology and outsourcing services. The Group reported 2014 global revenues of EUR 10.5 billion. Together with its clients, Capgemini creates and delivers business and technology solutions that fit their needs and drive the results they want. A deeply multicultural organization, Capgemini has developed its own way of working, the Collaborative Business Experience™, and draws on Rightshore®, its worldwide delivery model.

Learn more about us at http://www.capgemini.com/.

Rightshore® is a trademark belonging to Capgemini.

Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender identity/expression, age, religion, disability, sexual orientation, genetics, veteran status, marital status or any other characteristic protected by law.

Title: Data Lake Developer

Job Overview: Capgemini is hiring a senior, North America-based Data Lake Developer for a digital transformation project. The successful candidate must have end-to-end experience on a Data Lake project, preferably using the Pivotal Big Data Suite.

Duties & Responsibilities:

Ability to ingest data from multiple sources, across all timelines, with varying Quality of Service (QoS)

Ability to take data from the storage tier and convert it to structured data for easier analysis by downstream applications

Ability to run analytical algorithms and user queries with varying QoS (real-time, interactive, batch) to generate insights for downstream applications

Ability to integrate insights with business decisioning systems to build data-driven applications

Ability to manage the data lifecycle, define access policies, and administer master data management and reference data management services

Ability to monitor, configure, and manage the whole data lake from a single operations environment (Pivotal Command Center)

Successful experience developing robust, metadata-driven ingestion solutions with real-time, near-real-time, and batch feeds.

Successful experience developing SQL-on-Hadoop solutions using HAWQ or similar, for use with in-memory databases, the Greenplum Data Warehouse, etc.

Skills, Experience & General Information Required:

Basic Technical Requirements

Agile project experience; Pivotal Big Data Suite or other Big Data development experience, including Pivotal HD, GemFire XD, HAWQ, Greenplum, Spring XD, Redis, and J2EE; metadata management; experience on financial services projects; experience working on large programs; data and insights experience

Desirable Skills

Business Analysis, Project/team leadership, Data Architecture, Data Analysis, Data Management, Data Governance, Data Modeling

Location: Charlotte, NC