Data Engineer

A global leader in consulting, technology services and digital transformation, the Capgemini Group is at the forefront of innovation to address the entire breadth of clients’ opportunities in the evolving world of cloud, digital and platforms. Building on its strong 50-year heritage and deep industry-specific expertise, Capgemini enables organizations to realize their business ambitions through an array of services from strategy to operations. Capgemini is driven by the conviction that the business value of technology comes from and through people. It is a multicultural company of over 200,000 team members in more than 40 countries. The Group reported 2018 global revenues of EUR 13.2 billion. People matter, results count. Learn more about us at www.capgemini.com  

Let’s talk about the team:

Our Insights and Data team helps our clients make better business decisions by transforming an ocean of data into streams of insight. Our clients are among Australia’s top performing companies, and they choose to partner with Capgemini for a very good reason: our exceptional people. Due to continued growth within Capgemini’s Insights & Data practice, we intend to recruit a number of Data Engineers with relevant consulting and communication skills. If you are already working in a consultancy role, or have excellent client-facing skills gained within large organisations, we would like to discuss our consultant opportunities with you.

Let’s talk about the role and responsibilities: 

The Data Engineer will expand and optimise our clients’ data and data pipeline architecture, as well as their data flow and collection for cross-functional teams. Your responsibilities include:

  • Build robust, efficient and reliable data pipelines that ingest and process data from diverse sources into data platforms (Hadoop, AWS or GCP).
  • Design and develop real-time streaming and batch processing pipeline solutions (see the sketch after this list).
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Work with stakeholders including the Product Owner and data analyst teams to assist with data-related technical issues and support their data infrastructure needs.
  • Collaborate with Architects to define the architecture and technology selection. 
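As a flavour of this pipeline work, below is a minimal sketch of a Spark Structured Streaming job that ingests events from a Kafka topic and lands them in a data lake as Parquet. The broker address, topic name and paths are hypothetical placeholders, not details of any client engagement.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.streaming.Trigger

    object StreamingIngestSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("kafka-to-datalake-sketch")
          .getOrCreate()

        // Subscribe to a Kafka topic (the broker address and topic name are hypothetical).
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load()

        // Kafka delivers the payload as binary; cast it to a string for downstream parsing.
        val parsed = events.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

        // Land micro-batches in the data lake as Parquet, with checkpointing for fault-tolerant recovery.
        val query = parsed.writeStream
          .format("parquet")
          .option("path", "/datalake/raw/events")                       // hypothetical landing path
          .option("checkpointLocation", "/datalake/checkpoints/events") // hypothetical checkpoint path
          .trigger(Trigger.ProcessingTime("1 minute"))
          .start()

        query.awaitTermination()
      }
    }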

Let’s talk about your qualifications and experience:  

To be considered for this role you must have:

  • 2+ years’ proven experience as a Big Data Engineer, preferably building data lake solutions that ingest and process data from various source systems
  • Experience with multiple Big Data technologies and concepts such as HDFS, NiFi, Kafka, Hive, Spark, Spark Streaming, HBase, EMR and GCP
  • Development experience in one or more of Java, Scala, Python and Bash.
  • In-depth understanding of Data Management practices and Database technologies
  • Ability to work in a team in a diverse, fast-paced Agile environment
  • Experience applying DevOps, Continuous Integration and Continuous Delivery principles to build automated pipelines for deployment and production assurance on the data platform.
  • Knowledge of building self-contained applications using Docker and OpenShift 
  • Willingness to share knowledge with immediate peers and to build communities and connections that promote better technical practices across the organisation
  • Experience implementing test cases and test automation.
  • Experience building frameworks for an enterprise data lake is highly desirable

What happens next and what can we offer you?

Interested? Passionate people are Capgemini’s Ace of Spades.

Ref: CAP/1407054HA
Posted on: July 3, 2019
Experience level: Experienced (non-manager)
Education level: Bachelor's degree or equivalent
Contract type: Permanent
Location: Melbourne
Department: DS - I&D - Big Data
