I&D – Big Data Engineer

A Big Data engineer develops, maintains, tests and evaluates big data solutions within organisations. Most of the time he or she is also involved in the design of big data solutions, because of the experience he or she has with Hadoop-based technologies such as MapReduce and Hive, and with NoSQL databases such as MongoDB or Cassandra. A Big Data engineer builds large-scale data processing systems, is an expert in data warehousing solutions and should be able to work with the latest (NoSQL) database technologies.
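For readers unfamiliar with the MapReduce model mentioned above, the canonical word-count example can be sketched in plain Python. This is a toy illustration of the map/shuffle/reduce phases that frameworks such as Hadoop distribute across a cluster, not Hadoop itself; all function names here are invented for the sketch:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def shuffle_phase(pairs):
    """Shuffle: group all emitted values by key, as the framework
    would do between the map and reduce phases."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["big data big systems", "data systems at scale"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
# e.g. counts["big"] == 2, counts["scale"] == 1
```

In a real Hadoop job the shuffle is handled by the framework and the map and reduce functions run in parallel over partitions of the input.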

A Big Data engineer should have sufficient experience in software engineering before moving into the field of big data. Experience with UNIX, object-oriented design, coding and testing patterns, as well as experience engineering (commercial or open-source) software platforms and large-scale data infrastructures, is expected.

A Big Data engineer should also be able to architect highly scalable distributed systems using different open-source tools. He or she should understand how algorithms work and have experience building high-performance algorithms.

The Capgemini Big Data engineer should embrace the challenge of dealing with petabytes or even exabytes of data on a daily basis. He or she understands how to apply technologies to solve big data problems and to develop innovative big data solutions. To be able to do this, the Big Data engineer should have extensive knowledge of different programming or scripting languages such as Java, C++, PHP, Ruby, Python and/or R, along with experience working on Linux. Expert knowledge of different (NoSQL or RDBMS) databases such as MongoDB or Redis should also be present. Building data processing systems with Hadoop and Hive using Java or Python should be common knowledge to the Big Data engineer.

A Big Data engineer generally works on implementing complex big data projects with a focus on collecting, parsing, managing, analysing and visualising large sets of data to turn information into insights, using multiple platforms. He or she should be able to decide on the required hardware and software design and act according to those decisions. The Big Data engineer should be able to develop prototypes and proofs of concept for the selected solutions.

Additional qualifications should include:

  • To enjoy being challenged and to solve complex problems on a daily basis;
  • To have excellent oral and written communication skills;
  • To be proficient in designing efficient and robust ETL workflows;
  • To be able to work with cloud computing environments;
  • To have a Bachelor’s or Master’s degree in computer science or software engineering;
  • To be able to work in teams and collaborate with others to clarify requirements;
  • To be able to assist in documenting requirements as well as resolve conflicts or ambiguities;
  • To be able to tune Hadoop solutions to improve performance and end-user experience.
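As a hedged illustration of the ETL workflows mentioned in the qualifications above, the extract/transform/load pattern can be sketched in Python using only the standard library. The table name `sales`, the field names, and the sample data are all invented for this example; production ETL would run against real sources and a proper warehouse:

```python
import csv
import io
import sqlite3

def extract(csv_text):
    """Extract: parse raw CSV input into a list of dictionaries."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Transform: normalise names and drop rows with missing amounts."""
    out = []
    for row in rows:
        if row["amount"]:  # skip incomplete records
            out.append((row["name"].strip().title(), float(row["amount"])))
    return out

def load(records, conn):
    """Load: write the transformed records into the target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (name TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", records)
    conn.commit()

raw = "name,amount\n alice ,10.5\nbob,\ncarol,7\n"
conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
```

A robust workflow adds what this sketch omits: logging, validation of rejected rows, idempotent re-runs and scheduling.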

Desired Skills & Experience

  • Resident in Belgium
  • Language must-haves: fluent spoken and written English. Mother tongue Dutch or French, and basic knowledge of the second Belgian language (French or Dutch), is also requested.
  • Has at least some experience with one of the big data technologies. Knowing the Pivotal Big Data Suite is a plus.
  • Has experience and good knowledge of data warehouse design principles and ETL.
  • Has the ambition to develop into a Big Data Architect and/or project manager.
  • Has an eye for detail and for software and data quality, and understands the added value of a methodological approach.
  • Has the willingness to constantly learn in a fast-evolving world of innovation driven by ICT.
  • Excellent communication skills.

Posted on:

July 31, 2020


Digital Engineering and Manufacturing Services