AWS Big Data Engineer

About the Team

Our Insight and Data team helps our clients make better business decisions by transforming an ocean of data into streams of insight. Our clients are among Australia’s top-performing companies, and they choose to partner with Capgemini for a very good reason – our exceptional people.

About the role

The Big Data Engineer will expand and optimise our clients’ data and data pipeline architecture, as well as their data flow and collection for cross-functional teams. Your responsibilities include:

  • Build robust, efficient and reliable data pipelines that ingest and process data from diverse sources into an AWS-based data lake platform
  • Design and develop real-time streaming and batch processing pipeline solutions
  • Assemble large, complex data sets that meet functional and non-functional business requirements
  • Design, develop and implement data pipelines for data migration and collection, data analytics and other data movement solutions
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
  • Build DevOps pipelines
  • Work with data and analytics experts to strive for greater functionality in our data systems

About you

You will have the ability to optimise data systems and build them from the ground up. You will support software developers, database architects, data analysts and data scientists on data initiatives, and will ensure that the data delivery architecture remains optimal and consistent across ongoing projects.

Essential skills and experience

  • 2+ years of proven working experience as a Big Data Engineer, preferably building data lake solutions using the AWS Big Data stack
  • Experience with multiple Big Data technologies and concepts such as HDFS, Hive, MapReduce, Spark, Spark Streaming and NoSQL databases like HBase
  • Experience with specific AWS technologies (such as S3, Redshift, EMR, and Kinesis)
  • Experience in one or more of Java, Scala, Python and Bash
  • Ability to work in a team in a diverse, multi-stakeholder environment
  • Experience in working in a fast-paced Agile environment
  • BS in Computer Science, Statistics, Informatics, Information Systems or another quantitative field

Preferable skills and experience

  • Knowledge of and/or experience with Big Data integration and streaming technologies (e.g. Kafka, Flume)
  • Experience building data ingestion frameworks for an enterprise data lake
  • Experience with CI/CD pipelines using Jenkins
  • Knowledge of building self-contained applications using Docker, Kubernetes or similar technologies

What we can offer you

Capgemini is a world leader in technology-enabled change. We can offer our consultants:

  • Formal training with industry-recognised certifications
  • The ability to interact with peers in a sharing and inclusive community
  • Exciting and challenging projects
  • A well-structured and tailored career framework
  • A culture of collaboration and recognition
  • Excellent remuneration

About Capgemini

Capgemini is one of the world’s foremost providers of consulting, technology, outsourcing services and local professional services. Present in over 40 countries with more than 180,000 people, the Capgemini Group helps its clients transform in order to improve their performance and competitive positioning.

Ranked among Ethisphere’s 2018 Most Ethical Companies in the World. Our seven values are at the heart of everything we do.

Ref: CAP/1360613

Posted on: October 6, 2018

Experience level: Manager

Education level: Bachelor's degree or equivalent

Contract type: Permanent

Location: Sydney

Department: DS - I&D - Big Data
