A global leader in consulting, technology services and digital transformation, Capgemini is at the forefront of innovation to address the entire breadth of clients’ opportunities in the evolving world of cloud, digital and platforms. Building on its strong 50-year heritage and deep industry-specific expertise, Capgemini enables organizations to realize their business ambitions through an array of services from strategy to operations. Capgemini is driven by the conviction that the business value of technology comes from and through people. It is a multicultural company with 200,000 team members in over 40 countries. In 2017 the Group achieved €12.8 billion in revenue. Learn more about us at www.capgemini.com
Let’s talk about our Team
Our Insights and Data team helps our clients make better business decisions by transforming an ocean of data into streams of insight. Our clients are among Australia’s top performing companies, and they choose to partner with Capgemini for a very good reason – our exceptional people.
Due to continued growth within Capgemini’s Insights & Data practice, we intend to recruit a number of Big Data Engineers with relevant consulting and communication skills. If you are already working in a consultancy role, or have excellent client-facing skills gained within large organisations, we would like to discuss our consultant opportunities with you.
Let’s talk about the role
The Big Data Engineer will expand and optimise our clients’ data and data pipeline architecture, as well as optimise their data flow and collection for cross-functional teams. Your responsibilities include:
- Build robust, efficient and reliable data pipelines that ingest and process data from diverse sources into the Hadoop data platform.
- Design and develop real-time streaming and batch processing pipeline solutions.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Design, develop and implement data pipelines for data migration and collection, data analytics and other data movement solutions.
- Work with stakeholders including the Product Owner and data analyst teams to assist with data-related technical issues and support their data infrastructure needs.
- Collaborate with Architects to define the architecture and technology selection.
Let’s talk about your skills and experience
You will have the ability to optimise data systems and build them from the ground up. You will support software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects.
Essential skills and experience
- 2+ years’ proven working experience as a Big Data Engineer, preferably building data lake solutions by ingesting and processing data from various source systems on the AWS cloud
- Experience with multiple Big Data technologies and concepts such as HDFS, NiFi, Kafka, Hive, Spark, Spark Streaming, HBase, EMR and Redshift on AWS
- Experience in one or more of Java, Scala, Python and Bash
- Ability to work in a team in a diverse, multi-stakeholder environment
- Experience in working in a fast-paced Agile environment
- Bachelor’s degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field
- Experience implementing test cases and test automation
- Experience applying DevOps, Continuous Integration and Continuous Delivery principles to build automated pipelines for deployment and production assurance on the data platform
- Willingness to share knowledge with immediate peers and to build communities and connections that promote better technical practices across the organisation
Preferred skills and experience