Capgemini is building a big data hub for our client using the latest Hadoop framework and Apache Spark. Data from various sources will be ingested into the data hub, then cleaned, transformed, and used for analysis.
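As a rough sketch of the clean-and-transform step described above (field names and record shapes are hypothetical; plain Scala collections stand in for a Spark `Dataset` so the snippet is self-contained):

```scala
// Hypothetical raw record as ingested into the data hub.
case class RawEvent(id: String, amount: String, source: String)

// Cleaned, typed record ready for analysis.
case class Event(id: String, amount: Double, source: String)

object Pipeline {
  // Clean: drop rows with a blank id or an unparseable amount.
  // Transform: parse the amount string into a Double.
  // In Spark this would be a flatMap over a Dataset[RawEvent];
  // the logic is identical on a local Seq.
  def cleanAndTransform(raw: Seq[RawEvent]): Seq[Event] =
    raw.flatMap { r =>
      if (r.id.trim.isEmpty) None
      else r.amount.toDoubleOption.map(a => Event(r.id.trim, a, r.source))
    }
}
```

The same function body would apply unchanged to a Spark `Dataset[RawEvent]`, which is one reason typed case-class pipelines are a common pattern in Scala/Spark work.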
- Develop Scala and Spark code on a day-to-day basis.
- Write unit tests and follow Test-Driven Development.
- Manage code using Git/Bitbucket and participate in code reviews.
- Package and prepare code for deployment.
- Prepare the functional design and documentation for their code.
- Participate in requirements gathering and in team meetings on coding best practices.
- Experience in Scala programming: 5+ years
- Experience developing with Apache Spark: 3+ years
- UNIX/network/shell scripting: 3+ years
- Familiarity with Git (Bitbucket) and SDLC tooling (Bamboo, Artifactory, SonarQube) is preferable
- Hadoop knowledge is preferable