PySpark | 6 to 9 years | Chennai & Bengaluru

Job Description
  • Analyze large, complex data sets to independently resolve data quality and transformation issues
  • Troubleshoot failures and provide permanent fixes
  • Understand Business Requirements Documents
  • Write Technical Design Documents
  • Create Entity Relationship diagrams
  • Write unit and integration tests
  • Identify gaps proactively and propose solutions to improve the existing system
  • Work closely with the KTC Technical Lead, KTC Supervisor, and GP Functional Lead
  • Understand the business processes corresponding to the technical solutions
  • Adopt the technical stack used by the team
Primary Skills
  • Manage AWS infrastructure, services, and security; optimize, scale, and tune performance for the AWS Redshift data warehouse, AWS S3 data lake, and AWS EMR
  • Optimize Spark and AWS EMR configurations
  • Develop, maintain, and optimize complex Spark scripts
  • Develop, maintain, and optimize complex SQL queries and Redshift stored procedures
Secondary Skills
  • Develop, maintain, and optimize complex Talend Data Integration and Big Data jobs
  • Complete POCs on new AWS services
  • Work independently on small projects and enhancements

Posted on: August 21, 2020

Business units: I and D Global Practice