Data Engineer II in Hyderabad at Advance Auto Parts

Date Posted: 2/13/2021


Job Description

The Data Engineer (Kafka) is responsible for managing the Kafka platform, including its technical components: connectors, topics, schemas, and the development of custom frameworks.

This position will utilize Confluent Kafka and work with both Kafka and non-Kafka development teams. This is a core Kafka position and will involve a fair amount of analysis, feasibility studies of connectors, PoCs, and reusable component development. The successful candidate will have an understanding of a variety of data sources and data formats, as well as enterprise-level applications and streaming services.


  • Be the subject matter expert for the Confluent Kafka platform and lead the Kafka implementation in critical initiatives
  • Develop, maintain, support, and enhance the Confluent Kafka platform and related services
  • Perform the analysis and testing needed to understand issues and platform capabilities, such as connectors and topic configurations, in Kafka
  • Implement and maintain Kafka ACLs and keys
  • Provide guidance to junior team members on best practices and implementation standards
  • Manage and guide the enterprise on the adoption and implementation of Kafka as a critical integration platform
  • Collaborate with data analysts across functions to ensure that Kafka meets their data needs
  • Work with Kafka support to resolve issues and complex use cases on Kafka
  • Provide technical guidance for the design and implementation of data pipelines in Kafka


  • Bachelor’s degree plus 6-9 years of experience, including a minimum of 3 years with Kafka and with managing data warehouse and/or business intelligence systems. An advanced degree or certifications in a related field is a plus.

Knowledge, Skills & Abilities:

  • Expertise in leading and developing Kafka initiatives. Prior experience with publishing to and consuming from Kafka topics, and with resiliency and monitoring of real-time data streams. Knowledge of KSQL and Kafka Streams is a significant plus.
  • Experience working with DevOps tools such as Jenkins, Ansible, Terraform, and GitHub.
  • Experience working with integration tools such as APIs, web services, JDBC/ODBC connectors, and other integration technologies.
  • Experience with programming languages used in ETL and/or ELT environments, such as SQL and Python, is an added advantage.
