Charlotte, North Carolina, United States
Big Data Engineer
Terms: All Capgemini contracts are ongoing with a right to hire after 3 months. Conversion at 90 days is rare and is set in advance by Capgemini, but the right to hire remains in place.
Target Pay Rate: $55/hr C2C
Location: Charlotte, NC
Work schedule details: Onsite
Details:
Responsibilities:
• Design, develop, and deploy data solutions on Big Data platforms, leveraging components such as Spark, HDFS, Kafka, and Hive.
• Build and maintain robust, scalable, and efficient Python applications, APIs, and scripts following a microservices architecture.
• Collaborate with cross-functional teams, including product owners, architects, scrum masters and other stakeholders, to define project requirements, timelines, and deliverables.
• Ensure development adheres to best practices in coding standards, code reviews, and continuous integration/continuous deployment (CI/CD).
• Implement and maintain security best practices across all Big Data solutions to ensure data integrity and compliance with industry standards.
• Stay current with the latest trends and technologies in Big Data and Python development, and drive the adoption of new tools and techniques that enhance productivity and efficiency.
Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related fields.
- Proficiency in Python programming and Big Data platforms (Spark, Kafka, HDFS, Hive).
- Experience with microservices architecture and RESTful APIs.
- Familiarity with CI/CD tools (Jenkins, Git, Docker, Kubernetes).
- Strong understanding of security practices and data compliance standards.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration abilities.
- Ability to thrive in a fast-paced, dynamic environment.
Nice to Have:
- Experience with cloud platforms (AWS, Azure, or GCP).
- Knowledge of real-time data processing and streaming frameworks.
- Familiarity with container orchestration tools like Kubernetes.