Ongoing contract (the client retains the right to hire after 3 months, though rarely exercises it)
Location: Alpharetta, GA
C2C Pay Rate: $65.00
Responsibilities:
Design, build, and optimize scalable data pipelines for batch and real-time processing (an illustrative sketch follows this list).
Work extensively with Hadoop ecosystem components including HDFS, MapReduce, Hive, and Spark.
Implement and manage cloud-based data warehousing solutions, with a focus on Snowflake, including architecture, data manipulation, and performance tuning.
Apply best practices for data ingestion, transformation, data quality validation, and secure data handling.
Develop robust data models that support efficient storage, access, and analytics in a warehousing environment.
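To give candidates a concrete sense of the responsibilities above, here is a minimal, illustrative batch-pipeline sketch. It assumes a Spark session with Hive support and the Snowflake Spark connector available on the cluster; every table, database, and connection value is a placeholder, not a detail of this role.

```python
# Illustrative only: placeholder names throughout; assumes Hive support and
# the Snowflake Spark connector (net.snowflake.spark.snowflake) are installed.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("daily-orders-batch")      # hypothetical job name
         .enableHiveSupport()
         .getOrCreate())

# Ingest: read a raw Hive table from the Hadoop cluster.
raw = spark.table("raw_db.orders")           # placeholder Hive table

# Transform: basic cleansing and daily aggregation.
daily = (raw.filter(F.col("order_ts").isNotNull())
            .withColumn("order_date", F.to_date("order_ts"))
            .groupBy("order_date", "customer_id")
            .agg(F.sum("amount").alias("total_amount"),
                 F.count("*").alias("order_count")))

# Simple data quality gate before loading.
if daily.filter(F.col("total_amount") < 0).count() > 0:
    raise ValueError("Negative totals found; aborting load")

# Load: write the curated result to a Snowflake table.
sf_options = {                                # placeholder connection details
    "sfURL": "xy12345.snowflakecomputing.com",
    "sfUser": "ETL_USER",
    "sfPassword": "***",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "ETL_WH",
}
(daily.write.format("snowflake")
      .options(**sf_options)
      .option("dbtable", "DAILY_ORDERS")
      .mode("overwrite")
      .save())
```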
Requirements:
Strong proficiency in Python, Java, and SQL.
In-depth knowledge of Big Data frameworks: HDFS, MapReduce, Hive, and Spark.
Expertise in Snowflake: architecture design, data manipulation, and query optimization.
Hands-on experience in core data engineering concepts: ingestion, transformation, quality checks, and data security (see the sketch after this list).
Proven ability to design efficient data models for data warehousing environments.
Background as a Hadoop developer with Java expertise.
Experience working with large-scale data systems in production environments.
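As a hedged illustration of the Snowflake and data quality skills listed above, the sketch below runs a simple post-load validation query through the standard Python connector (snowflake-connector-python). The account, credentials, warehouse, and table names are placeholders, not details from this posting.

```python
# Illustrative only: placeholder account/credentials/table; assumes the
# snowflake-connector-python package is installed.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SF_ACCOUNT"],     # e.g. "xy12345" (placeholder)
    user=os.environ["SF_USER"],
    password=os.environ["SF_PASSWORD"],
    warehouse="ETL_WH",                   # placeholder warehouse
    database="ANALYTICS",                 # placeholder database
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Basic data quality check: row count and null keys in today's load.
    cur.execute("""
        SELECT COUNT(*) AS row_count,
               COUNT_IF(customer_id IS NULL) AS null_keys
        FROM DAILY_ORDERS
        WHERE order_date = CURRENT_DATE()
    """)
    row_count, null_keys = cur.fetchone()
    if row_count == 0 or null_keys > 0:
        raise ValueError(f"Load check failed: rows={row_count}, null_keys={null_keys}")
finally:
    conn.close()
```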