Huquo

Data Engineer - ETL/Hadoop/Spark


Job Location

Mumbai, India

Job Description

Minimum 3 years of hands-on experience with data modeling, data warehousing, and building ETL pipelines, with the ability to excel in the design, creation, and management of very large datasets.

Data Architecture and Design:
- Collaborate with cross-functional teams to design and implement scalable and efficient data architectures, data models, and data integration processes.

Data Pipeline Development:
- Develop and maintain ETL (Extract, Transform, Load) processes and data pipelines to move and transform data from various sources into data warehouses and data lakes (an illustrative sketch follows this description).

Big Data Technologies:
- Utilize big data technologies and distributed computing frameworks (e.g., Hadoop, Spark, Kafka) to handle large volumes of data and ensure high-speed data processing.

Performance Optimization:
- Optimize data pipelines and queries for efficiency, scalability, and performance to meet the demands of real-time and batch data processing.

Data Monitoring and Troubleshooting:
- Implement monitoring systems to track data pipeline performance, and identify and resolve issues proactively.

Required skill sets: Kafka, Python, PostgreSQL, Elasticsearch, Airflow, Agile methodologies, shell scripting, FastAPI, NoSQL, Hadoop, Spark, Hive.

Note: This position is based in Mumbai, and the employee needs to work in a hybrid working model. (ref:hirist.tech)
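For context on the pipeline work described above, the following is a minimal PySpark ETL sketch. It is purely illustrative and not part of the role requirements; the source path, column names, and output location are assumptions, not details from the posting.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: daily revenue aggregation; paths and columns are placeholders.
spark = SparkSession.builder.appName("orders_daily_revenue_etl").getOrCreate()

# Extract: read raw order records (CSV with a header row) from a source location.
raw_orders = spark.read.option("header", True).csv("s3a://example-raw/orders/")

# Transform: normalize types, drop invalid rows, and aggregate revenue per customer per day.
daily_revenue = (
    raw_orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)
    .groupBy(F.to_date("order_ts").alias("order_date"), "customer_id")
    .agg(F.sum("amount").alias("daily_revenue"))
)

# Load: write partitioned Parquet into the data lake for downstream consumers.
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-lake/marts/daily_revenue/"
)

spark.stop()

In practice, a batch job of this kind would typically be scheduled and monitored from an orchestrator such as Airflow, which is also listed in the required skills.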


Contact Information

Contact Human Resources
Huquo

Posted

April 24, 2024
UID: 4601284762
