Talent500

Senior Data Engineer - Python/PySpark

Job Location

Bangalore, India

Job Description

The candidate will have responsibilities across the following functions:

Data Pipeline Development and Maintenance:
- Design, build, and optimize scalable ETL/ELT pipelines to ingest data from diverse sources such as APIs, cloud platforms, and databases.
- Ensure pipelines are robust, efficient, and capable of handling large volumes of data.

Data Integration and Harmonization:
- Aggregate and normalize marketing performance data from multiple platforms (e.g., Adobe Analytics, Salesforce Marketing Cloud, Delta's EDW, ad platforms).
- Implement data transformation and enrichment processes to support analytics and reporting needs.

Data Quality and Monitoring:
- Develop and implement data validation and monitoring frameworks to ensure data accuracy and consistency.
- Troubleshoot and resolve issues related to data quality, latency, or performance.

Collaboration with Stakeholders:
- Partner with marketing teams, analysts, and data scientists to understand data requirements and translate them into technical solutions.
- Provide technical support and guidance on data-related issues or projects.

Tooling and Automation:
- Leverage cloud-based solutions and frameworks (e.g., AWS) to streamline processes and enhance automation.
- Maintain and optimize existing workflows while continuously identifying opportunities for improvement.

Documentation and Best Practices:
- Document pipeline architecture, data workflows, and processes for both technical and non-technical audiences.
- Follow industry best practices for version control, security, and data governance.

Learning and Innovation:
- Stay current with industry trends, tools, and technologies in data engineering and marketing analytics.
- Recommend and implement innovative solutions to improve the scalability and efficiency of data systems.

Requirements:
- Bachelor of Science degree in Computer Science or equivalent.
- Extensive experience with databases and data platforms (Teradata and AWS preferred).
- At least 3 years of post-degree professional experience as a data engineer developing and maintaining data pipelines.
- Hands-on experience designing, implementing, and managing large-scale data and ETL solutions using AWS compute, storage, and database services (S3, Lambda, Redshift, Glue, Athena).
- Proficiency in Python, SQL, PySpark, Glue, Lambda, S3, and other AWS tool sets.
- Strong knowledge of relational and non-relational databases.
- Good understanding of data warehouses, ETL, AWS architecture, Airflow, and Redshift.
- Ability to create clean, well-designed code and systems.
- Proven ability to work with large and complex datasets.
- Strong analytical and programming skills with the ability to solve data-related challenges efficiently.
- Strong attention to detail and a commitment to data accuracy.
- Proven ability to learn new data models quickly and apply them effectively in a fast-paced environment.
- Excellent communication skills with the ability to present complex data findings to both technical and non-technical audiences.
- Airline industry experience (desired).
- Experience working with marketing and media data (desired).
- Experience working with SAS to develop data pipelines.
- AWS certifications: Solutions Architect (SAA/SAP) or Data Analytics Specialty (DAS).
- Experience migrating data pipelines and systems to modern cloud-based solutions.
- Familiarity with marketing data platforms.

(ref:hirist.tech)
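To give candidates a concrete sense of the data-quality responsibility described above, here is a minimal sketch of a row-level validation check of the kind such a framework might include. It uses plain Python rather than PySpark so it stays self-contained, and all field names (campaign_id, date, spend) are hypothetical, not taken from the posting.

```python
# Minimal sketch of a row-level validation check for marketing-performance
# records. Field names are hypothetical examples only.

def validate_row(row, required_fields=("campaign_id", "date", "spend")):
    """Return a list of validation errors for one record (empty if valid)."""
    errors = []
    # Required fields must be present and non-empty.
    for field in required_fields:
        if row.get(field) in (None, ""):
            errors.append(f"missing {field}")
    # Spend, when present and numeric, must not be negative.
    spend = row.get("spend")
    if isinstance(spend, (int, float)) and spend < 0:
        errors.append("negative spend")
    return errors


# Example: a record with an empty campaign_id and negative spend
# fails both checks.
print(validate_row({"campaign_id": "", "date": "2025-05-01", "spend": -5}))
```

In a real pipeline, a check like this would typically run as a PySpark transformation or an AWS Glue job step, with failing records routed to a quarantine location for review.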

Location: Bangalore, IN

Posted Date: 5/1/2025

Contact Information

Contact Human Resources
Talent500

UID: 5142818150
