Artefact

Senior AWS Data Engineer - SQL/Python

Job Location

Pune, India

Job Description

Senior AWS Data Engineer - DBT (5-9 Years Experience)

Job Title: Senior AWS Data Engineer - DBT
Experience: 5-9 Years

About the Job:

We are seeking a seasoned, results-oriented Senior AWS Data Engineer with 5-9 years of experience to lead and contribute to our critical data initiatives. In this leadership role, you will be responsible for designing, building, optimizing, and maintaining complex data pipelines and infrastructure on AWS. You will play a key role in shaping our data architecture, ensuring data quality, and driving the adoption of modern data engineering best practices. Your expertise in AWS data services and DBT will be crucial in empowering our data analytics capabilities and supporting our strategic Customer Data Platform (CDP) roadmap.

Key Responsibilities:

- Lead the design, development, and implementation of scalable, robust data pipelines and data models on the AWS platform, using services such as S3, Glue, Lambda, and Redshift.
- Take ownership of end-to-end data engineering projects, from understanding business requirements through deployment and ongoing maintenance.
- Architect and implement data transformations using DBT (Data Build Tool) to ensure data consistency, quality, and accessibility for data analysts and scientists.
- Write and optimize high-performance SQL queries for complex data manipulation, analysis, and reporting needs.
- Develop sophisticated Python-based solutions for data processing, automation, and integration with internal and external systems.
- Design and manage CI/CD pipelines for data engineering workflows using tools like Jenkins and Bitbucket, ensuring efficient and reliable deployments.
- Drive improvements in data quality, reliability, and performance across our data landscape.
- Provide technical leadership and mentorship to junior data engineers, fostering a collaborative, knowledge-sharing environment.
- Collaborate closely with cross-functional teams, including product, engineering, analytics, and marketing, to understand their data needs and translate them into effective technical solutions.
- Contribute significantly to the architectural design and evolution of our data lake and data warehouse on AWS.
- Lead data integration efforts for CDP-related initiatives, ensuring seamless data flow from marketing and analytics platforms into our central data repository.
- Evaluate and adopt new AWS data services and technologies to enhance our data infrastructure and capabilities.
- Proactively identify and address potential bottlenecks and performance issues in our data pipelines.
- Contribute to the definition and enforcement of data governance policies and standards.

Required Skills and Experience:

- AWS Data Engineering: Extensive hands-on experience (5-9 years) designing, building, and managing complex data solutions on AWS, with deep expertise in services such as S3, Glue (including advanced features like Glue workflows and custom connectors), Lambda, and Redshift (including performance tuning), as well as other relevant services (e.g., EMR, Kinesis).
- DBT (Data Build Tool): Proven expertise in leveraging DBT for advanced data transformations, data modeling, testing, and documentation in a collaborative environment.
- SQL: Mastery of writing and optimizing complex SQL queries for large-scale datasets across different database systems.
- Python: Advanced proficiency (5-9 years) in Python programming for data engineering tasks, including experience with relevant libraries (e.g., Pandas, PySpark, Boto3).
- Version Control & CI/CD: Deep understanding of Git and extensive experience designing and implementing CI/CD pipelines using tools like Jenkins and Bitbucket for data engineering workflows.
- Data Modeling: Significant experience designing and implementing data modeling techniques (e.g., dimensional modeling, data vault) for large, complex data environments.

Additional Skills (Strongly Preferred):

- Big Data Technologies: Hands-on experience with big data processing frameworks like Databricks (Spark) and their integration within the AWS ecosystem.
- Workflow Orchestration: Strong experience with workflow orchestration tools such as Apache Airflow or AWS Step Functions for managing intricate data dependencies and schedules.
- Data Governance: Experience implementing and enforcing data governance policies, data quality frameworks, and data security best practices.
- Infrastructure as Code: Experience with IaC tools like Terraform or CloudFormation for managing AWS resources.
- Familiarity with data streaming technologies (e.g., Kafka, Kinesis Data Streams).
- Experience with NoSQL databases (e.g., DynamoDB).
- Exposure to machine learning workflows and data preparation for ML models.

Qualifications:

- Demonstrated success leading and delivering complex data engineering projects in large organizations.
- Strong analytical and problem-solving skills, with the ability to think strategically and execute tactically.
- Excellent communication, presentation, and interpersonal skills, with the ability to convey technical concepts to both technical and non-technical audiences.
- Proven ability to take ownership, work independently, and drive initiatives forward.
- Experience mentoring and guiding junior team members.
- A strong understanding of data warehousing principles and best practices.
- A Bachelor's or Master's degree in Computer Science, Engineering, or a related field is preferred; equivalent demonstrable experience and a strong portfolio of successful projects are also highly valued.

(ref:hirist.tech)


Contact Information

Contact Human Resources
Artefact

Posted

May 17, 2025
UID: 5202593880
