Edgematics
Edgematics - Technical Lead - Data Engineering
Job Location
Pune, India
Job Description
Job Title: Technical Lead - Data Engineering
Company: Edgematics Technologies
Location: Pune, India
Job Type: Full-time, Onsite
Experience: 12 - 13 Years
Department: Data Engineering

About Edgematics Technologies:
Edgematics Technologies is a leading global consulting and solutions integrator specializing in data management, business intelligence, and analytics. We are dedicated to helping organizations unlock the power of their data, providing end-to-end data management solutions that enable data-driven decision-making and support the achievement of critical business objectives. Join us in delivering innovative data solutions that transform businesses.

About This Role:
We are seeking a highly experienced and talented Data Engineer stepping into a Technical Lead role. With 12 to 13 years of experience, you will bring hands-on expertise in ETL tools, a deep understanding of CI/CD practices, and a proven ability to lead a technical team of 5 or more members. This is a client-facing role in which you will technically lead the team, contribute hands-on to building robust ETL jobs, Data Quality jobs, and Big Data jobs, perform performance optimization, create reusable assets, and oversee production deployments. You will play a key role in designing and implementing Data Engineering and Data Quality frameworks. Experience with data warehouse appliances such as Snowflake, Redshift, or Synapse is preferred.

Key Responsibilities:
- Technically lead, mentor, and guide a team of 5 or more data engineers, fostering a collaborative and high-performing environment.
- Work hands-on with the team in designing, developing, and maintaining scalable, efficient, and reliable data solutions across various platforms.
- Design, develop, and maintain end-to-end data pipelines using Data Integration tools (any ETL tool such as Talend, Informatica, etc.) to ingest, process, and transform large volumes of data from diverse, heterogeneous sources (an illustrative sketch follows this list).
- Contribute significantly to the design and development of cloud-based data pipelines on platforms such as Azure Data Factory or AWS Glue/Lambda.
- Lead the end-to-end implementation of Data Integration projects using various ETL technologies.
- Design and implement robust database solutions for storing, processing, and querying large volumes of structured, semi-structured, and unstructured data.
- Lead and execute migrations of ETL jobs from older versions to newer, more efficient platforms or versions.
- Write and optimize advanced SQL scripts across various SQL database environments at a medium to expert level.
- Provide technical guidance and support to the team, troubleshooting technical challenges and ensuring best practices are followed.
- Design, integrate, and optimize data flows between databases, data warehouses, data lakes, and Big Data platforms.
- Collaborate closely with cross-functional teams, including business analysts and stakeholders, to gather data requirements and translate them into scalable, efficient data solutions.
- Identify and implement optimization techniques to improve the performance, scalability, and cost-effectiveness of ETL processes, data loads, and data pipelines.
- Interact with clients daily, providing technical progress updates, addressing technical questions, and building strong relationships.
- Define and implement best practices for data integration, ETL development, and data pipeline orchestration.
- Lead the implementation of complex ETL data pipelines or similar frameworks designed to efficiently process and analyze massive datasets.
- Ensure the highest standards of data quality, reliability, and security across all stages of the data pipeline.
- Troubleshoot and debug data-related issues in production systems, providing timely and effective resolutions.
- Stay current with emerging technologies and industry trends in data engineering, Big Data, ETL tools, and CI/CD practices, incorporating them into our data architecture and processes.
- Optimize data processing workflows and underlying infrastructure for performance, scalability, and cost-effectiveness.
- Champion a culture of technical excellence, continuous learning, and improvement within the team.
- Lead the implementation and automation of CI/CD pipelines specifically for data engineering workflows, encompassing automated testing, deployment, and monitoring.
- Oversee the migration of data engineering assets from lower environments to production, performing rigorous testing and validation.
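For candidates gauging fit, a minimal sketch of the kind of ETL-plus-data-quality job described above, written in PySpark. This is purely illustrative, not part of the requirements; every path, table name, and threshold in it is hypothetical.

    # Illustrative only: a minimal PySpark ETL job with a basic data quality
    # gate. All paths, table names, and thresholds below are hypothetical,
    # and writing Delta format assumes the delta-spark package is configured.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_etl_example").getOrCreate()

    # Extract: ingest raw JSON from a (hypothetical) landing zone.
    raw = spark.read.json("s3://example-landing/orders/")

    # Transform: normalize types and stamp a load date.
    orders = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
           .withColumn("load_date", F.current_date())
    )

    # Data quality gate: fail the run if too many rows miss the key field.
    total = orders.count()
    missing = orders.filter(F.col("order_id").isNull()).count()
    if total == 0 or missing / total > 0.01:  # 1% threshold is illustrative
        raise ValueError(f"DQ check failed: {missing}/{total} rows lack order_id")

    # Load: append to a (hypothetical) curated table, partitioned by load date.
    (orders.write.format("delta").mode("append")
           .partitionBy("load_date").saveAsTable("curated.orders"))

    spark.stop()

A Talend or Informatica job would typically fill the same extract-transform-quality-gate-load shape that the responsibilities above describe.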
Must-Have Skills:
- Must hold certifications in at least one ETL tool, one database technology, and one cloud platform (Snowflake certification is highly preferred).
- Must have successfully implemented at least 3 end-to-end Data Engineering projects from conception to production.
- Must have extensive hands-on experience in performance management, optimization, and tuning for data loads, data processing, and data transformation in Big Data environments.
- Must be flexible and proficient in writing code in languages such as Java, Scala, or Python, as required for data processing and scripting.
- Must have successfully implemented CI/CD pipelines for data engineering workflows using tools like Jenkins, GitLab CI, Azure DevOps, or AWS CodePipeline (see the illustrative test sketch after this list).
- Must have technically managed and guided a team of at least 5 members.
- Must demonstrate strong technical ownership of Data Engineering deliverables, taking responsibility for the technical success of projects.
- Strong communication skills, with proven experience in client-facing roles and daily client interaction.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Minimum of 9 years of relevant experience in software engineering or a related role, with a significant focus on ETL tools, databases, and data integration.
- Proficiency with ETL tools such as Talend, Informatica, or similar platforms for building and orchestrating data pipelines.
- Hands-on experience with relational databases such as MySQL, PostgreSQL, or Oracle, and familiarity with NoSQL databases such as MongoDB, Cassandra, or Redis.
- Solid understanding of database design principles, data modeling techniques, and advanced SQL query optimization.
- Extensive experience with data warehousing, Data Lake, and Delta Lake concepts and architectures.
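Purely as illustration of the CI/CD requirement above: the automated-testing stage of a data engineering pipeline is commonly a set of unit tests over transformations, executed by Jenkins, GitLab CI, or a similar tool on each commit. A minimal PySpark/pytest sketch, with hypothetical function and column names:

    # Illustrative only: a pytest unit test of an ETL transformation, the kind
    # of check a Jenkins or GitLab CI stage would run on every commit.
    # The transformation and all names here are hypothetical.
    import pytest
    from pyspark.sql import SparkSession

    @pytest.fixture(scope="session")
    def spark():
        # Local Spark session so the test runs on any CI agent without a cluster.
        return SparkSession.builder.master("local[1]").appName("etl_tests").getOrCreate()

    def dedupe_orders(df):
        # Transformation under test: keep one row per order_id.
        return df.dropDuplicates(["order_id"])

    def test_dedupe_orders_removes_duplicates(spark):
        df = spark.createDataFrame(
            [(1, "a"), (1, "a"), (2, "b")],
            ["order_id", "item"],
        )
        assert dedupe_orders(df).count() == 2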
Preferred Qualifications:
- Experience with cloud data warehousing appliances such as Snowflake, Amazon Redshift, or Azure Synapse Analytics.
- Experience with streaming technologies such as Kafka or Spark Streaming.
- Knowledge of data governance and data lineage concepts.
- Familiarity with containerization technologies such as Docker and orchestration tools such as Kubernetes.

Why Join Edgematics Technologies?
- Opportunity to lead and work on cutting-edge data engineering projects for global clients.
- Be part of a company specializing in data management, business intelligence, and analytics.
- Work in a challenging and rewarding environment with a focus on innovation and technical excellence.

(ref:hirist.tech)
Location: Pune, IN
Posted Date: 5/1/2025
Contact Information
Contact: Human Resources, Edgematics