CriticalRiver Technologies

CriticalRiver Technologies - Data Architect - Python Programming

Job Location

Hyderabad, India

Job Description

We are seeking a highly experienced and visionary Data Architect to lead the design and implementation of our next-generation data ecosystem. In this critical role, you will define and drive our data strategy, ensuring the scalability, performance, and reliability of our data infrastructure. You will leverage your deep expertise in cloud-based data warehousing (specifically Snowflake), ETL/ELT processes, data modeling, and data governance to build a robust, efficient data platform that empowers data-driven decision-making across the organization. You will collaborate closely with data engineers, analysts, and business stakeholders to understand their needs and translate them into a cohesive, future-proof data architecture.

Key Responsibilities:

- Design and implement scalable, high-performance data architectures leveraging Snowflake as the primary data warehouse.
- Define the overall data model, data flow, and integration strategies.
- Architect and oversee the development of robust, efficient ETL/ELT pipelines using tools such as dbt (data build tool), Fivetran, and Airflow for seamless ingestion, transformation, and processing of large, complex datasets from a variety of sources (see the orchestration sketch after this list).
- Apply strong data modeling principles, including dimensional modeling (e.g., star schema, snowflake schema), to design efficient, effective data structures within Snowflake using dbt.
- Continuously monitor and optimize data pipelines for performance, reliability, scalability, and cost-effectiveness; implement best practices for data transformation and loading.
- Establish and enforce data integrity, data quality, data governance policies, and security measures across the entire data ecosystem, and ensure compliance with relevant data regulations.
- Demonstrate a thorough understanding of data warehousing concepts, principles, and best practices.
- Apply strong SQL skills for data querying, manipulation, and analysis within Snowflake and other data sources.
- Leverage Python for scripting, automation, and building data pipelines where necessary.
- Collaborate effectively with data engineers, data analysts, business intelligence teams, and business stakeholders to understand data requirements, provide architectural guidance, and ensure alignment with business objectives.
- Communicate complex technical concepts clearly to both technical and non-technical audiences.
- Work extensively with cloud platforms (preferably AWS, GCP, or Azure) to design, deploy, and manage data infrastructure components, including Snowflake, data lakes, and other relevant services.
- Design and implement reverse ETL solutions using tools like Hightouch to activate processed data in operational systems and enable real-time operational analytics.
- Proactively monitor data pipelines and workflows, identify potential issues, and implement effective troubleshooting and resolution strategies to ensure smooth, uninterrupted data operations.
- Drive and evangelize best practices in data modeling, data pipeline development, performance tuning, and cost optimization across the data engineering team.
- Stay current with emerging technologies and trends in cloud-based data engineering, data warehousing, and related areas; evaluate and recommend new tools and technologies to enhance our data capabilities.
- An understanding of ERP systems, particularly NetSuite, and the structure and flow of their data is a significant added advantage.
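For illustration only, here is a minimal sketch of the kind of ELT orchestration described above, assuming Apache Airflow 2.x with dbt running transformations inside Snowflake. The DAG name, task names, and dbt project path are hypothetical placeholders, not details of this role's actual stack.

```python
# Minimal ELT orchestration sketch (assumes Apache Airflow 2.4+).
# All identifiers below (dag_id, task_ids, project path) are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def check_source_freshness():
    # Placeholder: in practice this might query Snowflake load metadata
    # or the Fivetran API to confirm upstream syncs have landed.
    print("Source freshness check passed (stub).")


with DAG(
    dag_id="elt_daily",            # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",             # "schedule_interval" before Airflow 2.4
    catchup=False,
) as dag:
    freshness = PythonOperator(
        task_id="check_source_freshness",
        python_callable=check_source_freshness,
    )
    # Run and test dbt models in Snowflake; the project dir is an assumption.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )

    freshness >> dbt_run >> dbt_test
```

Ingestion itself would typically be handled by Fivetran on its own schedule; the DAG then gates transformations on source freshness, keeping orchestration and loading loosely coupled.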
Qualifications:

- Bachelor's or Master's degree in Computer Science, Data Science, Information Systems, or a related field.
- 10 years of progressive experience in data architecture, data warehousing, and ETL/ELT development.
- Deep, extensive experience with Snowflake as a cloud data warehouse.
- Proven expertise in designing and implementing ETL/ELT pipelines using dbt (data build tool), Fivetran, and Airflow.
- Strong proficiency in data modeling techniques, particularly dimensional modeling.
- Excellent SQL skills, with experience optimizing complex queries.
- Strong programming skills in Python for data manipulation and automation (see the sketch after this list).
- Hands-on experience with cloud platforms (AWS, GCP, or Azure) and their data-related services.
- Experience implementing reverse ETL solutions using tools like Hightouch.
- Solid understanding of data governance, data quality, and data security principles.
- Excellent analytical, problem-solving, and decision-making skills.

(ref:hirist.tech)
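As one concrete example of the Python-driven automation this role calls for, here is a minimal data-quality gate against Snowflake, assuming the snowflake-connector-python package. The connection parameters, table names, and checks are hypothetical placeholders.

```python
# Minimal data-quality gate sketch (assumes snowflake-connector-python).
# Credentials, table names, and checks below are hypothetical examples.
import os

import snowflake.connector

# Each check is (description, SQL returning the number of violating rows).
CHECKS = [
    ("customer_id is never null in orders",
     "SELECT COUNT(*) FROM analytics.orders WHERE customer_id IS NULL"),
    ("orders received rows today",
     "SELECT CASE WHEN COUNT(*) = 0 THEN 1 ELSE 0 END "
     "FROM analytics.orders WHERE loaded_at >= CURRENT_DATE"),
]


def run_checks() -> None:
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="TRANSFORM_WH",   # hypothetical warehouse
        database="ANALYTICS",       # hypothetical database
    )
    try:
        cur = conn.cursor()
        failed = []
        for name, sql in CHECKS:
            cur.execute(sql)
            violations = cur.fetchone()[0]
            if violations:
                failed.append((name, violations))
        if failed:
            raise RuntimeError(f"Data-quality checks failed: {failed}")
        print("All data-quality checks passed.")
    finally:
        conn.close()


if __name__ == "__main__":
    run_checks()
```

In practice many such assertions would live in dbt tests; a standalone gate like this is useful for checks that must run outside a dbt invocation, such as freshness gates in Airflow.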


Contact Information

Contact Human Resources
CriticalRiver Technologies

Posted

May 1, 2025
UID: 5142818452
