Populace World Solutions
Azure Databricks Engineer - Data Factory
Job Location
Bangalore, India
Job Description
Job Title : Azure Databricks Engineer
Location : Chennai, Bangalore, Pune, Mumbai
Experience : 5 Years

Job Description : We are seeking a highly skilled and experienced Azure Databricks Engineer to join our dynamic team. In this role, you will be responsible for the development, implementation, and maintenance of our data pipelines and data processing solutions within the Microsoft Azure cloud environment. You will leverage your deep expertise in Azure Databricks, Azure Data Factory, and Spark SQL to build scalable and maintainable applications that extract, transform, and load data from various sources into our data lake and data warehouses. You will work closely with engineering and product teams to understand complex data systems and deliver robust data solutions. The ideal candidate will have a strong development background in Azure data technologies, a solid understanding of data warehousing concepts, and excellent problem-solving skills. You should be passionate about working with large datasets and building efficient data pipelines that drive business value.

Responsibilities :
- Data Pipeline Design and Development : Design, develop, and implement scalable and maintainable data pipelines using Azure Databricks and Azure Data Factory.
- ETL/ELT Development : Develop robust ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes to ingest and process data from diverse sources.
- Spark SQL Expertise : Utilize Spark SQL extensively for data manipulation, transformation, and querying within the Azure Databricks environment.
- Azure Databricks Implementation : Build and optimize data processing workflows using Azure Databricks notebooks, jobs, and clusters.
- Azure Data Factory Expertise : Create and manage data integration workflows, pipelines, and activities within Azure Data Factory.
- Data Source Integration : Develop solutions to extract data from various sources, including SQL Server, Hadoop data lakes (Azure Data Lake Storage), and other data storage locations.
- Data Transformation and Loading : Implement complex data transformations and load data into target systems such as SQL Server, Hadoop data lakes, and other data warehouses.
- Performance Optimization : Identify and resolve performance bottlenecks in data pipelines and Spark SQL queries to ensure efficient data processing.
- Collaboration with Engineering and Product Teams : Work closely with engineering and product teams to understand complex data systems, data requirements, and business needs.
- Data Quality and Governance : Implement data quality checks and contribute to data governance initiatives to ensure the accuracy and reliability of data.
- Monitoring and Troubleshooting : Monitor the performance and health of data pipelines and troubleshoot any issues that arise.
- Documentation : Create and maintain clear and comprehensive technical documentation for developed data pipelines and processes.
- Adherence to Best Practices : Follow best practices for data engineering, coding standards, and version control.
- Continuous Learning : Stay up to date with the latest advancements in Azure data technologies and big data processing.

Required Skills :
- Strong proficiency in Azure Databricks : Minimum 4-5 years of hands-on development experience in building and managing data solutions using Azure Databricks.
- Extensive experience with Azure Data Factory : Ability to design, develop, and deploy data integration pipelines using Azure Data Factory.
- Solid expertise in Spark SQL : Proven ability to write and optimize complex SQL queries within the Spark environment.
- Strong experience with SQL : Excellent understanding of relational database concepts and proficiency in writing complex SQL queries.
- Experience in performing data transformations and manipulations using Azure Databricks.
- Understanding of complex data systems and the ability to learn and navigate new data sources.

Preferred Skills :
- Experience with other Azure data services such as Azure Synapse Analytics, Azure Stream Analytics, and Azure Event Hubs.
- Familiarity with different data formats (e.g., Parquet, Avro, JSON, CSV).
- Experience with scripting languages such as Python or Scala.
- Knowledge of data warehousing concepts and dimensional modeling.
- Experience with CI/CD pipelines for data engineering deployments.
- Understanding of data governance and data quality principles.
- Experience working with big data technologies and distributed computing.
- Strong problem-solving and analytical skills.
- Good communication and collaboration skills.

Qualifications :
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Minimum of 5 years of professional experience in data engineering.

(ref:hirist.tech)
Location: Bangalore, IN
Posted Date: 5/1/2025
Contact Information
Contact: Human Resources, Populace World Solutions