buscojobs Brasil

Data Engineer (Dataviz / Power BI)

Job Location

Rio de Janeiro, Brazil

Job Description

Our client is a U.S.-based company that provides technical expertise, testing, and certification services to the global food and agricultural industry. Their mission is to ensure food safety, quality, and sustainability across international supply chains. This role is critical to building, maintaining, and modernizing data pipelines that process large-scale regulatory data from around the world and transform it into usable datasets for downstream applications and APIs. The engineer will work hands-on with Python, SQL, and related tools to untangle legacy "spaghetti code" pipelines, migrate processes to more maintainable platforms such as Airflow (a hedged sketch follows this posting), and ensure that our data is accurate, reliable, and ready for client-facing products. This role requires both strong technical ability and a consulting mindset: the ability to learn undocumented systems, troubleshoot gaps, and design forward-looking solutions that will scale as our data environment evolves.

Required Qualifications:
• Minimum 7 years' experience using Python for analyzing, extracting, creating, and transforming large datasets.
• Proficiency in Python 3 and common Python libraries and tools for data engineering, specifically Pandas, NumPy, and Jupyter Notebooks.
• Deep experience with SQL and relational data using Oracle, Postgres, or MS SQL Server.
• Solid understanding of database design principles, data modeling, and data warehousing concepts.
• Excellent troubleshooting skills and instincts.
• Curious, self-motivated, and self-directed; comfortable working within an Agile software development team with short, iterative delivery cycles.
• College degree or equivalent experience in computer science, software development, engineering, information systems, math, food science, or another applicable field of study.

Preferred Qualifications:
• NoSQL database design and development using MongoDB, AWS DynamoDB, or Azure Cosmos DB.
• Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and services related to data storage and processing.
• Exposure to Terraform or other Infrastructure-as-Code tooling.
• Proficiency in Azure DevOps for source code and pipeline management.
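As an illustration of the Airflow migration mentioned above, here is a minimal sketch of a legacy script restructured as a DAG. This is not the client's actual code: the DAG id, task names, and the placeholder extract/transform/load bodies are all hypothetical, and it assumes Airflow 2.x.

```python
# Hypothetical sketch only: names, schedule, and placeholder logic are
# illustrative; assumes Airflow 2.x with the standard PythonOperator.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_regulatory_data(ti):
    # Stand-in for pulling raw regulatory records from a source system.
    rows = [{"region": "EU", "substance": "X", "limit_ppm": 0.5}]
    ti.xcom_push(key="raw_rows", value=rows)


def transform_regulatory_data(ti):
    # Stand-in for normalizing units, validating schema, and deduplicating.
    rows = ti.xcom_pull(key="raw_rows", task_ids="extract")
    cleaned = [r for r in rows if r["limit_ppm"] is not None]
    ti.xcom_push(key="clean_rows", value=cleaned)


def load_regulatory_data(ti):
    # Stand-in for upserting into the table that backs client-facing APIs.
    rows = ti.xcom_pull(key="clean_rows", task_ids="transform")
    print(f"would load {len(rows)} rows")


with DAG(
    dag_id="regulatory_data_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # 'schedule_interval' on Airflow < 2.4
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_regulatory_data)
    transform = PythonOperator(task_id="transform", python_callable=transform_regulatory_data)
    load = PythonOperator(task_id="load", python_callable=load_regulatory_data)

    extract >> transform >> load
```

The point of the split is operational: each step gets its own retries, logs, and alerting, instead of one opaque script failing as a unit.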
Come to one of the biggest IT services companies in the world! Here you can transform your career!

Why join TCS? Here at TCS we believe that people make the difference, which is why we live a culture of unlimited learning, full of opportunities for improvement and mutual development: the ideal scenario for expanding ideas through the right tools, contributing to our success in a collaborative environment.

We are looking for a Data Engineer (remote) who wants to learn and transform their career, working with Snowflake, DBT, SQL, and Agile methodologies.

In this role you will:
• Operational Monitoring: Proactively monitor data jobs and pipelines to ensure smooth execution and timely delivery of datasets. Respond to alerts and resolve issues with minimal downtime (see the sketch after this list).
• Pipeline Maintenance: Maintain and enhance DBT models and SQL scripts to support evolving business needs and ensure data accuracy.
• Warehouse Operations: Oversee Snowflake operations including user access, query performance, and resource utilization.
• Incident Response: Act as a first responder for data job failures, conducting root cause analysis and implementing preventive measures.
• Collaboration: Work closely with data engineers, analysts, and business stakeholders to support operational data needs and troubleshoot issues.
• Process Optimization: Identify opportunities to automate manual tasks, improve pipeline efficiency, and reduce operational overhead.
• Documentation & Reporting: Maintain clear documentation of operational procedures, job schedules, and incident logs. Provide regular updates to stakeholders on system health and performance.
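As a hedged illustration of the operational-monitoring duty above, a small Python sketch that polls Snowflake's ACCOUNT_USAGE.QUERY_HISTORY view for recent failures. The connection parameters and the alerting stub are placeholders; it assumes the snowflake-connector-python package and a role with ACCOUNT_USAGE access.

```python
# Hypothetical monitoring sketch: connection details and alert handling are
# placeholders; assumes snowflake-connector-python and ACCOUNT_USAGE privileges.
# Note: ACCOUNT_USAGE views lag by up to ~45 minutes; for lower latency, the
# INFORMATION_SCHEMA.QUERY_HISTORY table function is the usual alternative.
import snowflake.connector

FAILED_QUERIES_SQL = """
    SELECT query_id, user_name, error_message, start_time
    FROM snowflake.account_usage.query_history
    WHERE execution_status = 'FAIL'
      AND start_time >= DATEADD('hour', -1, CURRENT_TIMESTAMP())
    ORDER BY start_time DESC
"""


def check_recent_failures() -> None:
    conn = snowflake.connector.connect(
        account="my_account",   # placeholder
        user="monitor_user",    # placeholder
        password="***",         # use a secrets manager in practice
        warehouse="OPS_WH",     # placeholder
    )
    try:
        with conn.cursor() as cur:
            cur.execute(FAILED_QUERIES_SQL)
            for query_id, user_name, error_message, start_time in cur:
                # Stand-in for a real paging/Slack/webhook alert.
                print(f"[ALERT] {start_time} {user_name} {query_id}: {error_message}")
    finally:
        conn.close()


if __name__ == "__main__":
    check_recent_failures()
```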
What can you expect from us?
• Professional development and constant evolution of your skills, always in line with your interests.
• Opportunities to work outside Brazil.
• A collaborative, diverse, and innovative environment that encourages teamwork.

What do we offer?
• Health insurance
• Life insurance
• Gympass
• TCS Cares: a free 0800 line providing psychological assistance (24 hrs/day) plus legal, social, and financial assistance to associates
• Partnership with SESC
• Reimbursement of certifications
• Free TCS Learning Portal: online courses and live training
• International experience opportunities
• Discount partnerships with universities and language schools
• Bring Your Buddy: by referring people, you become eligible for a bonus for each hire
• TCS Gems: recognition for performance
• Xcelerate: free mentoring career platform

At TATA Consultancy Services we promote an inclusive culture and always work for equity. This applies to gender, people with disabilities, LGBTQIA, religion, race, and ethnicity. All our opportunities are based on these principles. We design inclusion and social-responsibility actions to build a TCS that respects individuality. Come be a TCSer!

ID:

About the Product

Niche is the leader in school search. Our mission is to make researching and enrolling in schools easy, transparent, and free. With in-depth profiles on every school and college in America, 140 million reviews and ratings, and powerful search tools, we help millions of people find the right school for them. We also help thousands of schools recruit more best-fit students by highlighting what makes them great and making it easier to visit and apply. Niche is all about finding where you belong, and that mission inspires how we operate every day. We want Niche to be a place where people truly enjoy working and can thrive professionally.

About the Role

Niche is looking for a skilled Data Engineer to join the Data Engineering team. You'll build and support data pipelines that can handle the volume and complexity of our data while ensuring scale, data accuracy, availability, observability, security, and optimum performance. You'll develop and maintain data warehouse tables, views, and models for consumption by analysts and downstream applications. This is an exciting opportunity to join our team as we're building the next generation of our data platform and engineering capabilities. You'll report to the Manager, Data Engineering (Core).

What You Will Do
• Design, build, and maintain scalable, secure data pipelines that ensure data accuracy, availability, and performance.
• Develop and support data models, warehouse tables, and views for analysts and downstream applications.
• Ensure observability and quality through monitoring, lineage tracking, and alerting systems (see the sketch after this list).
• Implement and maintain core data infrastructure and tooling (e.g., dbt Cloud, Airflow, RudderStack, cloud storage).
• Collaborate cross-functionally with analysts, engineers, and product teams to enable efficient data use.
• Integrate governance and security controls such as access management and cost visibility.
• Contribute to platform evolution and developer enablement through reusable frameworks, automation, and documentation.
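To make the observability bullet concrete, a hedged Python sketch of a table-freshness check of the kind an alerting system might run. The DSN, table name, loaded_at column, and SLA are all invented for illustration; it assumes psycopg2 and a TIMESTAMPTZ load-audit column (so the driver returns timezone-aware datetimes).

```python
# Hypothetical observability sketch: table names, thresholds, and the DSN are
# placeholders; assumes psycopg2 and a TIMESTAMPTZ loaded_at audit column.
from datetime import datetime, timedelta, timezone

import psycopg2

FRESHNESS_SLA = timedelta(hours=6)  # illustrative SLA, not an actual value


def check_freshness(dsn: str, table: str) -> bool:
    """Return True if the table received data within the SLA window."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        # Table name comes from trusted config, never user input.
        cur.execute(f"SELECT MAX(loaded_at) FROM {table}")
        (last_loaded,) = cur.fetchone()
    if last_loaded is None:
        print(f"[ALERT] {table}: no rows loaded yet")
        return False
    age = datetime.now(timezone.utc) - last_loaded
    if age > FRESHNESS_SLA:
        # Stand-in for a real alerting hook (PagerDuty, Slack, etc.).
        print(f"[ALERT] {table}: stale by {age - FRESHNESS_SLA}")
        return False
    return True


if __name__ == "__main__":
    check_freshness("postgresql://user:pass@host/db", "warehouse.school_profiles")
```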
What We Are Looking For
• Bachelor's degree in Computer Science, Data Science, Information Systems, or a related field.
• 3-5 years of experience in data engineering.
• Demonstrated experience building and supporting large-scale data pipelines, both streaming and batch.
• A software engineering mindset, leading with the principles of source control, infrastructure as code, testing, modularity, automation, CI/CD, and observability.
• Proficiency in Python, SQL, Snowflake, Postgres, DBT, and Airflow.
• Experience working with Google Analytics, marketing, ad and social media platforms, CRM/Salesforce, and JSON data; government datasets and geospatial data are a plus.
• Knowledge and understanding of the modern data platform and its key components: ingestion, transformation, curation, quality, governance, and delivery.
• Knowledge of data modeling techniques (3NF, Dimensional, Vault); a small dimensional-modeling sketch follows this list.
• Experience with Docker, Kubernetes, or Kafka is a huge plus.
• Self-starter, analytical problem solver, highly attentive to detail, an effective communicator, and obsessed with good documentation.
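To make the dimensional-modeling point concrete, a hedged pandas sketch that splits a flat review extract into a star schema (one fact table, two dimensions). The table and column names are invented for illustration.

```python
# Hypothetical star-schema sketch: the raw_reviews columns are invented; this
# only illustrates splitting a flat extract into dimensions and a fact table.
import pandas as pd

raw_reviews = pd.DataFrame({
    "school_name": ["Lincoln High", "Lincoln High", "Oak Prep"],
    "state":       ["PA", "PA", "OH"],
    "reviewer":    ["alice", "bob", "alice"],
    "rating":      [5, 4, 3],
})

# Dimension: one row per school, with a surrogate key.
dim_school = (
    raw_reviews[["school_name", "state"]]
    .drop_duplicates()
    .reset_index(drop=True)
)
dim_school["school_key"] = dim_school.index

# Dimension: one row per reviewer.
dim_reviewer = raw_reviews[["reviewer"]].drop_duplicates().reset_index(drop=True)
dim_reviewer["reviewer_key"] = dim_reviewer.index

# Fact: ratings keyed by surrogate keys instead of repeated descriptive columns.
fact_rating = (
    raw_reviews
    .merge(dim_school, on=["school_name", "state"])
    .merge(dim_reviewer, on="reviewer")
    [["school_key", "reviewer_key", "rating"]]
)

print(fact_rating)
```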
First Year Plan

During the 1st month:
• Immerse yourself in the company culture and get to know your team and key stakeholders.
• Build relationships with data engineering team members, understand the day-to-day operating model, and meet the stakeholders we interact with daily.
• Start learning about our data platform infrastructure, data pipelines, source systems, and interdependencies.
• Start participating in standups, planning, and retrospective meetings.
• Start delivering on assigned sprint stories and show progress through completed tasks that contribute to team goals.

Within 3 months:
• Start delivering on assigned data engineering tasks to support our day-to-day work and roadmap.
• Start troubleshooting production issues and participating in on-call activities.
• Identify areas for improving data engineering processes and share them with the team.

Within 6 months:
• Contribute consistently toward building our data platform, including data pipelines and data warehouse layers.
• Start to independently own workstreams, whether periodic data engineering activities or work items in support of our roadmap.
• Deepen your understanding and build subject matter expertise in our data and ecosystem.

Within 12 months:
• Your contributions have led to significant progress in implementing the data platform strategy and the key data initiatives supporting the company's growth.
• You've established yourself as a key team member with subject matter expertise within data engineering.

We are looking for dynamic consultants to grow our Information Systems and Digital team in Brazil. Your experience, knowledge, and commitment will help us face our clients' challenges. You will support different projects through your expertise as a Data Engineer.

Your main responsibilities:
• Develop and maintain robust ETL pipelines to acquire data from diverse sources, including Oracle, SAP, and SQL-based systems (see the sketch after this posting).
• Transform raw data into clean, structured datasets that support reporting, analytics, and data science use cases.
• Collaborate with Data Science, Reporting, and Front-End teams to deliver reliable and reusable data solutions.
• Contribute to the creation of reusable frameworks, standardize patterns, and maintain comprehensive technical documentation.
• Participate in agile development processes, engaging in daily stand-ups and iterative releases with product managers and engineering peers.
• Monitor, troubleshoot, and optimize data workflows to ensure high performance and availability in a production environment.

Qualifications:
• Bachelor's or Master's degree in Computer Science, Data Engineering, or a related technical field.
• 3 years of experience in data engineering.
• Proven experience designing and implementing end-to-end ETL/ELT data pipelines.
• Strong proficiency in SQL and Python for data processing and transformation.
• Hands-on experience with Azure cloud services and familiarity with tools such as Databricks, Delta Lake, and Spark.
• Knowledge of CI/CD pipelines and version control tools such as Git, Azure DevOps, or GitHub.
• Comfortable working in agile environments, with experience in iterative development and A/B testing methodologies.
• An English CV is a must.
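As a hedged sketch of the ETL responsibility above, a minimal PySpark job that reads a table from Oracle over JDBC and lands it as a Delta table. The connection string, credentials, table names, and paths are placeholders, and it assumes a Databricks-style runtime with Delta Lake and the Oracle JDBC driver available.

```python
# Hypothetical ETL sketch: JDBC URL, credentials, and table/paths are
# placeholders; assumes a Spark runtime with Delta Lake and the Oracle
# JDBC driver on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("oracle_to_delta").getOrCreate()

# Extract: read the source table from Oracle over JDBC.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//host:1521/service")  # placeholder
    .option("dbtable", "ERP.ORDERS")                         # placeholder
    .option("user", "etl_user")                              # placeholder
    .option("password", "***")                               # use a secret scope
    .option("driver", "oracle.jdbc.OracleDriver")
    .load()
)

# Transform: light cleanup so downstream consumers get typed, non-null keys.
clean = (
    orders
    .dropna(subset=["ORDER_ID"])
    .withColumn("order_date", F.to_date("ORDER_DATE"))
    .dropDuplicates(["ORDER_ID"])
)

# Load: overwrite a partitioned Delta table that reporting jobs read from.
(
    clean.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("/mnt/datalake/curated/orders")                    # placeholder path
)
```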

Location: Rio de Janeiro, Rio de Janeiro, BR

Posted Date: 9/11/2025

Contact Information

Contact Human Resources
buscojobs Brasil

Posted

September 11, 2025
UID: 5386293547
