buscojobs Brasil

Data Engineer (Dataviz / Power BI)

Job Location

Jacupiranga, Brazil

Job Description

Our client is a U.S.-based company that provides technical expertise, testing, and certification services to the global food and agricultural industry. Their mission is to ensure food safety, quality, and sustainability across international supply chains. This role is critical to building, maintaining, and modernizing data pipelines that process large-scale regulatory data from around the world and transform it into usable datasets for downstream applications and APIs. The engineer will work hands-on with Python, SQL, and related tools to untangle legacy “spaghetti code” pipelines, migrate processes to more maintainable platforms such as Airflow, and ensure that our data is accurate, reliable, and ready for client-facing products. This role requires both strong technical ability and a consulting mindset: the ability to learn undocumented systems, troubleshoot gaps, and design forward-looking solutions that will scale as our data environment evolves.

Required Qualifications:
- Minimum 7 years’ experience using Python for analyzing, extracting, creating, and transforming large datasets.
- Proficiency in Python 3 and common Python libraries and tools for data engineering, specifically Pandas, NumPy, and Jupyter Notebooks.
- Deep experience with SQL and relational data using Oracle, Postgres, or MS SQL Server.
- Solid understanding of database design principles, data modeling, and data warehousing concepts.
- Excellent troubleshooting skills and instincts.
- Curious, self-motivated, and self-directed; comfortable working within an Agile software development team with short, iterative delivery cycles.
- College degree or equivalent experience in computer science, software development, engineering, information systems, math, food science, or another applicable field of study.

Preferred Qualifications:
- NoSQL database design and development using MongoDB, AWS DynamoDB, or Azure Cosmos DB.
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and services related to data storage and processing.
- Exposure to Terraform or other Infrastructure-as-Code tooling.
- Proficiency in Azure DevOps for source code and pipeline management.
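For illustration only, here is a minimal sketch of the kind of migration this posting describes: one legacy transformation step wrapped in an Airflow DAG. The DAG id, file paths, and the regulation_id column are hypothetical, not taken from the employer's systems.

```python
# A minimal sketch (not the employer's actual pipeline): one legacy step
# lifted into an Airflow task. Paths and column names are hypothetical.
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator


def transform_regulations(input_path: str, output_path: str) -> None:
    """Normalize a raw regulatory extract into a clean dataset."""
    df = pd.read_csv(input_path)
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df = df.drop_duplicates(subset=["regulation_id"])  # hypothetical key
    df.to_parquet(output_path, index=False)


with DAG(
    dag_id="regulatory_data_daily",  # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(
        task_id="transform_regulations",
        python_callable=transform_regulations,
        op_kwargs={
            "input_path": "/data/raw/regulations.csv",
            "output_path": "/data/curated/regulations.parquet",
        },
    )
```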
Come to one of the biggest IT services companies in the world! Here you can transform your career!

Why join TCS? Here at TCS we believe that people make the difference, which is why we live a culture of unlimited learning, full of opportunities for improvement and mutual development: the ideal scenario for expanding ideas through the right tools, contributing to our success in a collaborative environment. We are looking for a Data Engineer (remote) who wants to learn and transform their career.

Core stack: Snowflake, DBT, SQL, and Agile methodologies.

In this role you will:
- Operational Monitoring: Proactively monitor data jobs and pipelines to ensure smooth execution and timely delivery of datasets. Respond to alerts and resolve issues with minimal downtime.
- Pipeline Maintenance: Maintain and enhance DBT models and SQL scripts to support evolving business needs and ensure data accuracy.
- Warehouse Operations: Oversee Snowflake operations, including user access, query performance, and resource utilization.
- Incident Response: Act as a first responder for data job failures, conducting root cause analysis and implementing preventive measures.
- Collaboration: Work closely with data engineers, analysts, and business stakeholders to support operational data needs and troubleshoot issues.
- Process Optimization: Identify opportunities to automate manual tasks, improve pipeline efficiency, and reduce operational overhead.
- Documentation & Reporting: Maintain clear documentation of operational procedures, job schedules, and incident logs. Provide regular updates to stakeholders on system health and performance.

What can you expect from us?
- Professional development and constant evolution of your skills, always in line with your interests.
- Opportunities to work outside Brazil.
- A collaborative, diverse, and innovative environment that encourages teamwork.

What do we offer?
- Health insurance
- Life insurance
- Gympass
- TCS Cares – a free 0800 line providing psychological assistance (24 hrs/day) plus legal, social, and financial assistance to associates
- Partnership with SESC
- Reimbursement of certifications
- Free TCS Learning Portal – online courses and live training
- International experience opportunities
- Discount partnerships with universities and language schools
- Bring Your Buddy – by referring people you become eligible for a bonus for each hire
- TCS Gems – recognition for performance
- Xcelerate – free mentoring career platform

At Tata Consultancy Services we promote an inclusive culture and always work for equity. This applies to gender, people with disabilities, LGBTQIA, religion, race, and ethnicity. All our opportunities are based on these principles. We design inclusion and social responsibility initiatives to build a TCS that respects individuality. Come be a TCSer!

ID:
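The operational-monitoring bullet above can be made concrete with a short sketch. This assumes the snowflake-connector-python package and read access to the SNOWFLAKE.ACCOUNT_USAGE.TASK_HISTORY view; the account, user, and warehouse values are placeholders, not TCS infrastructure.

```python
# A hedged monitoring sketch: list Snowflake tasks that failed in the
# last 24 hours. Connection parameters are placeholders.
import snowflake.connector

FAILED_TASKS_SQL = """
    select name, scheduled_time, error_message
    from snowflake.account_usage.task_history
    where state = 'FAILED'
      and scheduled_time >= dateadd('hour', -24, current_timestamp())
    order by scheduled_time desc
"""


def report_failed_tasks() -> None:
    conn = snowflake.connector.connect(
        account="my_account",  # placeholder
        user="monitor_user",   # placeholder
        password="***",
        warehouse="OPS_WH",    # placeholder
    )
    try:
        rows = conn.cursor().execute(FAILED_TASKS_SQL).fetchall()
        for name, scheduled_time, error in rows:
            print(f"[ALERT] task {name} failed at {scheduled_time}: {error}")
    finally:
        conn.close()
```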
Petrolina, Pernambuco
TurnKey Tech Staffing

About the Product
Niche is the leader in school search. Our mission is to make researching and enrolling in schools easy, transparent, and free. With in-depth profiles on every school and college in America, 140 million reviews and ratings, and powerful search tools, we help millions of people find the right school for them. We also help thousands of schools recruit more best-fit students by highlighting what makes them great and making it easier to visit and apply. Niche is all about finding where you belong, and that mission inspires how we operate every day. We want Niche to be a place where people truly enjoy working and can thrive professionally.

About the Role
Niche is looking for a skilled Data Engineer to join the Data Engineering team. You'll build and support data pipelines that can handle the volume and complexity of our data while ensuring scale, data accuracy, availability, observability, security, and optimum performance. You'll develop and maintain data warehouse tables, views, and models for consumption by analysts and downstream applications. This is an exciting opportunity to join our team as we're building the next generation of our data platform and engineering capabilities. You'll report to the Manager, Data Engineering (Core).

What You Will Do
- Design, build, and maintain scalable, secure data pipelines that ensure data accuracy, availability, and performance.
- Develop and support data models, warehouse tables, and views for analysts and downstream applications.
- Ensure observability and quality through monitoring, lineage tracking, and alerting systems (see the quality-gate sketch after this posting).
- Implement and maintain core data infrastructure and tooling (e.g., dbt Cloud, Airflow, RudderStack, cloud storage).
- Collaborate cross-functionally with analysts, engineers, and product teams to enable efficient data use.
- Integrate governance and security controls such as access management and cost visibility.
- Contribute to platform evolution and developer enablement through reusable frameworks, automation, and documentation.

What We Are Looking For
- Bachelor's degree in Computer Science, Data Science, Information Systems, or a related field.
- 3-5 years of experience in data engineering.
- Demonstrated experience building and supporting large-scale data pipelines, both streaming and batch.
- A software engineering mindset, leading with the principles of source control, infrastructure as code, testing, modularity, automation, CI/CD, and observability.
- Proficiency in Python, SQL, Snowflake, Postgres, DBT, and Airflow.
- Experience working with Google Analytics, marketing, ad, and social media platforms, CRM/Salesforce, and JSON data; government datasets and geospatial data are a plus.
- Knowledge and understanding of the modern data platform and its key components: ingestion, transformation, curation, quality, governance, and delivery.
- Knowledge of data modeling techniques (3NF, Dimensional, Vault).
- Experience with Docker, Kubernetes, and Kafka is a huge plus.
- Self-starter, analytical problem solver, highly attentive to detail, effective communicator, and obsessed with good documentation.

First Year Plan
During the 1st Month:
- Immerse yourself in the company culture and get to know your team and key stakeholders.
- Build relationships with data engineering team members, understand the day-to-day operating model, and learn which stakeholders we interact with daily.
- Start learning about our data platform infrastructure, data pipelines, source systems, and interdependencies.
- Start participating in standups, planning, and retrospective meetings.
- Start delivering on assigned sprint stories and show progress through completed tasks that contribute to team goals.

Within 3 Months:
- Start delivering on assigned data engineering tasks to support our day-to-day work and roadmap.
- Start troubleshooting production issues and participating in on-call activities.
- Identify areas for improving data engineering processes and share them with the team.

Within 6 Months:
- Contribute consistently toward building our data platform, including data pipelines and data warehouse layers.
- Start to independently own workstreams, whether periodic data engineering activities or work items in support of our roadmap.
- Deepen your understanding and build subject matter expertise in our data and ecosystem.

Within 12 Months:
- Your contributions have led to significant progress in implementing the data platform strategy and key data initiatives supporting the company's growth.
- You've established yourself as a key team member with subject matter expertise within data engineering.
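To give a flavor of the "observability and quality" work described above, here is a tiny pandas-based quality gate. The school_id and rating columns are invented for the sketch; they are not Niche's actual schema.

```python
# A toy data-quality gate in the spirit of the posting's observability
# bullet. Column names below are hypothetical.
import pandas as pd


def check_quality(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality problems."""
    problems = []
    if df.empty:
        problems.append("no rows loaded")
    if df["school_id"].isna().any():
        problems.append("null school_id values")
    if df["school_id"].duplicated().any():
        problems.append("duplicate school_id values")
    return problems


if __name__ == "__main__":
    df = pd.DataFrame({"school_id": [1, 2, 2], "rating": [4.5, None, 3.9]})
    for problem in check_quality(df):
        print(f"[QUALITY ALERT] {problem}")  # hook into real alerting here
```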
We are looking for dynamic consultants to grow our Information Systems and Digital team in Brazil. Your experience, knowledge, and commitment will help us face our clients' challenges. You will support different projects through your expertise as a Data Engineer.

Your main responsibilities:
- Develop and maintain robust ETL pipelines to acquire data from diverse sources, including Oracle, SAP, and SQL-based systems.
- Transform raw data into clean, structured datasets that support reporting, analytics, and data science use cases.
- Collaborate with Data Science, Reporting, and Front-End teams to deliver reliable and reusable data solutions.
- Contribute to the creation of reusable frameworks, standardize patterns, and maintain comprehensive technical documentation.
- Participate in agile development processes, engaging in daily stand-ups and iterative releases with product managers and engineering peers.
- Monitor, troubleshoot, and optimize data workflows to ensure high performance and availability in a production environment.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related technical field.
- 3 years of experience in Data Engineering.
- Proven experience designing and implementing end-to-end ETL/ELT data pipelines.
- Strong proficiency in SQL and Python for data processing and transformation.
- Hands-on experience with Azure cloud services and familiarity with tools such as Databricks, Delta Lake, and Spark.
- Knowledge of CI/CD pipelines and version control tools such as Git, Azure DevOps, or GitHub.
- Comfortable working in agile environments, with experience in iterative development and A/B testing methodologies.
- An English CV is a must.

Petrolina, Pernambuco
Fidelis Security

We are seeking a skilled and experienced Data Engineer to join our Threat Research team. The primary responsibility of this role will be to design, develop, and maintain data pipelines for threat intelligence ingestion, validation, and export automation flows.

Responsibilities:
- Design, develop, and maintain data pipelines for ingesting threat intelligence data from various sources into our data ecosystem.
- Implement data validation processes to ensure data accuracy, completeness, and consistency.
- Collaborate with threat analysts to understand data requirements and design appropriate solutions.
- Develop automation scripts and workflows for data export processes to external systems or partners.
- Optimize and enhance existing data pipelines for improved performance and scalability.
- Monitor data pipelines and troubleshoot issues as they arise, ensuring continuous data availability and integrity.
- Document technical specifications, data flows, and procedures for data pipeline maintenance and support.
- Stay updated on emerging technologies and best practices in data engineering and incorporate them into our data ecosystem.
- Provide technical guidance and support to other team members on data engineering best practices and methodologies.

Requirements:
- Proven experience as a Data Engineer or in a similar role, with a focus on data ingestion, validation, and export automation.
- Strong proficiency in Python.
- Experience with data pipeline orchestration tools such as Apache Airflow, Apache NiFi, or similar.
- Familiarity with cloud platforms such as Snowflake, AWS, Azure, or Google Cloud Platform.
- Experience with data validation techniques and tools for ensuring data quality.
- Experience building and deploying images using containerization technologies such as Docker and Kubernetes.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.

This position is fully remote and is a contractor (PJ) position. Salary range is 18-25/hr.
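As a hedged sketch of the ingestion-and-validation responsibility in the Fidelis posting, here is a minimal indicator-validation step using only the Python standard library. The record shape and feed contents are invented, not Fidelis's actual format.

```python
# A toy validation pass over a threat-intel feed before export.
# The {"type": ..., "value": ...} record shape is hypothetical.
import ipaddress
import re

SHA256_RE = re.compile(r"[0-9a-f]{64}")


def is_valid_indicator(record: dict) -> bool:
    """Check that an indicator's value matches its declared type."""
    kind, value = record.get("type"), record.get("value", "")
    if kind == "ipv4":
        try:
            ipaddress.IPv4Address(value)
            return True
        except ValueError:
            return False
    if kind == "sha256":
        return bool(SHA256_RE.fullmatch(value.lower()))
    return False  # unknown indicator types fail closed


feed = [
    {"type": "ipv4", "value": "203.0.113.7"},
    {"type": "sha256", "value": "deadbeef"},  # too short: fails validation
]
valid = [r for r in feed if is_valid_indicator(r)]
print(f"{len(valid)}/{len(feed)} indicators passed validation")
```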
Required Skills & Experience
- 3 years of experience with Power BI development and engineering (DAX and semantic model development and deployment).
- Strong hands-on coding ability in PySpark and Python.
- Professional working experience with Databricks, Fabric, or similar Spark platforms.
- Working knowledge of Azure DevOps (ADO) for repository and version management.
- Strong communication skills and experience engaging directly with business stakeholders and technical teams.

Nice to Have Skills & Experience
- Knowledge of Tabular and ALM tools for semantic model work and automation.
- Experience working for a global company or with a global team.

Job Description
A global biopharmaceutical company is seeking strong Power BI/Data Engineers to join their team to support a portfolio of ongoing clinical operations projects. These individuals will work on a team of Power BI/Data Engineers of all levels and will partner with technical and functional partners to meet project needs. The ideal candidates should have in-depth expertise with Power BI (including DAX and semantic model development), strong PySpark and Python coding experience, and experience with Databricks, Fabric, or a similar Spark platform. Additionally, strong communication skills are a must, along with a proven ability to work with cross-functional teams and stakeholders to drive project work.
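For flavor, a minimal PySpark aggregation of the sort that might feed a Power BI semantic model on a Databricks-style platform. The site/enrollment columns are invented for illustration, not the company's clinical schema.

```python
# A minimal PySpark sketch: aggregate to the grain a semantic model
# might expect. All table contents below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clinical_ops_prep").getOrCreate()

visits = spark.createDataFrame(
    [("S-001", "BR", 12), ("S-002", "BR", 7), ("S-003", "US", 20)],
    ["site_id", "country", "enrolled"],
)

# Roll enrollment up to country level for the downstream report.
summary = (
    visits.groupBy("country")
    .agg(
        F.sum("enrolled").alias("total_enrolled"),
        F.count("site_id").alias("site_count"),
    )
)
summary.show()
```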
Cargo

About The Role
We are seeking experienced Data Engineers to develop and deliver robust, cost-efficient data products that power analytics, reporting, and decision-making across two distinct brands.

What You'll Do
- Build highly consumable and cost-efficient data products by synthesizing data from diverse source systems.
- Ingest raw data using Fivetran and Python, staging and enriching it in BigQuery to provide consistent, trusted dimensions and metrics for downstream workflows.
- Design, maintain, and improve workflows that ensure reliable and consistent data creation, proactively addressing data quality issues and optimizing for performance and cost.
- Develop LookML views and models to democratize access to data products and enable self-service analytics in Looker.
- Deliver ad hoc SQL reports and support business users with timely insights.
- (Secondary) Implement simple machine learning features into data products using tools like BQML.
- Build and maintain Looker dashboards and reports to surface key metrics and trends.

What We're Looking For
- Proven experience building and managing data products in modern cloud environments (GCP preferred).
- Strong proficiency in Python for data ingestion and workflow development.
- Hands-on expertise with BigQuery, dbt, Airflow, and Looker.
- Solid understanding of data modeling, pipeline design, and data quality best practices.
- Excellent communication skills and a track record of effective collaboration across technical and non-technical teams.

Why Join Kake?
Kake is a remote-first company with a global community, fully believing that it's not where your table is, but what you bring to the table. We provide top-tier engineering teams to support some of the world's most innovative companies, and we've built a culture where great people stay, grow, and thrive. We're proud to be more than just a stop along the way in your career; we're the destination.

The icing on the Kake:
- Competitive Pay in USD – Work globally, get paid globally.
- Fully Remote – Simply put, we trust you.
- Better Me Fund – We invest in your personal growth and passions.
- Compassion is Badass – Join a community that invests in social good.
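As a sketch of the "ingest raw data using Python into BigQuery" step this posting mentions, here is a minimal load job using the google-cloud-bigquery client. The project, bucket, and table names are placeholders, not Kake's environment.

```python
# A minimal BigQuery staging-load sketch; all resource names are
# placeholders for illustration.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # placeholder project

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,  # infer schema for the staging layer
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/raw/events.json",  # placeholder source URI
    "my-gcp-project.staging.raw_events",    # placeholder staging table
    job_config=job_config,
)
load_job.result()  # block until the load completes
print(f"loaded {load_job.output_rows} rows")
```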

Location: Jacupiranga, São Paulo, BR

Posted Date: 9/11/2025

Contact Information

Contact Human Resources
buscojobs Brasil

Posted

September 11, 2025
UID: 5386294370

AboutJobs.com does not guarantee the validity or accuracy of the job information posted in this database. It is the job seeker's responsibility to independently review all posting companies, contracts and job offers.