Cloud Data Engineer | Snowflake, DBT, AWS/Azure/GCP, Large-scale Pipelines, Data Governance, Automation job opportunity at Synechron.



Date Posted: 6 days ago
Experience: 5+ years
Employment Type: Full time


Degree: General
Location: Bengaluru - EC-2 Gateway campus, India

Job Summary

Synechron is seeking an experienced Cloud Data Engineer to design, develop, and support scalable, cloud-native data platforms that power enterprise analytics and data modernization initiatives. The role involves building robust ELT pipelines, optimizing data models, and implementing data governance standards across cloud environments such as AWS, GCP, or Azure. The successful candidate will collaborate with analytics, data science, and platform teams to deliver data-driven insights that support operations, strategic planning, and innovation.

Software Requirements

Required:
- In-depth experience with the Snowflake data platform for scalable data warehousing solutions
- Hands-on expertise in DBT for data transformations, testing, and documentation
- Strong knowledge of AWS cloud services such as S3, IAM, and Glue for data workflows (GCP or Azure experience preferred)
- Practical experience with orchestration tools such as Apache Airflow for pipeline management
- Advanced SQL skills for data modeling, query tuning, and performance optimization
- Python proficiency for scripting, automation, and data processing tasks
- Knowledge of big data ecosystem tools such as Spark, Hadoop, or NiFi (preferred)

Preferred:
- Experience with Infrastructure as Code tools such as Terraform or CloudFormation
- Familiarity with BI tools like Power BI or Tableau for reporting integrations
- Exposure to data governance, security standards, and compliance frameworks (GDPR, HIPAA, etc.)
Overall Responsibilities

- Design, build, and optimize scalable data pipelines in cloud environments for enterprise analytics and reporting
- Develop data transformation workflows using DBT, ensuring accuracy, quality, and documentation
- Collaborate with data scientists, analytics teams, and platform engineers to support data ingestion, feature engineering, and ML workflows
- Monitor pipeline performance, troubleshoot issues, and implement enhancements for efficiency and resilience
- Support cloud migration efforts, including multi-region and hybrid architectures for data platforms
- Enforce data security, privacy, and governance policies across pipelines and data stores
- Automate data workflows and infrastructure deployment as part of CI/CD pipelines
- Document system architecture, data schemas, transformation logic, and operational procedures

Technical Skills (By Category)

Languages & Scripts:
- Required: Python, SQL (PostgreSQL, MySQL, or equivalent), Bash for automation
- Preferred: Scala, R, or Java for big data integrations

Data Management & Storage:
- Snowflake, data modeling, query optimization, data security best practices, data governance practices

Cloud Platforms:
- AWS (S3, Glue, Redshift), GCP (BigQuery, Dataflow), Azure (for migration and scaling)

Frameworks & Ecosystems:
- DBT, Spark (PySpark), Hadoop, NiFi (preferred)

Orchestration & Automation:
- Apache Airflow, Terraform, CloudFormation, Jenkins, Git, CI/CD pipelines for data deployment

Security & Compliance:
- Data encryption, access control, GDPR/HIPAA compliance standards, audit logging

Experience Requirements

- 5+ years developing or supporting large-scale, enterprise data pipelines in cloud environments
- Proven success in optimizing data workflows, data modeling, and pipeline automation
- Hands-on experience with cloud data platforms like Snowflake and supporting data ecosystems (Spark, Hadoop, NiFi)
- Experience with data governance, security, and compliance initiatives (GDPR, HIPAA, etc.)
- Strong background in data science, analytics, or ML workflows preferred

Day-to-Day Activities

- Develop, tune, and support scalable data pipelines for enterprise analytics and reporting
- Collaborate with cross-functional teams to gather data requirements and translate them into technical solutions
- Automate data ingestion, transformation, and deployment workflows following CI/CD practices
- Troubleshoot data pipeline issues, perform performance tuning, and implement security controls
- Support cloud migration efforts, data governance initiatives, and pipeline automation projects
- Document architecture, schemas, and operational procedures
- Monitor system health, data quality, and compliance status to ensure operational stability and security standards

Qualifications

- Bachelor’s or Master’s degree in Data Science, Computer Science, or a related discipline
- 5+ years of experience supporting enterprise data platforms in cloud environments
- Certifications such as GCP Professional Data Engineer, AWS Data Analytics, or equivalent are advantageous
- Proven experience building, managing, and optimizing large-scale data pipelines supporting analytics and ML workflows

Professional Competencies

- Analytical problem-solving skills, particularly for optimizing data workflows at scale
- Effective communication with technical teams and business stakeholders on data requirements and governance
- Mentoring abilities to guide junior data engineers and promote best practices
- Strategic thinking for designing scalable, secure cloud data ecosystems that meet enterprise needs
- Adaptability and continuous learning to leverage new data technologies and compliance standards
- Organization and time management skills for handling multiple data projects efficiently

SYNECHRON’S DIVERSITY & INCLUSION STATEMENT

Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer.
Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture, promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps us build stronger, more successful businesses as a global company. We encourage applicants of all backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
