Senior Data Engineer (Cloud / Snowflake / PySpark) job opportunity at Enroute Systems.



Date: 2026-04-13
Enroute Systems Senior Data Engineer (Cloud / Snowflake / PySpark)
Experience: 5+ years
Employment type: Full-time


Degree: General
Location: Dinastía, Mexico

We love technology, and we enjoy what we do. We are always looking for innovation. We have social awareness and strive to improve it daily. We make things happen. You can trust us. Our Enrouters are always up for a challenge. We ask questions, and we love to learn. We pride ourselves on great benefits and compensation, a fantastic work environment, flexible schedules, and policies that support a healthy balance between work and life outside of it.

At Enroute, we are looking for a Senior Data Engineer to join a growing Data team responsible for designing, building, and evolving scalable data platforms and cloud-native pipelines that support business intelligence, analytics, and operational workloads. The ideal candidate is highly hands-on with Python, Spark/PySpark, Snowflake, and cloud-based data architectures, with strong experience building reliable, production-grade ETL/ELT pipelines and modern data warehousing solutions. This role is ideal for someone who enjoys solving complex data challenges, optimizing performance at scale, and collaborating closely with data scientists, analysts, and engineering teams.
✅ Must-Have Requirements

Core Data Engineering
- 5+ years of professional experience in Data Engineering or related fields
- Strong experience designing and maintaining scalable data pipelines
- Deep understanding of ETL/ELT best practices
- Strong experience with large-scale data processing architectures
- Proven experience with batch data processing
- Strong experience with data warehousing concepts

Programming & Data Processing
- Advanced Python
- Strong hands-on experience with Apache Spark / PySpark
- Advanced SQL (complex queries, optimization, transformations)
- Strong experience processing large structured and unstructured datasets

Cloud & Infrastructure
- Hands-on experience with AWS or Azure
- Experience building cloud-native data solutions
- Experience with Docker
- Experience with CI/CD pipelines
- Strong knowledge of Git / version control

Data Orchestration
- Strong hands-on experience with Apache Airflow
- Experience designing workflow orchestration pipelines
- Scheduling, monitoring, and failure recovery strategies

Critical Must-Have: Snowflake
- Strong expertise in Snowflake (MUST HAVE)
- Snowflake data warehouse design and development
- Query and warehouse optimization
- Performance tuning and cost efficiency
- Cloud data warehouse architecture best practices

🎯 Responsibilities
- Design, build, and maintain scalable, reliable, and high-performance data pipelines
- Develop end-to-end ETL/ELT workflows
- Process large-scale datasets using Spark / PySpark
- Build and orchestrate cloud-native pipelines in AWS and/or Azure
- Design and optimize Snowflake data warehouse solutions
- Ensure performance, scalability, governance, and cost optimization
- Write and optimize advanced SQL queries
- Collaborate with Data Scientists, Analysts, and Software Engineers
- Translate business requirements into production-ready data solutions
- Ensure data consistency, availability, and quality
- Implement CI/CD, Git workflows, and Dockerized deployments
- Improve reliability and observability of data platforms
