Research Crawling Engineer job opportunity at MLabs.



Posted: 2026-04-28
MLabs Research Crawling Engineer
Experience: General
Type: Full-time

Degree: General
Poland

Location: Remote (must have a 6-hour overlap with EST)
Employment: Full-time
Compensation: $80K - $175K

About the Client:
We are hiring on behalf of our client, a technical infrastructure firm specializing in delivering massive-scale web data to organizations developing advanced artificial intelligence models. The organization supports high-capacity bandwidth-sharing networks and operates a distributed crawler capable of accessing high-quality public web data at global scale. The team has also engineered sophisticated pipelines for ingesting, segmenting, and annotating billions of multimedia files, enabling dataset creation for frontier research labs. The organization operates as a lean, technical team that prioritizes speed and direct execution.

The Role:
As a Research Crawling Engineer, you will design and operate large-scale web data acquisition systems. The role spans distributed systems, scraping infrastructure, and data pipelines, with a focus on providing high-quality inputs for research and model development.

Key Responsibilities:
- Build and maintain large-scale web crawlers across diverse domains.
- Design high-throughput, fault-tolerant systems for data collection, handling millions to billions of URLs per day.
- Navigate anti-bot systems, rate limits, and dynamic, JavaScript-heavy websites.
- Develop robust pipelines for data cleaning, deduplication, filtering, and normalization.
- Build and maintain datasets structured for research and machine learning model training.
- Monitor and optimize crawl performance, coverage, and data quality through rapid iteration.
- Collaborate with research teams to ensure data collection aligns with modeling requirements.
- Optimize infrastructure for cost-efficiency, low latency, and reliability.

Requirements:
- Extensive programming experience in one or more of: Go, Rust, Python, Java, or C++.
- Proven experience building web crawlers or large-scale data pipelines.
- Solid understanding of HTTP, networking protocols, and browser behavior.
- Familiarity with distributed systems and parallel processing techniques.
- Experience handling large datasets, ideally at the terabyte-to-petabyte scale.
- Demonstrated ability to debug and maintain systems in unstable or adversarial environments.

Preferred Qualifications:
- Experience with NLP pipelines or dataset curation for machine learning.
- Familiarity with LLM pre-training data or retrieval systems.
- Practical experience with headless browsers (e.g., Playwright, Puppeteer, or the Chrome DevTools Protocol).
- Knowledge of proxy systems, IP rotation, and large-scale request orchestration.
- Background in data quality evaluation or benchmarking.
- Experience running workloads on cloud or bare-metal infrastructure.
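To illustrate the kind of URL normalization and deduplication work the role involves, here is a minimal, hypothetical Python sketch (not part of the posting; the helper names `normalize_url` and `dedupe` are illustrative). It canonicalizes URLs so that trivially different forms map to one frontier key, using only the standard library:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def normalize_url(url: str) -> str:
    """Canonicalize a URL so trivially different forms dedupe to one key."""
    parts = urlsplit(url)
    scheme = parts.scheme.lower()
    netloc = parts.netloc.lower()
    # Drop default ports, which do not change the resource.
    if scheme == "http" and netloc.endswith(":80"):
        netloc = netloc[: -len(":80")]
    elif scheme == "https" and netloc.endswith(":443"):
        netloc = netloc[: -len(":443")]
    path = parts.path or "/"
    # Sort query parameters for a stable key; drop fragments entirely,
    # since they are client-side only and never sent to the server.
    query = urlencode(sorted(parse_qsl(parts.query)))
    return urlunsplit((scheme, netloc, path, query, ""))

def dedupe(urls):
    """Yield each URL only once per canonical form (crawl-frontier dedup)."""
    seen = set()
    for url in urls:
        key = normalize_url(url)
        if key not in seen:
            seen.add(key)
            yield url
```

In a production crawler this in-memory set would typically be replaced by a Bloom filter or a persistent key-value store to handle billions of URLs.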
