


Prime Intellect Member of Technical Staff - Inference
Experience: 3+ years
Work pattern: Remote
Salary: $150k-$300k plus equity
Degree: General
Location: Remote

Building Open Superintelligence Infrastructure

Prime Intellect is building the open superintelligence stack - from frontier agentic models to the infrastructure that enables anyone to create, train, and deploy them. We aggregate and orchestrate global compute into a single control plane and pair it with the full RL post-training stack: environments, secure sandboxes, verifiable evals, and our async RL trainer. We enable researchers, startups, and enterprises to run end-to-end reinforcement learning at frontier scale, adapting models to real tools, workflows, and deployment contexts.

We recently raised $15M in funding (a total of $20M raised) led by Founders Fund, with participation from Menlo Ventures and prominent angels including Andrej Karpathy (Eureka AI, Tesla, OpenAI), Tri Dao (Chief Scientific Officer of Together AI), Dylan Patel (SemiAnalysis), Clem Delangue (Hugging Face), Emad Mostaque (Stability AI), and many others.

Role Impact

This is a hybrid position spanning cloud LLM serving, LLM inference optimization, and RL systems. You will work on advancing our ability to evaluate and serve models trained with our RL Lab at scale. The two key areas are:

- Building the infrastructure to serve LLMs efficiently at scale.
- Optimizing and integrating inference systems into our RL training stack.

Core Technical Responsibilities

LLM Serving
- Multi-tenant LLM Serving: Build a multi-tenant LLM serving platform that operates across our cloud GPU fleets.
- GPU-Aware Scheduling: Design placement and scheduling algorithms for heterogeneous accelerators.
- Resilience & Failover: Implement multi-region/zone failover and traffic shifting for resilience and cost control.
- Autoscaling & Routing: Build autoscaling, routing, and load balancing to meet throughput/latency SLOs.
- Model Distribution: Optimize model distribution and cold-start times across clusters.

Inference Optimization & Performance
- Framework Development: Integrate and contribute to LLM inference frameworks such as vLLM, SGLang, and TensorRT-LLM.
- Parallelism and Configuration Tuning: Optimize configurations for tensor/pipeline/expert parallelism, prefix caching, memory management, and other axes for maximum performance.
- End-to-End Performance: Profile kernels, memory bandwidth, and transport; apply techniques such as quantization and speculative decoding.
- Perf Suites: Develop reproducible performance suites (latency, throughput, context length, batch size, precision).
- RL Integration: Embed and optimize distributed inference within our RL stack.

Platform & Tooling
- CI/CD: Establish CI/CD with artifact promotion, performance gates, and reproducible builds.
- Observability: Build metrics, logs, and tracing; structured incident response and SLO management.
- Docs & Collaboration: Document architectures, playbooks, and API contracts; mentor and collaborate cross-functionally.

Technical Requirements

Required Experience
- Building ML Systems at Scale: 3+ years building and running large-scale ML/LLM services with clear latency/availability SLOs.
- Inference Backends: Hands-on experience with at least one of vLLM, SGLang, or TensorRT-LLM.
- Distributed Serving Infra: Familiarity with distributed and disaggregated serving infrastructure such as NVIDIA Dynamo.
- Inference Internals: Deep understanding of prefill vs. decode, KV-cache behavior, batching, sampling, speculative decoding, and parallelism strategies.
- Full-Stack Debugging: Comfortable debugging CUDA/NCCL, drivers/kernels, containers, service mesh/networking, and storage, owning incidents end-to-end.

Infrastructure Skills
- Python: Systems tooling and backend services.
- PyTorch: LLM inference engine development and integration, deployment readiness.
- Cloud & Automation: AWS/GCP service experience, cloud deployment patterns.
- Kubernetes: Running infrastructure at scale with containers on Kubernetes.
- GPU & Networking: GPU architecture, CUDA runtime, NCCL, InfiniBand; GPU-aware bin-packing and scheduling across heterogeneous fleets.

Nice to Have
- Kernel-Level Optimization: Familiarity with CUDA/Triton kernel development; Nsight Systems/Compute profiling.
- Systems Performance Languages: Rust, C++.
- Data & Observability: Kafka/PubSub, Redis, gRPC/Protobuf; Prometheus/Grafana, OpenTelemetry; reliability patterns.
- Infra & Config Automation: Terraform/Ansible, infrastructure-as-code, reproducible environments.
- Open Source: Contributions to serving, inference, or RL infrastructure projects.

What We Offer
- Cash compensation range of $150-300k with significant equity incentives
- Flexible work arrangement (remote or San Francisco office)
- Full visa sponsorship and relocation support
- Professional development budget
- Regular team off-sites and conference attendance
- Opportunity to shape decentralized AI and RL at Prime Intellect

Growth Opportunity

You'll join a team of experienced engineers and researchers working on cutting-edge problems in AI infrastructure. We believe in open development and encourage team members to contribute to the broader AI community through research and open-source contributions.

We value potential over perfection. If you're passionate about democratizing AI development, we want to talk to you.

Ready to help shape the future of AI? Apply now and join us in our mission to make powerful AI models accessible to everyone.
