What You’ll Do
- Design, build, and maintain large-scale data pipelines (batch and streaming) for robotics foundation model training and evaluation at petabyte scale
- Own core data infrastructure: data model, storage systems, ingestion pipelines, transformation frameworks, and orchestration layers
- Standardize data models and unify processing pipelines across real-world teleoperation and synthetic simulation datasets
- Collaborate with a team of driven individuals committed to building general-purpose Physical AI
What You’ll Bring
- Excellent software engineering skills (Python, Go, or similar)
- 8+ years of experience designing, building, and maintaining large-scale data pipelines
- Deep understanding of distributed systems (Spark, Kafka, or similar)
- Extensive experience with data storage technologies (data lakes, warehouses, object stores like S3)
- Experience running and maintaining production-grade infrastructure (Kubernetes, Terraform)
- Bonus: Experience supporting AI systems, particularly embodied AI such as self-driving
