About Rivet
Rivet is an American company building integrated task systems — fusing hardened hardware with software, sensors, AI, and networking — for industrial workforces and defense personnel. We create capabilities that multiply the effectiveness of every individual and withstand the world’s toughest environments.
We serve the people who build, operate, maintain, and defend our way of life. From technicians and engineers to first responders and service members, they embody the hard work, ingenuity, and meritocratic values that drive Western prosperity. Yet too often they are forced to rely on outdated tools that fail under modern pressures. Rivet exists to change that.
At Rivet, you’ll join a mission-driven team that fuses disciplines to deliver decisive outcomes where they matter most. Whether shaping our technology, strengthening our partnerships, or building our culture, every role here contributes to equipping the front lines with the modern systems they deserve.
Who Thrives Here
- People with a deep disdain for bureaucracy, empire building, groupthink, dogma, corporate babble, and wasted time
- Teammates who want to work exclusively alongside others at the top of their field
- Experienced, no-nonsense professionals who are execution-focused and deliver high-quality solutions above all else
Role Description
We are building the perception stack that enables robust three- and six-degrees-of-freedom tracking, localization, and mapping in complex real-world environments. This role focuses on fusing data from cameras and inertial sensors to power SLAM (simultaneous localization and mapping) and visual-inertial odometry (VIO) pipelines. We need someone who can create algorithms and systems that run in real time, handle uncertainty gracefully, and provide the foundation for advanced spatial intelligence on edge devices.
Role Objectives
- Develop and optimize sensor fusion algorithms combining IMU and camera data for SLAM and VIO
- Implement state-of-the-art structure-from-motion techniques using different sensor modalities to generate consistent 3D reconstructions
- Build calibration and synchronization pipelines across multiple sensor types
- Evaluate algorithms against public benchmarks and real-world datasets
- Integrate outputs into higher-level autonomy or collaborative (shared) mapping systems
- Collaborate with hardware, robotics, and platform teams to ensure end-to-end performance
- Define metrics, testing frameworks, and deployment strategies for production-ready perception systems
Role Requirements
- BS with 5+ years of academic or industry experience (or MS with 2+ years) in inertial sensing, computer vision, robotics, or related fields, with shipped or published work
- Strong background in linear algebra, SLAM, VIO, and structure-from-motion
- Proficiency in C++ and Python, with experience in real-time optimization
- Familiarity with libraries such as OpenCV and Ceres
- Knowledge of sensor modeling, calibration, and noise characteristics
- Experience with real-time processing pipelines on embedded or edge hardware
- Familiarity with probabilistic estimation, filtering (e.g., EKF), and factor-graph optimization
- Preferred: a track record of applied research in robotics or AR/VR