Perception/Autonomy Engineer (Degraded & Denied Environments)
About Kessari:
We are building autonomy systems that operate in GNSS-denied and degraded environments. This is not a research role: the work is real-world deployment on physical platforms.
We are specifically looking for engineers who think in terms of uncertainty, not clean inputs.
How this engineer must already think:
- Vision is probabilistic, not deterministic
- Tracking matters more than detection: the hard part is maintaining belief over time (see the sketch after this list)
- Degraded inputs are the norm: low light, motion blur, packet loss, partial frames
- Systems must keep operating when vision fails: fallback logic, sensor fusion
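To make "maintaining belief over time" concrete, here is a minimal sketch, not part of the role itself and with every name invented for illustration: a 1D constant-velocity Kalman filter that keeps predicting through dropped detections instead of resetting the track.

```cpp
// Illustrative sketch only: a 1D constant-velocity Kalman filter that
// coasts through missing detections. Process noise is a simplified
// q*dt on the diagonal; all names and tunings are hypothetical.
#include <cstdio>
#include <optional>

struct KalmanCV1D {
    double x = 0.0, v = 0.0;                    // state: position, velocity
    double P[2][2] = {{1.0, 0.0}, {0.0, 1.0}};  // state covariance
    double q, r;                                // process / measurement noise

    KalmanCV1D(double q_, double r_) : q(q_), r(r_) {}

    // Predict: x' = x + v*dt; covariance grows, so uncertainty is honest
    // during dropouts.
    void predict(double dt) {
        x += v * dt;
        double p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1];
        double p01 = P[0][1] + dt * P[1][1];
        double p10 = P[1][0] + dt * P[1][1];
        double p11 = P[1][1];
        P[0][0] = p00 + q * dt; P[0][1] = p01;
        P[1][0] = p10;          P[1][1] = p11 + q * dt;
    }

    // Update with a position measurement z (H = [1 0]).
    void update(double z) {
        double s  = P[0][0] + r;   // innovation covariance
        double k0 = P[0][0] / s;   // Kalman gain
        double k1 = P[1][0] / s;
        double y  = z - x;         // innovation
        x += k0 * y;
        v += k1 * y;
        double p00 = (1.0 - k0) * P[0][0];
        double p01 = (1.0 - k0) * P[0][1];
        double p10 = P[1][0] - k1 * P[0][0];
        double p11 = P[1][1] - k1 * P[0][1];
        P[0][0] = p00; P[0][1] = p01; P[1][0] = p10; P[1][1] = p11;
    }
};

int main() {
    KalmanCV1D kf(0.1, 0.5);
    // Detections with gaps ({} = frame dropped or detector missed).
    std::optional<double> z[] = {1.0, 2.1, {}, {}, 5.0, 6.2};
    for (auto m : z) {
        kf.predict(1.0);           // predict every frame...
        if (m) kf.update(*m);      // ...update only when a detection arrives
        std::printf("pos=%.2f vel=%.2f var=%.2f\n", kf.x, kf.v, kf.P[0][0]);
    }
}
```

The point is the control flow: predict runs every frame, update only when a detection exists, and the covariance grows while the measurement is missing, which is exactly the "belief over time" posture described above.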
Hard requirements (non-negotiable):
Strong experience with multi-object tracking:
- Kalman filters, particle filters, JPDA, SORT/DeepSORT, or similar
Sensor fusion experience:
- At minimum camera + IMU (a toy fusion sketch follows this requirements list)
- Bonus: experience with GNSS-denied navigation
Real-time systems experience:
- Edge compute, latency constraints, performance tradeoffs
Has deployed on real physical systems:
- UAVs, robotics, automotive, or defence systems
Strong programming skills:
- C++ required; Python acceptable alongside
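As a toy illustration of camera + IMU fusion under dropout, the sketch below uses a complementary filter on heading: the gyro is integrated every step, and a camera-derived heading, when available, pulls the estimate back toward an absolute reference. This is an assumed simplification for illustration (real pipelines typically use an EKF or a factor graph), and every identifier is hypothetical.

```cpp
// Illustrative only, not any particular pipeline: fuse a high-rate,
// drifting gyro with a low-rate, jittery, sometimes-absent camera
// heading via a complementary filter.
#include <cstdio>
#include <optional>

struct HeadingFusion {
    double heading = 0.0;  // fused estimate, radians
    double alpha;          // blend weight toward gyro (assumed tuning)

    explicit HeadingFusion(double alpha_) : alpha(alpha_) {}

    void step(double gyro_rate, double dt, std::optional<double> cam_heading) {
        double predicted = heading + gyro_rate * dt;  // dead-reckon on gyro
        if (cam_heading) {
            // Trust the gyro short-term, the camera long-term.
            heading = alpha * predicted + (1.0 - alpha) * *cam_heading;
        } else {
            heading = predicted;  // camera dropped out: coast on IMU alone
        }
    }
};

int main() {
    HeadingFusion f(0.98);
    std::optional<double> cam[] = {0.02, {}, {}, 0.10, {}, 0.21};
    for (auto c : cam) {
        f.step(0.05, 1.0, c);  // constant 0.05 rad/s turn, 1 s steps
        std::printf("heading=%.3f\n", f.heading);
    }
}
```

The design choice being illustrated is graceful degradation: the estimator never blocks on the camera, it merely loses its absolute correction and drifts at the gyro's rate until vision returns.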
Nice to have (not required):
- SLAM / VIO (visual-inertial odometry)
- Tracking under occlusion and re-identification
- Experience in degraded or contested environments (e.g. EW, poor comms)
- Familiarity with PX4, ArduPilot, ROS
What we do NOT want:
- Pure machine learning engineers with no deployed-systems experience
- Candidates focused only on object detection (YOLO-style pipelines)
- Academic researchers without real-world deployment experience
What this engineer will be doing:
- Building and improving tracking systems that maintain target identity over time (see the association sketch at the end of this posting)
- Designing sensor fusion pipelines robust to degraded inputs
- Working on real-time perception systems running on edge hardware
- Ensuring system reliability when inputs are unreliable or partially missing
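To show the flavour of "maintaining target identity", here is a hypothetical sketch of a SORT-style data-association step: new detections are matched to predicted track boxes by IoU. Real SORT solves the assignment with the Hungarian algorithm; this sketch substitutes greedy matching as a deliberate simplification, and every name in it is invented.

```cpp
// Illustrative sketch of one data-association step: greedy IoU matching
// between predicted track boxes and new detections. Greedy matching
// stands in for Hungarian assignment purely for brevity.
#include <algorithm>
#include <cstdio>
#include <tuple>
#include <utility>
#include <vector>

struct Box { double x1, y1, x2, y2; };

// Intersection-over-union of two axis-aligned boxes.
double iou(const Box& a, const Box& b) {
    double iw = std::max(0.0, std::min(a.x2, b.x2) - std::max(a.x1, b.x1));
    double ih = std::max(0.0, std::min(a.y2, b.y2) - std::max(a.y1, b.y1));
    double inter = iw * ih;
    double uni = (a.x2 - a.x1) * (a.y2 - a.y1)
               + (b.x2 - b.x1) * (b.y2 - b.y1) - inter;
    return uni > 0.0 ? inter / uni : 0.0;
}

// Returns (track index, detection index) pairs above an IoU gate.
std::vector<std::pair<int, int>> associate(const std::vector<Box>& tracks,
                                           const std::vector<Box>& dets,
                                           double min_iou = 0.3) {
    // Score every track/detection pair, then take pairs greedily by IoU.
    std::vector<std::tuple<double, int, int>> scored;
    for (int t = 0; t < (int)tracks.size(); ++t)
        for (int d = 0; d < (int)dets.size(); ++d) {
            double s = iou(tracks[t], dets[d]);
            if (s >= min_iou) scored.emplace_back(s, t, d);
        }
    std::sort(scored.rbegin(), scored.rend());  // highest IoU first

    std::vector<std::pair<int, int>> matches;
    std::vector<bool> tUsed(tracks.size(), false), dUsed(dets.size(), false);
    for (auto& [s, t, d] : scored) {
        if (tUsed[t] || dUsed[d]) continue;    // each side matched once
        tUsed[t] = dUsed[d] = true;
        matches.push_back({t, d});
    }
    return matches;
}

int main() {
    std::vector<Box> tracks = {{0, 0, 10, 10}, {20, 20, 30, 30}};
    std::vector<Box> dets   = {{21, 19, 31, 29}, {1, 1, 11, 11}};
    for (auto [t, d] : associate(tracks, dets))
        std::printf("track %d <- detection %d\n", t, d);
}
```

Unmatched tracks would coast on their motion model (as in the Kalman sketch earlier) and unmatched detections would spawn tentative new tracks; both steps are omitted here.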