Computer Vision Engineer — Autonomy (Remote)

We build retrofit autonomy modules for existing UAV fleets operating in GPS-denied environments. This is real-world deployment, not research or simulation. Constrained hardware, degraded comms, systems that have to work first time. Kessari is moving from TRL 6 to TRL 8 with active partners. We need someone to own perception through to targeting, end-to-end.

What you’ll do
- Build and deploy real-time detection and tracking pipelines on edge hardware
- Take models from training through optimisation into field deployment
- Work on low-latency systems running on constrained GPUs (Jetson class)
- Handle messy real-world data (aerial, oblique, thermal, small objects)
- Ship systems that run at 30 FPS in production
- Work in GPS-denied conditions where localisation and perception must hold up under uncertainty

You’re a fit if you can
- Train and deploy object detection models (YOLO, RT-DETR or similar)
- Optimise models for real-time edge inference (TensorRT, ONNX or similar)
- Implement multi-object tracking (ByteTrack, BoT-SORT or similar)
- Own the data pipeline (collection, annotation, validation)
- Work across Python and some C++, Linux, Docker

Strong bonus
- Experience in GPS-denied navigation or perception systems
- Drone or aerial imagery experience
- Thermal or infrared perception
- Visual SLAM or odometry integration
- CUDA or GPU optimisation
- Synthetic data or simulation

What matters
This is not a research role. You need to ship fast, handle ambiguity, and make systems work in the field.

Comp
- €90k – €140k depending on level
- Equity and performance upside tied to deployments

DM directly to apply
