
Autonomous Navigation System

Real-time YOLOv3 object detection fused with PID-controlled flight and a robust state machine — powering the intelligence behind every Crossm drone.

80 COCO Classes · 8 Mission States · 3-Axis PID Control · MAVLink Protocol

Mission State Machine

The drone operates as a finite state machine — click any state to explore it, or enable auto-cycle to watch a full mission unfold.

crossm_nav — mission_control.py

⚙️ STARTUP
Initializing all subsystems: loading YOLOv3 model weights and establishing the MAVLink connection to the flight controller.

[SYSTEM]  Loading yolov3-tiny.weights...
[YOLO]    Initialized — 80 COCO classes ready
[MAVLINK] Connecting to tcp:127.0.0.1:5760
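The finite-state-machine core described above can be sketched as a table of guarded transitions. Only STARTUP is confirmed by the demo; the other state names and the event names below are illustrative stand-ins, not the project's actual eight states:

```python
from enum import Enum, auto

class MissionState(Enum):
    STARTUP = auto()   # shown in the demo terminal
    TAKEOFF = auto()   # the remaining names are illustrative
    SEARCH = auto()
    TRACK = auto()
    LAND = auto()

class MissionStateMachine:
    """Minimal FSM: current state plus a (state, event) -> state table."""

    def __init__(self):
        self.state = MissionState.STARTUP
        self.transitions = {
            (MissionState.STARTUP, "systems_ready"): MissionState.TAKEOFF,
            (MissionState.TAKEOFF, "altitude_reached"): MissionState.SEARCH,
            (MissionState.SEARCH, "target_detected"): MissionState.TRACK,
            (MissionState.TRACK, "target_lost"): MissionState.SEARCH,
            (MissionState.TRACK, "mission_timeout"): MissionState.LAND,
        }

    def handle(self, event: str) -> MissionState:
        # Unknown events leave the state unchanged, mirroring the
        # sensor-triggered transitions described in the CONTROL card.
        self.state = self.transitions.get((self.state, event), self.state)
        return self.state
```

The table-driven shape makes each transition auditable: every legal move through a mission appears on exactly one line.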

System Architecture

Four tightly-coupled subsystems working in concert to enable full autonomous flight.

👁️

PERCEPTION

YOLOv3-tiny runs real-time inference on every camera frame, outputting bounding boxes and confidence scores for 80 object classes.

yolo_detector.py camera_handler.py
🧭

NAVIGATION

A 3-axis PID controller converts pixel-space target error into NED-frame velocity commands with anti-windup protection.

path_planner.py obstacle_avoidance.py
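A minimal sketch of the controller described above, with one PID instance per axis and a clamped integrator for anti-windup. The gains, limits, and fixed dt are illustrative values, not the project's tuning, and the helper names are ours:

```python
class AxisPID:
    """Single-axis PID with a clamped integrator (anti-windup)."""

    def __init__(self, kp, ki, kd, integral_limit=1.0, output_limit=2.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral_limit = integral_limit
        self.output_limit = output_limit
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        # Anti-windup: keep the integral term inside a fixed band so a
        # long-saturated axis cannot accumulate a huge correction.
        self.integral = max(-self.integral_limit,
                            min(self.integral_limit, self.integral))
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Clamp the output to a safe velocity command range.
        return max(-self.output_limit, min(self.output_limit, out))


def pixel_error_to_velocity(cx, cy, frame_w, frame_h, pid_lat, pid_vert, dt=0.05):
    """Map the target's pixel offset from frame centre to lateral/vertical
    velocity commands (normalised error in -1..1 per axis)."""
    err_x = (cx - frame_w / 2) / (frame_w / 2)
    err_y = (cy - frame_h / 2) / (frame_h / 2)
    return pid_lat.update(err_x, dt), pid_vert.update(err_y, dt)
```

In practice the forward axis would be driven by a third PID on bounding-box size or range, completing the 3-axis scheme.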
🎮

CONTROL

The state machine orchestrates mission phases. Transitions are triggered by sensor data: altitude checks, detection presence, and timers.

mission_planner.py flight_controller.py
🚁

HARDWARE

DroneKit wraps MAVLink to send velocity commands, yaw rotations, and arm/takeoff sequences to real or SITL-simulated vehicles.

drone_interface.py settings.yaml
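Velocity commands over MAVLink conventionally use a SET_POSITION_TARGET_LOCAL_NED message built through DroneKit's message factory. A sketch of that standard pattern, with the helper name ours and the frame constant hardcoded so the snippet stands alone without pymavlink:

```python
MAV_FRAME_LOCAL_NED = 1  # value of pymavlink's mavutil.mavlink.MAV_FRAME_LOCAL_NED

def send_ned_velocity(vehicle, vn, ve, vd):
    """Send one NED-frame velocity setpoint to a dronekit.Vehicle
    (real hardware or a DroneKit-SITL instance)."""
    msg = vehicle.message_factory.set_position_target_local_ned_encode(
        0, 0, 0,                      # time_boot_ms, target system, target component
        MAV_FRAME_LOCAL_NED,          # velocities interpreted in the NED frame
        0b0000111111000111,           # type_mask: enable only the velocity fields
        0, 0, 0,                      # x, y, z position (ignored)
        vn, ve, vd,                   # velocity in m/s: north, east, down
        0, 0, 0,                      # acceleration (ignored)
        0, 0)                         # yaw, yaw rate (ignored)
    vehicle.send_mavlink(msg)
```

The autopilot applies a velocity setpoint only briefly, so the navigation loop re-sends it every control cycle rather than once.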

Live Object Detection

YOLOv3-tiny processes each camera frame in milliseconds. Non-max suppression filters redundant detections, leaving clean, confident bounding boxes.

PRIMARY TARGET: LOCKED
Class: person — the designated tracking target from settings.yaml. The PID controller centres this detection in the frame.
Confidence: 87%
NMS FILTERING: ACTIVE
Non-Maximum Suppression (IoU threshold 0.4) eliminates overlapping boxes, ensuring one clean detection per object in the scene.
Suppression Rate: 94%
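OpenCV ships this as cv2.dnn.NMSBoxes, which is most likely what the pipeline calls; the greedy algorithm it implements can be sketched as:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def non_max_suppression(boxes, scores, iou_threshold=0.4):
    """Greedy NMS: repeatedly keep the highest-scoring box and discard
    every remaining box that overlaps it beyond the IoU threshold."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order
                 if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep  # indices of the surviving boxes
```

A threshold of 0.4 means two boxes sharing more than 40% of their combined area are treated as duplicates of one object.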
INFERENCE ENGINE: OpenCV DNN
The model runs via cv2.dnn — no GPU required. Blob pre-processing at 416×416, then a forward pass through two YOLO output layers.
Threshold: 50%
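A sketch of the decoding step after the forward pass, assuming the standard YOLO output row layout [cx, cy, w, h, objectness, 80 class scores] with coordinates normalised to the frame; the function and variable names are ours:

```python
import numpy as np

CONF_THRESHOLD = 0.5  # the 50% threshold shown above

def parse_yolo_outputs(outputs, frame_w, frame_h, conf_threshold=CONF_THRESHOLD):
    """Decode raw YOLO output rows into pixel-space boxes, keeping only
    detections whose best class score clears the confidence threshold."""
    boxes, scores, class_ids = [], [], []
    for output in outputs:                 # one array per YOLO output layer
        for row in output:
            class_scores = np.asarray(row[5:])
            class_id = int(np.argmax(class_scores))
            confidence = float(class_scores[class_id])
            if confidence >= conf_threshold:
                # Scale normalised centre/size up to pixels, then convert
                # to a top-left-corner (x, y, w, h) box.
                cx, cy = row[0] * frame_w, row[1] * frame_h
                bw, bh = row[2] * frame_w, row[3] * frame_h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                scores.append(confidence)
                class_ids.append(class_id)
    return boxes, scores, class_ids
```

In the real pipeline, `outputs` would come from `net.forward(net.getUnconnectedOutLayersNames())` after `cv2.dnn.blobFromImage(frame, 1/255, (416, 416), swapRB=True, crop=False)` and `net.setInput(blob)`, with the surviving boxes then passed to NMS.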

Technology Stack

Open-source, battle-tested tools forming the backbone of the autonomous pipeline.

🐍
Python 3
Core runtime
🚁
DroneKit
MAVLink abstraction
👁️
YOLOv3-tiny
Object detection
📷
OpenCV
Vision pipeline
📡
MAVLink
Drone comms protocol
🔢
NumPy
Numerical compute
⚙️
PyYAML
Config management
🌐
DroneKit-SITL
Software-in-the-loop sim