Vision-Based Autonomous Landing with ArduPilot

Build a drone that finds and lands on a moving platform using only a camera

Advanced Autonomous Systems · 6–10 weeks
Last reviewed: March 2026

Overview

Precision autonomous landing is one of the defining challenges of advanced drone operations. GPS provides meter-level accuracy at best — nowhere near enough to land on a ship deck, a moving vehicle, or a designated pad in a cluttered environment. Vision-based landing solves this by using a downward-facing camera to detect a fiducial marker and compute precise relative position, enabling centimeter-accurate touchdown.

In this project, you'll build a complete vision-guided landing system: an OpenCV pipeline that detects and tracks an ArUco marker target, estimates the 3D pose of the drone relative to the landing pad, and sends precision landing corrections to ArduPilot via the MAVLink LANDING_TARGET message. ArduPilot's built-in Precision Landing mode handles the control law once it receives your vision-derived corrections.

This architecture — vision frontend + MAVLink backend + flight controller — is the same pattern used in ship-based drone recovery systems and military VTOL landing platforms, and it echoes the sensor-fusion approach behind precision rocket recovery such as SpaceX's Falcon 9 boosters. Mastering it opens doors to roles in autonomous vehicle development, defense, and UAV service companies.

What You'll Learn

  • Design and implement an ArUco marker detection and pose estimation pipeline with OpenCV
  • Understand camera calibration and the transformation from image pixels to 3D body-frame coordinates
  • Communicate with ArduPilot via MAVLink using the pymavlink library
  • Configure and tune ArduPilot Precision Landing mode for stable final approach
  • Test and validate the system in SITL (Software-In-The-Loop) simulation before hardware deployment

Step-by-Step Guide

1. Set Up ArduPilot SITL and MAVLink

Install ArduPilot SITL on Linux (Ubuntu 20.04 recommended) and run a simulated copter. Connect to the simulator using pymavlink or DroneKit-Python. Practice sending basic commands: arm, takeoff, loiter, land. Understand the MAVLink message structure and heartbeat protocol.

Install the Gazebo plugin for ArduPilot to get a visual simulation environment with a downward-facing camera. This gives you a camera feed to develop your vision pipeline against without risking hardware.
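The arm-and-takeoff sequence from this step can be sketched as below. This is a minimal sketch, not a full script: it assumes `master` is a pymavlink `mavutil` connection, but only relies on its `.mav.command_long_send()` method, so the logic can be exercised without a simulator. The numeric command IDs are the values from the MAVLink common message set.

```python
# Sketch of the arm-and-takeoff command sequence. `master` is assumed to be
# a pymavlink connection (mavutil.mavlink_connection(...)); only its
# .mav.command_long_send() method is used here.
MAV_CMD_COMPONENT_ARM_DISARM = 400  # MAVLink common message set
MAV_CMD_NAV_TAKEOFF = 22            # MAVLink common message set

def arm_and_takeoff(master, target_system, target_component, altitude_m):
    """Arm the vehicle, then command a takeoff to altitude_m (GUIDED mode assumed)."""
    # COMMAND_LONG: param1 = 1 arms the vehicle; params 2-7 unused here.
    master.mav.command_long_send(
        target_system, target_component,
        MAV_CMD_COMPONENT_ARM_DISARM, 0,   # confirmation = 0
        1, 0, 0, 0, 0, 0, 0)               # param1 = 1 -> arm
    # NAV_TAKEOFF: param7 carries the target altitude in metres.
    master.mav.command_long_send(
        target_system, target_component,
        MAV_CMD_NAV_TAKEOFF, 0,
        0, 0, 0, 0, 0, 0, altitude_m)
```

In a real session you would first create the connection and wait for a heartbeat, e.g. `master = mavutil.mavlink_connection('udp:127.0.0.1:14550')` followed by `master.wait_heartbeat()`, before calling `arm_and_takeoff()`.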

2. Calibrate the Camera

Camera calibration is the foundation of accurate pose estimation. Print a checkerboard calibration pattern and capture 20–30 images from different angles and distances using your actual camera. Use OpenCV's cv2.calibrateCamera() to extract the intrinsic matrix (focal length, principal point) and distortion coefficients.

Validate calibration quality by checking reprojection error — target below 0.5 pixels RMS. Store the calibration in a YAML file that your landing script will load at startup. Never skip this step; even small calibration errors cause significant position errors at altitude.
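The claim that small calibration errors matter at altitude can be made concrete with back-of-envelope math: a pixel-level bias in the principal point or detection maps to an angular error of roughly atan(e/f), which subtends a growing lateral distance as altitude increases. The focal length below is an illustrative value, not one from a real calibration.

```python
import math

def lateral_error_m(pixel_error_px, focal_length_px, altitude_m):
    """Lateral ground error caused by a pixel-level angular error.

    A pixel bias e at focal length f (both in pixels) corresponds to an
    angular error of atan(e / f); at altitude h that angle subtends
    h * tan(angle) of lateral displacement on the ground.
    """
    angle = math.atan(pixel_error_px / focal_length_px)
    return altitude_m * math.tan(angle)

# Illustration: a 2 px bias with an assumed 600 px focal length at 20 m
# altitude already gives several centimetres of position error.
err = lateral_error_m(2.0, 600.0, 20.0)
```

Because the error grows linearly with altitude, a calibration that looks fine in bench tests can still dominate your error budget at the start of a descent from 20–50 m.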

3. Build the ArUco Detection Pipeline

Print a large ArUco marker (dictionary: DICT_4X4_50, ID: 0, size: 50×50 cm) on your landing pad. Write an OpenCV pipeline that: captures frames from the camera, detects the ArUco marker using cv2.aruco.detectMarkers() (in OpenCV 4.7+ this moved to the cv2.aruco.ArucoDetector class), estimates 6-DOF pose using cv2.aruco.estimatePoseSingleMarkers() (deprecated in 4.7+ in favor of cv2.solvePnP on the detected corners), and converts the pose to the drone body frame.

Test robustness: what happens when the marker is partially occluded? At steep approach angles? With motion blur? Add filtering (exponential moving average on position estimates) to smooth out frame-to-frame jitter.
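The exponential moving average mentioned above can be sketched as a small stateful filter. This is one reasonable design, not the only one; the dropout behavior (holding the last state when the marker is lost) is a choice you may want to revisit once you add timeouts.

```python
class EmaFilter:
    """Exponential moving average over 3-D position estimates.

    alpha near 1.0 trusts the newest detection; alpha near 0.0 smooths
    heavily (and adds lag). A marker dropout (None) leaves the state
    unchanged rather than decaying it.
    """
    def __init__(self, alpha=0.4):
        self.alpha = alpha
        self.state = None  # last smoothed (x, y, z), metres

    def update(self, measurement):
        if measurement is None:       # marker not detected this frame
            return self.state
        if self.state is None:        # first detection: initialise directly
            self.state = tuple(measurement)
        else:
            a = self.alpha
            self.state = tuple(a * m + (1 - a) * s
                               for m, s in zip(measurement, self.state))
        return self.state
```

Tune alpha against real footage: too much smoothing makes the drone chase a stale target position during the final metre of descent, which matters most for the moving-platform extension.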

4. Send LANDING_TARGET Messages

Connect your vision pipeline to ArduPilot via MAVLink LANDING_TARGET messages. This message tells ArduPilot: "the landing target is at angle_x, angle_y from the camera boresight." ArduPilot's Precision Landing controller uses these angles to compute lateral velocity corrections.

Set the message rate to match your camera frame rate (15–30 Hz). Handle the case where the marker is not detected: send no message at all (don't send zeros, which ArduPilot interprets as "target at center"). Configure ArduPilot parameters: PLND_ENABLED=1, PLND_TYPE=1 (companion computer via MAVLink), PLND_EST_TYPE=0 (raw sensor).
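The conversion from a camera-frame target position to the angular offsets LANDING_TARGET carries can be sketched as follows. This assumes `master` is a pymavlink `mavutil` connection (only its `.mav.landing_target_send()` method is used), and sends just the required fields of the message.

```python
import math

MAV_FRAME_LOCAL_NED = 1  # frame enum value from the MAVLink common message set

def send_landing_target(master, time_usec, x_m, y_m, z_m):
    """Send one LANDING_TARGET update from a camera-frame target position.

    (x_m, y_m) is the marker offset across the image axes and z_m the
    distance along the camera boresight, all in metres. Call this only on
    frames with a detection -- never send zeros for a missed frame.
    """
    angle_x = math.atan2(x_m, z_m)  # angular offset about camera X, radians
    angle_y = math.atan2(y_m, z_m)  # angular offset about camera Y, radians
    distance = math.sqrt(x_m * x_m + y_m * y_m + z_m * z_m)
    master.mav.landing_target_send(
        time_usec,            # capture timestamp, microseconds
        0,                    # target_num: single target
        MAV_FRAME_LOCAL_NED,  # frame field required by the message
        angle_x, angle_y,
        distance,             # straight-line distance to target, metres
        0.0, 0.0)             # size_x / size_y: unknown, left zero
```

Use the frame's capture timestamp for `time_usec` rather than the send time, so ArduPilot's estimator accounts for pipeline latency correctly.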

5. Test in SITL

In the Gazebo simulation, place a virtual ArUco marker on the ground. Command the simulated copter to take off, fly to 10m above the marker, then initiate a precision landing. Watch the drone's lateral corrections as it descends — it should track the marker and converge to it.

Test edge cases: marker at 2m lateral offset at start of landing, marker moving at 0.5 m/s, and sudden marker occlusion. Document the landing accuracy (final position error at touchdown) as a function of initial offset and target motion speed.

6. Hardware Deployment and Flight Testing

Transfer the system to a real quadcopter running ArduPilot (Pixhawk or Cube). Mount a Raspberry Pi or Jetson Nano for onboard vision processing connected to the flight controller via serial UART. Conduct initial tests at low altitude (1–2m) over a stationary marker before attempting full autonomous landings.

Log flight data from ArduPilot's dataflash logs and correlate with your vision pipeline logs to diagnose any synchronization issues. Measure landing accuracy across 20+ flights and calculate mean and 95th percentile landing error.
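The mean and 95th-percentile summary can be computed with a few lines of plain Python; the nearest-rank percentile used here is one common convention and is conservative for small samples like a 20-flight campaign.

```python
import math

def landing_error_stats(errors_m):
    """Mean and 95th-percentile landing error from a series of test flights.

    errors_m: touchdown position error per flight, in metres. Uses the
    nearest-rank definition of the percentile, which never interpolates
    between samples.
    """
    n = len(errors_m)
    ordered = sorted(errors_m)
    mean = sum(ordered) / n
    rank = max(0, math.ceil(0.95 * n) - 1)   # nearest-rank index (0-based)
    return mean, ordered[rank]
```

Report both numbers together: a good mean with a poor 95th percentile usually points to intermittent failures (dropouts, latency spikes) rather than a miscalibrated pipeline.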

Go Further

Extend this project toward advanced autonomy:

  • Moving platform landing — mount the landing pad on a wheeled robot and command it to move at 1–2 m/s while the drone attempts to land; requires a predictive tracker to compensate for latency
  • Deep learning detection — replace the ArUco marker with a trained YOLO model that detects natural landing sites (flat surfaces, H-markings) without requiring a prepared pad
  • Multi-marker fusion — use a pattern of multiple ArUco markers at different scales to maintain tracking across the full descent from 50m to touchdown
  • Night operations — add infrared LED illumination to the landing pad and switch to an IR camera for low-light and night landing capability
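The simplest form of the predictive tracker the first bullet calls for is a constant-velocity look-ahead: estimate the platform's velocity (e.g. by differencing filtered marker positions) and aim at where it will be after the camera-to-command latency. The sketch below shows only that prediction step; the velocity estimate and latency value are inputs you would supply.

```python
def predict_target(position, velocity, latency_s):
    """Constant-velocity prediction to compensate pipeline latency.

    position / velocity: target state in metres and m/s in any consistent
    frame; latency_s: the camera-to-command delay to look ahead by.
    Returns the predicted position after latency_s seconds.
    """
    return tuple(p + v * latency_s for p, v in zip(position, velocity))
```

For a platform moving at 1–2 m/s, even 100–200 ms of pipeline latency shifts the target by tens of centimetres, so this step alone recovers much of the accuracy lost to a moving pad; a Kalman filter is the natural upgrade once the constant-velocity assumption breaks down.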