Aircraft Detection from Drone Imagery with YOLO

Train a real-time object detector to spot aircraft from above

Undergraduate Computer Vision · 4–6 weeks
Last reviewed: March 2026

Overview

Detecting aircraft from aerial imagery is a critical capability for defense, airport operations, and satellite intelligence. In this project, you'll train YOLO (You Only Look Once), one of the most widely used real-time object detection architectures, to identify aircraft in overhead imagery.

You'll work with real aerial datasets, learn to annotate images with bounding boxes, train a YOLO model, and evaluate its performance using standard metrics (mAP, precision, recall). The model you build will be fast enough for real-time detection on video feeds.

Object detection is among the most sought-after computer vision skills in both aerospace and tech. This project gives you a portfolio-ready demonstration of the full CV pipeline: data preparation, training, evaluation, and deployment.

What You'll Learn

  • Understand object detection vs. classification and the YOLO architecture
  • Prepare and annotate an aerial image dataset for training
  • Train a YOLOv8 model using the Ultralytics framework
  • Evaluate model performance with mAP, precision-recall curves, and confusion matrices
  • Run real-time inference on images and video using OpenCV
  • Apply transfer learning to adapt a pre-trained model to aerospace data

Step-by-Step Guide

Step 1: Set Up the Environment

Install Python 3.10+, then install the Ultralytics package:

pip install ultralytics opencv-python

If you have a GPU, install the CUDA version of PyTorch first for much faster training. A Google Colab notebook with free GPU is a good alternative if you don't have a local GPU.

Step 2: Obtain and Prepare the Dataset

Use the HRPlanes dataset (high-resolution aerial images of airports) or the DOTA dataset (aerial object detection benchmark). Alternatively, download satellite imagery of airports from Google Earth and annotate it yourself using Roboflow or LabelImg.
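
Whichever annotation tool you use, YOLO expects one `.txt` label file per image, with one line per box in the form `class x_center y_center width height`, all values normalized to [0, 1]. As a sketch, converting a pixel-space corner-format box to a YOLO label line (function name and box format are illustrative, not from any particular tool):

```python
def to_yolo_line(box, img_w, img_h, cls=0):
    """Convert a pixel-space (x1, y1, x2, y2) box to a YOLO label line.

    YOLO format: "<class> <x_center> <y_center> <width> <height>",
    with all coordinates normalized by image width/height.
    """
    x1, y1, x2, y2 = box
    xc = (x1 + x2) / 2 / img_w
    yc = (y1 + y2) / 2 / img_h
    w = (x2 - x1) / img_w
    h = (y2 - y1) / img_h
    return f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# A 100x50 box centered at (200, 100) in a 1000x500 image:
print(to_yolo_line((150, 75, 250, 125), 1000, 500))
# → "0 0.200000 0.200000 0.100000 0.100000"
```

Tools like Roboflow can export this format directly; writing the converter yourself is mainly useful if you annotate in another tool or scrape your own imagery.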

Aim for at least 500 annotated images for training. Split into 70% train, 20% validation, 10% test. Ensure diversity in aircraft types, orientations, and lighting conditions.
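
The split can be done with the standard library alone. A minimal sketch (file names and directory layout are placeholders for your own dataset):

```python
import random

def split_dataset(image_paths, seed=42):
    """Shuffle deterministically and split 70% train / 20% val / 10% test."""
    paths = sorted(image_paths)          # stable order before shuffling
    random.Random(seed).shuffle(paths)   # fixed seed → reproducible split
    n = len(paths)
    n_train = int(0.7 * n)
    n_val = int(0.2 * n)
    return {
        "train": paths[:n_train],
        "val": paths[n_train:n_train + n_val],
        "test": paths[n_train + n_val:],
    }

# Example usage (hypothetical paths):
# splits = split_dataset(str(p) for p in Path("images").glob("*.jpg"))
# then copy each file (and its matching label .txt) into
# dataset/<split>/images and dataset/<split>/labels
```

Keep each image's label file in the same split as the image, and fix the random seed so the split is reproducible across runs.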

Step 3: Configure and Train YOLOv8

Create a YAML configuration file specifying your dataset paths, class names, and training parameters. Start with a pre-trained YOLOv8n (nano) model for fast iteration:

yolo detect train data=aircraft.yaml model=yolov8n.pt epochs=100 imgsz=640
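
The `aircraft.yaml` referenced in the command might look like the following; the paths and class name are placeholders for your own dataset layout:

```yaml
# aircraft.yaml — dataset config for Ultralytics YOLOv8
path: datasets/aircraft   # dataset root directory
train: images/train       # relative to path
val: images/val
test: images/test

names:
  0: aircraft             # single class for this project
```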

Monitor training metrics: box loss should decrease steadily while validation mAP climbs. If the model overfits (training loss keeps falling but validation mAP plateaus or drops), increase data augmentation or add more training data.

Step 4: Evaluate Model Performance

Run evaluation on the test set. YOLO outputs mAP@0.5 (mean average precision at 50% IoU) and mAP@0.5:0.95. For aircraft detection from aerial imagery, mAP@0.5 above 0.85 is a good target.
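
The "0.5 IoU" threshold means a detection counts as correct only if its intersection-over-union with a ground-truth box is at least 0.5. To make that concrete, here is a plain-Python IoU for corner-format boxes (a sketch for intuition; the YOLO tooling computes this for you):

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes don't overlap)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1)
             - inter)
    return inter / union if union else 0.0

# Two 10x10 boxes offset by half their width: IoU = 50/150 ≈ 0.33,
# so this detection would NOT count as a hit at the 0.5 threshold.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```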

Analyze failures: Are small aircraft missed? Are shadows causing false positives? Use the confusion matrix to understand where the model struggles.

Step 5: Run Real-Time Inference

Write a Python script using OpenCV to load images or video and run your trained model in real-time. Draw bounding boxes and confidence scores on each detection.

Measure inference speed (FPS). YOLOv8n can reach roughly 30 FPS on a modern CPU and well over 100 FPS on a GPU, depending on hardware and image size. That is fast enough for real-time applications like drone video feeds.
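
A simple way to measure FPS is to time a batch of inference calls with `time.perf_counter`, discarding a few warm-up frames so one-time setup cost doesn't skew the number. The sketch below uses a dummy `detect` stand-in; in your script you would call your trained YOLO model on each frame instead:

```python
import time

def measure_fps(infer, frames, warmup=5):
    """Average frames-per-second of `infer` over `frames`, after a warm-up.

    `infer` is any callable taking one frame; warm-up calls are excluded
    from the timing so initialization costs don't distort the result.
    """
    for frame in frames[:warmup]:
        infer(frame)
    start = time.perf_counter()
    for frame in frames[warmup:]:
        infer(frame)
    elapsed = time.perf_counter() - start
    return len(frames[warmup:]) / elapsed

# Dummy stand-in for model inference (replace with your model call):
def detect(frame):
    time.sleep(0.001)  # pretend inference takes about 1 ms
    return []

fps = measure_fps(detect, list(range(105)))
print(f"{fps:.0f} FPS")
```

When you swap in the real model, feed it actual frames (e.g. from `cv2.VideoCapture`) so the measurement includes decoding and preprocessing, not just the forward pass.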

Step 6: Scale Up and Compare

Train a larger model (YOLOv8m or YOLOv8l) and compare accuracy vs. speed against the nano model. Document the trade-offs: larger models are more accurate but slower — which matters for edge deployment on drones.

Try adding more classes: helicopters, hangars, runways, vehicles. How does multi-class detection affect performance?

Go Further

Advance your computer vision work:

  • Deploy on edge hardware — export your model to ONNX or TensorRT and run it on a Jetson Nano or Raspberry Pi
  • Add tracking — implement object tracking (DeepSORT) to follow aircraft across video frames
  • Instance segmentation — train YOLOv8-seg to output pixel-level masks instead of bounding boxes
  • Satellite-specific models — fine-tune on very high resolution satellite imagery (0.3m GSD) for overhead detection