Count Aircraft on a Runway with YOLO

Use a real-time object detector to automatically count planes in satellite imagery.

Level: High School · Subject: Computer Vision · Duration: 1–2 weeks
Last reviewed: March 2026

Overview

Satellite imagery generates millions of images of Earth's surface every day, and airports are some of the most strategically important locations to monitor. Counting how many aircraft are parked at an airport—or identifying their types—is valuable for logistics planning, intelligence analysis, and airline operations research. Doing this manually would take a team of photo analysts days; a trained YOLO model does it in milliseconds per image.

YOLO (You Only Look Once) is a widely used real-time object detector found in autonomous vehicles, security cameras, and satellite analysis pipelines worldwide. Ultralytics YOLOv8 ships with models pretrained on COCO—a dataset that includes "airplane" among its 80 classes. This means you can detect aircraft in images right now, with zero training, simply by running a few lines of Python.

In this project you will download airport images from public sources, run YOLOv8 inference on them, and build a simple pipeline that counts detected aircraft and outputs an annotated image. You will then explore the limits of the pretrained model and, as an extension, fine-tune it on a small aerospace-specific dataset to improve accuracy on overhead aerial imagery, where planes look very different from the ground-level photos COCO was trained on.

What You'll Learn

  • Explain how YOLO performs single-pass object detection and why it is faster than two-stage detectors.
  • Run YOLOv8 inference on images using the Ultralytics Python API.
  • Filter detection results by class label and confidence threshold.
  • Annotate images with bounding boxes and labels using Python.
  • Quantify detection performance by comparing model output to manually counted ground truth.

Step-by-Step Guide

Step 1: Install YOLOv8 and test with a sample image

Install Ultralytics YOLOv8 with pip install ultralytics. Run a quick test in Python: from ultralytics import YOLO; model = YOLO("yolov8n.pt")—the nano model downloads automatically the first time. Run results = model("https://ultralytics.com/images/bus.jpg") and call results[0].show() to see bounding boxes drawn on the image. You should see people and a bus detected with confidence scores. This confirms everything is working before you move to airport imagery.

Step 2: Acquire airport satellite images

Download 5–10 satellite images of airports from Google Maps (screenshot at high zoom), Copernicus Open Access Hub (free Sentinel-2 imagery—note that at 10 m resolution only the largest aircraft span more than a few pixels), or the DOTA (Dataset for Object deTection in Aerial images) benchmark. Look for airports with clearly visible parked aircraft—LAX, JFK, London Heathrow, and Dubai International are good choices due to high aircraft density. Save them as JPEG files in a folder named airport_images/. Vary the zoom level and lighting conditions to make your test more realistic.

Step 3: Run YOLO inference and filter for aircraft

Write a Python script that loops over all images in your folder, runs model(image_path), and filters results to keep only the "airplane" class (class ID 4 in COCO). Print the count of aircraft detected in each image. Experiment with the conf threshold (try 0.25 and 0.5) and observe how the number of detections changes—lower thresholds catch more planes but also produce more false positives on wing shapes and terminal roofs.
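A minimal sketch of this step. The counting logic lives in a plain helper (the name `count_airplanes` is my own) so it can be checked without a model; the YOLO loop runs only if the airport_images/ folder from step 2 exists:

```python
from pathlib import Path

# "airplane" is class index 4 in the COCO labels used by the pretrained weights
AIRPLANE_CLASS = 4

def count_airplanes(class_ids, confidences, conf_threshold=0.25):
    """Count detections that are airplanes at or above the threshold."""
    return sum(
        1
        for cls, conf in zip(class_ids, confidences)
        if cls == AIRPLANE_CLASS and conf >= conf_threshold
    )

if __name__ == "__main__":
    image_dir = Path("airport_images")  # folder name from step 2
    if image_dir.exists():
        from ultralytics import YOLO  # requires `pip install ultralytics`
        model = YOLO("yolov8n.pt")
        for image_path in sorted(image_dir.glob("*.jpg")):
            result = model(str(image_path), conf=0.25)[0]
            ids = [int(c) for c in result.boxes.cls]
            confs = [float(c) for c in result.boxes.conf]
            print(f"{image_path.name}: {count_airplanes(ids, confs)} aircraft")
```

Rerunning with `conf_threshold=0.5` on the same lists shows directly how many low-confidence detections the stricter cutoff discards.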

Step 4: Annotate images and save results

For each image, use OpenCV to draw the filtered bounding boxes in red, add a confidence score label in white text, and stamp a total aircraft count in the top-left corner. Save the annotated image to an output/ folder. Create a summary CSV with columns: filename, total_detected, mean_confidence. Load the CSV into pandas and compute which airport had the most detected aircraft and which image had the lowest mean confidence.
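One way to sketch this step. `annotate`, `summarize`, and `write_summary` are hypothetical helper names; the drawing code assumes OpenCV (`pip install opencv-python`) and uses its BGR color order, so red is `(0, 0, 255)`:

```python
import csv
from statistics import mean

def summarize(filename, confidences):
    """One summary row per image: filename, count, mean confidence."""
    return {
        "filename": filename,
        "total_detected": len(confidences),
        "mean_confidence": round(mean(confidences), 3) if confidences else 0.0,
    }

def write_summary(rows, path="summary.csv"):
    """Write the per-image rows to a CSV that pandas can load later."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["filename", "total_detected", "mean_confidence"]
        )
        writer.writeheader()
        writer.writerows(rows)

def annotate(image, boxes, confidences):
    """Draw red boxes, white confidence labels, and a count stamp.

    `image` is a BGR numpy array (as from cv2.imread); `boxes` is a
    list of (x1, y1, x2, y2) integer corners. Requires OpenCV.
    """
    import cv2  # imported here so the CSV helpers work without it
    for (x1, y1, x2, y2), conf in zip(boxes, confidences):
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 0, 255), 2)  # red in BGR
        cv2.putText(image, f"{conf:.2f}", (x1, max(y1 - 5, 10)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
    cv2.putText(image, f"aircraft: {len(boxes)}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 255), 2)
    return image
```

With the CSV in hand, `pandas.read_csv("summary.csv")` followed by `idxmax`/`idxmin` on the two numeric columns answers the "most aircraft" and "lowest confidence" questions.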

Step 5: Compare detections to manual ground truth

Manually count every clearly visible aircraft in three of your images by marking them with a colored dot. Record your manual counts. Compare to YOLO's counts: compute recall (fraction of real planes YOLO found) and precision (fraction of YOLO's detections that are real planes). Identify patterns—does YOLO miss aircraft that are partially under a jetway? Does it false-positive on ground vehicles? This analysis builds critical understanding of where pretrained models succeed and fail on domain-shifted imagery.
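The two metrics reduce to simple ratios once you have, per image, the model's detection count, your manual count, and the number of detections you verified as real aircraft (the counts below are hypothetical):

```python
def precision_recall(n_detected, n_ground_truth, n_true_positives):
    """Precision = TP / detections; recall = TP / ground-truth aircraft."""
    precision = n_true_positives / n_detected if n_detected else 0.0
    recall = n_true_positives / n_ground_truth if n_ground_truth else 0.0
    return precision, recall

# Hypothetical image: YOLO drew 10 boxes, 9 of them were real aircraft,
# and a manual count found 12 aircraft in total.
p, r = precision_recall(n_detected=10, n_ground_truth=12, n_true_positives=9)
print(f"precision={p:.2f}  recall={r:.2f}")  # precision=0.90  recall=0.75
```

Low recall with high precision suggests missed aircraft (e.g. partially occluded ones); the reverse suggests false positives on plane-like shapes.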

Step 6: Summarize findings and explore model limitations

Write a structured one-page report covering: what YOLO is and how it works, your detection pipeline, accuracy results from the ground-truth comparison, and three specific cases where the model failed with a hypothesis for why. Include two annotated image examples—one success and one failure. Discuss what additional training data or model architecture changes might improve performance on overhead aerial imagery compared to COCO ground-level photos.

Go Further

  • Fine-tune YOLOv8 on the DOTA or xView aircraft detection dataset using the Ultralytics training CLI and compare fine-tuned vs. pretrained accuracy on your test images.
  • Run inference on a full airport time-lapse sequence and plot aircraft count versus time to identify peak activity periods.
  • Use the detected bounding boxes to estimate aircraft size (wingspan in pixels × ground sampling distance) and classify aircraft into categories by size.
  • Deploy your model as a Gradio web app where a user can upload any airport image and receive an annotated output with aircraft count.
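As a worked sketch of the size-estimation idea above (the category thresholds are illustrative, loosely inspired by ICAO wingspan code letters, not an official standard):

```python
def wingspan_meters(box_width_px, gsd_m_per_px):
    """Wingspan estimate: bounding-box width in pixels times the
    ground sampling distance (meters of ground per pixel)."""
    return box_width_px * gsd_m_per_px

def size_category(span_m):
    """Rough size bins -- illustrative thresholds only."""
    if span_m < 24:
        return "small"    # regional jets, business aviation
    if span_m < 52:
        return "medium"   # narrow-body airliners
    return "large"        # wide-body airliners

# A box 72 px wide in imagery with 0.5 m/px ground sampling distance:
span = wingspan_meters(72, 0.5)
print(span, size_category(span))  # 36.0 medium
```

Note the estimate is only as good as the GSD: a Google Maps screenshot has an unknown scale unless you calibrate against a feature of known size, such as runway width.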