Edge-Deployed Satellite Detection with YOLO

Train, quantize, and deploy a YOLO model for real-time satellite detection on Jetson Nano

Advanced · Computer Vision · 6–9 weeks
Last reviewed: March 2026

Overview

Detecting satellites in telescope images, ground-based sensor feeds, or on-orbit cameras is a key enabling technology for Space Domain Awareness (SDA) — tracking the growing population of active satellites and debris. Ground-based electro-optical telescopes generate terabytes of image data nightly; on-orbit cameras on LEO debris monitoring platforms must detect and track objects in real time with severe computational constraints.

In this project, you'll train a YOLOv8 model for satellite detection using publicly available datasets (satellite streaks in ground-based telescope images, or synthetic on-orbit imagery from datasets like SPARK or SPEED+). You'll then apply TensorRT quantization (INT8 precision) to reduce model size by roughly 4× and inference time by 3–8×, and deploy the optimized model on an NVIDIA Jetson Nano, a widely used edge compute platform for space payload applications.

Edge deployment of AI vision models is one of the fastest-growing areas in space technology. Planet Labs, Satellogic, and several defense contractors are deploying on-board AI for cloud detection, change detection, and target recognition directly on the satellite — reducing downlink bandwidth requirements by 10–100× by transmitting only relevant images.

What You'll Learn

  • Curate and annotate a satellite detection dataset using Label Studio or Roboflow
  • Train YOLOv8 for satellite/streak detection and evaluate with mAP, precision, recall metrics
  • Apply TensorRT INT8 quantization calibration and measure the accuracy-speed trade-off
  • Deploy the quantized model on Jetson Nano and benchmark real-time inference performance
  • Analyze detection failure modes under challenging conditions: star clutter, streaks at various angles, motion blur

Step-by-Step Guide

Step 1: Assemble the Training Dataset

Use the Satellite Streak Dataset or the SPARK 2022 dataset (synthetic on-orbit satellite imagery with bounding box annotations). Supplement with satellite images from Roboflow Universe — search for "satellite detection" or "space object detection" to find community-contributed annotated datasets.

Augment the dataset with: random rotations (satellites can be at any angle), brightness/contrast variation (simulating different lighting conditions), Gaussian noise (simulating detector noise), and synthetic streak generation (add artificial satellite streaks to background star field images). Target 3,000–5,000 labeled images for good detection performance.
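The synthetic streak augmentation can be sketched in a few lines of NumPy: render a line segment with a Gaussian cross-section onto a star-field frame and return the bounding box for the label file. The function name and parameters (`add_streak`, `width`, `peak`) are illustrative, not from any particular library:

```python
import numpy as np

def add_streak(img, x0, y0, x1, y1, width=1.5, peak=120.0):
    """Render a synthetic satellite streak (a bright line segment with a
    Gaussian cross-section) onto a grayscale star-field frame.
    Returns the augmented frame and its axis-aligned bounding box."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    # Distance from every pixel to the segment (x0,y0)-(x1,y1)
    dx, dy = x1 - x0, y1 - y0
    seg_len2 = float(dx * dx + dy * dy)  # assumes distinct endpoints
    t = np.clip(((xx - x0) * dx + (yy - y0) * dy) / seg_len2, 0.0, 1.0)
    dist = np.hypot(xx - (x0 + t * dx), yy - (y0 + t * dy))
    streak = peak * np.exp(-0.5 * (dist / width) ** 2)
    out = np.clip(img.astype(np.float64) + streak, 0, 255).astype(np.uint8)
    pad = int(3 * width)  # include the Gaussian wings in the box
    box = (max(min(x0, x1) - pad, 0), max(min(y0, y1) - pad, 0),
           min(max(x0, x1) + pad, w - 1), min(max(y0, y1) + pad, h - 1))
    return out, box
```

Randomizing the endpoints, width, and peak brightness per image gives streaks at arbitrary angles and SNRs, which directly targets the failure modes analyzed later.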

Step 2: Train YOLOv8

Install Ultralytics YOLOv8 (pip install ultralytics) and train on your dataset. Start with YOLOv8n (nano: the fastest and smallest variant), and move up to YOLOv8s (small) if accuracy falls short. Train on a GPU (a Google Colab T4 is sufficient): model.train(data='satellite.yaml', epochs=100, imgsz=640).
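The train call references a dataset YAML. A minimal sketch of what satellite.yaml might look like, assuming the standard Ultralytics layout (all paths and the class name are placeholders for your own dataset):

```yaml
# satellite.yaml: Ultralytics dataset config (paths are placeholders)
path: datasets/satellite   # dataset root
train: images/train        # relative to path
val: images/val
test: images/test
names:
  0: satellite
```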

Monitor mAP@0.5 and mAP@0.5:0.95 during training. For satellite detection, false negatives (missed detections) are more costly than false positives — adjust the confidence threshold post-training to maximize recall while keeping precision above 80%. Typical target: recall > 90% at precision > 80% on your test set.
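The threshold adjustment described above can be done with a simple sweep over validation detections. A minimal sketch, assuming you have a list of (confidence, is-true-positive) pairs and the ground-truth count; the function name and inputs are illustrative:

```python
def pick_threshold(detections, n_gt, min_precision=0.80):
    """Choose the lowest confidence threshold whose precision still meets
    min_precision, maximizing recall.  `detections` is a list of
    (confidence, is_true_positive) pairs from the validation set and
    n_gt is the total number of ground-truth satellites."""
    best = None
    for thr in sorted({c for c, _ in detections}):
        kept = [tp for c, tp in detections if c >= thr]
        if not kept:
            continue
        precision = sum(kept) / len(kept)
        recall = sum(kept) / n_gt
        if precision >= min_precision and (best is None or recall > best[1]):
            best = (thr, recall, precision)
    return best  # (threshold, recall, precision), or None if infeasible
```

If the function returns None, no threshold achieves the precision floor and the model itself needs improvement rather than the threshold.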

Step 3: Evaluate and Diagnose Failure Modes

Run inference on the test set and analyze failures systematically: which images cause missed detections? Which cause false positives? Use the YOLO confusion matrix and visualize high-confidence false positives (what non-satellite objects does the model mistake for satellites?).

Pay particular attention to: detection at very low SNR (faint satellites), detection of partially occluded objects (e.g., a satellite passing behind the Earth's limb), and detection of fast-moving objects with significant motion blur. Document the minimum detectable SNR and minimum detectable angular extent for your trained model.
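Documenting the minimum detectable SNR requires actually measuring SNR per object. A common aperture-style estimate is the background-subtracted flux over the streak pixels divided by the background noise scaled to the aperture size. A minimal sketch, assuming boolean masks for the streak and a nearby background region (names are illustrative):

```python
import numpy as np

def streak_snr(img, mask, bg_mask):
    """Estimate the signal-to-noise ratio of a detected streak.
    `mask` selects streak pixels; `bg_mask` selects nearby background
    pixels used to estimate the sky level and per-pixel noise."""
    img = img.astype(np.float64)
    bg_level = np.median(img[bg_mask])          # robust sky estimate
    bg_sigma = np.std(img[bg_mask])             # per-pixel noise
    signal = np.sum(img[mask] - bg_level)       # background-subtracted flux
    noise = bg_sigma * np.sqrt(np.count_nonzero(mask))  # noise over the aperture
    return signal / noise
```

Binning your test set by this SNR value and plotting recall per bin gives the detection-limit curve directly.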

Step 4: TensorRT Export and Quantization

Export the trained YOLOv8 model to TensorRT format using the Ultralytics export function: model.export(format='engine', int8=True, data='satellite.yaml'). The INT8 quantization requires a calibration dataset (typically 100–500 images from the training set) to determine optimal quantization ranges for each layer.

Compare three versions: FP32 (full precision), FP16 (half precision), and INT8 (8-bit quantized). For each, measure: model size on disk, mAP on test set, and inference time per image on both desktop GPU and Jetson Nano. Quantize carefully — satellite detection is sensitive to quantization errors in early backbone layers.

Step 5: Deploy on Jetson Nano

Set up the Jetson Nano with JetPack SDK (Ubuntu 18.04 + CUDA + TensorRT). Transfer the TensorRT engine file and run inference using the tensorrt Python library. Measure end-to-end inference throughput: images per second at the target resolution (640×640).

Implement a real-time detection demo: capture frames from a USB camera (simulating a telescope feed), run inference, draw bounding boxes on detected satellites, and display the output stream. Benchmark at multiple batch sizes (1, 4, 8) to find the optimal throughput configuration for the Jetson Nano's hardware.
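The batch-size benchmark can use a simple timing harness. A minimal sketch where `infer` is a placeholder for one TensorRT engine execution at the given batch size (the harness itself is generic):

```python
import time

def benchmark(infer, batch_sizes=(1, 4, 8), n_batches=50, warmup=5):
    """Measure throughput (images/sec) of `infer(batch_size)` at several
    batch sizes.  `infer` stands in for one TensorRT engine execution."""
    results = {}
    for bs in batch_sizes:
        for _ in range(warmup):           # warm-up: clocks, caches, allocations
            infer(bs)
        t0 = time.perf_counter()
        for _ in range(n_batches):
            infer(bs)
        dt = time.perf_counter() - t0
        results[bs] = (n_batches * bs) / dt
    return results
```

On the Jetson, remember to include any host-to-device copies inside `infer` so the numbers reflect end-to-end throughput, not just kernel time, and lock the clocks (e.g., with jetson_clocks) for repeatable results.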

Step 6: Tracking and Multi-Frame Analysis

Static frame detection is limited — real satellite tracking requires multi-frame association. Implement ByteTrack (a simple, effective tracker compatible with YOLO) to link detections across frames and build trajectories. Trajectories provide much richer information: orbital arc shape, angular velocity, and separation from known catalog objects.

Demonstrate the full pipeline: YOLO detection → ByteTrack trajectory building → trajectory analysis (linear vs. maneuvering vs. debris tumbling, based on trajectory shape). This end-to-end demonstration is what an operational Space Domain Awareness sensor would need to provide.
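The core of the detection-to-track link is IoU-based association. A minimal greedy matcher in the spirit of ByteTrack's matching step (not the full algorithm: no Kalman prediction and no second low-confidence pass), with illustrative names:

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, thr=0.3):
    """Greedily match existing track boxes to new detections by IoU,
    highest overlap first.  Returns (track_idx, det_idx) pairs."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)), reverse=True)
    matched_t, matched_d, matches = set(), set(), []
    for score, ti, di in pairs:
        if score < thr:
            break
        if ti in matched_t or di in matched_d:
            continue
        matches.append((ti, di))
        matched_t.add(ti)
        matched_d.add(di)
    return matches
```

Unmatched detections spawn new tracks and tracks unmatched for several frames are retired; the sequence of matched boxes per track is the trajectory fed to the downstream analysis.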

Go Further

Push toward operational space surveillance capability:

  • Orbital determination — use detected satellite streak endpoints to compute angular velocity, then integrate with a ground station location model to estimate orbital elements using Gauss's method
  • Synthetic data augmentation — use NVIDIA Omniverse or Blender to generate photorealistic synthetic satellite images with known ground truth, dramatically expanding the training dataset
  • Multi-class detection — extend from satellite detection to multi-class: distinguish operational satellites, rocket bodies, and debris fragments by their visual signature (reflectivity, tumble rate, size)
  • Custom hardware — design a custom PCB integrating the Jetson module with a telescope interface board for a complete portable ground-based SDA sensor
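For the orbital determination bullet, the first quantity recovered from a streak is the apparent angular rate: streak length in pixels, converted through the plate scale, divided by the exposure time. A minimal sketch (function name and arguments are illustrative):

```python
import math

def angular_rate(x0, y0, x1, y1, plate_scale_arcsec, exposure_s):
    """Apparent angular rate (arcsec/s) of a satellite from its streak
    endpoints in pixels, given the plate scale (arcsec/pixel) and the
    exposure time (seconds)."""
    length_px = math.hypot(x1 - x0, y1 - y0)
    return length_px * plate_scale_arcsec / exposure_s
```

Three such timed angular observations from a known site are the inputs Gauss's method needs to estimate a preliminary orbit.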