Classify Satellites with TensorFlow

Teach a neural network to tell a weather satellite from a GPS bird.

High School · Space Systems · 3–5 weeks
Last reviewed: March 2026

Overview

There are more than 7,000 active satellites orbiting Earth right now, and space agencies spend enormous effort cataloguing and classifying them. Machine learning—especially convolutional neural networks (CNNs)—has become one of the fastest ways to automate that cataloguing work. In this project you will build a small but real CNN that learns to distinguish between different satellite categories using publicly available imagery.

You will work through the complete ML pipeline: finding and labelling data, preprocessing images into tensors, designing a simple network architecture, training it on your laptop, and measuring how well it generalizes to images it has never seen. Along the way you will learn why layers like convolution and pooling are so effective at spotting shapes regardless of where they appear in an image.

By the end of the project you will have a working classifier you can demo, a Jupyter notebook explaining every decision, and a solid mental model of how modern aerospace companies use ML to monitor orbital traffic—a skill that is rapidly becoming core to space situational awareness work.

What You'll Learn

  • Explain what a convolutional neural network does and why it suits image data.
  • Collect, label, and split a small image dataset into train/validation/test sets.
  • Build and compile a CNN in TensorFlow/Keras using standard layer types.
  • Interpret training curves to diagnose underfitting and overfitting.
  • Report model performance with accuracy, precision, and a confusion matrix.

Step-by-Step Guide

Step 1: Set up Python and TensorFlow

Install Python 3.10+, create a virtual environment, and run pip install tensorflow matplotlib numpy pillow scikit-learn seaborn jupyter (scikit-learn and seaborn are needed for the evaluation step). Launch a Jupyter notebook and confirm TensorFlow imports cleanly by printing tf.__version__. If you have a compatible GPU, follow the CUDA setup guide; otherwise the CPU version works fine for this project size.
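As a quick sanity check, a first notebook cell along these lines confirms the install worked:

```python
# Verify the environment: import TensorFlow, print its version, and list
# any GPUs. An empty GPU list means CPU-only, which is fine for this project.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("GPUs:", tf.config.list_physical_devices("GPU"))
```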

Step 2: Collect and label your dataset

Download at least 150 images per class (weather satellites, GPS satellites, communication satellites) from NASA Worldview, ESA image galleries, or Google Images. Organize them into folders named after each class—TensorFlow's image_dataset_from_directory will read these folder names as labels automatically. Image sizes don't need to match exactly; they will be resized to 128×128 pixels when loaded.
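Before training, it helps to confirm the folder layout and image counts. Here is a small helper sketch; the class folder names ("weather", "gps", "comms") are placeholders for whatever names you chose:

```python
from pathlib import Path

# File extensions counted as images.
IMAGE_EXTS = {".jpg", ".jpeg", ".png"}

def count_images_per_class(root: str) -> dict:
    """Return {class_folder_name: number_of_image_files} under root.

    Each subfolder of `root` is one class; its name becomes the label.
    """
    return {
        d.name: sum(1 for f in d.iterdir() if f.suffix.lower() in IMAGE_EXTS)
        for d in sorted(Path(root).iterdir())
        if d.is_dir()
    }
```

Run it on your dataset folder and check that every class has at least the 150 images suggested above.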

Step 3: Preprocess images into tensors

Use tf.keras.utils.image_dataset_from_directory to load your folders, setting image_size=(128,128) and batch_size=32. Add a Rescaling(1./255) layer so pixel values fall between 0 and 1. Split off 20% of your data as a validation set and visualize a sample batch with Matplotlib to confirm labels are correct.
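The loading and rescaling steps can be sketched as follows. To keep the sketch runnable end-to-end it first writes a tiny random dataset to a temporary folder; point root at your real dataset folder instead (the class names "comms", "gps", "weather" are placeholders):

```python
import os
import tempfile

import numpy as np
import tensorflow as tf

# Write a tiny throwaway dataset so the loading call below is runnable.
# Replace `root` with the path to your real class-named folders.
root = tempfile.mkdtemp()
for cls in ("comms", "gps", "weather"):
    os.makedirs(os.path.join(root, cls))
    for i in range(10):
        img = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
        tf.io.write_file(os.path.join(root, cls, f"{i}.png"),
                         tf.io.encode_png(tf.constant(img)))

# Folder names become integer labels (in alphabetical order). Using the
# same seed for both subsets keeps train and validation disjoint.
load = dict(validation_split=0.2, seed=42, image_size=(128, 128), batch_size=32)
train_ds = tf.keras.utils.image_dataset_from_directory(root, subset="training", **load)
val_ds = tf.keras.utils.image_dataset_from_directory(root, subset="validation", **load)
class_names = train_ds.class_names  # capture before map() drops the attribute

# Scale pixel values from [0, 255] into [0, 1].
rescale = tf.keras.layers.Rescaling(1.0 / 255)
train_ds = train_ds.map(lambda x, y: (rescale(x), y))

images, labels = next(iter(train_ds))
print(class_names, images.shape, float(np.max(images)))
```

From here, visualize a few images with Matplotlib (plt.imshow on individual array entries) and check that each one sits under the right class name.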

Step 4: Design and train a CNN

Build a Sequential model with three Conv2D+MaxPooling blocks, a Flatten layer, a Dense(128) layer with ReLU activation, a Dropout(0.3) layer to reduce overfitting, and a final Dense layer with softmax activation and one output unit per class. Compile with the adam optimizer and sparse_categorical_crossentropy loss, then call model.fit() for 20 epochs. Watch the validation accuracy climb.
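The architecture above can be sketched in Keras like this. The filter counts (16/32/64) are reasonable defaults for 128×128 inputs, not requirements:

```python
import tensorflow as tf

num_classes = 3  # weather, GPS, communication

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    # Three convolution + pooling blocks extract increasingly abstract shapes.
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),  # randomly zeroes 30% of units while training
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training would then be:
#   history = model.fit(train_ds, validation_data=val_ds, epochs=20)
probs = model(tf.zeros((1, 128, 128, 3)))
print(probs.shape)  # one probability per class
```

The softmax output sums to 1 across the classes, so each prediction reads as a probability distribution over satellite types.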

Step 5: Evaluate and visualize results

Plot training and validation accuracy/loss curves side by side to check for overfitting. Run model.evaluate() on your held-out test set and report final accuracy. Use sklearn.metrics.confusion_matrix to see which classes confuse your model, then visualize it as a heatmap with Seaborn. Note any patterns—misclassifications often reveal interesting similarities between satellite designs.
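A minimal confusion-matrix sketch, using toy labels so it runs standalone; in your notebook, build y_true and y_pred from the trained model as shown in the comments:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# In the real notebook these come from your model, e.g.:
#   y_true = np.concatenate([y for _, y in test_ds])
#   y_pred = np.argmax(model.predict(test_ds), axis=1)
y_true = np.array([0, 0, 1, 1, 2, 2, 2])  # toy ground-truth labels
y_pred = np.array([0, 1, 1, 1, 2, 2, 0])  # toy predictions

cm = confusion_matrix(y_true, y_pred)
print(cm)
# Rows are true classes and columns are predicted classes, so the diagonal
# counts correct predictions and off-diagonal cells show the confusions.
accuracy = cm.trace() / cm.sum()
print(accuracy)
```

For the heatmap, seaborn's sns.heatmap(cm, annot=True, fmt="d") with your class names as axis labels makes the confusions easy to read.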

Step 6: Write up findings and save your model

Save your trained model with model.save('satellite_classifier.h5') (on recent Keras versions, the native 'satellite_classifier.keras' format is preferred) so you can reload it later. Write a final notebook cell summarizing what your model learned, where it struggles, and what a larger dataset or deeper architecture might improve. Add a section connecting your work to real-world space situational awareness programs like LeoLabs or the Space Fence.
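A save/reload round trip is worth testing before you rely on the file. This sketch uses a tiny stand-in model so it runs standalone; substitute your trained classifier:

```python
import os
import tempfile

import numpy as np
import tensorflow as tf

# Stand-in for the trained classifier, so the round trip is runnable here.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Save in the native Keras format, then reload and compare predictions.
path = os.path.join(tempfile.mkdtemp(), "satellite_classifier.keras")
model.save(path)
reloaded = tf.keras.models.load_model(path)

x = tf.ones((1, 4))
same = bool(np.allclose(model(x), reloaded(x)))
print(same)  # reloaded model should predict identically
```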

Go Further

  • Apply data augmentation (random flips, rotations, brightness shifts) and compare accuracy before and after.
  • Replace your custom CNN with a pretrained MobileNetV2 backbone (transfer learning) and observe how quickly it reaches high accuracy with less data.
  • Deploy your saved model as a simple Flask web app where users can upload an image and receive a classification.
  • Experiment with Grad-CAM visualization to highlight which pixels in an image most influenced the network's decision.
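The first "Go Further" idea can be sketched with Keras preprocessing layers (RandomBrightness assumes a reasonably recent TensorFlow, 2.9+). These layers are active only when called with training=True, so inference is unaffected:

```python
import tensorflow as tf

# Augmentation pipeline: random flips, rotations, and brightness shifts.
# Place it at the front of the model so it runs only during training.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),    # up to about ±36 degrees
    tf.keras.layers.RandomBrightness(0.2),
])

batch = tf.random.uniform((4, 128, 128, 3), maxval=255.0)
out = augment(batch, training=True)
print(out.shape)  # augmentation never changes image shape
```

Train once with and once without this pipeline, keeping everything else fixed, to measure the accuracy difference fairly.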