Sort Good vs. Bad Parts with Image Classification
Teach a computer to inspect parts like a quality engineer
Last reviewed: March 2026

Overview
In aerospace manufacturing, every part must pass inspection before it goes onto an aircraft. Human inspectors examine thousands of parts per day — looking for cracks, warping, surface defects, and dimensional errors. It's critical work, but it's also repetitive and error-prone. Automated visual inspection using machine learning is one of the fastest-growing applications of AI in the aerospace industry.
In this project, you'll build your own automated inspector from scratch. You'll create a dataset by photographing real parts — ideally 3D-printed parts where you can intentionally introduce defects (under-extrusion, warping, layer shifting) alongside good parts. Then you'll train an image classification model using TensorFlow and Keras to sort parts into "pass" and "fail" categories.
This project teaches you the full machine learning pipeline: data collection, labeling, model training, and evaluation. You'll also discover why data quality matters more than algorithm complexity — a lesson that applies to every ML project in every industry. The same transfer learning approach you'll use here powers inspection systems at Boeing, Airbus, and SpaceX.
What You'll Learn
- ✓ Create a labeled image dataset from real physical objects with consistent photography
- ✓ Understand the difference between training from scratch and transfer learning
- ✓ Use TensorFlow/Keras to build, train, and evaluate an image classifier
- ✓ Interpret model predictions using a confusion matrix and classification accuracy
- ✓ Apply data augmentation to improve model performance with small datasets
Step-by-Step Guide
Create Your Parts and Plan the Dataset
You need two categories of parts: good and defective. If you have access to a 3D printer, print 10–15 copies of the same small object (a bracket, clip, or fitting). For defective parts, intentionally change print settings: lower the temperature (causes poor layer adhesion), increase speed (causes ringing/vibration marks), or reduce infill (causes weak structure).
If you don't have a 3D printer, use any handmade objects — clay pieces, cut wood shapes, or even folded paper airplanes. The key is having clear, consistent differences between "good" and "bad" examples. Aim for at least 50 images per class (100 total) — more is better.
Photograph Your Dataset
Set up a consistent photography station: a plain white or neutral background, consistent lighting (natural light from a window or a desk lamp), and a fixed camera position (a phone on a stand works perfectly). Photograph each part from the same angle and distance.
Take multiple photos of each part with slight variations in rotation and position — this gives the model more training examples. Organize images into folders: dataset/good/ and dataset/bad/. This folder structure is what TensorFlow's image_dataset_from_directory expects.
Set Up TensorFlow and Load Data
Install TensorFlow: pip install tensorflow. If you're on a laptop without a GPU, TensorFlow will run on CPU — slower but perfectly fine for a small dataset.
Use tf.keras.utils.image_dataset_from_directory to load your images. Set the image size to 224×224 pixels (the standard input size for most pre-trained models). Split into 80% training and 20% validation. Enable data augmentation — random flips, rotations, and brightness changes — to artificially expand your small dataset.
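The loading and augmentation steps above can be sketched as follows. This is a minimal outline, not a definitive implementation: it assumes the dataset/good/ and dataset/bad/ folder layout described earlier, and the helper name load_datasets is just for illustration (RandomBrightness requires TensorFlow 2.9 or newer).

```python
import tensorflow as tf

IMG_SIZE = (224, 224)  # standard input size for most pre-trained models

def load_datasets(data_dir="dataset", batch_size=16):
    """Load good/bad folders into an 80/20 train/validation split."""
    common = dict(
        validation_split=0.2,
        seed=42,              # same seed keeps the two subsets disjoint
        image_size=IMG_SIZE,
        batch_size=batch_size,
    )
    train_ds = tf.keras.utils.image_dataset_from_directory(
        data_dir, subset="training", **common)
    val_ds = tf.keras.utils.image_dataset_from_directory(
        data_dir, subset="validation", **common)
    return train_ds, val_ds

# Augmentation layers: active only in training mode, identity at inference.
data_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.1),    # rotate up to about ±36 degrees
    tf.keras.layers.RandomBrightness(0.2),  # vary brightness by up to ±20%
])
```

The subfolder names ("bad", "good") become the class labels automatically, sorted alphabetically.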
Build the Model with Transfer Learning
Instead of training a CNN from scratch (which needs thousands of images), use transfer learning. Load a pre-trained MobileNetV2 model (trained on ImageNet's 1.4 million images) and replace only the final classification layer:
Freeze the pre-trained layers so they don't change during training. Add a GlobalAveragePooling2D layer, a Dense(128) layer with ReLU activation, and a final Dense(1) with sigmoid activation for binary classification. This gives you a powerful model that can learn from just dozens of images.
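A sketch of that architecture, assuming the 224×224 input size from the previous step (the build_model helper name is illustrative, not part of any API):

```python
import tensorflow as tf

def build_model(weights="imagenet"):
    """MobileNetV2 feature extractor with a fresh binary-classification head."""
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3),
        include_top=False,   # drop the original 1000-class ImageNet head
        weights=weights,
    )
    base.trainable = False   # freeze pre-trained layers so they don't update

    inputs = tf.keras.Input(shape=(224, 224, 3))
    # MobileNetV2 expects pixel values scaled to [-1, 1]
    x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)
    x = base(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dense(128, activation="relu")(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # P(class 1)
    return tf.keras.Model(inputs, outputs)
```

Because only the pooling and dense layers are trainable, the model has a few hundred thousand adjustable parameters instead of millions, which is why it can learn from dozens of images.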
Train and Evaluate
Compile the model with the adam optimizer and binary_crossentropy loss. Train for 10–20 epochs — with transfer learning on a small dataset, training takes only a few minutes even on CPU.
Plot training and validation accuracy over epochs. Watch for overfitting: if training accuracy rises but validation accuracy drops, the model is memorizing your training images rather than learning general patterns. Data augmentation and dropout layers help prevent this. Generate a confusion matrix on the validation set — how many good parts were flagged as bad, and vice versa?
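The training and confusion-matrix steps might look like this sketch, assuming a compiled-ready model and the train/validation datasets from earlier (the function names are illustrative):

```python
import numpy as np
import tensorflow as tf

def train_model(model, train_ds, val_ds, epochs=15):
    """Compile and fit; the returned history holds per-epoch accuracy curves."""
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model.fit(train_ds, validation_data=val_ds, epochs=epochs)

def validation_confusion_matrix(model, val_ds, threshold=0.5):
    """2x2 matrix: rows are actual classes, columns are predicted classes."""
    y_true, y_pred = [], []
    for images, labels in val_ds:
        probs = model.predict(images, verbose=0).ravel()
        y_pred.extend((probs > threshold).astype(int))
        y_true.extend(labels.numpy().astype(int))
    return tf.math.confusion_matrix(y_true, y_pred).numpy()
```

The off-diagonal entries are the interesting ones: good parts flagged as bad (scrap cost) versus bad parts passed as good (a much more serious failure in a real inspection system).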
Test on New Parts and Analyze Errors
Print or make 5–10 new parts that the model has never seen. Photograph them with the same setup and run them through your classifier. Does it correctly sort new parts? Try edge cases: a part that's almost good but has a tiny flaw, or a good part photographed at a different angle.
Analyze any mistakes. If the model misclassifies parts, look at the images — is the lighting different? Is the defect too subtle? This error analysis process is exactly what ML engineers do in real manufacturing inspection systems. Document what you learned and what you'd improve with more data or a different approach.
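Running a single new photo through the trained model can be sketched like this. It assumes the alphabetical label ordering from image_dataset_from_directory (bad = 0, good = 1); the classify_part helper is a hypothetical name:

```python
import numpy as np
import tensorflow as tf

def classify_part(model, image_path, class_names=("bad", "good"), threshold=0.5):
    """Classify one photo of a part; returns (label, probability of 'good')."""
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    arr = tf.keras.utils.img_to_array(img)
    arr = np.expand_dims(arr, axis=0)  # the model expects a batch dimension
    prob = float(model.predict(arr, verbose=0)[0][0])
    label = class_names[int(prob > threshold)]
    return label, prob
```

Probabilities near 0.5 are worth flagging for human review rather than auto-sorting — that borderline zone is exactly where your edge-case parts (a tiny flaw, an unusual angle) tend to land.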
Career Connection
See how this project connects to real aerospace careers.
Aerospace Manufacturing →
Automated visual inspection is being deployed across aerospace manufacturing — from composite layup inspection to fastener verification
Avionics Technician →
Technicians are increasingly working alongside AI inspection tools that flag potential defects for human review
Aerospace Engineer →
Design engineers must understand manufacturing inspection capabilities to set realistic tolerance specifications
Drone & UAV Ops →
Drone-mounted cameras use similar image classification to inspect infrastructure — bridges, power lines, wind turbines
Go Further
Expand your manufacturing AI skills:
- Classify defect types — instead of just pass/fail, train the model to identify specific defect types (warping, cracking, surface roughness)
- Add localization — use Grad-CAM to generate heatmaps showing where in the image the model detects the defect
- Build a real-time inspection station — connect a webcam and run live classification as parts pass in front of the camera
- Try the NEU Surface Defect Dataset — move to a professional industrial dataset with six types of hot-rolled steel surface defects