Process Parameter Optimization for Additive Manufacturing
Use Bayesian optimization to tune 3D printing for aerospace-grade quality
Last reviewed: March 2026

Overview
Additive manufacturing (AM) — particularly laser powder bed fusion (LPBF) — is transforming aerospace. GE Aviation prints fuel nozzles for the LEAP engine, Relativity Space 3D-prints entire rocket bodies, and SpaceX prints SuperDraco engine chambers. But AM part quality is exquisitely sensitive to process parameters: a 10% change in laser power or scan speed can mean the difference between a fully dense, high-strength part and one riddled with porosity that would fail in service.
Finding the optimal process parameters traditionally requires expensive physical experiments — printing dozens of test specimens with different settings and destructively testing each one. Bayesian optimization (BO) dramatically reduces this cost by intelligently selecting which experiments to run next, using a probabilistic surrogate model to predict outcomes and an acquisition function to balance exploration (trying new regions of parameter space) with exploitation (refining around known good settings).
In this project, you'll build a complete BO pipeline using BoTorch (Meta's Bayesian optimization library built on PyTorch and GPyTorch). You'll define the AM parameter space, implement a Gaussian process surrogate model, optimize for multiple competing objectives (minimize porosity AND maximize strength), and demonstrate that BO finds near-optimal parameters in far fewer experiments than grid search or random search. This is directly applicable to real AM process development and is an active research area at national labs and aerospace companies.
What You'll Learn
- ✓ Formulate additive manufacturing process optimization as a black-box optimization problem with multiple objectives
- ✓ Implement Gaussian process surrogate models using GPyTorch for modeling expensive-to-evaluate functions
- ✓ Use BoTorch to implement Bayesian optimization with acquisition functions (Expected Improvement, qNEHVI)
- ✓ Handle multi-objective optimization and compute Pareto-optimal process parameter sets
- ✓ Compare Bayesian optimization efficiency against random search and grid search baselines
Step-by-Step Guide
Define the Optimization Problem
Define the LPBF parameter space to optimize: laser power (100–400 W), scan speed (200–1200 mm/s), hatch spacing (60–120 μm), and layer thickness (20–60 μm). These four parameters control the volumetric energy density delivered to the powder bed: E = P / (v × h × t), a key predictor of part quality.
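A small helper makes the unit handling explicit. This is a minimal sketch (the function name and argument conventions are our own): power in watts, scan speed in mm/s, and hatch spacing and layer thickness in micrometres, converted to mm so E comes out in J/mm³.

```python
def energy_density(power_w, speed_mm_s, hatch_um, layer_um):
    """Volumetric energy density E = P / (v * h * t) in J/mm^3.

    Hatch spacing and layer thickness are given in micrometres and
    converted to mm so the result is in J/mm^3.
    """
    hatch_mm = hatch_um / 1000.0
    layer_mm = layer_um / 1000.0
    return power_w / (speed_mm_s * hatch_mm * layer_mm)

# Example: 280 W, 1000 mm/s, 100 um hatch, 40 um layers -> 70 J/mm^3
print(energy_density(280, 1000, 100, 40))
```

Keeping the unit conversion inside one function avoids a classic AM-optimization bug: mixing μm and mm silently shifts E by three orders of magnitude.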
Define two objective functions: minimize porosity (volume fraction of voids, target < 0.1%) and maximize ultimate tensile strength (UTS, target > 1000 MPa for Ti-6Al-4V). These objectives partially conflict — very high energy density eliminates porosity but can cause keyhole defects that reduce strength. Use published experimental data from literature (e.g., the NIST AM benchmark datasets) to create a realistic simulation of the objective functions, or build an analytical model based on energy density relationships.
Create the Objective Function Simulator
Since you likely don't have access to an LPBF machine, create a synthetic objective function that mimics real AM physics. Model porosity as a function of energy density with a U-shaped curve (too little energy → lack-of-fusion porosity; too much → keyhole porosity). Model tensile strength as a function of porosity and microstructure (controlled by cooling rate, which depends on scan speed).
Add realistic noise to the simulator — real AM experiments have significant variability between builds. Use published regression models from AM literature (e.g., the Eagar-Tsai thermal model for melt pool dimensions) to make your simulator physically grounded. The simulator should take ~0.1 seconds per evaluation, mimicking the expensive nature of real experiments (which take hours).
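One way to sketch such a simulator is below. All coefficients here are illustrative placeholders, not fitted to any real dataset or to the Eagar-Tsai model — you would replace them with regression fits from the literature. The U-shape in porosity is modeled as quadratic in log energy density around an assumed optimum.

```python
import math
import random

E_OPT = 70.0  # assumed optimal energy density for this toy model, J/mm^3

def simulate_build(power_w, speed_mm_s, hatch_um, layer_um, rng=None):
    """Toy LPBF quality model: returns (porosity_pct, uts_mpa).

    Porosity follows a U-shaped curve in energy density: lack-of-fusion
    voids at low E, keyhole voids at high E. Strength degrades with
    porosity and shifts mildly with scan speed (a crude stand-in for
    cooling-rate effects). Coefficients are illustrative only.
    """
    rng = rng or random.Random()
    e = power_w / (speed_mm_s * (hatch_um / 1000.0) * (layer_um / 1000.0))
    # U-shaped porosity: minimum near E_OPT, growing in log-E distance
    porosity = 0.05 + 2.0 * math.log(e / E_OPT) ** 2
    porosity *= 1.0 + 0.1 * rng.gauss(0, 1)   # build-to-build variability
    porosity = max(porosity, 0.0)
    # Strength: dense-material baseline minus porosity penalty
    uts = 1100.0 - 400.0 * porosity + 0.05 * (speed_mm_s - 700.0)
    uts += 15.0 * rng.gauss(0, 1)
    return porosity, uts
```

Passing an explicit `random.Random(seed)` makes runs reproducible, which matters when you later compare BO against baselines on the same noise realizations.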
Implement the Gaussian Process Surrogate
Build a Gaussian process (GP) model using GPyTorch that learns the mapping from process parameters to quality outcomes. The GP provides both a mean prediction (best estimate of the objective) and an uncertainty estimate (how confident we are) — this uncertainty is what makes Bayesian optimization work.
Use a Matérn 5/2 kernel (standard for physical processes) with automatic relevance determination (ARD) — this lets the GP learn that some parameters matter more than others. Fit the GP on an initial set of 10–15 random experiments (the "seed" design). Validate the GP predictions against held-out points to verify the surrogate is accurate before using it for optimization.
Implement Bayesian Optimization Loop
Use BoTorch to implement the optimization loop. At each iteration: (1) fit the GP to all observations so far, (2) optimize the acquisition function to select the next experiment, (3) evaluate the objective at the selected parameters, (4) add the result to the dataset. Repeat for 50–100 iterations.
For single-objective optimization, use Expected Improvement (EI) as the acquisition function — it naturally balances exploration and exploitation. For multi-objective optimization (porosity AND strength simultaneously), use qNEHVI (q-Noisy Expected Hypervolume Improvement), which optimizes the Pareto frontier. BoTorch handles the acquisition function optimization via L-BFGS with random restarts.
Multi-Objective Optimization and Pareto Analysis
Run the multi-objective BO campaign targeting both porosity minimization and strength maximization. After optimization, extract the Pareto frontier — the set of process parameter combinations where you cannot improve one objective without worsening the other.
Visualize the Pareto frontier in objective space (porosity vs. strength) with each point colored by a key process parameter (e.g., laser power). This reveals the fundamental trade-offs: achieving <0.05% porosity might require accepting 950 MPa instead of 1050 MPa strength. Also plot the parameter settings along the Pareto frontier — do they form physically interpretable patterns? (They should cluster around an optimal energy density band.)
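Extracting the non-dominated set is simple enough to do by hand (BoTorch also ships a utility for this, `is_non_dominated`). A plain NumPy sketch, with both objectives expressed as minimization (negate UTS first):

```python
import numpy as np

def pareto_mask(objectives):
    """Boolean mask of non-dominated rows; all objectives minimized.

    objectives: (n, m) array. A point is dominated if another point is
    <= in every objective and strictly < in at least one.
    """
    n = objectives.shape[0]
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        others = np.delete(objectives, i, axis=0)
        dominated = np.any(
            np.all(others <= objectives[i], axis=1)
            & np.any(others < objectives[i], axis=1)
        )
        mask[i] = not dominated
    return mask

# Toy example: (porosity %, negated UTS in MPa) per evaluated build
pts = np.array([[0.05, -950.0], [0.20, -1050.0], [0.30, -900.0]])
print(pareto_mask(pts))  # third point is dominated by the first
```

The O(n²) scan is perfectly adequate here, since a BO campaign produces at most a few hundred points.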
Benchmark Against Baselines
Run the same optimization problem with random search and grid search (with the same total budget of function evaluations). Compare how quickly each method finds near-optimal solutions. BO should find competitive solutions in 30–50 evaluations where grid search needs 200+ to cover the 4D parameter space adequately.
Plot the optimization convergence curve: best-so-far objective value vs. number of evaluations. BO's curve should descend much faster than random search, demonstrating sample efficiency — the entire point of Bayesian optimization. Compute the hypervolume indicator for multi-objective runs to quantify how well each method approximates the true Pareto frontier.
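The convergence curve itself is a one-liner once observations are logged in evaluation order; a small helper (our own, for illustration) keeps the comparison code uniform across BO, random search, and grid search:

```python
import numpy as np

def best_so_far(values, minimize=True):
    """Running best objective value vs. number of evaluations.

    Plotting this for each method shows sample efficiency directly:
    a steeper early descent means fewer experiments wasted.
    """
    values = np.asarray(values, dtype=float)
    if minimize:
        return np.minimum.accumulate(values)
    return np.maximum.accumulate(values)

# Porosity (%) observed over six evaluations
print(best_so_far([0.8, 0.5, 0.9, 0.2, 0.4, 0.15]))
# running best: 0.8, 0.5, 0.5, 0.2, 0.2, 0.15
```

For the multi-objective runs, the analogous quantity is the hypervolume of the non-dominated set after each evaluation, which BoTorch can compute via its hypervolume utilities.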
Physical Interpretation and Documentation
Analyze the optimal parameters found by BO. Do they correspond to known good process windows from literature? For Ti-6Al-4V LPBF, the optimal energy density is typically 60–80 J/mm³ — does your optimizer converge to this range? If so, the BO successfully learned the physics from data alone.
Document the complete methodology in a format suitable for a journal paper or conference presentation. Include: problem formulation with physical motivation, GP model specification and validation, acquisition function choice and justification, optimization results with convergence plots, Pareto frontier analysis, and comparison against baselines. Discuss how this workflow would integrate into a real AM process development program — where each "evaluation" costs $500–2000 in machine time and materials, making sample efficiency genuinely valuable.
Career Connection
See how this project connects to real aerospace careers.
Aerospace Manufacturing →
AM process optimization is one of the highest-value applications of ML in aerospace manufacturing — directly reducing development cost and time
Aerospace Engineer →
Design engineers increasingly specify AM parts, requiring understanding of process-structure-property relationships that BO helps map
Space Operations →
In-space manufacturing (planned for lunar/Mars bases) will require autonomous process optimization with minimal human intervention
Go Further
Extend this into research-grade work:
- Constrained optimization — add constraints like minimum surface roughness or maximum residual stress and use constrained BO (e.g., BoTorch's constrained EI)
- Transfer learning between materials — train a GP on one alloy (Ti-6Al-4V) and use it as a prior for optimizing a new alloy (Inconel 718), reducing experiments needed
- High-fidelity simulation coupling — replace the synthetic objective with a thermal FEA simulation (e.g., ABAQUS or OpenFOAM) that predicts melt pool geometry from process parameters
- Batch Bayesian optimization — select multiple experiments per iteration for parallel execution on a multi-laser AM system, using q-batch acquisition functions