Ethics, Safety & Certification

The Certification Problem

Every piece of software on a certified aircraft must comply with DO-178C — the standard for airborne software. It requires every line of code to be traceable, tested, deterministic, and free of unintended functions. This framework was designed for traditional software that doesn't change after certification.

Why AI/ML Breaks This Framework

Traditional Software                     | ML Models
Deterministic — same input, same output  | Probabilistic — outputs are predictions
Behavior specified by code               | Behavior learned from data
Doesn't change after deployment          | Can be retrained (behavior shifts)
Every path can be tested                 | Exhaustive testing is impossible
Engineers can explain outputs            | Internal reasoning is often opaque
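The first and third rows of the table can be made concrete with a toy sketch (all names, limits, and data below are invented for illustration): a deterministic function returns the same answer for the same input forever, while a "learned" threshold changes its answer for the same input once the model is retrained on shifted data.

```python
import numpy as np

def deterministic_check(temp_c: float) -> bool:
    """Traditional software: the limit is fixed in code by a spec."""
    return temp_c > 120.0  # hypothetical hard-coded limit

def fit_threshold(failure_temps: np.ndarray) -> float:
    """'Learned' behavior: the limit comes from data, not a spec."""
    return float(failure_temps.mean())

rng = np.random.default_rng(0)
fleet_2023 = rng.normal(118.0, 2.0, size=500)  # hypothetical training data
fleet_2024 = rng.normal(124.0, 2.0, size=500)  # shifted data used for retraining

model_v1 = fit_threshold(fleet_2023)
model_v2 = fit_threshold(fleet_2024)

# Same input, same output — always:
assert deterministic_check(121.0) == deterministic_check(121.0)

# Same input, different verdict after retraining:
print(121.0 > model_v1, 121.0 > model_v2)
```

The retrained "model" flips its answer for the identical input, which is exactly the behavior shift DO-178C's fixed-baseline assumptions never anticipated.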

Where the FAA Stands

The FAA published its Roadmap for AI Safety Assurance in August 2024. Key positions:

  • AI may initially be applied only to low-risk applications — not flight-critical systems.
  • The G34/WG114 standards committee is developing certification standards.
  • No ML-based system has yet been certified for flight-critical functions.

Where EASA Stands (Europe)

EASA defines three levels of AI in aviation:

Level   | Name                | What It Means
Level 1 | Human Assistance    | AI provides recommendations. Human makes all decisions.
Level 2 | Human-AI Teaming    | AI makes some decisions under human oversight.
Level 3 | Advanced Automation | AI operates with high autonomy. Guidance still in development.

In November 2025, EASA released its first regulatory proposal on AI for aviation, aligning with the EU AI Act.

Trust and Explainability

When a neural network predicts an engine bearing will fail in 300 flight hours, can you explain why? In most cases, no. The model processed thousands of sensor readings through millions of parameters and produced a number. This is the explainability gap.
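One partial answer to the explainability gap is post-hoc feature attribution, for example permutation importance: shuffle one input feature and measure how much the model's error grows. A minimal sketch on synthetic data (the sensor names and the linear "model" are stand-ins, not a real prognostics system):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic sensor data: only vibration actually drives hours-to-failure.
n = 400
vibration = rng.normal(0.0, 1.0, n)
oil_temp  = rng.normal(0.0, 1.0, n)
hours_to_failure = 300.0 - 50.0 * vibration + rng.normal(0.0, 5.0, n)

X = np.column_stack([vibration, oil_temp])

# Least-squares fit as a stand-in for "the model".
coef, *_ = np.linalg.lstsq(
    np.column_stack([X, np.ones(n)]), hours_to_failure, rcond=None
)
predict = lambda X: X @ coef[:2] + coef[2]

def permutation_importance(X, y, feature):
    """Increase in mean squared error when one feature is shuffled."""
    base = np.mean((predict(X) - y) ** 2)
    Xp = X.copy()
    Xp[:, feature] = rng.permutation(Xp[:, feature])
    return np.mean((predict(Xp) - y) ** 2) - base

scores = [permutation_importance(X, hours_to_failure, f) for f in range(2)]
print(scores)  # vibration's score dwarfs oil_temp's
```

Attribution scores like these tell you *which inputs mattered*, not *why the model reasoned as it did* — useful evidence for engineers and investigators, but well short of what certification demands.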

Why It Matters in Aerospace

  • Certification: Regulators need to understand what the system will do in every scenario. "Trust us, it works" isn't acceptable.
  • Operator trust: Controllers won't follow recommendations they can't understand, and pilots won't trust an autopilot they can't predict.
  • Failure investigation: When something goes wrong, investigators need to reconstruct why the AI made its decisions.
  • Legal liability: If an AI-controlled aircraft causes an accident, who is responsible? The manufacturer? The operator? The training data curator?

What's Being Done

NASA is researching Explainable AI specifically for air traffic management. MIT's Chuchu Fan Lab is developing formal methods to mathematically prove safety properties of ML-based control systems. EASA's learning assurance framework requires demonstrating that the training process was sound.
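The flavor of those formal methods can be illustrated with interval bound propagation: instead of testing individual inputs, you push an entire box of inputs through a small ReLU network and obtain *guaranteed* output bounds for every point in that box. The network and weights below are made up for illustration; real verification tools handle far larger models with tighter bounds.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the input box [lo, hi] through x @ W + b, exactly."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c = center @ W + b
    r = radius @ np.abs(W)
    return c - r, c + r

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# A tiny two-layer ReLU network with made-up weights.
W1 = np.array([[0.5, -0.3], [0.2, 0.8]]); b1 = np.array([0.1, 0.0])
W2 = np.array([[1.0], [-1.0]]);           b2 = np.array([0.2])

# Bound the output for ALL inputs in the box [-1, 1]^2.
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
print(lo, hi)  # a guaranteed output range over the entire input box
```

If the computed range stays inside a safe envelope, that is a proof over infinitely many inputs — the kind of exhaustive guarantee that point-by-point testing can never deliver.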

This is where your generation comes in. The engineers who figure out how to make AI safe, explainable, and certifiable for aerospace will define the next 50 years of aviation.

Autonomous Weapons

This section presents facts, not opinions. The debate is real, consequential, and relevant to anyone considering a career in defense aerospace.

What Exists Today

Shield AI's V-BAT operates autonomously in GPS/comms-denied environments and has a weapons deal as of early 2026. Anduril's Arsenal-1 is a manufacturing facility designed for hyperscale production of autonomous weapons systems. Multiple nations deploy AI-enabled loitering munitions.

The International Debate

In November 2025, the UN General Assembly passed a historic resolution calling for negotiations on autonomous weapons. 156 nations supported it. The United States and Russia voted no. The UN Secretary-General has called for a ban on lethal autonomous weapons operating without human control.

The Spectrum of Autonomy

Level                 | Human Involvement
AI-assisted targeting | AI identifies targets. Human reviews and approves each engagement.
Human-on-the-loop     | AI engages targets. Human monitors and can intervene.
Human-out-of-the-loop | AI selects and engages independently. No human in individual decisions.

The Accountability Problem

If an autonomous drone strikes a civilian target, who is responsible? The commander? The engineer? The company? The government? AI creates responsibility gaps that existing legal frameworks don't address.

Whether to work on these systems is a personal decision. What matters is that you make it consciously, not by default. Know what the companies you apply to build. Know what the technology you develop will be used for. Have the conversation with yourself before you have to have it with your employer.

Other Ethical Considerations

Bias in Training Data

ML models learn from data. If the data is biased, the model is biased. In aerospace: predictive maintenance models trained on one engine fleet may not generalize to another, computer vision trained in certain conditions may fail in others, and ATC optimization may inadvertently disadvantage certain airports.
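The fleet-generalization failure can be shown in a few lines: a decision threshold fit to one synthetic engine fleet's vibration readings degrades badly on a second fleet whose healthy baseline is shifted. Fleet names, baselines, and the naive "classifier" are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def make_fleet(baseline, n=1000):
    """Synthetic vibration readings: healthy vs. degraded engines."""
    healthy  = rng.normal(baseline,       1.0, n)
    degraded = rng.normal(baseline + 3.0, 1.0, n)
    X = np.concatenate([healthy, degraded])
    y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = degraded
    return X, y

X_a, y_a = make_fleet(baseline=10.0)  # fleet the model was trained on
X_b, y_b = make_fleet(baseline=13.0)  # different engine type, higher baseline

threshold = X_a.mean()  # naive classifier "trained" on fleet A only

def accuracy(X, y, t):
    return np.mean((X > t) == y)

print(accuracy(X_a, y_a, threshold), accuracy(X_b, y_b, threshold))
```

On fleet B, most healthy engines sit above fleet A's threshold, so the model flags them as degraded — accuracy collapses toward chance even though nothing "broke". The bias was baked in the moment the training data came from a single fleet.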

Job Displacement

AI will not eliminate aerospace jobs in the near term, but it will change them. Autonomous cargo flight could reduce demand for cargo pilots. AI-augmented inspection will reshape maintenance technician roles. AI route optimization will reduce the need for human flight dispatchers. Most industry projections expect more jobs created than displaced — but the new jobs will require different skills.

Environmental Impact

AI cuts both ways. Positive: Route optimization saves fuel (1.2M gallons at Alaska Airlines alone), generative design reduces aircraft weight, predictive maintenance prevents wasteful replacements. Negative: Training large AI models consumes significant energy, and the compute infrastructure has its own carbon footprint.

Questions Worth Thinking About

These don't have easy answers. They're the questions that will define your career:

  1. Should an AI system be allowed to make a life-or-death decision without a human in the loop?
  2. How do you certify a system whose internal reasoning you can't fully explain?
  3. If autonomous flight is demonstrably safer than human-piloted flight, is it ethical to not deploy it?
  4. Where is the line between a defensive autonomous system and an offensive autonomous weapon?
  5. Should engineers have the right to refuse work on autonomous weapons, similar to medical conscientious objection?
  6. If an AI-designed component fails, who bears legal responsibility — the engineer who set the constraints, the algorithm, or the manufacturer?