Articles by AeroEd

The Race to Put Data Centers in Orbit Is Getting Very Real — and Very Contested

88,000 satellites. A million more from SpaceX. Nvidia building space chips. Is orbital computing the future — or magical thinking?

The News

In the span of three months, the idea of putting data centers in orbit went from speculative to something the world’s largest technology companies are filing paperwork for.

On March 13, Starcloud — a Redmond, Washington startup founded in January 2024 — filed with the FCC to operate up to 88,000 satellites as orbital data centers for AI workloads. The constellation would orbit at 600–850 km in dusk-dawn sun-synchronous orbits, providing near-continuous solar power. The company has raised $34 million from NFX, Y Combinator, In-Q-Tel (the CIA’s venture arm), and Google’s accelerator program. It launched its first satellite in November 2025 — a 60 kg spacecraft carrying the first Nvidia H100 GPU ever operated in orbit, which successfully ran Google’s Gemma language model and trained a small LLM on Shakespeare.

Even at that scale, Starcloud's was only the second-largest orbital data center proposal. In January, SpaceX filed for up to one million satellites for the same purpose — orbital compute — at altitudes of 500–2,000 km. The filing omitted satellite mass, dimensions, deployment schedule, and cost estimates. SpaceX described the project as a step toward a “Kardashev Type II civilization.”

Three days after Starcloud’s filing, Nvidia unveiled the Space-1 Vera Rubin Module at its GTC conference: purpose-built AI hardware for satellites that delivers up to 25x the H100’s compute for space-based inference. Six companies were named as launch partners, including Starcloud, Axiom Space, Planet Labs, and Kepler Communications.

And the list keeps growing. Google is developing Project Suncatcher with Planet Labs: clusters of 81 satellites at ~650 km altitude, each cluster flying within a 1 km radius, running Google Trillium TPU v6e accelerators linked by free-space optics at tens of terabits per second. Two demonstration satellites are planned for early 2027. Eric Schmidt, the former Google CEO, acquired Relativity Space in March 2025 and became its CEO. When asked whether AI’s power demands explained why he bought a rocket company, he answered: “Yes.”

Why It Matters

The thesis is simple: AI is eating electricity faster than the grid can supply it. Global data center power consumption could reach 945 terawatt-hours by 2030. AI alone may demand 67 additional gigawatts by that year. Permitting new gigawatt-scale facilities can take a decade. Starcloud claims space offers 10x lower energy costs, with solar panels up to 8x more productive in orbit than on Earth.
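For students who want to sanity-check that last claim, a back-of-envelope comparison is instructive. The figures below — the orbital solar constant, peak terrestrial irradiance, and capacity factors — are illustrative textbook values chosen by us, not Starcloud’s numbers:

```python
# Back-of-envelope check of the "up to 8x more productive" solar claim.
# All inputs are illustrative assumptions, not figures from any filing.

SOLAR_CONSTANT_ORBIT = 1361    # W/m^2, above the atmosphere (AM0)
PEAK_IRRADIANCE_GROUND = 1000  # W/m^2, clear sky at sea level

CF_ORBIT = 0.99    # dusk-dawn sun-synchronous orbit: near-continuous sunlight
CF_GROUND = 0.20   # good terrestrial solar site (night, weather, sun angle)

HOURS_PER_YEAR = 8760

# Annual energy yield per square meter of panel, in kWh
annual_yield_orbit = SOLAR_CONSTANT_ORBIT * CF_ORBIT * HOURS_PER_YEAR / 1000
annual_yield_ground = PEAK_IRRADIANCE_GROUND * CF_GROUND * HOURS_PER_YEAR / 1000

ratio = annual_yield_orbit / annual_yield_ground
print(f"Orbit:  {annual_yield_orbit:,.0f} kWh/m^2/yr")
print(f"Ground: {annual_yield_ground:,.0f} kWh/m^2/yr")
print(f"Ratio:  {ratio:.1f}x")
```

With these assumptions the ratio lands around 7x; with a poorer terrestrial site (capacity factor closer to 0.15) it approaches 9x. Either way, the claimed “up to 8x” is the right order of magnitude — the advantage comes almost entirely from never seeing night or weather.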

The skeptics are equally specific — and students should pay close attention, because this is what real engineering analysis looks like when it meets ambitious claims.

The Breakthrough Institute called the concept “magical thinking.” Radiation causes bit flips and permanent chip damage — Meta’s Llama 3 training had 419 unexpected interruptions in 54 days on Earth, and space adds radiation failures on top. Protecting against it requires triple modular redundancy, tripling launch costs and capex. AI chips become obsolete in 2–3 years, meaning entire constellations need replacement on that cycle. Sam Altman doesn’t expect orbital compute to deliver meaningful capacity “within five years.” AWS CEO Matt Garman dismissed it as unrealistic given launch capacity. Harvard astrophysicist Jonathan McDowell called a million new satellites “a big challenge for astronomy” and said removing failed ones is “absolutely required” to prevent Kessler Syndrome.
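Triple modular redundancy is worth understanding, because it explains why radiation tolerance roughly triples mass, power, and launch cost: you run the computation three times and take a majority vote, so a single corrupted replica is outvoted. A toy sketch of the voting logic — real TMR lives in hardware voters, not application Python:

```python
from collections import Counter

def tmr_vote(replica_a, replica_b, replica_c):
    """Majority vote across three redundant computations.

    A single-event upset corrupts at most one replica, so the
    other two still agree and the fault is masked.
    """
    votes = Counter([replica_a, replica_b, replica_c])
    value, count = votes.most_common(1)[0]
    if count < 2:
        raise RuntimeError("No majority: multiple replicas disagree")
    return value

# One replica suffers a simulated bit flip; the vote masks it.
correct = 0b1011
flipped = correct ^ 0b0100   # single bit flipped in one copy
print(tmr_vote(correct, flipped, correct))   # prints 11 (0b1011)
```

The mask works only against single faults — if two replicas are hit between votes, the system fails, which is why radiation environment and voting frequency drive the design.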

The honest framing: Starcloud has put exactly one GPU in orbit. The filing is for 88,000 satellites with four-kilometer solar arrays powering five-gigawatt data centers. The gap between those two points is enormous — in engineering, in capital, in regulatory approval, and in the physics of cooling a computer in a vacuum where, as Jensen Huang noted, “there’s no conduction, there’s no convection — there’s just radiation.”
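Huang’s point can be made quantitative with the Stefan-Boltzmann law, the only heat-rejection channel available in vacuum. The sketch below uses temperatures and emissivities we chose for illustration, and it ignores absorbed sunlight and Earth infrared, so real radiators would need to be larger still:

```python
# Rough radiator sizing in vacuum: the only way to reject heat is
# thermal radiation, P = emissivity * sigma * A * T^4.
# All inputs are illustrative assumptions, not mission values.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def radiator_area(heat_watts, temp_k=290.0, emissivity=0.9, sides=2):
    """Panel area (m^2) needed to radiate `heat_watts` to deep space.

    Radiates from `sides` faces of the panel; ignores absorbed
    sunlight and Earth IR, so this is a lower bound.
    """
    flux = emissivity * SIGMA * temp_k**4   # W per m^2 per face
    return heat_watts / (flux * sides)

for load_kw in (100, 1000):
    area = radiator_area(load_kw * 1000)
    print(f"{load_kw:>5} kW -> ~{area:,.0f} m^2 of radiator panel")
```

Even this optimistic lower bound gives roughly 140 m² for a 100 kW system and 1,400 m² for 1 MW — and the T⁴ term means running chips hotter helps far more than adding panel. That is the physics behind the tennis-court and soccer-field comparisons.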

But the gap between “impossible” and “inevitable” in aerospace has historically been shorter than skeptics expected. The question isn’t whether orbital computing is feasible in principle — the demonstrations have begun. It’s whether the engineering, economics, and launch cadence can close the gap between a 60 kg prototype and an 88,000-satellite constellation before the terrestrial grid solves its own power problem.

Career Connection

This emerging sector sits at the intersection of nearly every aerospace discipline — which is why it matters even if the most ambitious filings never fully materialize:

  • Aerospace Engineering — Radiation-hardened computing, thermal management in vacuum (radiators the size of tennis courts for 100 kW systems, soccer fields for 1 MW), power system design for near-continuous solar, and optical intersatellite link engineering. These are real spacecraft design problems being worked on now at Starcloud, Google, SpaceX, and Nvidia.
  • Space Operations — Managing constellations of thousands to hundreds of thousands of satellites requires autonomous orbit management, collision avoidance at unprecedented scale, and space domain awareness. The Space Force already tracks 22,000+ objects — filings like these would multiply that challenge by orders of magnitude.
  • Aerospace Manufacturing — Building 88,000 (or a million) satellites requires manufacturing at automotive scale, not traditional aerospace batch production. This is a factory problem as much as a design problem — and it needs production engineers, supply chain managers, and quality systems specialists.
