Frequently Asked Questions
Technical questions from engineers and investors about the Centradiant architecture.
▸ Why orbital data centers at all?
AI compute demand is growing faster than terrestrial infrastructure can scale. Data centers already consume over 4% of US electricity, and that share is climbing fast. Meanwhile, space offers effectively unlimited real estate, abundant solar energy (1,361 W/m², uninterrupted in a dawn/dusk orbit), and natural vacuum for thermal radiation. The economic case isn't about replacing terrestrial data centers — it's about adding capacity where Earth-side constraints (power grid, water, permitting, land) are becoming binding.
The near-term use case is even more compelling: on-orbit processing for Earth observation. Imaging satellites generate roughly 26 TB/day across major constellations, but 70–90% of that imagery is clouds, ocean, or unchanged terrain. Processing on-orbit with ML triage — cloud detection, change detection, object classification — can reduce downlink volume by 90% and deliver actionable intelligence in minutes instead of hours. That's the difference between "interesting data" and "tactical advantage," particularly for defense and disaster response.
Several major players are already moving: SpaceX has explored Starlink compute, Starcloud is developing orbital GPU infrastructure, and multiple defense programs (SDA Proliferated Warfighter Space Architecture) need on-orbit processing. The question isn't whether orbital compute will happen — it's who solves the thermal bottleneck first.
▸ What's the novel idea here?
Liquid Droplet Radiators have been studied since the 1980s — the physics of spraying hot fluid into vacuum and letting droplets radiate heat is well understood. The problem has always been collection. Previous designs relied on electromagnetic steering or electrostatic charging to guide droplets back to a collector, adding complexity, parasitic power, and failure modes. None reached TRL 5.
Centradiant's innovation is using centrifugal force for collection. A slowly spinning disk (2 RPM) creates 0.045g at the 10-meter rim — enough to passively drive droplets outward from hub-mounted nozzles to a dual-layer mesh collector at the rim. No magnets. No charging. No parasitic collection power. The working fluid (CB-DC705, a silicone oil doped with 100 ppm carbon black for emissivity) never crosses the rotary interface. Heat transfers from the non-rotating spacecraft bus through a Galinstan liquid-metal thermal joint — conductive coupling, no fluid seals.
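For readers who want to check the spin numbers, here's a minimal sketch; the 2 RPM and 10 m figures come from the design above, everything else is just kinematics:

```python
import math

# Spin rate and rim radius from the design
rpm = 2.0
omega = rpm * 2 * math.pi / 60           # rad/s, ~0.209
rim_radius = 10.0                        # m

# Centripetal acceleration at the rim: a = omega^2 * r
a_rim = omega**2 * rim_radius            # m/s^2
print(f"rim acceleration: {a_rim:.2f} m/s^2 = {a_rim / 9.81:.3f} g")
# -> ~0.44 m/s^2, i.e. ~0.045 g, matching the stated figure
```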
The disk geometry is the second key insight. Earlier LDR concepts used spherical mesh enclosures requiring hundreds of square meters of mesh. The disk constrains droplets to a radial plane, so only the peripheral rim needs mesh — roughly a 95% reduction in mesh area from the ~250 m² of a spherical enclosure, bringing the dual-layer Ti-6Al-4V mesh down to about 19 kg. This is what makes the system launchable on a single rideshare.
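To see why a rim band is so much smaller than an enclosing sphere, here's an illustrative comparison; the 0.2 m band height is an assumed number for the sketch, not a published design value:

```python
import math

sphere_mesh_area = 250.0   # m^2, prior spherical-enclosure concepts (from above)

# Disk concept: mesh only on the peripheral rim band
rim_radius = 10.0          # m
rim_height = 0.2           # m, ASSUMED band height for illustration
rim_mesh_area = 2 * math.pi * rim_radius * rim_height

reduction = 1 - rim_mesh_area / sphere_mesh_area
print(f"rim mesh: {rim_mesh_area:.1f} m^2 ({reduction:.0%} less than a sphere)")
# -> ~12.6 m^2, i.e. a ~95% reduction for this assumed band height
```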
The result is 48 patent claims (8 independent) covering the centrifugal disk LDR architecture. Patent Application #63/981,796 was filed February 12, 2026.
▸ There's about a 20–30 kg/kW delta from other designs. Is that from reduced weight or lower parasitic consumption?
It's primarily reduced mass. Traditional deployable panel radiators come in at 25–35 kg/kW(th). Centradiant's spinning disk LDR achieves 4.4 kg/kW(th) under conservative assumptions — roughly a 6–8× improvement. That delta comes from the fundamental physics: liquid droplets have enormous surface-area-to-mass ratios compared to solid panels, so you radiate the same heat with far less material.
The parasitic power story reinforces the advantage but isn't the primary driver. Centrifugal collection requires zero electrical power — the spin does the work. The main parasitic load is the DC-705 pump at 585 W, which pushes fluid through the PCHE heat exchanger and nozzle array. That's included in our power budget (the solar array is sized at 145 m² to cover GPUs, pumps, avionics, and thermal control). By contrast, electromagnetic collection systems in prior LDR concepts required significant power for steering magnets or electrostatic charging, eating into the net compute capacity.
In concrete terms: our 934 kg launch-mass spacecraft rejects 54 kW of thermal energy and delivers 44.8 kW of compute (64 GPUs). A traditional radiator system sized for 54 kW thermal would mass 1,350–1,890 kg for the radiator alone — before you add the compute payload, solar arrays, or bus. That mass delta is the difference between a viable rideshare launch and a dedicated launch vehicle.
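The mass arithmetic, using only the figures above (the ~240 kg LDR radiator mass is derived from the 4.4 kg/kW figure, not separately published):

```python
q_thermal_kw = 54.0                          # kW(th) to reject

panel_kg_per_kw = (25.0, 35.0)               # traditional deployable panels
ldr_kg_per_kw = 4.4                          # spinning-disk LDR

panel_lo, panel_hi = (k * q_thermal_kw for k in panel_kg_per_kw)
ldr_mass = ldr_kg_per_kw * q_thermal_kw

print(f"panel radiator: {panel_lo:.0f}-{panel_hi:.0f} kg")   # 1350-1890 kg
print(f"LDR radiator:   {ldr_mass:.0f} kg")                  # ~238 kg
```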
▸ How can you reliably collect the droplets with this centrifugal system?
The physics works in our favor here. Nozzles at the hub (2 m radius) generate 376 µm droplets via natural Rayleigh breakup from 200 µm laser-drilled orifices. In the rotating frame, centrifugal acceleration pushes each droplet radially outward, growing with radius to reach 0.045g at the rim. The droplets travel from hub to rim in about 10.8 seconds, arriving at the dual-layer Ti-6Al-4V mesh collector with a Weber number of approximately 24.
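A simplified model of the outward drift reproduces the quoted transit time. This sketch ignores Coriolis deflection and the nozzle exit velocity, so treat it as a plausibility check rather than the flight dynamics model:

```python
import math

omega = 2.0 * 2 * math.pi / 60     # 2 RPM in rad/s
r_hub, r_rim = 2.0, 10.0           # m, nozzle ring and mesh radii

# Rayleigh breakup: droplet diameter ~1.89x the jet (orifice) diameter
d_droplet = 1.89 * 200e-6          # -> ~378 um, close to the quoted 376 um

# Rotating-frame radial equation r'' = omega^2 * r, starting from rest:
# r(t) = r_hub * cosh(omega * t)  =>  t = acosh(r_rim / r_hub) / omega
t_transit = math.acosh(r_rim / r_hub) / omega
print(f"droplet diameter ~{d_droplet*1e6:.0f} um, transit ~{t_transit:.1f} s")
# -> ~10.9 s, in line with the ~10.8 s quoted above
```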
Weber number is the key metric for capture reliability — it quantifies impact energy relative to surface tension. Below We ≈ 40, droplets are captured without splashing on standard mesh. With our hydrophilic (SiOₓ PECVD) mesh coating, the critical Weber number rises to 100–300 based on published heat exchanger literature. At We = 24, we have substantial margin. Once captured, centrifugal force continuously presses fluid against the mesh (0.045g at the rim), driving it into chevron drainage channels and back through spoke-integrated return tubes to the hub.
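As a consistency check, we can back out the impact velocity implied by We = 24. The density and surface tension here are textbook-order values for DC-705-class silicone fluids, not published Centradiant numbers:

```python
import math

rho = 1090.0      # kg/m^3, ASSUMED fluid density
sigma = 0.036     # N/m, ASSUMED surface tension
d = 376e-6        # m, droplet diameter (from the design)

# We = rho * v^2 * d / sigma  =>  v = sqrt(We * sigma / (rho * d))
v_impact = math.sqrt(24 * sigma / (rho * d))
v_splash = math.sqrt(40 * sigma / (rho * d))   # uncoated-mesh threshold

print(f"implied impact velocity: ~{v_impact:.2f} m/s")   # ~1.45 m/s
print(f"We=40 splash threshold:  ~{v_splash:.2f} m/s")   # ~1.87 m/s
```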
Our analysis shows capture efficiency exceeding 99.99% per cycle. Even in our red-team analysis, the centrifugal collection mechanism itself was not flagged as a critical risk — the concerns were about edge cases like droplet coalescence and long-term mesh fouling, both of which have identified mitigations. The 27,248 nozzles provide massive redundancy: losing 1% of nozzles has negligible thermal impact. This is one of the areas we'll validate directly in the $225K ground prototype — droplet impact on mesh under a centrifugal analog in vacuum.
▸ Do you lose some of the liquid and does it require recharging?
Yes, there are two loss mechanisms, and both are manageable. The first is evaporative loss. DC-705 has exceptionally low vapor pressure (4×10⁻¹⁰ torr at 25°C), which is why it was selected — it's one of the lowest-volatility fluids available. At our operating temperature of ~48°C, Langmuir evaporation modeling gives approximately 0.9 kg/year of loss. The system carries 106 kg of fluid inventory, including 6 kg of evaporative spare, so this is well within the 5-year mission design life without resupply.
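For the curious, the Langmuir flux calculation looks like the sketch below. The vapor pressure at 48°C, the molar mass, and the exposed droplet-cloud area are all assumed values chosen to be plausible, so the output is illustrative only:

```python
import math

# Langmuir free-molecular evaporation: J = P_sat * sqrt(M / (2*pi*R*T))
R = 8.314                 # J/(mol K)
T = 321.0                 # K, ~48 C operating point
M = 0.546                 # kg/mol, ASSUMED molar mass of DC-705
P_sat = 4e-9 * 133.3      # Pa; ASSUMED vapor pressure at 48 C
                          #   (~10x the quoted 25 C value of 4e-10 torr)
A_cloud = 10.0            # m^2, ASSUMED exposed droplet surface area

J = P_sat * math.sqrt(M / (2 * math.pi * R * T))   # kg/(m^2 s)
print(f"evaporative loss: ~{J * A_cloud * 3.156e7:.1f} kg/yr")  # order 1 kg/yr
```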
The second mechanism is escape through the mesh. At >99.99% capture efficiency per cycle, the loss rate is extremely small. Our V2 analysis calculated a fluid loss rate of 0.099 kg/day (meeting the <0.5 kg/day requirement), but that was for the earlier design iteration. The V3 disk geometry with the hydrophilic mesh coating improves on this significantly, because the Weber number at impact is well within the safe capture regime.
For multi-year missions, we've budgeted for the losses. The system does not require active recharging during its 5-year design life. That said, if orbital servicing becomes routine (as several companies are developing), topping off the fluid inventory would be a straightforward operation — it's just pumping silicone oil into a reservoir, far simpler than most satellite servicing tasks.
▸ What about general serviceability/upgradability? Won't that require astronaut missions? Or all robotically?
The baseline architecture is designed as a 5-year expendable mission — no servicing required. The system is designed to operate autonomously for its full mission life with all consumables (fluid, xenon propellant) budgeted from launch. At end of life, the spacecraft deorbits.
That said, the modular architecture lends itself well to robotic servicing, which is where the industry is heading. The compute module is a self-contained unit (64 GPUs, cold plates, power distribution) that could be designed as an orbital replacement unit (ORU). Fluid replenishment is mechanically simple — a docking port to the reservoir. The most likely upgrade path would be swapping the compute module for next-generation GPUs while keeping the thermal and power bus intact, since the radiator and solar arrays don't become obsolete.
We don't envision astronaut servicing. The spacecraft operates at 700 km sun-synchronous orbit, which is not routinely accessible to crewed vehicles. Robotic servicing (from companies like Orbit Fab, Astroscale, or Northrop's MEV lineage) is the realistic path. But to be clear: this is a future option, not a requirement. The economics work on a 5-year expendable basis, and any servicing capability is upside.
▸ What kind of workloads are we talking about? What's the interconnect? What's the sweet spot in scale?
The baseline "Centradiant Pathfinder" carries 64 H100 SXM5 GPUs delivering 63.3 PFLOPS FP16 peak (56.8 PFLOPS sustained at ~90% availability). Internal interconnect is NVSwitch — the same fabric used in DGX systems — giving full bisection bandwidth between all GPUs within the node. This handles ML training, fine-tuning, and inference workloads natively.
For inter-satellite communication, the baseline uses Ka-band downlink to dedicated ground stations. We have not baselined laser inter-satellite links (ISLs) in the current design, though the architecture is compatible with them. The honest answer is that for the primary near-term use case — on-orbit processing of Earth observation data — you don't need high-bandwidth inter-satellite links. The satellite processes imagery locally and downlinks only results (metadata, change detections, compressed products), reducing bandwidth requirements by 90%+. For workloads requiring multi-node coordination (distributed training), laser ISLs at 10+ Gbps would be needed, and that's a future constellation capability, not a Pathfinder requirement.
On scale: we've run parametric sizing from 8 to 64 GPUs. The thermal system scales smoothly — disk radius goes from 3.4 m (8 GPUs) to 9.2 m (64 GPUs), with solar array area from 20 m² to 145 m². The sweet spot is 64 GPUs: it maximizes ROI per unit because fixed costs (bus, ADCS, comms, ground ops) are amortized over more compute. Smaller configurations (16–32 GPUs) make sense as pathfinder missions to retire risk before scaling. The total power envelope is roughly 50 kW for the 64-GPU configuration, with 54 kW of thermal rejection. All configurations fit within a Falcon 9 rideshare mass/volume envelope.
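The quoted endpoints are roughly consistent with simple scaling laws (radiated power, and hence disk area, proportional to GPU count). The exponents below are my assumption for illustration, not the actual sizing model:

```python
import math

def disk_radius_m(n_gpus, r64=9.2):
    # disk area ~ thermal power ~ GPU count  =>  radius ~ sqrt(n)
    return r64 * math.sqrt(n_gpus / 64)

def array_area_m2(n_gpus, a64=145.0):
    # solar array area ~ electrical power ~ GPU count
    return a64 * n_gpus / 64

for n in (8, 16, 32, 64):
    print(f"{n:2d} GPUs: disk ~{disk_radius_m(n):.1f} m, "
          f"array ~{array_area_m2(n):.0f} m^2")
# 8-GPU point comes out ~3.3 m / ~18 m^2 vs the quoted 3.4 m / 20 m^2;
# the small offsets are consistent with fixed bus and avionics overhead.
```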
▸ How do you force the coolant over the thermal interface? Does this design still use a pump?
Yes, there is a pump — but it's on the rotating side only, and it's doing conventional work. The thermal chain has two distinct segments. On the bus (non-rotating) side, a small water loop circulates through GPU cold plates and carries heat to the rotary interface. On the rotating side, a magnetically-coupled gear pump pushes CB-DC705 through a printed circuit heat exchanger (PCHE), out through spoke-integrated piping to the 13 nozzle plates, where it's atomized into the droplet cloud. After collection at the rim mesh, centrifugal force drains the fluid through chevron channels back to the hub reservoir.
The critical design choice is how heat crosses from the non-rotating bus to the spinning disk. We use a Galinstan liquid-metal thermal joint — a 100 mm radius × 300 mm long annular gap filled with 0.5 mm of Galinstan (a gallium-indium-tin alloy that's liquid at room temperature). Heat conducts through the liquid metal with only a 2.9°C temperature drop. No fluid seals, no rotating pipe joints, no risk of coolant leaking across the interface. The working fluid stays entirely on one side of the rotary boundary.
The DC-705 pump draws 585 W — the main parasitic power load. It pushes fluid through 228 kPa of total pressure drop (nozzle plates + PCHE channels + piping). This is explicitly included in the power budget and solar array sizing. The pump is a conventional magnetically-coupled design (no shaft seal to vacuum), which is mature technology; heritage exists from ISS fluid loops operating in similar conditions.
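Backing out the implied flow rate and loop temperature rise; the pump efficiency and fluid properties here are assumed values, not published figures:

```python
p_elec = 585.0    # W, pump draw (from above)
dp = 228e3        # Pa, total loop pressure drop (from above)
eta = 0.5         # ASSUMED wire-to-hydraulic efficiency

q_vol = p_elec * eta / dp            # m^3/s of DC-705
m_dot = q_vol * 1090.0               # kg/s, ASSUMED density 1090 kg/m^3

cp = 1700.0                          # J/(kg K), ASSUMED for silicone oil
dT = 54e3 / (m_dot * cp)             # rise needed to carry 54 kW(th)
print(f"flow ~{q_vol*1e3:.1f} L/s, loop dT ~{dT:.0f} K")   # ~1.3 L/s, ~23 K
```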
▸ How do you expect to maneuver the craft? Won't thrust cause you to lose some of the coolant?
This is a great question, and it breaks into two parts: attitude control (pointing the spacecraft) and station-keeping (maintaining orbit).
For attitude control, the spacecraft uses a pyramidal array of four reaction wheels plus three magnetorquers for desaturation. The spinning disk itself acts as a gyroscopic stabilizer — with a spin-to-transverse moment of inertia ratio of 2.0, it's a major-axis spinner, which is inherently stable against wobble (energy dissipation damps nutation rather than amplifying it). The angular momentum is 1,316 N·m·s at 2 RPM. Reaction wheels handle fine pointing (within ±8.2° demonstrated in simulation), and precession-compensating feedforward in the ADCS software manages the gyroscopic cross-coupling during slew maneuvers. Critically, reaction wheels produce zero translational acceleration — they only apply torques — so the droplet cloud is undisturbed during attitude maneuvers.
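From the quoted figures you can back out the spinning section's moment of inertia, a quick sketch:

```python
import math

omega = 2.0 * 2 * math.pi / 60       # 2 RPM in rad/s
H = 1316.0                           # N*m*s, quoted angular momentum

I_spin = H / omega                   # ~6,300 kg*m^2
I_transverse = I_spin / 2.0          # from the quoted 2.0 inertia ratio
print(f"I_spin ~{I_spin:.0f} kg m^2, I_transverse ~{I_transverse:.0f} kg m^2")
# Ratio > 1 means major-axis spin: passively stable against nutation.
```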
Station-keeping is handled by a Hall-effect thruster (SPT-50 class) with xenon propellant, budgeted for the full 5-year mission at 700 km SSO. Here's where the coolant question gets interesting: a thruster burn does apply translational acceleration to the spacecraft, which would shift the droplet trajectories relative to the mesh collector. The practical solution is to pause nozzle spray during burns. Our thermal transient analysis shows the system has significant thermal inertia — GPU temperature rises at about 1°C/s under full load with no cooling, and thermal interlocks trigger within 4–8 seconds. Station-keeping burns for a LEO spacecraft are infrequent (a few minutes per week) and low-thrust (millinewton-class for electric propulsion), so pausing the spray briefly is operationally straightforward. You'd reduce GPU load or briefly idle compute during the burn window.
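A rough burn-window budget, where the allowable temperature headroom is an assumed number for illustration:

```python
dT_dt_full_load = 1.0    # deg C/s with no cooling at full load (from above)
margin_c = 20.0          # deg C, ASSUMED headroom to the thermal limit

t_full_load = margin_c / dT_dt_full_load
print(f"spray can pause ~{t_full_load:.0f} s at full GPU load")
# Burns last minutes, so in practice compute is idled: at a residual
# load of ~10%, the same headroom lasts roughly 10x longer.
```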
The spin axis is oriented along the orbit normal, which minimizes coupling between orbit-keeping burns (primarily in-track) and the disk plane. This is an active area of our operations concept development — the detailed burn-timing and thermal-management protocol during maneuvers needs further analysis, but there's no fundamental physics problem. The architecture has the margins to accommodate it.
▸ What if chip thermal efficiency improves? Wouldn't that reduce the need for active cooling and make this unnecessary?
This is a common intuition, and it's worth addressing directly: improving chip efficiency does not reduce the need for cooling. It increases the value of the cooling system.
The power input to the system is fixed. Solar panels convert photons into electricity, and every watt of electricity consumed by a chip becomes a watt of heat. This is not a design limitation. It is thermodynamics. There is no version of a processor that performs computation without producing heat. A more efficient chip does more FLOPS per watt, but it still converts that watt entirely into thermal energy. Landauer's principle establishes the theoretical floor, and real processors operate orders of magnitude above it.
What "better thermal efficiency" actually means in practice is one of two things. First, more computation per watt (improved performance/watt). This does not reduce total heat output. It means you need fewer watts to do the same work, which frees capacity to do more work within the same power and thermal envelope. Economics always pushes toward filling available capacity. If you have a fixed solar budget and a fixed cooling budget, you will utilize both to their limits. The cooling constraint remains binding.
Second, better heat spreading within the chip (improved thermal management at the die level). This helps move heat from the junction to the heatsink more effectively, but it does not eliminate the heat. It still has to leave the spacecraft. In space, radiation is the only mechanism available. No amount of on-chip thermal engineering changes the spacecraft-level energy balance.
The fundamental equation is simple: watts in must equal watts out. Solar input is fixed by array size. Every watt of that input, after conversion losses, ends up as heat that must be radiated to space. The radiator capacity determines the maximum power budget of the station. More efficient chips make the platform more valuable by increasing the computation you can extract from each watt, but they do not shrink the thermal rejection requirement. If anything, they make the case for investing in cooling infrastructure stronger, because each watt of cooling capacity now enables more revenue-generating computation.
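The argument in one sketch: under a fixed power budget, a more efficient chip changes the compute output, not the heat that must be radiated (the 2x figure is a hypothetical):

```python
power_budget_kw = 44.8          # compute power envelope (from above)

for flops_per_watt in (1.0, 2.0):        # normalized; 2.0 = hypothetical chip
    compute = power_budget_kw * 1e3 * flops_per_watt   # normalized FLOPS
    heat_kw = power_budget_kw                          # watts in = watts out
    print(f"{flops_per_watt:.0f}x efficiency: {compute:.0f} compute units, "
          f"{heat_kw:.1f} kW still to radiate")
```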
Have a technical question not covered here?
Get in Touch →