Problem Statement
The Department of War (DoW) faces a widening gap between the computational demands of modern warfighting and its ability to deliver compute across strategic, operational, and tactical levels. AI now underpins situational awareness, command and control, intelligence fusion, and autonomous-systems coordination. Modern workloads require 50–150 kW per rack today, with anticipated needs scaling to 250 kW and above, an order of magnitude beyond the ~10–35 kW per rack typical of current field-deployable IT. Current infrastructure struggles to adequately cool modern AI accelerators, condition degraded power in austere theaters, or survive contested environments. Warfighters lack sovereign, survivable, high-density compute at the point of need, delivered as rapidly deployable, logistically sustainable Modular Data Centers (MDCs).
Desired Solutions
The Department of War seeks to prototype a Modular Data Center product family delivered along two parallel solution paths sharing a common architectural baseline. Both paths shall be modular by construction, support linear capacity expansion through standardized module additions, and provide space for up to ten IT racks per module at the densities defined below. Vendors may propose against Solution Path I, Solution Path II, or both.
Solution Path I — Core MDC, a stationary or semi-fixed installation providing approximately one megawatt of IT power and matched cooling capacity per module, supporting per-rack densities of 150 kilowatts with engineering headroom to scale to 250 kilowatts. Modules interlink to an aggregate envelope of approximately ten megawatts. The cooling baseline is optimized for liquid-based thermal management.
Solution Path II — Edge MDC, a highly mobile, forward-deployed installation providing approximately 250 kilowatts of IT power and matched cooling capacity per module, supporting per-rack densities of 50 kilowatts with engineering headroom to scale to higher densities. Modules interlink to an aggregate envelope of approximately two megawatts. The solution shall achieve full operational capability rapidly upon arrival in theater, on utility or generator power. The cooling baseline assumes a ruggedized thermal management system suited to high-density compute and austere environments.
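The rack and module counts implied by the envelopes above follow from simple division; a minimal sketch, using only the approximate figures stated for each path (the assumption here is that per-module IT power, not physical rack space, is the binding constraint at full density):

```python
# Illustrative sizing arithmetic from the stated envelopes.
# All figures are the approximate values given in the solution paths.

def racks_per_module(module_kw: float, rack_kw: float) -> int:
    """Racks a module's IT power budget can feed at full per-rack density."""
    return int(module_kw // rack_kw)

def modules_for_aggregate(aggregate_kw: float, module_kw: float) -> int:
    """Modules needed to reach the aggregate envelope."""
    return int(aggregate_kw // module_kw)

# Solution Path I - Core MDC: ~1 MW module, 150 kW racks, ~10 MW aggregate
core_racks = racks_per_module(1_000, 150)              # 6 racks at full density
core_modules = modules_for_aggregate(10_000, 1_000)    # ~10 modules

# Solution Path II - Edge MDC: ~250 kW module, 50 kW racks, ~2 MW aggregate
edge_racks = racks_per_module(250, 50)                 # 5 racks at full density
edge_modules = modules_for_aggregate(2_000, 250)       # 8 modules
```

Read this way, a ten-rack module running every rack at maximum stated density would exceed its power envelope, so the ten-rack space allowance appears to provide physical growth headroom rather than a fully powered baseline.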
Desired Solution Attributes
Modular Compute Expansion. Linear scaling through standardized container additions with secure interconnection that preserves environmental sealing, TEMPEST, and thermal performance.
- Structural capacity for liquid-cooled HPC payloads: minimum 4,000 lb per rack point load and 450 psf distributed floor load, or greater to accommodate future HPC payloads (e.g., 6,500+ lb)
- Secure interconnection methodology between modular containers (e.g., environmental seal rating (IP class), TEMPEST treatment at the interconnect, and connection time per module)
- Capable of sustained AI workload performance (e.g., per-rack MLPerf results sustained over a 4-hour run at maximum rated ambient with no thermal throttling; report performance per kW)
High-Density Power Delivery and Resilience. Conditioned, resilient power that sustains AI accelerator load profiles and large transient fluctuations across utility and generator sources.
- Accept and condition utility or generator power across a wide voltage range (208–480 V) and frequency range (45–65 Hz), filtering harmonic distortion in both directions
- Deliver IT power compatible with regional voltage and frequency standards in OCONUS deployment areas (e.g., 230V/50Hz, 200V/50–60Hz, 480V/60Hz)
- Sustained IT power availability: 99.99% threshold across utility and generator transition events, with a 99.999% objective; N+1 or better redundancy topology across all critical electrical subsystems
- Reserved spare capacity for load growth (e.g., 10% or more of installed IT capacity)
- UPS sized to bridge loss of primary power through automatic generator start
- BESS sized for extended generator-off operations or low-observable tactical use (Solution Path II)
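The availability figures above translate directly into annual downtime budgets; a short sketch of that arithmetic (this illustrates the targets only, not how the Government will measure availability):

```python
# Annual downtime budget implied by an availability target.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of allowable downtime per year at a given availability."""
    return (1.0 - availability) * MINUTES_PER_YEAR

threshold = downtime_minutes_per_year(0.9999)   # 99.99%: ~52.6 min/yr
objective = downtime_minutes_per_year(0.99999)  # 99.999%: ~5.3 min/yr
```

At the objective level, the entire annual budget is a handful of minutes, which is why the requirements pair the availability target with N+1 redundancy and UPS/BESS bridging across generator-start and transfer events.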
High-Density Thermal Management. Cooling engineered for sustained AI workloads at the per-rack densities defined by both solution paths, with ruggedization for Solution Path II in austere environments.
- Downstream architectural flexibility for evolving payload cooling approaches, including rear-door heat exchangers, direct-to-chip liquid cooling, and immersion cooling
- Vendor-provided narrative capability matrix declaring which cooling approaches are supported at which rack-density tiers
- Ambient operating range for Solution Path II: threshold of -20°F to 122°F (-29°C to 50°C), or a wider range
Networking and Cybersecurity. Scalable COTS network sized for high-density AI, with boundary defense, cryptography, and supply chain controls intended to align with current DoW standards.
- Backend AI and storage fabric backbone: 25 Gbps to 800 Gbps+ for ultra-low-latency accelerator-to-accelerator and storage traffic
- Frontend general compute: 10 Gbps to 100 Gbps for management traffic and locally hosted mission applications
- External boundary: 1 Gbps to 100 Gbps WAN connectivity for inter-site links
- Weatherproofed external ingress: multi-orbit SATCOM (LEO/MEO/GEO), GNSS/NTP, and sealed ports for terrestrial fiber
- Next-Generation Firewall with inline threat prevention and IDS on External Boundary and Frontend traffic; Backend AI fabric may bypass inline inspection, governed instead by identity-aware ZTNA and micro-segmentation
- Configuration hardening to DoW-approved security baselines or government-accepted vendor equivalents (e.g., DISA STIG or DISA RME-approved Vendor STIG), UCR-CORE interoperability, and FIPS-compliant cryptography for data in transit and at rest
- SCRM compliance with relevant department standards (e.g., NDAA §889, DoDI 5200.44, and TAA)
Deployment and Sustainment. Posture and sustainment differ materially between paths.
Solution Path I — Core MDC:
- Sited installation with planned utility, fiber, and seismic preparation; deployment driven by site-readiness milestones
- Supportable by contracted personnel at skill levels equivalent to commercial data center technicians and electricians
Solution Path II — Edge MDC:
- Air- and ground-transportable (e.g., C-17, tractor-trailer)
- Full operational capability at an unprepared site (e.g., within 96 hours) upon arrival in theater, with provision for rapid displacement
- Supportable in theater by trained military technicians at defined skill levels
Survivability and Data Sovereignty. Resilient mission performance in contested environments through hardened, secure facilities that protect classified payloads at the point of need. Solutions shall augment commercial baselines with defense-specific hardening, including TEMPEST and CBRN, to meet rigorous military and industry standards.
- Threat protection pathway. Defined roadmap and timeline for achieving TEMPEST and CBRN protection tiers
- Accreditation posture. Solution Path I shall demonstrate a pathway to full TS/SAPF accreditation per ICD 705; Solution Path II shall provide a deployable accreditation posture (e.g., T-SCIF or equivalent)
- Cryptographic sovereignty. Architectural assurance that the Government maintains exclusive control over cryptographic keys, with zero vendor access to plaintext data or key material
Submission and Award
Submission Content. Submissions are requested to include:
- A label identifying the submission as Solution Path I, Solution Path II, or both
- An overview and technical details of the proposed solution
- Examples of successful deployment of similar solutions in the commercial sector (highly encouraged)
- Identification of any partners or subcontractors and the capabilities each will deliver
- Status of current production readiness and the estimated level of effort and timeframe to customize the solution for the Government's purpose
Evaluation Preferences. Preference will be given to submissions that:
- Demonstrate product maturity and deployment validation — products that readily fit or can be adapted for the solicited purpose with the least non-recurring engineering and demonstrate clear subject matter expertise (diagrams, figures encouraged)
- Demonstrate concrete integration and interoperability, particularly across partnered teams
Collaboration and Testing. The Government may test multiple solutions at separate or shared locations. Providers should expect to participate in a shared development space with other vendors, operators, and government developers to rapidly iterate, and may be asked to collaborate in cross-functional efforts under previously established contractual vehicles. Solutions submitted under this AOI may be used in a standalone capacity or as a component of a more complex DIU program, including via technology insertion into other prototyping efforts.