L2D2 District Cooling System
Engineered for AI-Scale, Liquid-Cooled Data Infrastructure
Next-generation thermal management designed for the extreme demands of AI compute workloads, delivering scalable liquid cooling at unprecedented rack densities.
Why District Cooling Matters for AI Data Centers
The data center industry is undergoing a fundamental transformation. As workloads shift toward high-density, AI-native compute architectures such as NVIDIA's Rubin (NVL144), Kyber racks, and H100/B100/GH200 systems, traditional air-cooled environments have reached their physical limits. The thermal envelope of modern GPU clusters simply cannot be managed with conventional CRAC units and hot-aisle containment.
L2D2's district cooling platform represents a paradigm shift in data center thermal management, engineered from the ground up to meet and exceed the extreme demands of current and future GPU-based deployments. This isn't an incremental improvement—it's a complete architectural rethinking of how we cool compute at scale.

The Cooling Challenge
Modern AI racks generate 3-8x more heat per square foot than traditional enterprise IT. With rack densities now exceeding 30kW as a baseline and specialized configurations reaching 80kW+, air cooling is physically incapable of removing heat fast enough.
  • 80kW+ Peak Rack Density: Maximum supported power per rack with 100% liquid cooling coverage
  • 3-8x Heat Generation: Multiplier vs. traditional enterprise infrastructure
  • 100% Liquid Cooling: Only viable solution for AI-scale thermal management
[Figure: L2D2 vs. Traditional Data Center Cooling comparison]
District Cooling System Overview
L2D2's cooling infrastructure operates as a sophisticated two-tiered, high-availability chilled water network that delivers enterprise-grade reliability while supporting extreme thermal loads. The architecture separates facility-scale distribution from rack-level precision, enabling both operational efficiency and tactical flexibility.
Primary Loop (District Loop)
The backbone of the system is a centralized chilled water production plant engineered with N+1 redundancy across all critical components. This facility-scale loop operates as a closed system, delivering consistent thermal capacity across the entire campus.
  • Supply Temperature: 16°C to 18°C (60.8°F to 64.4°F)
  • Return Temperature: 28°C to 30°C (82.4°F to 86°F)
  • Pre-insulated underground distribution with minimal thermal loss
  • Variable speed pumping stations with VFD control
  • Tier III reliability with dual feed paths to each zone
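
For a rough sense of what these temperature ranges imply at the rack, the sketch below sizes the facility-water flow needed to absorb a given load at the quoted supply/return delta-T. It is an illustrative back-of-envelope calculation, not an L2D2 sizing tool: it assumes plain water properties and ignores exchanger effectiveness and line losses.

```python
# Back-of-envelope sizing: facility-water flow needed to absorb a given rack
# load at the primary-loop delta-T quoted above. Illustrative only; assumes
# plain water properties and ignores exchanger effectiveness and line losses.

CP_WATER = 4186.0    # specific heat of water, J/(kg*K)
RHO_WATER = 1000.0   # density of water, kg/m^3

def flow_lpm(load_kw: float, supply_c: float, return_c: float) -> float:
    """Volumetric flow (litres per minute) for a load at a given delta-T."""
    delta_t = return_c - supply_c                          # K
    mass_flow = load_kw * 1000.0 / (CP_WATER * delta_t)    # kg/s
    return mass_flow / RHO_WATER * 1000.0 * 60.0           # L/min

# An 80kW rack with 16C supply and 28C return needs roughly 96 L/min
# (about 25 US gpm) of facility water.
print(f"{flow_lpm(80, 16, 28):.0f} L/min")
```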
Secondary Loop (Rack-Level Integration)
Where the primary loop handles distribution, the secondary loop provides precision delivery and control at the rack level through a manifold-based architecture and intelligent CDUs.
Redundant liquid headers (Loop A & Loop B) ensure continuous operation even during maintenance events. Each rack connects to Coolant Distribution Units (CDUs) that perform critical functions:
  • Interface between chilled water and device-grade coolant
  • Precise heat exchange and temperature regulation
  • Real-time leak detection with autonomous failover
  • Support for multiple cooling modalities
  • Cold Plate Cooling: Direct liquid cooling blocks mounted to GPUs, CPUs, and memory modules for maximum thermal transfer efficiency
  • Rear Door Heat Exchangers: High-efficiency heat rejection at the rack exhaust boundary for hybrid cooling scenarios
  • Direct-to-Chip Liquid: Microchannel cold plates with optimized flow paths for GPU and accelerator cooling
  • Immersion Cooling: Designated zones support single-phase and two-phase immersion systems for specialized deployments
Key Features & Capabilities
The L2D2 district cooling system integrates advanced mechanical engineering with intelligent control systems to deliver unmatched reliability, scalability, and operational flexibility. Every component is designed with redundancy, serviceability, and future expansion in mind.
Full Liquid Cooling Support
The system architecture supports rack densities up to 80kW across all deployment phases, with thermal capacity reserves for future workload intensification. This isn't theoretical—it's tested and validated infrastructure ready for today's most demanding AI configurations.
Infinite Scalability
Modular pumping stations and CDU farms enable phase-by-phase expansion without disrupting existing operations. As your compute footprint grows, cooling capacity scales in lockstep through zero-downtime deployments.
Leak Detection & Isolation
Rack-level moisture sensing with sub-second detection triggers automatic zone shutdown protocols, isolating affected areas while maintaining operation in adjacent zones. Every manifold, header, and quick-connect fitting includes redundant monitoring.
Redundant Design Philosophy
N+1 configuration on primary chillers and pumping infrastructure ensures continuous operation during maintenance or component failure. At the rack level, A/B dual loop architecture means every piece of equipment has two independent cooling paths.
AI-Centric Control System
Intelligent monitoring of flow rate, pressure differential, and delta-T per zone enables predictive maintenance and autonomous optimization. The system learns thermal patterns and adjusts proactively to changing compute loads.
Heat Reuse Infrastructure
The hot water return loop (28-30°C) provides significant thermal energy that can be captured for district heating, industrial processes, or on-site power generation, directly supporting ESG objectives and reducing total facility operating costs.
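
To put the heat-reuse opportunity in concrete terms, the sketch below estimates the recoverable thermal energy for an assumed 10 MW IT load, using the >85% recapture figure quoted under Strategic Advantages later in this document. Both the example load and the simple model are illustrative.

```python
# Back-of-envelope estimate of reusable heat in the 28-30C return loop.
# The 10 MW IT load is an assumed example; the 85% recapture fraction is
# the figure quoted under Strategic Advantages below.

def recoverable_heat_mw(it_load_mw: float, recapture_fraction: float = 0.85) -> float:
    """Nearly all IT electrical load ends up as low-grade heat in the return loop."""
    return it_load_mw * recapture_fraction

# A 10 MW deployment makes roughly 8.5 MW of low-grade heat available for
# district heating, industrial processes, or on-site generation.
print(f"{recoverable_heat_mw(10.0):.1f} MW recoverable")
```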
Architecture & Flow Topology
The L2D2 District Cooling System operates on a multi-stage flow architecture designed for optimal performance, reliability, and energy efficiency. The diagram below visually outlines the complete cooling process, from initial chilling to rack-level heat removal, incorporating key temperature points for each stage.
Chiller Plant (45°F Supply)
Centralized facility with redundant cooling towers and high-efficiency chillers producing the 45°F primary chilled water supply, sized for a plant capacity of 125,000 refrigeration tons (RT) at full buildout.
Primary Loop (45°F/95°F)
Underground network distributing the 45°F chilled water supply to data hall blocks. It receives 95°F return water, ensuring efficient heat transfer across the entire district.
Secondary Loop (55°F Pump, 59°F Supply)
Each data hall uses dedicated CDUs to interface with the primary loop. A 55°F secondary pump circulates a glycol-water mixture, providing a stable 59°F rack coolant supply for IT equipment.
Rack Integration (59°F/65°F)
Quick-connect liquid ports deliver the 59°F coolant directly to racks for cold plate, rear door, or immersion cooling. Heat-laden coolant returns at 65°F from the IT equipment.
Return Loop (65°F Return)
The 65°F warm water from the racks is collected and returned to the secondary loop, then transferred to the primary return. This ensures continuous, efficient heat removal and prepares water for re-cooling or heat recovery systems.
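
One consequence of these temperature points is worth spelling out: because the secondary loop runs a much smaller delta-T (59°F to 65°F) than the primary loop (45°F to 95°F), rack-side circulation rates must be several times higher than district-side rates for the same heat load. The sketch below shows the ratio, assuming similar specific heats on both sides (the glycol-water mixture is actually somewhat lower, which pushes the ratio higher).

```python
# Why rack-side circulation rates run much higher than district-side rates:
# at a fixed heat load Q = m_dot * cp * dT, a smaller delta-T requires
# proportionally more flow. Assumes similar specific heats on both sides.

def secondary_to_primary_flow_ratio(primary_dt_f: float, secondary_dt_f: float) -> float:
    """Mass-flow ratio needed for equal heat transfer across the CDU."""
    return primary_dt_f / secondary_dt_f

# Primary loop: 45F -> 95F (dT = 50F). Secondary loop: 59F -> 65F (dT = 6F).
print(f"secondary flow ~{secondary_to_primary_flow_ratio(50.0, 6.0):.1f}x primary flow")
```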

System Pressure Management
The multi-loop architecture maintains optimal pressure zones: 100-150 PSI in the primary loop, 60-80 PSI in secondary loops, and 20-40 PSI at rack manifolds. This staged pressure reduction protects sensitive equipment while ensuring adequate flow rates for high-density cooling.
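
A minimal configuration sketch of these pressure bands, with a check that each stage steps down toward the rack manifolds, is shown below. The band values come from the paragraph above; the data structure and names are purely illustrative.

```python
# Sketch of the staged pressure bands described above, encoded as a simple
# config with a check that each stage steps down toward the rack manifolds.
# Band values come from the text; structure and names are illustrative.

PRESSURE_BANDS_PSI = {
    "primary_loop":   (100, 150),
    "secondary_loop": (60, 80),
    "rack_manifold":  (20, 40),
}

def staging_is_monotonic(bands: dict) -> bool:
    """Every downstream stage must operate below the stage feeding it."""
    limits = list(bands.values())
    return all(upstream[0] >= downstream[1]
               for upstream, downstream in zip(limits, limits[1:]))

assert staging_is_monotonic(PRESSURE_BANDS_PSI)
```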
Cooling Strategies by Rack Zone
To accommodate diverse compute requirements and optimize performance, the L2D2 system is designed with a zone-based cooling strategy, providing tailored solutions for varying rack densities and cooling needs. Each zone is configured to support specific cooling modalities and ensures robust redundancy for continuous operation.
  • Zone A (Cold Plate + CDU): 30–40kW supported density, N+1 loop redundancy
  • Zone B (Rear Door + Liquid): 45–60kW supported density, dual-loop CDU redundancy
  • Zone C (Direct-to-Chip): 60–120kW+ supported density, direct liquid + CDU redundancy
Zones can be co-certified with NVIDIA and pre-validated for Rubin / Kyber deployments, ensuring rapid qualification and deployment approval for AI infrastructure projects.
This zoned approach allows for maximum flexibility and efficiency, ensuring that each rack receives the optimal cooling method for its specific workload while maintaining overall system reliability and scalability.
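
As a hypothetical illustration of how the zone table might drive placement decisions, the helper below maps a planned rack density onto the zones above. The zone ranges come from the table; the handling of densities that fall between or beyond the published bands is an assumption.

```python
# Hypothetical helper that maps a planned rack density onto the zone table
# above. Zone ranges come from the table; handling of densities outside the
# published bands is an assumption for illustration.

ZONES = [
    ("Zone A", "Cold Plate + CDU",   30, 40),
    ("Zone B", "Rear Door + Liquid", 45, 60),
    ("Zone C", "Direct-to-Chip",     60, 120),
]

def zone_for_density(rack_kw: float) -> str:
    for name, modality, low, high in ZONES:
        if low <= rack_kw <= high:
            return f"{name} ({modality})"
    if rack_kw > ZONES[-1][3]:
        return "Zone C (Direct-to-Chip), subject to engineering review"
    return "Below published zone bands; standard deployment"

print(zone_for_density(52))   # Zone B (Rear Door + Liquid)
print(zone_for_density(95))   # Zone C (Direct-to-Chip)
```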
Deployment Use Cases
Sovereign AI Lab
A sovereign AI lab can deploy 10MW in Zone B with 59°F coolant delivered cold-plate ready, with no tenant-side cooling infrastructure required.
Financial Institution
A financial institution needing high-density computing for real-time analytics can leverage Zone A's cold plate + CDU configuration for 30-40kW racks with N+1 redundancy.
Research Facility
A research facility with extreme compute demands for scientific simulations can deploy next-generation immersion cooling hardware in Zone C, supporting 60-80kW+ per rack with direct liquid cooling and CDU redundancy.
Cloud Provider
A cloud provider can flexibly scale its offerings by allocating specific zones to clients based on their power and cooling requirements, from standard air-cooled to advanced liquid-cooled solutions.
Capacity & Buildout Roadmap
L2D2's phased deployment strategy delivers immediate cooling capacity while maintaining a clear path to full-scale campus operations. The infrastructure is designed for rapid activation—each phase becomes operational within 18 months of construction commencement.
This staged approach provides critical flexibility for hyperscale customers who need to align infrastructure delivery with procurement cycles, funding milestones, and market demand. You pay only for the capacity you need today, with guaranteed expansion rights for tomorrow.
Phase 1 (150MW)
Cooling Capacity: 20,000 Refrigeration Tons
Rack Support: ~5,000 AI-optimized racks
Timeline: 18 months from groundbreaking
First phase delivers immediate operational capacity with room for organic growth. Infrastructure includes full primary loop, three CDU farms, and complete redundancy across all critical systems.
Full Buildout (925MW)
Cooling Capacity: 125,000 Refrigeration Tons
Rack Support: ~32,000+ AI racks
Ultimate Density: 80kW average per rack
Complete campus build-out represents one of the largest liquid-cooled data center facilities globally. Modular expansion phases activate in 6-month intervals, enabling rapid scale-up without operational disruption.
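
For readers more accustomed to electrical units, the small helper below converts the refrigeration-ton figures quoted above into thermal megawatts using the standard factor of roughly 3.517 kW per RT. It is a unit-conversion convenience, not part of any L2D2 tooling.

```python
# Unit helper for the capacities quoted above: one refrigeration ton (RT)
# equals roughly 3.517 kW of heat rejection. A convenience for readers,
# not part of any L2D2 tooling.

KW_PER_RT = 3.517

def rt_to_mw(refrigeration_tons: float) -> float:
    """Convert refrigeration tons of cooling capacity to thermal megawatts."""
    return refrigeration_tons * KW_PER_RT / 1000.0

# Phase 1's 20,000 RT corresponds to roughly 70 MW of heat rejection.
print(f"{rt_to_mw(20_000):.0f} MW")
```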
Operational Efficiency & Uptime
L2D2's district cooling system doesn't just move heat—it does so with exceptional energy efficiency and near-perfect uptime. Every design decision prioritizes both operational reliability and environmental responsibility, delivering measurable advantages in total cost of ownership.
Energy Efficiency & PUE
System-wide optimization delivers PUE below 1.25 under typical AI workload conditions. Free cooling economizers activate automatically when ambient conditions permit, reducing compressor load and energy consumption by up to 40% during shoulder seasons.
Variable speed drives on all pumps and fans continuously adjust flow rates to match instantaneous demand, eliminating the waste inherent in fixed-speed systems. This results in 30-50% energy savings compared to traditional cooling architectures.
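
Since PUE anchors these efficiency claims, a quick worked example helps: PUE is total facility power divided by IT power, so a PUE of 1.25 implies roughly 0.25 W of cooling and distribution overhead per watt of IT load. The sketch below makes that arithmetic explicit; the 100 MW IT load is an assumed example.

```python
# PUE arithmetic: Power Usage Effectiveness is total facility power divided
# by IT power, so overhead = IT load * (PUE - 1). The 100 MW IT load is an
# assumed example; the 1.25 target comes from the text.

def overhead_mw(it_load_mw: float, pue: float) -> float:
    """Cooling and distribution overhead implied by a given PUE."""
    return it_load_mw * (pue - 1.0)

# ~25 MW of overhead at PUE 1.25, versus ~50 MW at a legacy PUE of 1.5.
print(overhead_mw(100.0, 1.25), overhead_mw(100.0, 1.5))
```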
Automated Failover
CDU zones incorporate redundant sensors and control logic that detect anomalies and execute failover protocols in under 5 seconds. When a leak is detected or flow rates drop below threshold, the affected zone automatically isolates while backup systems assume the thermal load.
This autonomous response eliminates the need for manual intervention during critical events, ensuring compute continuity even during off-hours when staff may not be on-site.
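
The sketch below outlines the detect, isolate, and reassign sequence described above as a simple control function. Sensor names, thresholds, and the actuation steps (shown as comments) are hypothetical; the production logic lives in CDU controller firmware with hard real-time guarantees.

```python
# Sketch of the detect -> isolate -> reassign sequence described above.
# Sensor names, thresholds, and the actuation steps (comments) are
# hypothetical; production logic lives in CDU controller firmware.

FLOW_MIN_LPM = 60.0  # assumed minimum acceptable loop flow

def evaluate_zone(zone: str, moisture_detected: bool, flow_lpm: float) -> str:
    if not moisture_detected and flow_lpm >= FLOW_MIN_LPM:
        return f"{zone}: nominal"
    # 1. Close the A-loop isolation valves for the affected zone.
    # 2. Shift the thermal load to the redundant B loop.
    # 3. Raise an alert for maintenance follow-up.
    return f"{zone}: isolated, thermal load shifted to loop B, alert raised"

print(evaluate_zone("Zone B / Row 07", moisture_detected=True, flow_lpm=72.0))
```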
Real-Time Monitoring
Comprehensive dashboards provide visibility into thermal performance, pressure differentials, humidity levels, and flow rates across every zone and rack row. Predictive analytics identify degradation trends before they impact operations, enabling proactive maintenance scheduling.
Integration with DCIM platforms allows tenants to monitor their allocated cooling capacity in real-time, with granular visibility down to individual rack thermal performance.
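
To illustrate the kind of per-rack visibility a DCIM integration might expose, the record below shows a hypothetical telemetry payload. Field names and values are illustrative, not an actual L2D2 schema.

```python
# Hypothetical per-rack telemetry record of the kind a DCIM integration
# might expose. Field names and values are illustrative, not an actual
# L2D2 schema.
import json

rack_telemetry = {
    "rack_id": "B-07-12",
    "zone": "B",
    "supply_temp_f": 59.0,
    "return_temp_f": 65.0,
    "flow_lpm": 233.0,
    "pressure_drop_psi": 11.3,
    "heat_load_kw": 54.2,
    "leak_alarm": False,
    "timestamp": "2025-01-01T00:00:00Z",
}

print(json.dumps(rack_telemetry, indent=2))
```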
Hot-Swappable Serviceability
All headers, CDUs, valves, sensors, and manifolds are designed for maintenance without downtime. Isolation valves at every major junction enable component replacement while adjacent systems remain operational.
Quick-connect fittings use proven aerospace-grade technology that maintains seal integrity through thousands of connection cycles. Mean time to repair for any cooling component: under 2 hours.
Strategic Advantages
Beyond the technical specifications, L2D2's district cooling system delivers strategic business value that directly impacts deployment speed, operational costs, and competitive positioning. This infrastructure isn't just functional—it's a competitive differentiator.
NVIDIA Certification
L2D2 maintains NVIDIA-approved infrastructure specifications for Rubin, Kyber, MGX, and other accelerated compute platforms. Designated zones are pre-qualified for NVIDIA DGX SuperPOD and HGX configurations, eliminating design validation delays.
This certification accelerates procurement approval and reduces technical risk for customers deploying NVIDIA-based AI clusters at scale.
Tenant CapEx Savings
Customers avoid millions in upfront cooling infrastructure investment. No need to procure, install, or manage CDUs, manifolds, pumping systems, or monitoring infrastructure—it's all included in the facility.
This shifts cooling from a capital expense to an operational expense, improving cash flow and reducing deployment timelines by 6-12 months.
Speed-to-Deploy
Liquid-ready zones delivered in under 18 months from contract execution. Pre-built cooling infrastructure eliminates the longest-lead items in traditional data center deployments, enabling customers to activate compute capacity synchronized with hardware availability.
Quick-connect rack interfaces mean IT equipment installation proceeds at the same pace as air-cooled deployments—no specialized labor or extended integration timelines.
ESG-Aligned
Heat reuse capabilities, reduced water consumption through closed-loop design, and low-PUE operation directly support corporate sustainability commitments. Detailed thermal accounting enables accurate Scope 2 emissions reporting.
Optional integration with on-site renewable energy and thermal storage systems further reduces carbon footprint and enhances environmental reporting metrics.
Hot water return from AI workloads can be reclaimed for nearby municipal heating or boiler pre-warming, or sold to adjacent commercial users. L2D2's loop design supports >85% thermal recapture efficiency, aligning with ESG mandates and opening a path to future carbon credit monetization.
Cooling Strategy Flexibility
The infrastructure supports diverse cooling modalities—cold plate, rear door, immersion, or hybrid—on a per-tenant or per-hall basis. Customers aren't locked into a single approach; they can optimize cooling strategy based on specific workload characteristics, equipment vendors, or operational preferences.
This flexibility future-proofs investments as cooling technology continues to evolve.
Summary
L2D2's district cooling system represents a fundamental rethinking of how data centers deliver thermal management at AI scale. This isn't merely an infrastructure layer—it's a strategic enabler that unlocks deployment velocity, operational efficiency, and economic performance that simply cannot be achieved with traditional approaches.
By abstracting cooling complexity from tenants, L2D2 allows customers to focus entirely on compute architecture and workload optimization rather than mechanical engineering challenges. The result: faster time-to-revenue, lower total cost of ownership, and infrastructure that scales seamlessly as AI workloads intensify.
For hyperscale operators and AI infrastructure architects, the question isn't whether liquid cooling is necessary; that debate is settled. The question is which partner can deliver it with the reliability, efficiency, and flexibility that AI-scale compute demands. L2D2's district cooling system provides a definitive answer.

  • 925MW Total Capacity: Campus-scale power and cooling at full buildout
  • <1.25 Target PUE: Industry-leading energy efficiency for AI workloads
  • 18mo Deployment Speed: From contract to operational liquid-cooled capacity
  • 80kW Rack Density: Maximum supported power per rack with full liquid cooling
"L2D2's district cooling infrastructure delivers sustainable, scalable thermal performance from day one—abstracting complexity, accelerating NVIDIA-aligned deployments, and providing the foundation for AI compute at unprecedented scale."