Vantage Frontier: $25B Texas AI Data Center Campus
Vantage Data Center plans a 1.4GW campus in Shackelford County, Texas, framing the next phase of AI-era infrastructure at hyperscale.
Key specs: 1.4GW, 1,200 acres, 10 buildings
Vantage will invest more than $25 billion to build Frontier, a 1,200-acre, 10-building campus totaling roughly 3.7 million square feet near Abilene, about 120 miles west of Dallas-Fort Worth. The site is designed for ultra-high-density racks of 250kW and above, paired with liquid cooling for next-generation GPU systems. Construction has started, with the first delivery targeted for the second half of 2026. Vantage expects more than 5,000 jobs through construction and operations. This is the company’s largest project to date and underscores its acceleration beyond a global footprint of 36 campuses delivering nearly 2.9GW of critical IT load. Vantage is a portfolio company of DigitalBridge Group.
Why Texas: power, scale, ERCOT access
AI infrastructure demand is outpacing traditional data center development on power, density, and speed to market. Texas offers scale, land, and access to the ERCOT power market, plus a pro-investment policy climate. Training clusters tolerate longer-distance latency, so proximity to the DFW metro is less critical than power availability and build velocity. The location provides a path to massive capacity while easing pressure on constrained urban grids.
Economic impact for Abilene and Texas
Frontier will be an economic anchor for Shackelford County and the Abilene region via jobs, tax base, and local supply chains. Vantage plans local hiring, training programs, and scholarships for students over the lifecycle of the project. Statewide, the investment strengthens Texas's positioning as a national AI infrastructure hub.
AI-Scale Design: Power, Density, Liquid Cooling
The campus blueprint is optimized for GPU-heavy clusters that push thermal and electrical limits beyond traditional cloud designs.
1.4GW capacity and 250kW+ racks explained
At 1.4GW of planned capacity, Frontier approaches the scale of a utility. Rack densities north of 250kW point to immersion or direct-to-chip liquid cooling and high-capacity power distribution. This is consistent with the shift to multi-megawatt GPU pods connected by low-latency fabrics and high-throughput storage tiers. It also implies advanced power topologies, larger electrical rooms, and robust harmonic filtering to handle non-linear IT loads.
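For a rough sense of scale, a back-of-envelope calculation translates campus power into rack counts. The Python sketch below uses the announced 1.4GW and 250kW figures; the PUE is an illustrative assumption, not a published design value.

SITE_POWER_MW = 1400      # announced campus capacity (1.4GW)
ASSUMED_PUE = 1.2         # assumption: plausible target for liquid-cooled AI halls
RACK_KW = 250             # minimum stated rack density

it_load_mw = SITE_POWER_MW / ASSUMED_PUE        # power available for IT after cooling/overhead
racks = it_load_mw * 1000 / RACK_KW             # count of 250kW racks that load could feed

print(f"~{it_load_mw:,.0f} MW of IT load supports ~{racks:,.0f} racks at {RACK_KW} kW each")
# Roughly 1,167 MW and 4,700 racks with these inputs; the real figure depends on actual PUE
# and on whether the 1.4GW refers to utility capacity or critical IT load.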
Liquid cooling aligned to ASHRAE and OCP
Vantage will deploy liquid cooling to support next-gen GPU loads, which aligns with industry guidance from ASHRAE TC9.9 and Open Compute Project work on Advanced Cooling Facilities. Tenants should plan for warm-water loops, CDU placement, leak detection, and serviceability workflows. Hardware roadmaps should consider OCP and Open19 specifications, facility coolant compatibility, and lifecycle refresh cycles tied to 800G and 1.6T Ethernet transitions.
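As a planning aid, the heat balance q = m·cp·dT gives a quick check on warm-water loop flow per rack. The sketch below assumes a 10C supply-to-return rise and near-total liquid heat capture; it is illustrative only, not a facility specification.

RACK_HEAT_KW = 250     # assumption: heat rejected to liquid per 250kW rack
DELTA_T_C = 10.0       # assumption: supply-to-return temperature rise across the rack
CP_WATER = 4.186       # specific heat of water, kJ/(kg*K)
RHO_WATER = 0.997      # density of water, kg/L, near 25C

mass_flow_kg_s = RACK_HEAT_KW / (CP_WATER * DELTA_T_C)   # q = m_dot * cp * dT
flow_l_min = mass_flow_kg_s / RHO_WATER * 60

print(f"~{mass_flow_kg_s:.1f} kg/s (~{flow_l_min:.0f} L/min) per rack at a {DELTA_T_C:.0f}C rise")
# About 6 kg/s (~360 L/min) per rack; CDU sizing, manifold design, and the chosen ASHRAE
# W-class supply temperature will move these numbers in practice.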
Delivery timeline (H2 2026) and project risks
First capacity lands in H2 2026, which is aggressive for power, cooling, and interconnection at this scale. Key risks include transformer and switchgear lead times, substation interconnect schedules, and liquid-cooling supply chains. Early design freezes and structured material commitments will be table stakes for on-time turn-up.
ERCOT Energy Strategy, Water Use, and ESG
Power procurement and resource stewardship will define both execution risk and tenant perception.
Grid interconnection, PPAs, and carbon hedging
ERCOT offers competitive wholesale pricing and fast growth in wind and solar, but interconnection queues and transmission constraints remain real. A multi-phase plan will require staged energization, likely with large on-site substations and long-duration PPAs or virtual PPAs to hedge price and carbon exposure. Tenants should seek transparency on capacity reservations, renewable matching, and hourly carbon accounting.
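One way to make hourly carbon accounting concrete is an hour-by-hour matching score between load and contracted renewable generation. The minimal sketch below assumes simple per-hour series; it is not a description of ERCOT settlement or any Vantage methodology.

from typing import Sequence

def hourly_match_score(load_mwh: Sequence[float], ppa_mwh: Sequence[float]) -> float:
    # Fraction of load met by contracted renewable generation in the same hour.
    matched = sum(min(load, gen) for load, gen in zip(load_mwh, ppa_mwh))
    total = sum(load_mwh)
    return matched / total if total else 0.0

# Toy example: four hours of flat data center load against variable solar/wind deliveries.
load = [100.0, 100.0, 100.0, 100.0]
ppa = [40.0, 120.0, 150.0, 20.0]
print(f"Hourly matched: {hourly_match_score(load, ppa):.0%}")   # 65% in this toy case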
Water-efficient cooling and LEED targets
Frontier will use a highly efficient closed-loop chiller system with minimal water usage, which is critical in water-stressed regions. Vantage expects meaningful savings versus evaporative systems and is targeting LEED certification. For buyers with ESG targets, this reduces water intensity risk and supports reporting under frameworks like GRI and CDP.
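To see why the cooling choice matters for reporting, a water-usage-effectiveness (WUE, liters per kWh of IT energy) comparison is a useful sketch. The WUE values and block size below are generic industry ballparks assumed for illustration, not Frontier's design figures.

IT_LOAD_MW = 100              # assumption: one building-scale block, for illustration
HOURS_PER_YEAR = 8760
WUE_EVAPORATIVE = 1.8         # assumption: typical evaporative/adiabatic plant, L/kWh
WUE_CLOSED_LOOP = 0.1         # assumption: closed-loop chillers with minimal makeup water

it_kwh = IT_LOAD_MW * 1000 * HOURS_PER_YEAR
evap_liters = it_kwh * WUE_EVAPORATIVE
closed_liters = it_kwh * WUE_CLOSED_LOOP
print(f"Evaporative: ~{evap_liters/1e9:.1f}B L/yr vs. closed-loop: ~{closed_liters/1e9:.2f}B L/yr per {IT_LOAD_MW}MW")
# An order-of-magnitude difference per 100MW block; actual savings depend on climate and design.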
Resiliency for long AI training workloads
AI training workloads have long run times and checkpointing overheads, so uptime and grid volatility matter. Expect N+ redundancy at scale, diverse feeders where available, and potential for on-site backup generation and energy storage. Tenants should validate fault domains, maintenance windows, and ride-through strategies across power events.
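Checkpoint cadence is where facility reliability meets training throughput. The minimal sketch below applies Young's first-order approximation for the optimal checkpoint interval; the checkpoint cost and failure rate are assumed purely for illustration.

import math

def optimal_checkpoint_interval_s(checkpoint_cost_s: float, mtbf_s: float) -> float:
    # Young's approximation: T_opt ~= sqrt(2 * C * MTBF)
    return math.sqrt(2 * checkpoint_cost_s * mtbf_s)

checkpoint_cost = 300        # assumption: 5 minutes to write a full model/optimizer checkpoint
mtbf = 24 * 3600             # assumption: one interrupting event per day across the fault domain

interval = optimal_checkpoint_interval_s(checkpoint_cost, mtbf)
print(f"Checkpoint roughly every {interval / 3600:.1f} hours")   # ~2.0 hours with these inputs
# A shorter MTBF (power events, preemptions) forces more frequent checkpoints, so facility
# ride-through and fault-domain design show up directly in training throughput.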
Network: Long-haul Backhaul, Interconnect, Optics
AI-scale campuses reshape regional transport demand and data center fabrics.
Long-haul and metro fiber growth to DFW
Frontier will drive new long-haul and regional fiber builds between Abilene and DFW and to other national interconnect hubs. Carriers and wholesalers will pursue diverse routes, regen sites, and protected services to meet multi-terabit requirements. Dark fiber and spectrum services will be in focus for hyperscalers and large AI tenants.
400G/800G waves and routed optical (ZR/ZR+)
Demand will shift to 400G and 800G wavelengths with open line systems and ZR/ZR+ pluggables in routed optical designs. Operators should plan for 400ZR today and 800ZR trials, along with flexible grid ROADM deployments. Time-sensitive training pipelines will benefit from deterministic latency guarantees and automated restoration policies.
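For rough route planning, wavelength counts fall out of the target capacity and per-wave rate. The sketch below assumes a 6.4Tbps corridor demand and 1+1 protection purely for illustration, not any carrier's actual design.

import math

TARGET_TBPS = 6.4            # assumption: aggregate demand on the Abilene-DFW corridor
WAVE_GBPS = 800              # 800ZR/ZR+-class coherent pluggable per wavelength
PROTECTED = True             # 1+1 protection doubles the wave count across diverse routes

working_waves = math.ceil(TARGET_TBPS * 1000 / WAVE_GBPS)
total_waves = working_waves * (2 if PROTECTED else 1)
print(f"{working_waves} working waves, {total_waves} total with 1+1 protection")
# 8 working / 16 total in this example; flexible-grid line systems and ZR+ reach determine
# whether regen sites are needed along the route.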
Intra-campus AI fabric: 800G to 1.6T Ethernet
Inside the data centers, 800G Ethernet is mainstreaming with 51.2T switches, on a path to 1.6T and 102.4T systems. RoCEv2-based AI fabrics and improved congestion control will be critical for job completion times. Power and thermal budgets must account for higher-speed optics and the move toward linear-drive and co-packaged optics later in the decade.
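The radix math behind those figures is straightforward: a 51.2T ASIC yields 64 ports of 800G, which bounds a two-tier non-blocking fabric as in the sketch below (a generic Clos calculation, not any specific tenant design).

ASIC_TBPS = 51.2
PORT_GBPS = 800
ports_per_switch = int(ASIC_TBPS * 1000 / PORT_GBPS)    # 64 ports of 800G per switch

down_per_leaf = ports_per_switch // 2      # 32 endpoint-facing ports for a non-blocking leaf
spines = ports_per_switch // 2             # 32 spines, one uplink from each leaf to each
max_leaves = ports_per_switch              # leaf count bounded by spine radix
max_endpoints = max_leaves * down_per_leaf # 2,048 x 800G endpoints in a two-tier pod

print(f"{ports_per_switch}-port leaves -> up to {max_endpoints} 800G endpoints in two tiers")
# Larger clusters add a third tier or move to 102.4T ASICs; per-port optics power
# (DSP-based vs. linear-drive) then dominates the switch thermal budget.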
Texas in the AI Data Center Triangle
Texas is consolidating its role alongside Northern Virginia and the Mountain West in the AI capacity race.
Policy advantage and ERCOT power mix
Streamlined permitting, access to ERCOT, and rapid renewable buildouts give Texas an execution edge. The policy stance is favorable to large digital infrastructure and ancillary investment in transmission and workforce development. This is attracting capital at a pace few markets can match.
How Frontier complements Vantage's portfolio
Frontier complements Vantage's ongoing build in San Antonio and a recently announced multibillion-dollar campus in Nevada, signaling a multi-region AI strategy. For tenants, this enables distributed training and disaster recovery patterns with regional diversity. It also supports proximity to major cloud regions without being inside congested metros.
Tenant considerations: training vs inference
Training, fine-tuning, and batch inference fit well in West Texas; latency-sensitive inference may still sit nearer to end users. A hub-and-spoke approach, with training at Frontier and inference near DFW and other metros, balances cost, power, and performance. Cross-region bandwidth reservations and consistent security postures will be essential.
Next Steps and Buyer Actions
Procurement, interconnect, and sustainability details will determine how quickly this capacity becomes usable for AI at scale.
Actions for cloud and AI tenants
Lock in power-dense halls and liquid-cooling options early, with clear SLAs on rack density, coolant distribution, and service windows. Align optics and fabric roadmaps with facility timelines. Pursue granular renewable matching and transparent carbon reporting to meet internal targets.
Actions for carriers and fiber builders
Accelerate diverse long-haul and regional routes into Shackelford County and DFW, design for 400/800G services, and enable ZR/ZR+ at scale. Offer protected paths, spectrum services, and deterministic latency SLAs tailored to AI pipelines. Engage now on meet-me room design and conduit rights.
Actions for enterprises and integrators
Evaluate colocation versus cloud GPU economics with 2-4 year runway assumptions, factoring in optics, networking, and cooling OPEX. Standardize on liquid-cooled reference architectures and plan for staged upgrades to 1.6T Ethernet. Build multi-site data mobility plans that harness Frontier's scale without locking into a single region.
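A simplified cost comparison shows how the runway assumption drives that decision. Every input in the sketch below (cloud rate, capex per GPU, power price, utilization) is an illustrative assumption, not quoted pricing.

YEARS = 3
GPUS = 1024
CLOUD_RATE_PER_GPU_HR = 4.00    # assumption: blended committed/on-demand rate, $/GPU-hour
HOURS_PER_YEAR = 8760
UTILIZATION = 0.7               # assumption: sustained cluster utilization

COLO_CAPEX_PER_GPU = 35_000     # assumption: server + networking + optics, amortized over YEARS
COLO_KW_PER_GPU = 1.2           # assumption: all-in power per GPU including host and fabric share
POWER_RATE_PER_KWH = 0.06       # assumption: blended rate including colo markup
COLO_OPEX_FACTOR = 1.3          # space, cooling, and remote hands on top of power

cloud_cost = GPUS * CLOUD_RATE_PER_GPU_HR * HOURS_PER_YEAR * YEARS * UTILIZATION
colo_power = GPUS * COLO_KW_PER_GPU * HOURS_PER_YEAR * YEARS * POWER_RATE_PER_KWH * COLO_OPEX_FACTOR
colo_cost = GPUS * COLO_CAPEX_PER_GPU + colo_power

print(f"Cloud: ~${cloud_cost / 1e6:.0f}M vs. colo: ~${colo_cost / 1e6:.0f}M over {YEARS} years")
# ~$75M vs. ~$38M with these inputs; the crossover is driven by utilization and runway length,
# which is why the 2-4 year assumption matters. Bursty workloads favor cloud, sustained training colo.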