Axe Compute Inc. (NASDAQ:AGPU) saw its shares jump 100% after announcing a $260 million, three-year enterprise infrastructure contract to provide a dedicated GPU cluster comprising 2,304 NVIDIA B300 GPUs. The 36-month arrangement represents the largest enterprise engagement the company has secured to date.
The contract covers both GPU compute and AI-optimized high-speed storage, all to be installed at a single Tier 3 data center facility within the United States. It includes options for renewal beyond the initial three-year term and allows for the addition of ancillary services under enterprise-grade service levels.
Under the terms, Axe Compute will deploy infrastructure intended to support foundation model training, fine-tuning and domain adaptation, and high-throughput inference workloads. The planned cluster will adhere to NVIDIA reference architecture while delivering dedicated, committed resources to the customer.
Power provisioning for the deployment is specified at 4.8 megawatts of dedicated capacity with N+1 redundancy. The company has targeted the third quarter of 2026 for the deployment window.
The commercial structure includes a deposit, a prepayment, and monthly advance payments on a take-or-pay basis. The contract offers enterprise-grade service levels and leaves room for additional ancillary service offerings as required by the customer.
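The announced figures allow some rough unit economics to be inferred. The sketch below is a back-of-envelope calculation, not disclosed pricing: the ~730 hours-per-month average and the assumption that the full 4.8 MW (including cooling and networking overhead) is attributed to the 2,304 GPUs are illustrative assumptions, not terms from the contract.

```python
# Back-of-envelope unit economics implied by the announced figures.
# Inputs are from the announcement; HOURS_PER_MONTH and the attribution
# of all facility power to the GPU count are illustrative assumptions.

CONTRACT_VALUE_USD = 260_000_000   # total contract value
GPU_COUNT = 2_304                  # NVIDIA B300 GPUs
TERM_MONTHS = 36                   # contract term
POWER_MW = 4.8                     # dedicated power capacity
HOURS_PER_MONTH = 730              # average hours per month (assumption)

# Facility power budget per GPU, counting all overhead against the GPU count.
watts_per_gpu = POWER_MW * 1_000_000 / GPU_COUNT

# Implied average revenue per GPU-hour over the full term.
gpu_hours = GPU_COUNT * TERM_MONTHS * HOURS_PER_MONTH
usd_per_gpu_hour = CONTRACT_VALUE_USD / gpu_hours

print(f"Power budget: ~{watts_per_gpu:,.0f} W per GPU (incl. overhead)")
print(f"Implied rate: ~${usd_per_gpu_hour:.2f} per GPU-hour")
```

Under these assumptions the deal works out to roughly 2,100 W of facility power per GPU and an average rate in the low single digits of dollars per GPU-hour; actual pricing and power allocation were not disclosed.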
Christopher Miglino, Chief Executive Officer of Axe Compute Inc., framed the agreement as a broader signal about enterprise infrastructure preferences, saying, "This agreement is a signal. Enterprise AI customers are no longer willing to adapt their infrastructure roadmaps to the capacity constraints of legacy hyperscalers. A 2,304-GPU B300 deployment, contracted, dedicated, U.S.-based, and priced to compete, is what purpose-built AI infrastructure looks like. We intend to replicate this commercial structure at scale."
Key points
- Contract value and scale: $260 million over 36 months to deploy 2,304 NVIDIA B300 GPUs at a single U.S. Tier 3 data center.
- Infrastructure and capabilities: designed to support foundation model training, fine-tuning, domain adaptation, and high-throughput inference; adheres to the NVIDIA reference architecture with dedicated, committed resources.
- Market and operational impact: the deal signals demand for purpose-built, dedicated AI infrastructure, with implications for AI infrastructure providers, data center operations, and GPU supply chains.
Risks and uncertainties
- Timing risk: deployment is targeted for the third quarter of 2026; any delays could affect delivery and revenue recognition for the contracted period.
- Concentration risk: the deployment is planned at a single Tier 3 data center facility, which concentrates operational execution and could expose delivery to location-specific issues.
- Contractual payment structure: the agreement uses a mix of deposit, prepayment, and monthly advance, take-or-pay payments; financial and cash-flow implications depend on adherence to these terms.
The transaction underscores demand from enterprise AI customers for dedicated, competitively priced infrastructure outside of legacy hyperscaler capacity constraints. Axe Compute has indicated an intention to replicate this commercial structure at scale, while the contract terms ensure committed resources and enterprise-grade service levels for the customer.